Threads for Polochon_street

  1. 1

    Really great article! I can’t judge the quality of the explanations (though they seemed pretty accurate to me), but the animations really bring something to it. I can only wish we had more animations like that in other articles :)

    1. 4

      Do we really need an update only three months later?

      The article is talking about battery life three months after as if it had been under heavy use for years - I’m not sure there’s a point in making a follow-up so soon?

      1. 1

        Next follow-up in a year.

      1. 1

        This looks interesting indeed. However, I am not sure I believe the premise that songs that are “close” together make for a good playlist. At least for me, there needs to be some contrast every now and then to keep it interesting. Also, songs can be close together in a way that is completely undetectable by any algorithm. (Think: they both were featured on that mix tape we had on continuous play on that one trip ten years ago.)

        In The Netherlands we had (probably still have) a radio station that had no DJs, only songs and occasional commercials. The idea behind it was not to try to create a playlist with songs a certain target demographic really liked, but to reverse it: to make a playlist that was the least annoying to the whole population. The idea being, I guess, that if people get annoyed, they change stations and don’t hear the commercials. This should make it an ideal station for workplaces etc.

        And they did it. It was a very smooth blend of songs and evergreens that would quickly dissipate into the background, and it was a popular choice in a lot of those situations. It also totally drove me up the wall if I had to listen to it for more than 30 minutes.

        1. 2

          That’s a tough problem indeed. I’ve chosen that premise for a couple of reasons:

          1. This software is mainly aimed at private people, who own small to medium-size libraries (I have around 7k songs and I consider that “small”, if that can give you an idea). So, even if you only choose songs that are close (by whatever metric; here we’ll consider them close if they “sound” the same), the playlist will eventually reach a point where it has to evolve, for the sheer lack of songs really close to the starting song. Of course, if you’re Spotify, then you need to implement some variety there, otherwise you’ll indeed circle in the same genre too much.

          2. It all boils down to the distance metric you choose to use to make playlists. Even if you take the very simple “I’m taking a starting song and want to make a playlist of the closest songs from it”, you still have (at least) two ways to deal with that (I’ve written a bit about this here, section 4.2). You can either choose all the songs that are close to your first song (in that case, you probably won’t drift much from that song’s genre, but you might have rough transitions between songs themselves, see the drawings in the link). Or you can draw a “path” between close songs, in which case you might drift away from the starting genre very fast, depending on what your library looks like.

          3. You’re not limited to these two distance metrics (and I’ve tried to make it fairly easy to customize in bliss-rs, since you can just get the raw features and experiment with them). You could for example use cosine or similar distances, where you’d go “in a direction” - which would roughly translate to, if you choose a very calm song, “give me just calm songs, I don’t care if it’s acoustic guitar or just electronic ambient music”.

          4. And that’s without touching the weight of the distance metric - if you decide that the chroma features (roughly “songs with the same pitch”) are the most important ones and you attribute more weight to them, you’d probably end up with a playlist of very different songs, with transitions you’d perceive as somehow natural because the pitch class would be roughly the same. (I’m not a musicology expert though, so please correct me if I’m wrong!)

          TL;DR: You can basically tweak the meaning of “close” so that it fits your definition of a good playlist by either changing the distance metric or the weights of the distance metric :D
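          The two strategies and the tweakable distance metrics above can be sketched roughly like this. This is a minimal illustration, not the bliss-rs API: the feature vectors, weights, and helper names are all hypothetical, and a real library would use the raw features bliss-rs exposes.

          ```python
          import numpy as np

          def weighted_euclidean(a, b, weights):
              """Distance where some features (e.g. chroma) can be weighted higher."""
              return float(np.sqrt(np.sum(weights * (a - b) ** 2)))

          def cosine_distance(a, b):
              """Direction-based distance: ~0 for songs pointing the same 'way'
              in feature space (e.g. 'calm'), regardless of magnitude."""
              return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

          def closest_to_seed(seed, library, dist, k):
              """Strategy 1: the k songs closest to the seed (little genre drift,
              but possibly rough transitions between the chosen songs)."""
              return sorted(library, key=lambda s: dist(seed, s["features"]))[:k]

          def path_playlist(seed, library, dist, k):
              """Strategy 2: hop from each song to its nearest unused neighbour
              (smooth transitions, but the playlist may drift away from the seed)."""
              playlist, current, pool = [], seed, list(library)
              for _ in range(k):
                  nxt = min(pool, key=lambda s: dist(current, s["features"]))
                  playlist.append(nxt)
                  pool.remove(nxt)
                  current = nxt["features"]
              return playlist
          ```

          Swapping `weighted_euclidean` for `cosine_distance`, or changing the weights, changes what “close” means without touching either playlist strategy.
          
          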

          1. 2

            Ah, I see it now. I was a bit focused on the word “close”, but your answer made me realize that one can get very creative with these ingredients. I hope I have some time this weekend to play with this. Thanks for linking your thesis!

        1. 4

          I’m unfamiliar with audio classification, but I’ve made a lot of mix “tapes”, and I pay a lot of attention to track ordering and transitions (usually crossfaded.)

          IMO it’s very important to consider the beginnings and endings of tracks, not just an average of the whole track. Many pieces of music have lengthy intros or outros that are distinct from the rest, and many end in a very different place than they began. “Stairway to Heaven” and “Don’t Stand So Close To Me” are good examples, or Sonic Youth’s “Expressway To Yr Skull”. And how do you average out “Bohemian Rhapsody”, or King Crimson’s “Fracture”, which starts extremely softly, has some actual silence in the middle, and ends as frantic art-rock?

          1. 2

            I agree that it’s very important, and it all boils down to finding a way to summarize a track’s features.

            Right now, we use mean and median for most of the features, because it gives good enough results, but we do store features throughout the whole song so we can change how they’re summarized later, if that proves to be useful.

            One thing that is also frustrating is when a song from a gapless album comes up, and it just transitions to something else. I’ve attempted to deal with that (not very successfully) a long time ago, but it’s also definitely on the long-term list of things to implement for bliss-rs.
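            The “store per-window features, summarize at the end” idea above can be sketched like this. This is a hypothetical illustration of the approach, not the actual bliss-rs code; the window shape and the intro/outro variant are assumptions:

            ```python
            import numpy as np

            def summarize(windows, how="mean"):
                """Collapse per-window features (n_windows x n_features) into one
                vector per song. Keeping the raw windows around means the summary
                function can be swapped later without re-analyzing the audio."""
                windows = np.asarray(windows, dtype=float)
                if how == "mean":
                    return windows.mean(axis=0)
                if how == "median":
                    return np.median(windows, axis=0)
                raise ValueError(f"unknown summary: {how}")

            def intro_outro_summary(windows, frac=0.1):
                """Hypothetical variant addressing the comment above: summarize the
                first and last ~10% separately, so a transition can match one
                song's ending against the next song's beginning."""
                windows = np.asarray(windows, dtype=float)
                n = max(1, int(len(windows) * frac))
                return {
                    "whole": windows.mean(axis=0),
                    "intro": windows[:n].mean(axis=0),
                    "outro": windows[-n:].mean(axis=0),
                }
            ```

            With the `intro`/`outro` summaries, a track like “Fracture” would no longer be reduced to one misleading average: its quiet opening and frantic ending get compared separately.
            
            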

          1. 7

            Very nice. I worked on a similar project a few years back. This is actually very similar to the MusicBox project by Anita Lillie back in 2008 (see her demo and thesis), which itself was built on top of the analysis provided by Echonest.

            Using the open-source aubio for the analysis, and building playlists instead of working out a new player, are very good decisions. When we tried to do something similar, this was also the direction we picked, and we then had layers for sending playlists to various players.

            Now what’s missing here is a UI to “build” the playlist visually (check the demo in the link earlier). The principle for building such a UI is very simple: instead of just 1 distance between 2 songs, you have a set of N distances (corresponding to similarity along various parameters such as rhythm, loudness, pitch, but also tag metadata, etc.), which is then reduced to 2 dimensions (using PCA), and you get a 2D map of all your music library. Then you can draw a path for building your playlist.

            This is in my opinion the only sane answer to “the music classifying nightmare” (author here).

            For a more traditional approach to solving the music classifying problem, I’d recommend looking at

            1. 2

              Thanks a lot for the very interesting references - I’ll take a look at implementing the PCA for blissify, to check how it performs.

              I’ve actually checked the landscape of tools like this before starting the project, and saw that there were a lot of music similarity theses, but very few tools actually usable “out of the box”. So, instead of trying to make something really innovative, I’ve tried to aggregate the existing results to build a (somewhat) usable / maintainable “real-world” tool.

            1. 8

              I really feel like I’m reading “from Vim to Notepad”, where the author only states that “Vim has too many features (?), so using Notepad is better - hey, I even adjusted my workflow to accommodate the lack of features!”.

              I don’t understand. Why not just use vim with the limited set of features you know?

              1. 3

                Solid case of procrastination. Developers get bored like anyone else. When the thing they are working on gets boring, they may compensate with something that feels personal: their editor, workflow, window manager…

                It’s a healthy behavior to some degree. An hour spent automating two hours’ worth of repetitive tasks is great. Two hours would be okay. I would worry a bit if a developer in my teams wouldn’t even try, or decided to spend a year in ed.

                1. 2

                  I agree. To make a comparison with tennis, it’s a bit like saying that by forbidding you to run to catch the ball, you’re forced to get much better at predicting your opponent’s moves. OK, that’s an exercise, why not, but we can’t say it makes you better at playing tennis: you’re better at predicting, but you should still use vim^W^Wrun while you play tennis.

                1. 10

                  Nice article, but it’s too bad, because I was really looking forward to the “Actually storing data forever” (or at least a very long time) part, and the author dodges the question by saying “oh well, it’s a broad question”.

                  That would be really interesting to have a take on “how to store a particular music recording forever”, for example.

                  1. 18

                    Author here. I reckon that I did answer this, in the last section of the article. The answer is: you need to have a continuous chain of real human beings from here to your destination to look after the data. What use is the data if it’s in a language which has been dead and forgotten for 10,000 years? Or if the contemporary stewards of the archive just stop caring about it? It’s not a sexy answer like “etch it in an asteroid” or “encode it in the DNA of a plant and let it spread”, but those solutions are just sexy - not effective. What happens if the asteroid is flung out of solar orbit, or if the plants go extinct? The data needs ongoing, intelligent maintenance.

                    1. 2

                      There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory mentioned, which states that this has already happened.

                    2. 3

                      Tape is considered the best. Lasts a long time, generations of product are designed for backward compatibility, and damage to data is more localized. This link mentions advantages and disadvantages. Apparently, there’s been some improvements in accessing data, too.

                      High-quality paper that is sealed from the elements works pretty well, too. I thought about going back to punch cards or using barcodes for low amounts of critical data. Turns out, there was a startup that was backing up people’s data on paper, using giant rolls and a printing press like newspapers use. It was a trip.

                      Lastly, you can use a mix of HD’s and optical media. The optical media is there because it’s not susceptible to electromagnetic interference in its storage. Just basically have multiple HD’s in multiple places. You script something that lists all the files with their hashes. Periodically, you check hashes on each one to see if anything screwed up. A voting algorithm fixes that. The logistics of checking and rotation is something I’m leaving to others to work out since it varies by use case. Just make sure they’re different kinds of HD’s to avoid similar failures.
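                      The hash-listing and voting scheme described above can be sketched like this. A minimal sketch, not a production tool: the function names are made up, and real logistics (scheduling checks, rotating disks) are left out, as the comment says:

                      ```python
                      import hashlib

                      def content_hash(data: bytes) -> str:
                          """Hash used in the per-disk file listing."""
                          return hashlib.sha256(data).hexdigest()

                      def vote_repair(replicas):
                          """Given the same file read from several disks, keep the
                          content the majority agrees on: a corrupted minority copy
                          is outvoted and can be overwritten with the winner."""
                          by_hash = {}
                          for data in replicas:
                              by_hash.setdefault(content_hash(data), []).append(data)
                          winner = max(by_hash.values(), key=len)
                          return winner[0]
                      ```

                      Periodically re-hashing each disk’s files and running the vote is what turns independent copies into actual redundancy; with only two disks there is no majority, which is why three or more (of different makes) is the useful minimum.
                      
                      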

                      You can also do the above with cloud vendors if you want to pay the storage fees instead of doing the logistics. If sourcing ethical providers, I was looking at Tarsnap and Backblaze (for their open hardware). Remember that many VM companies sell storage with their VMs. You could use a bunch of $5 VMs scattered around the world on top of your local copies. Then, you can use good companies like Prgmr, DreamCompute, SiteGround, and Pair. The component connecting to those and checking them should itself be rock-solid (e.g. OpenBSD or FreeBSD).

                      1. 3

                        Gold punched tape. I’ve stored my gpg key on aluminum tape and read it back successfully.

                          1. 1

                            I have an anecdotal report that laser printed paper, run through a hard lamination machine, will reliably remain legible for about 100 years, provided it doesn’t burn. You can of course also laser engrave or plasma cut some sheet metal, which, depending on the depth of engraving and type of metal and storage environment, could probably be no-maintenance reliable for about 10-100x as long.

                            This doesn’t help with larger quantities of data, but could be used for something like a QR code. ddevault rightly raises the matter that 100 years from now there might not be QR decoder software handy, but that’s less of an issue, especially if you include some basic informational text about how to reconstruct a decoder.

                            1. 1

                              I have an anecdotal report that laser printed paper, run through a hard lamination machine, will reliably remain legible for about 100 years, provided it doesn’t burn.

                              Laser printers haven’t been around for 100 years… so what is this anecdote based on?

                          1. 2

                            I believe a problem this doesn’t address is that you could also just be tortured, and then left in a steady state until your body indicators return to normal. Then your torturer could just ask you to unlock whatever needs to be unlocked, threatening to torture you again if you don’t.

                            Basically, I think the assumption that “body indicators = normal” => “willingly unlocking the device” doesn’t quite hold.