Threads for tuxie_

  1. 1

    I loved the article; it's so well written that it takes no effort to read. Very compelling.

    I realized that the syntax wasn’t the problem, it is the semantics, which is a much, much harder problem.

    I would love to know more about this. I've never written a compiler, so I don't really know what this means in the context of HTML.

    1. 1

      Oh wow Epiphany! I had forgotten about it. Thanks for the flashback :)

      1. 13

        Every programmer who is not a games dev is a failed games dev. :-)

        1. 12

          I’m not sure that ‘failed’ is the right framing. I found game development really fun when I was a child and it was a great way of learning to program. The best way of learning to program is to write the kind of software that you care about. As a child, games counted for close to 100% of what I wanted to do with a computer, and easily 100% of what I wanted to do with a computer and couldn’t with existing off-the-shelf software, and so were the main thing to drive my interest in programming. As I grew older, I didn’t stop enjoying games, but I started doing a lot more things with computers. The set of things that I wanted to do with a computer but can’t do with existing software still has games in there somewhere, but as a tiny percentage and so I’m much happier playing off-the-shelf games and writing bespoke software in other domains.

          1. 7

            I have never been interested in game dev, even as a kid, despite playing tons of video games. In elementary school a bunch of my friends were tinkering around with GameMaker, but I never was interested. I toyed around with it for less than an hour before getting bored.

            1. 3

              I don’t know what this means. I’m neither a games dev nor a failed games dev. So… am I not a programmer?

              1. 1

                Don’t take it personally; I think OP was just trying to make a joke based on their own experience, and it fell flat.

              2. 2

                That’s just nonsense.

              1. 1

                the idiomatic way to do this is to use PostgreSQL’s INSERT … ON CONFLICT DO UPDATE, which solves most (all?) of the problems outlined in the article.

                https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT

                1. 2

                  ON CONFLICT DO UPDATE updates the existing row that conflicts with the row proposed for insertion as its alternative action.

                  If I understand the documentation correctly, your suggestion would not try to generate a new key for target_url, as the author expects, but rather update the URL with the new value, which would result in a bug. Is that correct?

                  1. 1

                    sorry, i skimmed too quickly 🙃 thought it was about unique urls but it’s about clashing unique random identifiers.

                    anyway, various behaviours can be implemented with postgresql. there’s also ON CONFLICT DO NOTHING. and there’s RETURNING to retrieve inserted data, so the app can use this to detect a clash and try again.

                    and this will all happen inside a transaction, and hence be atomic. so even if an earlier concurrent transaction claims that identifier and then ROLLBACKs, for example, the later one will wait for it and then successfully claim the identifier after all.
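                    The claim-and-retry pattern described above can be sketched in runnable form. This sketch uses Python's stdlib sqlite3 module because SQLite's ON CONFLICT DO NOTHING syntax mirrors PostgreSQL's; in production you'd talk to Postgres through a driver (and could use RETURNING to fetch the inserted row). The `links` table and the helper's name are illustrative, not from the article.

                    ```python
                    import secrets
                    import sqlite3

                    def claim_short_id(conn, target_url, max_attempts=5):
                        """Insert a row with a random short identifier, retrying on collisions.

                        ON CONFLICT DO NOTHING turns a clash into a no-op instead of an error;
                        we detect it by checking how many rows the INSERT actually changed.
                        """
                        for _ in range(max_attempts):
                            short_id = secrets.token_urlsafe(4)  # short on purpose, so clashes are plausible
                            cur = conn.execute(
                                "INSERT INTO links (short_id, target_url) VALUES (?, ?) "
                                "ON CONFLICT (short_id) DO NOTHING",
                                (short_id, target_url),
                            )
                            if cur.rowcount == 1:  # the row went in: the identifier is ours
                                conn.commit()
                                return short_id
                            # rowcount == 0: someone already holds short_id; loop and try a new one
                        raise RuntimeError("could not find a free identifier")

                    conn = sqlite3.connect(":memory:")
                    conn.execute("CREATE TABLE links (short_id TEXT PRIMARY KEY, target_url TEXT)")
                    sid = claim_short_id(conn, "https://example.com")
                    row = conn.execute(
                        "SELECT target_url FROM links WHERE short_id = ?", (sid,)
                    ).fetchone()
                    print(row[0])  # https://example.com
                    ```

                    In Postgres the atomicity argument above applies directly: a concurrent transaction that has inserted the same identifier but not yet committed will block this INSERT until it commits or rolls back.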

                1. 16

                  In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs. Then Node started warning me about security problems in some of those libraries. I ended up taking some time finding alternative packages with fewer dependencies.

                  On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t. It’s cool to look at how tiny and efficient code can be — a Scheme interpreter in 4KB! The original Mac OS was 64KB! — but yowza, is it ever difficult to code that way.

                  There was an early Mac word processor — can’t remember the name — that got a lot of love because it was super fast. That’s because they wrote it in 68000 assembly. It was successful for some years, but failed by the early 90s because it couldn’t keep up with the feature set of Word or WordPerfect. (I know Word has long been a symbol of bloat, but trust me, Word 4 and 5 on Mac were awesome.) Adding features like style sheets or wrapping text around images took too long to implement in assembly compared to C.

                  The speed and efficiency of how we’re creating stuff now is crazy. People are creating fancy OSs with GUIs in their bedrooms with a couple of collaborators, presumably in their spare time. If you’re up to speed with current Web tech you can bring up a pretty complex web app in a matter of days.

                  1. 24

                    I don’t know, I think there’s more to it than just “these darn new languages with their package managers made dependencies too easy, in my day we had to manually download Boost uphill both ways” or whatever. The dependencies in the occasional Swift or Rust app aren’t even a tenth of the bloat on my disk.

                    It’s the whole engineering culture of “why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application, and then implement your glorified scp GUI application inside that, so that you never have to learn anything other than the one and only tool you know”. Everything’s turned into 500megs worth of nail because we’ve got an entire generation of Hammer Engineers who won’t even consider that it might be more efficient to pick up a screwdriver sometimes.

                    We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                    That’s the argument, but it’s not clear to me that we haven’t severely over-corrected at this point. I’ve watched teams spend weeks poking at the mile-high tower of leaky abstractions any react-native mobile app teeters atop, just to try to get the UI to do what they could have done in ten minutes if they’d bothered to learn the underlying platform API. At some point “make all the world a browser tab” became the goal in-and-of-itself, whether or not that was inefficient in every possible dimension (memory, CPU, power consumption, or developer time). It’s heretical to even question whether or not this is truly more developer-time-efficient anymore, in the majority of cases – the goal isn’t so much to be efficient with our time as it is to just avoid having to learn any new skills.

                    The industry didn’t feel this sclerotic and incurious twenty years ago.

                    1. 7

                      It’s heretical to even question whether or not this is truly more developer-time-efficient anymore

                      And even if we set that question aside and assume that it is, it’s still just shoving the costs onto others. Automakers could probably crank out new cars faster by giving up on fuel-efficiency and emissions optimizations, but should they? (Okay, left to their own devices they probably would, but thankfully we have regulations they have to meet.)

                      1. 1

                        left to their own devices they probably would, but thankfully we have regulations they have to meet.

                        Regulations. This is it.

                        I’ve long believed that this is very important in our industry. As earlier comments say, you can make a complex web app after work in a weekend. But then there are people, in the auto industry mentioned above, who take three sprints to set up a single screen with a table, a popup, and two forms. That’s after they’ve pulled in the internet’s worth of dependencies.

                        On the one hand, we don’t want to be gatekeeping. We want everyone to contribute. When dhh said we should stop celebrating incompetence, the majority of people around him called it gatekeeping. Yet when we see or say something like this - don’t build bloat, or something along those lines - everyone agrees.

                        I think the answer lies somewhere in between. Let individuals do whatever the hell they want, but regulate “selling” stuff for money, advertisement eyeballs, or anything similar. If an app is more than X MB (some reasonable target), it has to get certified before you can publish it. Or maybe only if it’s a popular app. Or, if a library is included in more than X apps, then that library either gets “certified” or further apps using it are banned.

                        I am sure that is a huge, immensely big can of worms, and there will be many problems there. But if we don’t start cleaning up shit, it’s going to pile up.

                        A simple example - if a controversial one - is Google. When they started punishing web apps that don’t render within 1 second, everybody on the internet (at least everybody who wants to rank on Google) started optimizing for performance. So it can be done. We just have to set up - and maintain - a system that deals with the problem… well, systematically.

                      2. 1

                        why learn a new language or new API when you can just jam an entire web browser the size of an operating system into the application

                        Yeah. One of the things that confuses me is why apps bundle a browser when platforms already come with browsers that can easily be embedded in apps. You can use Apple’s WKWebView class to embed a Safari-equivalent browser in an app that weighs in at under a megabyte. I know Windows has similar APIs, and I imagine Linux does too (modulo the combinatorial expansion of number-of-browsers times number-of-GUI-frameworks.)

                        I can only imagine that whoever built Electron felt that devs didn’t want to deal with having to make their code compatible with more than one browser engine, and that it was worth it to shove an entire copy of Chromium into the app to provide that convenience.

                        1. 1

                          Here’s an explanation from the Slack developer who moved Slack for Mac from WebKit to Electron. And on Windows, the only OS-provided browser engine until quite recently was either the IE engine or the abandoned EdgeHTML.

                      3. 10

                        On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                        The problem is that your dependencies can behave strangely, and you need to debug them.

                        Code bloat makes programs hard to debug. It costs programmer time.

                        1. 3

                          The problem is that your dependencies can behave strangely, and you need to debug them.

                          To make matters worse, developers don’t think carefully about which dependencies they’re bothering to include. For instance, if image loading is needed, many applications could get by with image read support for one format (e.g. with libpng). Too often I’ll see an application depend on something like ImageMagick which is complete overkill for that situation, and includes a ton of additional complex functionality that bloats the binary, introduces subtle bugs, and wasn’t even needed to begin with.

                        2. 10

                          On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                          The problem is that computational resources vs. programmer time is just one axis along which this tradeoff is made: some others include security vs. programmer time, correctness vs. programmer time, and others I’m just not thinking of right now I’m sure. It sounds like a really pragmatic argument when you’re considering your costs because we have been so thoroughly conditioned into ignoring our externalities. I don’t believe the state of contemporary software would look like it does if the industry were really in the habit of pricing in the costs incurred by others in addition to their own, although of course it would take a radically different incentive landscape to make that happen. It wouldn’t look like a code golfer’s paradise, either, because optimizing for code size and efficiency at all costs is also not a holistic accounting! It would just look like a place with some fewer amount of data breaches, some fewer amount of corrupted saves, some fewer amount of Watt-hours turned into waste heat, and, yes, some fewer amount of features in the case where their value didn’t exceed their cost.

                          1. 7

                            We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t

                            But we aren’t, because modern resource-wasteful software isn’t actually released any quicker. Quite the contrary: there is so much development overhead that we no longer see those exciting big releases with a dozen features everyone loves at first sight. New features arrive in microscopic increments, so slowly that hardly any project survives 3-5 years without becoming obsolete or going out of fashion.

                            What we are trading is quality for quantity. We’ve lowered the skill and knowledge barrier so much, to accommodate millions of developers who “learned how to program in one week”, that the results are predictably what this post talks about.

                            1. 6

                              I’m as much against bloat as everyone else (except those who make bloated software, of course—those clearly aren’t against it). However, it’s easy to forget that small software from past eras often couldn’t do much. The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                              1. 5

                                The original Mac OS could be 64KB, but no one would want to use such a limited OS today!

                                Seems some people (@neauoire) do want exactly that: https://merveilles.town/@neauoire/108419973390059006

                                1. 6

                                  I have yet to see modern software that is saving the programmer’s time.

                                  I’m here for it, I’ll be cheering when it happens.

                                  This whole thread reminds me of a little .txt file that came packaged into DawnOS.

                                  It read:

                                  Imagine that software development becomes so complex and expensive that no software is being written anymore, only apps designed in devtools. Imagine a computer, which requires 1 billion transistors to flicker the cursor on the screen. Imagine a world, where computers are driven by software written from 400 million lines of source code. Imagine a world, where the biggest 20 technology corporation totaling 2 million employees and 100 billion USD revenue groups up to introduce a new standard. And they are unable to write even a compiler within 15 years.

                                  “This is our current world.”

                                  1. 11

                                    I have yet to see modern software that is saving the programmer’s time.

                                    People love to hate Docker, but having had the “pleasure” of doing everything from full-blown install-the-whole-world-on-your-laptop dev environments to various VM applications that were supposed to “just work”… holy crap does Docker save time not only for me but for people I’m going to collaborate with.

                                    Meanwhile, programmers of 20+ years prior to your time are equally as horrified by how wasteful and disgusting all your favorite things are. This is a never-ending cycle where a lot of programmers conclude that the way things were around the time they first started (either programming, or tinkering with computers in general) was a golden age of wise programmers who respected the resources of their computers and used them efficiently, while the kids these days have no respect and will do things like use languages with garbage collectors (!) because they can’t be bothered to learn proper memory-management discipline like their elders.

                                    1. 4

                                      I’m of the generation that started programming at the tail end of Ruby and Objective-C, and I would definitely not call that a golden age; if anything, looking back at the period now, it looks like a mid-slump.

                                    2. 4

                                      I have yet to see modern software that is saving the programmer’s time.

                                      What’s “modern”? Because I would pick a different profession if I had to write code the way people did prior to maybe the late 90s (at minimum).

                                      Edit: You can pry my modern IDEs and toolchains from my cold, dead hands :-)

                                2. 6

                                  Node is an especially good villain here because JavaScript has long specifically encouraged lots of small dependencies and has little to no stdlib so you need a package for near everything.

                                  1. 5

                                    It’s kind of a turf war as well. A handful of early adopters created tiny libraries that should be single functions or part of a standard library. Since their notoriety depends on these libraries, they fight to keep them around. Some are even on the boards of the downstream projects and fight to keep their own library in the list of dependencies.

                                  2. 6

                                    We’re trading CPU time and memory, which are ridiculously abundant

                                    CPU time is essentially equivalent to energy, which I’d argue is not abundant, whether at the large scale of the global problem of sustainable energy production, or at the small scale of mobile device battery life.

                                    for programmer time, which isn’t.

                                    In terms of programmer-hours available per year (a unit which of course reduces to the number of active programmers), I’m pretty sure that resource is more abundant than it’s ever been at any point in history, and it’s only getting more so.

                                    1. 2

                                      CPU time is essentially equivalent to energy

                                      When you divide it by the CPU’s efficiency, yes. But CPU efficiency has gone through the roof over time. You can get embedded devices with the performance of some fire-breathing tower PC of the 90s, that now run on watch batteries. And the focus of Apple’s whole line of CPUs over the past decade has been power efficiency.

                                      There are a lot of programmers, yes, but most of them aren’t the very high-skilled ones required for building highly optimal code. The skills for doing web dev are not the same as for C++ or Rust, especially if you also constrain yourself to not reaching for big pre-existing libraries like Boost, or whatever towering pile of crates a Rust dev might use.

                                      (I’m an architect for a mobile database engine, and my team has always found it very difficult to find good developers to hire. It’s nothing like web dev, and even mobile app developers are mostly skilled more at putting together GUIs and calling REST APIs than they are at building lower-level model-layer abstractions.)

                                    2. 2

                                      Hey, I don’t mean to be a smartass here, but I find it ironic that you start your comment blaming “high-level languages with package systems” and immediately admit that you blindly picked a library for the job, and that you could solve the problem just by “taking some time finding alternative packages with fewer dependencies”. That doesn’t sound like a problem with either the language or the package manager, honestly.

                                      What would you expect the package manager to do here?

                                      1. 8

                                        I think the problem here actually lies with the language. JavaScript has such a piss-poor standard library and such dangerous semantics (which the standard library doesn’t try to remedy, either) that sooner rather than later you will have a transitive dependency on isOdd, isEven, and isNull, because even those simple operations aren’t exactly simple in JS.

                                        Despite being made to live in a web browser, the JS standard library has very few affordances for working with things like URLs, and despite being targeted at user interfaces, it has very few affordances for working with dates, numbers, lists, or localisations. This makes dependency graphs both deep and full of duplicated effort, since two dependencies in your program may depend on different third-party implementations of what should already be in the standard library, themselves duplicating what you already have in your operating system.

                                        1. 2

                                          It’s really difficult for me to counter an argument that is basically “I don’t like JS”. The question was never about that language; it was about “high-level languages with package systems”, but your answer hyper-focuses on JS and doesn’t address languages like Python, which is also a “high-level language with a package system” and which also has an “is-odd” package (which, honestly, I don’t see what that has to do with anything).

                                          1. 1

                                            The response you were replying to was very much about JS:

                                            In some ways, high-level languages with package systems are to blame for this. I normally code in C++ but recently needed to port some code to JS, so I used Node for development. It was breathtaking how quickly my little project piled up hundreds of dependent packages, just because I needed to do something simple like compute SHA digests or generate UUIDs.

                                            For what it’s worth, whilst Python may have an isOdd package, how often do you end up inadvertently importing it in Python as opposed to “batteries-definitely-not-included” Javascript? Fewer batteries included means more imports by default, which themselves depend on other imports, and a few steps down, you will find leftPad.

                                            As for isOdd, npmjs.com lists 25 versions thereof, and probably as many isEven.

                                            1. 1

                                              and a few steps down, you will find leftPad

                                              What? What kind of data do you have to back up a statement like this?

                                              You don’t like JS, I get it; I don’t like it either. But the unfair criticism is what really rubs me the wrong way. We are technical people; we are supposed to make decisions based on data. But this kind of comment, which just generates division without the slightest semblance of a solid argument, does no good for a healthy discussion.

                                              Again, none of these arguments are true of JS exclusively. Python is batteries-included, sure, but it’s one of the few languages that are. And you conveniently leave out of your quote the part where the OP admits that with a little effort the “problem” became a non-issue. That little effort is what we get paid for; that’s our job.

                                        2. 3

                                          I’m not blaming package managers. Code reuse is a good idea, and it’s nice to have such a wealth of libraries available.

                                          But it’s a double edged sword. Especially when you use a highly dynamic language like JS that doesn’t support dead-code stripping or build-time inlining, so you end up having to copy an entire library instead of just the bits you’re using.

                                        3. 1

                                          On the other hand, I’m fairly sympathetic to the way modern software is “wasteful”. We’re trading CPU time and memory, which are ridiculously abundant, for programmer time, which isn’t.

                                          We’re trading CPU and memory for the time of some programmers, but we’re also adding the time of other programmers onto the other side of the balance.

                                          1. 1

                                            I definitely agree with your bolded point - I think that’s the main driver for this kind of thing.

                                            Things change if there’s a reason for them to be changed. The incentives don’t really line up currently to the point where it’s worth it for programmers/companies to devote the time to optimize things that far.

                                            That is changing a bit already, though. For example, performance and bundle size are getting seriously considered for web dev these days. Part of the reason for that is that Google penalizes slow sites in their rankings - a very direct incentive to make things faster and more optimized!

                                          1. 15

                                            Consider using something like beets to tag your music.

                                            1. 2

                                              Beets is fantastic. I would like to find “beets for photos”

                                              1. 3

                                                I hope whoever makes that calls it “pheets”.

                                            1. 18

                                              With Japanese we could still write variable names in Hiragana, Katakana or Kanji 🤔

                                              1. 6

                                                Also the Roman alphabet and Arabic numerals! Plus emoji, Japan’s orthographic gift to the world 🎏🗿💮

                                                1. 5

                                                  FULL width is also a thing 🤭

                                                  1. 4

                                                    ハンカクカナモアルンデス。

                                                  2. 2

                                                    Didn’t we have emoticons a decade or so before emoji became popular (in the West, anyway)? Weren’t those functionally emoji (albeit not part of Unicode, though I’m guessing the earliest emoji weren’t either)? Anyone know for sure?

                                                    1. 1

                                                      The first emoji were made for Japanese cellphones in 1997, by which time English emoticons had existed for a decade or so. Japan even had their own variations on emoticons like ^_^.

                                                      1. 2

                                                        What I remember from that time is being mind blown by how Japanese had taken emoji to a whole new level, breaking free of the idea that eyes were always “:” and that faces were always sideways.

                                                        1. 1

                                                          I remember chat clients and forums would let you put actual smileys inline with text. It wasn’t ascii.

                                                1. 2

                                                  Pun obviously intended.

                                                  1. 1

                                                    While I wholeheartedly agree with the sentiment I find it “disturbing” (can’t find a better word) that “cloud” is used as a synonym for services like Spotify that hold data and grant you limited access to it. Spotify makes use of cloud computing, sure, but they are not the same thing.

                                                    1. 3

                                                      All of the retrospectives of StackOverflow’s culture clearly state that asking questions effectively is a skill. They then repeat the same few bullet points about ensuring that questions are within the scope of the site, provide enough information, aren’t duplicates, etc. etc. What they don’t mention is that this will have approximately zero impact on your question getting answered. As far as I’ve been able to tell over the past decade on StackOverflow, what leads to your question getting answered is the size and popularity of the technology or programming language your question relates to. Ask a Javascript question? You’ll have a dozen replies almost before you hit submit, even if your question is poorly formed and has been asked a million times before. Ask a question about a relatively obscure technology, and it’s very possible that your question will never be answered.

                                                      1. 2

                                                        You can blame this on the network effect, or simply on probability. What’s the chance that an expert in an obscure domain will answer the question on StackOverflow? It would be better to send that person an email directly. There are too many expectations of StackOverflow, in my opinion.

                                                        1. 4

                                                          I think you can also blame it on the platform. There is no reward in answering a question that will be seen by 1 person a year.

                                                          1. 1

                                                            Sure, there is a bit of that. But the questions I’m referring to aren’t that obscure; they’re mostly about things like Erlang or F#. That’s exactly where StackOverflow ought to shine, because there aren’t a whole lot of resources for these languages, whereas JavaScript or Python tutorials are a dime a dozen.

                                                          2. 1

                                                            Ask a Javascript question? You’ll have a dozen replies almost before you hit submit, even if your question is poorly formed and has been asked a million times before.

                                                            I played with this briefly - if you’re quick enough you can get a couple of upvotes, the answer tick and the consequent dopamine hit before mods get around to marking it as a dupe. It does get boring very quickly though.

                                                          1. 2

                                                            The subject of how to ask questions and how can someone learn to ask better questions is something that has been bugging me lately, mostly because of personal reasons (mentoring and teaching). Does anyone have more articles on this topic, other than the classic ESR “how to ask questions”?

                                                            1. 2

                                                              Lately I’ve noticed that “how to ask questions” in a domain seems to be one of the best metrics for expertise. When you’ve learned the language of your topic well enough to form the kinds of questions StackOverflow prefers, you often won’t need StackOverflow - and when you do, you’ll also have the bedrock of clear language and details that help you get a response when there are other experts around.

                                                              Unfortunately, I’m not sure I have a good answer that generalizes well - and also no articles. I’m honestly not sure question-asking is a skill that generalizes between domains other than at very high levels of abstraction like “respect the readers’ time”, “show evidence of the problem and what you’ve done”, and “be humble about the whole situation”.

                                                              1. 2

                                                                I’ve liked Simon Tatham’s guide to reporting bugs, because of the overlap.

                                                                1. 1

                                                                  In my experience, if you want someone to learn how to ask good questions, have them answer questions. They will quickly figure it out.

                                                                1. 4

                                                                  In my opinion, the big change that agile brought to the table was the formalization of trial-and-error. It assumes that we will not get it right the first time, so instead of adding more documentation we simply build it, test it, and make it fail fast. And finally (and most importantly) we learn from it and start over.

                                                                  As a developer with no experience in project management, my suggestion would be to listen, dig deep into the reasons why they think they need what they are asking for, and once you have an idea build a prototype and test it quickly to find where you got it wrong.

                                                                  In other words, optimize for failure :)

                                                                  1. 3

                                                                    In retrospect it’s kind of amazing how quickly we moved from an Internet with no “like” counts (the golden age of blogging) to an Internet where it’s very difficult to find any community where “like” counts or upvotes are not a core part of the system. Even indie sites like Lobste.rs or Metafilter that eschew a lot of the apparatus of the modern Internet incorporate this very quantitative approach to community and social interaction.

                                                                    1. 2

                                                                      Yes. The quieter, less-evaluative Internet was hijacked by one of addictive narcissism.

                                                                      1. 2

                                                                        After writing my earlier comment I realized that there is one type of online community I participate in that is completely free of likes/voting/ranking/quantitative anything: mailing lists.

                                                                        It’s probably not a coincidence that I love mailing lists, while people whose Internet experience started even a few years later than mine did seem to really, really hate them. I wonder if there is a real generational (or internet-generational) divide here, or if I’m just an outlier.

                                                                        1. 2

                                                                          It’s probably not a coincidence that I love mailing lists, while people whose Internet experience started even a few years later than mine did seem to really, really hate them. I wonder if there is a real generational (or internet-generational) divide here, or if I’m just an outlier.

                                                                          As a guy who first got an ISP account in 1993, I can honestly say that I generally dislike mailing lists (like most people, I guess). I always think of them as a poor man’s Usenet; I would much rather just hop on tin(1) and read the latest posts in my subscribed groups.

                                                                          Having said that, I am a member of some mailing lists that I genuinely enjoy. Though they are the exception, not the rule…

                                                                        2. 1

                                                                          It would be interesting to see an implementation of an upvote button that didn’t display the count to the users. You still get the “community” aspect of it, without the narcissistic side.

                                                                          1. 1

                                                                            HN does this.

                                                                            1. 2

                                                                              Right! For the comments. They still show the points for each story, which I think makes sense (or does it…?)

                                                                          2. 1

                                                                            Back then we had guestbooks and hit counters to provide the tingle of popularity that is oh so addictive.

                                                                            I remember when I first added commenting to my blog: getting ten or so meaningful comments within the first week of publishing a new post was a thrill to see. Those were different from likes, though, because they were actual meaningful interactions that often spawned discussion.

                                                                          1. 4

                                                                            I was confused to see random snippets of code until I realized these are part of a book: http://rtoal.github.io/ple/

                                                                            IMO it would have been much more interesting to link the book, which already links to the repo.

                                                                            1. 2

                                                                              The object literal pattern only supports switching on strings, because only strings can be object keys. By contrast, JavaScript’s built-in switch statement can switch on numbers too. Though with JavaScript’s type coercion, maybe you could use strings like '39' as object keys and access them with the index 39, but that feels unsafe to me.
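
                                                                              A minimal sketch of the dispatch pattern being discussed (the names are illustrative, not from any particular codebase):

                                                                              ```javascript
                                                                              // Dispatch on a string key with an object literal instead of a switch statement.
                                                                              const handlers = {
                                                                                start: () => 'starting',
                                                                                stop: () => 'stopping',
                                                                              };

                                                                              function dispatch(action) {
                                                                                const handler = handlers[action];
                                                                                // An explicit fallback plays the role of switch's `default` case.
                                                                                return handler ? handler() : 'unknown action';
                                                                              }

                                                                              console.log(dispatch('start')); // 'starting'
                                                                              console.log(dispatch(39));      // 'unknown action': 39 coerces to '39', which isn't a key
                                                                              ```

                                                                              Note that a numeric argument is coerced to a string before the lookup, which is exactly the coercion the comment above is wary of.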

                                                                              1. 2

                                                                                It might “feel” unsafe, but it’s perfectly fine. Here, try this:

                                                                                var x = {"39": 13.37}
                                                                                console.log(x[39])  // 13.37
                                                                                
                                                                                1. 3

                                                                                  I think this just further demonstrates why people don’t trust Javascript: to many reasonable, seasoned programmers, that shouldn’t work, but it does.

                                                                                  1. 3

                                                                                    This works because JavaScript converts object keys to strings before using them:

                                                                                    > a = 'key';  // String
                                                                                    > b = 4;  // Number
                                                                                    > c = {};  // Object
                                                                                    > d = {};
                                                                                    > d[a] = 1;
                                                                                    > d[b] = 2;
                                                                                    > d[c] = 3;
                                                                                    > d
                                                                                    { '4': 2,
                                                                                      key: 1,
                                                                                      '[object Object]': 3 }
                                                                                    

                                                                                    This means that even though you can use objects as keys, they will all be converted to the string '[object Object]'.
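
                                                                                    To see the collision concretely — and, as a side note, ES2015’s Map compares keys by identity rather than stringifying them, so it avoids the problem:

                                                                                    ```javascript
                                                                                    // Plain objects stringify their keys, so two distinct objects collide:
                                                                                    const a = {};
                                                                                    const b = {};
                                                                                    const obj = {};
                                                                                    obj[a] = 1;        // stored under '[object Object]'
                                                                                    obj[b] = 2;        // overwrites the same key
                                                                                    console.log(obj[a]);           // 2
                                                                                    console.log(Object.keys(obj)); // [ '[object Object]' ]

                                                                                    // A Map keeps object keys distinct (no string conversion):
                                                                                    const m = new Map();
                                                                                    m.set(a, 1);
                                                                                    m.set(b, 2);
                                                                                    console.log(m.get(a)); // 1
                                                                                    console.log(m.get(b)); // 2
                                                                                    ```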