1. 14

    I think it’s good to give people feedback when they’re being harsh/unkind but I don’t think it’s worth a downvote.

    A downvote to me is like saying “this comment doesn’t belong here”, as opposed to “thank you for your comment but I wish you were kinder in the way you wrote it”.

    1. 4

      This is a good point. I agree that my downvote in that situation would be “please express that differently”, not “don’t express that”.

      On the other hand, in-thread discussion seems to risk derailing the discussion, and is a bit too explicit of a lecturing stance. (I imagine I might react more positively to some relatively subtle expression of “I really didn’t like your tone” than to being called out publicly.)

      1. 4

        “in-thread discussion seems to risk derailing the discussion, and is a bit too explicit of a lecturing stance.”

        In the political metas, most of the community already voted in favor of such comments in any thread where they thought something needed calling out. They also have been doing that for years now. So, this risk is already standard practice here.

        Might as well follow that practice by simply pointing out the problem in a civil way. They’ll have a chance to improve.

        1. 3

          What about some sort of flagging mechanism for tone? Wouldn’t have to be restricted to abrasiveness; could easily include things like sexist / racist terms, etc. It’d be orthogonal to correctness (i.e. regular upvotes / downvotes), and could be filtered on separately.

      1. 3

        I think it’s good to give people feedback when they’re being harsh/unkind but I don’t think it’s worth a downvote.

        A downvote to me is like saying “this comment doesn’t belong here”, as opposed to “thank you for your comment but I wish you were kinder in the way you wrote it”.

        1. 1

          I’m in the same boat as you. I think it is worthwhile feedback, but don’t know if I think it should count as an actual downvote.

        1. 4

          Learn C++ in 21 Days by Jesse Liberty is one of my all-time favorite programming books, by one of my all-time favorite technical writers. As with all book series (“X in Action”, “Learning Y”, “Z for Dummies”), quality varies from one book to the next.

          These series names are marketing’s way of indicating the size of the book (short, medium, long), the assumed level of experience (beginner, intermediate, advanced), and the way it is meant to be consumed (start to finish, read what interests you, cookbook, reference, etc.).

          I’m not fond of the marketing around the “For Dummies” series of books, but I’ve found a couple that were great at introducing me to (non-technical) subjects I was vaguely interested in. If you learn what the marketing is trying to convey and evaluate each book on its technical merits, your choice of books will expand, and that’s a good thing.


          The novice phase of learning anything is unavoidable. Even if you write a 10,000-page book meant to be read over 10 years, you will still have novices who’ve read the first few hundred pages and “know enough to be dangerous”. The problem is not in the learning materials, it is in people’s poor ability to evaluate themselves, especially when they are novices. The solution to this problem is external evaluation. So before you allow people to write potentially dangerous code, you need to evaluate them through exams (in college), technical interviews, and supervised work.

          1. 11

            5 Whys is like a buggy DFS that visits only the first adjacent node, ignores the rest, and stops once the stack reaches a size of 5.
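
            A minimal sketch of the analogy (the causes() lookup is hypothetical, nothing from the article):

            def five_whys(problem, causes, depth=0):
                # Depth-limited walk that only ever follows the first cause it finds;
                # sibling causes are never visited, and a depth of 5 ends the search.
                children = causes(problem)
                if depth == 5 or not children:
                    return problem  # declared the "root cause"
                return five_whys(children[0], causes, depth + 1)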

            1. 3

              That’s what I got from the article too, even though it didn’t go so far as to suggest BFS as an alternative. It should be 1 Why And 4 Why Elses.

            1. 10

              Serve up AMP page to Google bots, and non-AMP to everyone else.

              1. 7

                When you visit an AMP page from Google’s results page, it’ll have a google.com url. You can’t get the higher ranking without serving the AMP version.

                1. 4

                  Is this really doable? I.e. do you have experience / data that shows that this is something you can do without getting penalized?

                  1. 3

                    It’s hard to do, because Google’s bots crawling for AMP won’t tell you whether a request is coming from them or from a regular user.

                    1. 2

                      Agreed. It’s hard, but if we don’t fight back Google’s going to hoover up everything.

                      I, for one, will not sit idly by and let the free and open Internet die. I remember the walled gardens of the ’80s, and the siloed access of the ’90s with AOL.

                      1. 1

                        I dunno. I still miss getting free frisbees in the mail every month.

                  2. 4

                    That won’t work — Google search users will get the AMP page anyway. Part of Google’s AMP implementation is that you no longer host the site yourself.

                    1. 2

                        That part would work fine. People are talking about serving AMP where it is not necessary, not where it is expected.

                  1. 1

                    I was baffled at first but it makes sense. This is the sort of thing more appropriate for an extension. No reason to have it as a browser feature. And as mentioned by many, JS is a lot more powerful and convenient for tracking. I doubt that anyone is using this in the real world.

                    1. 8

                      If you haven’t come across it, I also highly recommend Bob Nystrom’s book Crafting Interpreters, available for free. It has two parts: first he goes over building a tree-walking interpreter in Java, then he goes over building a bytecode compiler & VM in C.

                      This second part is still a work in progress, but he’s kept a strong pace; the last chapter was released about a week ago.

                      Thank you for the great write-up. I’m on a similar learning path; I really enjoyed it, and it got me excited to write my own compiler as well!

                      1. 6

                        I’m hoping to get the chapters done by the end of 2019. If you’re impatient, all of the code for the entire book is already done. (In fact, I wrote all of the code and carefully split it into chapters before I wrote the first sentence of prose.)

                        You can see it all here: https://github.com/munificent/craftinginterpreters/tree/master/c

                        1. 2

                          Thanks! I had briefly noticed Crafting Interpreters before, but I’m glad to hear it’s worth a second look.

                          Thanks for the kind words, and I hope you will continue on your journey!

                        1. 7

                          In 2005 WordPress themes were the hottest internet commodity, but they were all designed for left-to-right languages and were using CSS.

                          Before then, it was easy to flip an entire site to be right-to-left with <html dir="rtl">: designs were based on HTML tables and would correctly get flipped horizontally by that attribute. However, CSS-based designs were a regression from that point of view because they were filled with hard-coded directions like margin-left and padding-right.

                          I wrote a Python script full of regexes that converted all of those CSS properties, including combined ones like margin: 1px 2px 3px 4px;, and put it up as an online converter. It’s been running since then. It briefly went offline a couple of years ago and I got a lot of emails asking me to fix it. I haven’t had to touch it since I wrote it, and I wouldn’t dare change any of it by now.
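
                          For a sense of the kind of substitutions involved, here is a minimal sketch in the same spirit (not the actual converter; the property list and the shorthand flip are simplified):

                          import re

                          def flip_css(css):
                              # Swap left/right in directional properties (margin-left <-> margin-right, etc.).
                              css = re.sub(r'\b(margin|padding|border)-(left|right)\b',
                                           lambda m: m.group(1) + '-' + ('right' if m.group(2) == 'left' else 'left'),
                                           css)
                              # Flip four-value shorthands: "top right bottom left" becomes "top left bottom right".
                              css = re.sub(r'\b(margin|padding):\s*(\S+)\s+(\S+)\s+(\S+)\s+(\S+)\s*;',
                                           r'\1: \2 \5 \4 \3;', css)
                              # Flip standalone direction values such as float: left; or text-align: right;.
                              css = re.sub(r'\b(float|text-align|clear):\s*(left|right)\b',
                                           lambda m: m.group(1) + ': ' + ('right' if m.group(2) == 'left' else 'left'),
                                           css)
                              return css

                          print(flip_css("margin: 1px 2px 3px 4px; padding-left: 10px; float: left;"))
                          # margin: 1px 4px 3px 2px; padding-right: 10px; float: right;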

                          1. 7

                            This seems like more of an anti-feature to me. Maybe in limited uses it won’t be too bad?

                            <joke> Maybe the perl folks are starting to move to ruby now? </joke>

                            1. 6

                              This snippet is Kotlin (which had the opportunity to make it a reserved word from the get-go), but it’s applicable, and imho it’s a good example of how very readable, succinct code can come out of this:

                              nums.filter { it > 5 }.sortedBy { -it }.map { it * 3 }
                              

                              I’m a fan, at least. @1 is an uglier sigil to me, but that’s history.

                              1. 11

                                  I’ll argue that any feature that has ever been added to any programming language has at least a few good use cases. It’s not like language designers are adding features just for the craic; they do it to solve real problems.

                                The question isn’t so much “does this language feature make a certain type of problem easier to solve?”, but rather “does this solve enough problems to offset the costs of adding it to the language?”

                                  Adding features to languages comes with real costs. It will increase programmers’ cognitive load, it will make tools harder to write, it will make future language improvements/changes harder as features interact with features, etc.

                                In this particular case, I’m not so sure if it’s a good trade-off. The problem it solves is typing an explicit parameter (|a|). It strikes me as a small problem, at best.

                                1. 9

                                  They all seem like warts to paper over a lack of proper partial application.

                                  1. 4

                                      Partly, though you can use these variables to apply deeper than the first position.

                                  2. 6

                                      Swift has had $0, $1, etc. since 1.0. I thought I’d never use this syntax when I first saw it, but I was very wrong. Your example is exactly where it shines.

                                    On paper, it looks magical. But in practice, coming up with arbitrary names for a parameter is probably less clear and adds more cognitive load, including coming up with good names when writing the code. Here’s the same example with an explicit “good” parameter name:

                                      nums.filter { num -> num > 5 }.sortedBy { num -> -num }.map { num -> num * 3 }
                                    
                                    1. 5

                                        Oleg Kiselyov has an interesting take on the subject. I suppose Kotlin took this from Scala’s _.

                                    2. 2

                                        I think ‘limited uses’ is key. I expect we (team/employer) will adopt it, restricted to use in one-line blocks, enforced by a RuboCop rule.

                                      Haven’t seen Clojure mentioned yet in the comments, but that’s where I first encountered this kind of thing.

                                      1. 1

                                        That could potentially encourage people to write ‘smarter’ and more magical one-liners.

                                      2. 1

                                          Well, it’s really close to Perl’s $_[1]. But it doesn’t work in blocks if I recall correctly.

                                        I think some people really want Perl but are afraid to admit that.

                                      1. 4

                                          Honest question: Why are C programmers so keen on libraries being contained in a single source file? I guess it’s great if a library is simple and small, but a single file can also be very big…

                                        1. 3

                                          Probably because of the lack of a package and dependency management system.

                                          1. 2

                                              After my experience with old software and pip (specifically trying, and failing, to get OsChameleon up and running), I feel safer knowing that a tarball of my code will build and work forever given POSIX 2008 support and a good C compiler. Introducing a package management system into a programming language feels to me like a disgusting, half-baked replication of a problem that is already ideally solved for 99% of Linux systems.

                                        1. 4

                                          Very interesting read. Makes me think pure functional programming is a leaky abstraction.

                                          1. 6

                                            I dare say most abstractions become leaky if you push on them hard enough. :) Perhaps one dimension of the utility of an abstraction is just how hard you have to push before it leaks.

                                            1. 2

                                              Many of the problems they discuss are specific to Haskell and GHC, rather than being a general problem with all pure functional languages. Things like: they can’t inline loops, they have to copy a string in order to pass it to a foreign function. Even the problem with not being able to write low level code in a way that will be compiled and optimised predictably is fixable with the right language abstractions. (This is on my mind because I’m working on fixing some of these issues in my own pure functional language.)

                                              1. 2

                                                I get where you’re coming from, but from experience, Haskell is a lot leakier than others due to thunking and laziness. OCaml’s a functional language, but I don’t find it any leakier than, say, Python, because it’s strict and has a straightforward runtime. (Which isn’t to say Haskell’s bad/slow/whatever, just that the abstraction tower gets a lot higher the moment you pull laziness into the picture.)

                                              1. 7

                                                From the README:

                                                pydis is an experiment to disprove some of the falsehoods about performance and optimisation regarding software and interpreted languages in particular.

                                                Unfortunately many programmers [..] spend countless hours by making life harder for themselves in the name of marginal performance gains [..]

                                                The aim of this exercise is to prove that interpreted languages can be just as fast as C

                                                Okay, fair enough. But a SET operation is 80% the speed of Redis, a GET is 60%, and others are as low as 40%!

                                                Even the best case is not “marginal performance gains”. Perhaps the project will improve, but in its current state it seems to disprove the point it’s trying to make.

                                                That doesn’t mean I completely disagree with the point as such, but at the same time Python does come with a real measurable performance cost – which may be corrected for by being easier to develop faster algorithms with, which is why mercurial is faster than git – and attempting to deny that seems a bit strange to me.

                                                1. 10

                                                  I poked around a bit and he’s using the hiredis Python package to parse requests, which is a thin wrapper around the C library, so he’s not actually doing everything in Python. Also, his implementations aren’t functional matches: he doesn’t update any query data, for example. So the actual Python code looks faster than it really is here.
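
                                                  For context, the hiredis part looks roughly like this (a sketch of the hiredis Reader API, not pydis’s actual code; the bytes are just an example GET request in Redis’s RESP protocol):

                                                  import hiredis

                                                  reader = hiredis.Reader()
                                                  # Feed raw RESP bytes for a "GET foo" request; the actual parsing happens in C.
                                                  reader.feed(b"*2\r\n$3\r\nGET\r\n$3\r\nfoo\r\n")
                                                  print(reader.gets())  # [b'GET', b'foo']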

                                                  I’d cut him a bit of slack; it looks like he’s still early in Uni and probably doesn’t know much better.

                                                  Tangent: Python is a bad language for proving how fast you are; you’d probably want to try LuaJIT or K.

                                                  1. 4

                                                    Update: I raised an issue; it was closed as “invalid” because hiredis is part of the Python ecosystem. I don’t think this is an accurate benchmark or will become one.

                                                    1. 3

                                                      Look at it from the point of view of a developer writing a Python web app. The main takeaway may be that an in-app cache might be all you need.
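
                                                      As a sketch of that takeaway (standard library only, no Redis round-trip; the slow call is a stand-in):

                                                      from functools import lru_cache

                                                      @lru_cache(maxsize=1024)
                                                      def lookup(key):
                                                          # Stand-in for a slow database or API call; repeat calls
                                                          # are served from the in-process cache, no network hop.
                                                          return key.upper()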

                                                    2. 2

                                                    Totally agree. He says marginal, then in the same breath says a 40% drop in performance. 40% is not marginal, period.

                                                      1. 7

                                                        Getting 60% of obsessively-optimized C performance with idiomatic Python would be unbelievably good, like revolutionarily good. It’s much more likely he’s made a benchmarking mistake.

                                                        1. 1

                                                        Interesting theory regarding a benchmarking mistake. Isn’t it possible, though, that this is simply because the Python interpreter itself is obsessively-optimized C for running this sort of idiomatic Python code?

                                                        Okay, maybe not obsessively-optimized, but still pretty decent.

                                                          1. 2

                                                          It’d be revolutionary because in most other cases, Python gets nowhere near that close. At least in the benchmarks game you see it barely crack 1-10% of C’s speed.

                                                    1. 5

                                                        The most interesting problem to me is handling large orders, overlapping orders, and kitchen capacity. This must be a classic operations management problem, but it’s also very similar to the task of an instruction scheduler in a pipelined CPU.

                                                      1. 2

                                                        I hope all of this gets fixed when the Microsoft facelift is out.

                                                        1. 2

                                                          Either you use the word “fixed” with a fair share of sarcasm, or… “you must be new here” :-)

                                                          1. 1

                                                            it was sarcasm :-)

                                                            1. 1

                                                              you got me then :-)

                                                        1. 7

                                                          Almost all nontrivial filesystem usage is riddled with race conditions x_x

                                                          1. 7

                                                            This is why people want filesystems with snapshot support, like ZFS, BTRFS, APFS, and NTFS. That way, your application doesn’t have to know how to deal with files changing in the middle of being read.

                                                            Or, you could just use a transactional DBMS.

                                                            1. 6

                                                              SQLite: a safer, better filesystem.

                                                              1. 8

                                                                D. Richard Hipp (the creator and main author of SQLite) has said many times that the goal of SQLite is not to replace other databases but to replace broken file opening/writing.

                                                                1. 2

                                                                  filesystem or about what it takes for a proper FILE* = fopen(…) implementation?

                                                                  1. 5

                                                                    I’m just saying I trust SQLite to do storage correctly, esp. in the presence of faults, more than what most programmers, including me, roll by hand. It’s one of the rare examples where adding complexity to the TCB reduces risk. No larger analysis or claim… just that.
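
                                                                    A minimal sketch of that trade in practice (Python’s built-in sqlite3, assuming a simple key/value shape):

                                                                    import sqlite3

                                                                    db = sqlite3.connect("app_data.db")
                                                                    db.execute("CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")

                                                                    def put(key, value):
                                                                        # SQLite handles locking, atomicity, and crash recovery,
                                                                        # instead of a hand-rolled fopen/fwrite/rename dance.
                                                                        with db:  # commits on success, rolls back on error
                                                                            db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))

                                                                    def get(key):
                                                                        row = db.execute("SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
                                                                        return row[0] if row else None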

                                                                    1. 4

                                                                      I apologise for being a bit too dense in my reply. What I meant was principally the same as msingle pointed out, but with emphasis on the C view of “FILE” rather than the whole filesystem; if you follow the nitty-gritty details of actually getting something like the fopen/fwrite/… API to work reliably, which means accounting for concurrent access, locking, buffer management, integrity and so on, you end up with something quite similar to SQLite. Minus the SQL part.

                                                                      1. 2

                                                                        Glad you elaborated cuz that last part is basically how I think about it. Easier to let it handle all that than get it right myself on every app every time, esp in a hurry.

                                                                    2. 1

                                                                      Do you want fopen to give you a view of a file, frozen at the moment of opening? I can see the advantages, but also aren’t there use cases that would not work? And it’s immediately obvious it would not be trivial - imagine a program that leaves a file open for days, or a file that is now open 100 times at different moments.

                                                                      1. 1

                                                                          There are ways of dealing with this efficiently in modern file systems. They basically use copy-on-write and/or a log structure to efficiently allow multiple readers reading at different points in time in the file’s history without any of them locking the others for extended periods of time.

                                                                        1. 1

                                                                          Yes, I just wonder if there are some edge cases that are not easy to think of - and also how copy-on-write semantics works with files that are not block structured in storage (extents or flash file systems). And just in my role as old person, re that “modern file systems” thing - http://www.yodaiken.com/2016/01/25/the-auragen-file-system/

                                                                        2. 1

                                                                            In the spirit of keeping it as an acceptable “easy” file abstraction to avoid having to deal with the large number of edge cases, yes. It could possibly be done by some fringe use of current mechanisms via memfd_create (+ sealing, a Linuxism) and exposing it back to FILE* via open_memstream. I think I would personally prefer to do it by having normal mmap take (yet) another flag that would enable Copy-on-Write-like behaviour to avoid the truncate()+SIGBUS case, but I am not exactly sure.

                                                                          1. 1

                                                                              The question is whether the trade-off is worth it. After all, the problem here is backing up active files, which is a dumb idea in the first place.

                                                                      2. 2

                                                                        Now that I reread what I wrote, bringing up DBMS’s is kind of off-topic. A general-purpose backup tool obviously can’t force everything on the system to use SQLite. It should be able to just request a frozen (Copy-on-Write) view into the file to handle this, like all of the filesystems that I previously mentioned support.

                                                                        1. 2

                                                                            This requires your backup program to open the database and create a transaction to block changes. Most tar-like backup programs don’t do that.

                                                                          1. 1

                                                                              Yeah, that makes sense. I’m chalking it up to speed-reading while on my first cup of coffee. My bad.

                                                                            1. 4

                                                                              That’s okay. I like sqlite, use it a lot, but there’s been some exaggerated love recently. There was another thread on HN a few days ago that also somewhat incredibly assumed you could s/fopen/sqlite/ and everything would be magically fixed. I think this is pretty dangerous without understanding how sqlite actually achieves its consistency.

                                                                    1. 1

                                                                      From what I understood, if you attach an encrypted external HDD or open an encrypted virtual disk, then use QuickLook to preview file contents on the encrypted volume, the previews will be cached on the host machine’s HDD. So even after you unmount the encrypted drive, there will be some data left on the host machine’s HDD.

                                                                      If the main HDD belongs to a trusted machine with full-drive encryption, there’s no leak beyond that.

                                                                      This is relevant to people carrying around encrypted USB sticks and using them on untrusted machines.

                                                                      The title is clickbait-y and there’s an inflammatory, uninformative quote at the beginning of the story.

                                                                      1. 16

                                                                        The headline is a bit weird. The article is about “these apps are so popular, Apple went out of their way to ensure they still worked”. It’s not clear that the apps were buggy, except in the NSBundle unload case.

                                                                        1. 5

                                                                          To me it’s more accurate to say that Apple frameworks are buggy and older versions of those apps had to use workarounds and depend on the buggy behaviour until Apple decides to fix those bugs.

                                                                          I wouldn’t be surprised if Apple knew about some of the bugs from the developers of the apps during the beta period of a new OS. It doesn’t necessarily mean that Apple actively tests all of those apps as part of its OS QA routine.

                                                                          1. 4

                                                                          Yeah, to me the article read like “Apple considers these apps important enough that they go to extra lengths to ensure OS updates don’t break them”. The title here seems weirdly judgey and negative.

                                                                        1. 7

                                                                          There’s a great list of resources compiled by Steven Shaw. It’s very broad but divided into categories.

                                                                          1. 2

                                                                            That’s a great list, though now I’m definitely feeling paralyzed by choice haha

                                                                          1. 1

                                                                            I’m still betting that this will happen gradually. It makes sense to start with the smaller laptops which are already being beaten by Apple’s current ARM chips.

                                                                            1. 1

                                                                              I agree. Most ARM CPUs are not up to the challenge. The new MacBook has ARM chips, but they run the Touch Bar and a couple other functions. Not the main processor.

                                                                              1. 1

                                                                              My money is on a scaled-up ARM for a super-light notebook, followed by requirements that developers start shipping LLVM bitcode instead of fat binaries. Once that settles down it’ll go into all portables and probably a compute add-on (or MB-only replacement model) for the 2019 Mac Pro.

                                                                              1. 3

                                                                              From my read of the Bloomberg story, there’s no indication that Apple is planning to replace all Intel chips on all Macs, just that they’ll make some Macs with ARM chips, which pundits have been talking about for a while now as if it was only a matter of time. It’s still interesting to see, but I’m not sure the news is relevant to workstations with Xeons.