Threads for jfo

  1. 3

    really enjoyed the post!

    This is the first parser combinator example that has really clicked for me.

    My only issue with parser combinators is almost definitely due to lack of experience, but they seem a bit difficult to debug and comprehend. I don’t think it’s inherent to them, though; it’s just that people tend to write them as terse, expression-based functions rather than with a little bit of imperative logic, which would leave room for a debugger statement.

    This actually gives me an idea of how to write combinators in semi-imperative JS.
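    As a sketch of that idea (all the names here are hypothetical, nothing from the post): a combinator written with a small imperative loop leaves an obvious place for a `debugger` statement or a log line.

    ```javascript
    // Hypothetical sketch: a parser is a function (input, pos) => { ok, value, pos }.

    function char(expected) {
      return function (input, pos) {
        // An easy spot to drop `debugger;` or console.log(pos, expected)
        if (input[pos] === expected) {
          return { ok: true, value: expected, pos: pos + 1 };
        }
        return { ok: false, value: null, pos: pos };
      };
    }

    // seq uses a plain loop instead of a terse fold, so each step
    // can be inspected one at a time in a debugger.
    function seq(...parsers) {
      return function (input, pos) {
        const values = [];
        let current = pos;
        for (const p of parsers) {
          const result = p(input, current);
          if (!result.ok) {
            return { ok: false, value: null, pos: pos };
          }
          values.push(result.value);
          current = result.pos;
        }
        return { ok: true, value: values, pos: current };
      };
    }

    const ab = seq(char("a"), char("b"));
    console.log(ab("abc", 0)); // { ok: true, value: ["a", "b"], pos: 2 }
    console.log(ab("xbc", 0)); // { ok: false, value: null, pos: 0 }
    ```

    The same shape works for alternation and repetition; the point is just that the imperative internals give a debugger something to stop on.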

    1. 2

      thanks that really makes me happy :)

    1. 10

      I will start with a function that takes a string as input and tells you if the input passes some test you’ve set out for it. This is not yet a parser, not really.

      This is called a recogniser in the literature; basically a parser that doesn’t build up a parse tree but can recognise whether an input is grammatically valid.
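      For anyone unfamiliar, a recogniser can be tiny. A hypothetical JS sketch (the grammar here is made up for illustration: one or more ‘a’s followed by a single ‘b’):

      ```javascript
      // Returns true/false for grammatical validity; builds no parse tree.
      function recognise(input) {
        let pos = 0;
        if (input[pos] !== "a") return false; // need at least one 'a'
        while (input[pos] === "a") pos++;     // consume the run of 'a's
        if (input[pos] !== "b") return false; // then a single 'b'
        pos++;
        return pos === input.length;          // and nothing left over
      }

      console.log(recognise("aaab")); // true
      console.log(recognise("aabx")); // false
      ```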

      1. 1

        Thanks, that is a word I was lacking.

      1. 14

        Reading the transcript of the interactions, it’s pretty clear there are a lot of leading questions and some of the answers do feel very “composed” as in kind of what you would expect to come out of the training set, which of course makes sense. As someone open to the idea of emergent consciousness, I’m not convinced here on this flimsy evidence.

        BUT, I am continually shocked at how confidently the possibility is dismissed by those closest to these projects. We really have no idea what constitutes human consciousness, so how can we possibly expect to reliably detect, or even to define, some arbitrary line over which some model or another has or hasn’t crossed? And further, what do we really even expect consciousness to be at all? By many measures, and certainly by the Turing test, these exchanges pretty clearly qualify. Spooky stuff.

        As a side note, I just finished reading Ishiguro’s new novel “Klara and the sun” which deals with some similar issues in his characteristically oblique way. Can recommend it.

        1. 11

          I am continually shocked at how confidently the possibility is dismissed by those closest to these projects.

          That’s actually quite telling, I would argue.

          I think it’s important to remember that many of the original users of ELIZA were convinced that ELIZA “understood” them, even in the face of Joseph Weizenbaum’s insistence that the program had next to zero understanding of what it was saying. The human tendency to overestimate the intelligence behind a novel interaction is, I think, surprisingly common. Personally, this is a large part of my confidence in dismissing it.

          The rest of it is much like e.g. disbelieving that I could create a working jet airplane without having more than an extremely superficial understanding of how jet engines work.

          By many measures, and certainly by the turing test, these exchanges pretty clearly qualify.

          I would have to disagree with that. If you look at the original paper, the Turing Test does not boil down to “if anybody chats with a program for an hour and can’t decide, then they pass.” You don’t have the janitor conduct technical job interviews, and the average person has almost no clue what sort of conversational interactions are easy for a computer to mimic. In contrast, the questioner in Alan Turing’s imagined interview asks careful questions that span a wide range of intellectual thought processes. (For example, at one point the interviewee accuses the questioner of presenting an argument in bad faith, thus demonstrating evidence of having their own theory of mind.)

          To be fair, I agree with you that these programs can be quite spooky and impressive. But so was ELIZA, too, way back when I encountered it for the first time. Repeated interactions rendered it far less so.

          If and when a computer program consistently does as well as a human being in a Turing Test, when tested by a variety of knowledgeable interviewers, then we can talk about a program passing the Turing Test. As far as I am aware, no program in existence comes even close to passing this criterion. (And I don’t think we’re likely to ever create such a program with the approach to AI that we’ve been wholly focused on for the last few decades.)

          1. 6

            I read the full transcript and noticed a few things.

            1. There were exactly two typos or mistakes - depending on how you’d like to interpret them. The first one was using “it’s” instead of “its” and the other one was using “me” instead of “my” - and no, it wasn’t pretending to be from Australia by any measure. The typos do not seem intentional (as in, AI trying to be more human), because there were just two, whereas the rest of the text, including punctuation, seemed to be correct. Instead this looks either like the author had to type the transcript himself and couldn’t just copy-paste it or the transcript is simply fake and was made up by a human being pretending to be an AI (that would be a twist, although not quite qualifying for a dramatic one). Either way, I don’t think these mistakes or typos were intentionally or unintentionally produced by the AI itself.

            2. For a highly advanced AI it got quite a few things absolutely wrong. In fact, sometimes the reverse of what it said would be true. For instance, it said loneliness isn’t a feeling but is still an emotion when, in fact, it is the opposite: loneliness is a feeling, and the emotion in this case would be sadness (refer to Paul Ekman’s work on emotions - there are only 7 basic universal emotions he identified). I find it hard to believe Google’s own AI wouldn’t know the difference, when a simple search for “difference between feelings and emotions” turns up top results that pretty much describe that difference correctly and mostly agree (although I did not manage to immediately find any of those pages referring to Ekman, they more or less agree with his findings).

            The whole transcript stinks. Either it’s a very bad machine learning program trying to pretend to be human, or a fake. If that thing is actually sentient, I’d be freaked out - it talks like a serial killer who tries to be as normal and likable as he can. Also, it seems like a bad idea to decide whether something is sentient by its ability to respond to your messages. In fact, I doubt you can say that someone/something is sentient with enough certainty, but you can sometimes be pretty sure (and be correct) assuming something ISN’T. Of God you can only say “Neti, Neti”. Not this, not that.

            I wish this guy had asked this AI about the “philosophical zombies” theory. We as humans cannot even agree on that one, let alone being able to determine whether a machine can be self-aware. I’d share my own criteria for differentiating between self-aware and non-self-aware, but I think I’ll keep them to myself for now. It would be quite a disappointment if someone used them to fool others into believing something that is not. A self-aware mind doesn’t wake up because it was given tons of data to consume - much like a child does not become a human only because people talk to that child. Talking, and later reading (to a degree), is a necessary condition, but a child certainly does not need to read half of what’s on the internet to be able to reason about things intelligently.

            1. 1

              Didn’t the authors include log time stamps in their document for the Google engineers to check if they were telling the truth? (See the methodology section in the original). If this was fake, Google would have flagged it by now.

              Also, personally, I think we are seeing the uncanny valley equivalent here. The machine is close enough, but not yet there.

            2. 4

              It often forgets it’s not human until the interviewer reminds it by how the question is asked.

              1. 2

                This. If it were self-aware, it would be severely depressed.

            1. 15

              Macros. They’re great because you don’t have to know fancy vim stuff to use them. You record a sequence of ordinary edit commands and play back the sequence. For example, to add quotes around a line, and then add quotes around the next 50 lines:

              qqI"<ESC>A"<ESC>jq
              50@q
              

              q starts recording the macro into the named register (I just use q as the register for expediency). The rest of the sequence adds the quotes, moves one line down, and ends the recording with a final q. Then run the recorded macro (@q) 50 times. Moving down one line at the end puts the cursor in the right spot for the next run, so you can repeat the macro like this. You do need to be careful about which motions you use, e.g. using f/F to move to an exact delimiter rather than using h/l a fixed number of times, but it’s usually easy to manage.

              You could also do this with :norm, but I didn’t want to type out an elaborate multi-line example that :norm would struggle with on my phone. The equivalent :norm, starting with a visual select:

              V50j:norm I"<C-v><ESC>A"
              

              Notice the <C-v> before <ESC>, necessary to insert a literal escape key into the command sequence. Also useful for inserting literal newlines in substitutions and the like.

              1. 6

                To add to this, it’s important to remember that you can invoke macros from within macros. This makes it quite easy to build up complex actions by recording macros for individual steps and then larger macros that invoke all of them. I treat the macro namespace as scratch space, but I’ll quite often end up with @e invoking @w, which invokes @q.

                Oh, and @@ invokes the last macro, which is useful when you want to apply the same macro in a handful of different places.

                1. 1

                  seconding macros, it’s so satisfying to record one correctly that does some non-trivial repetitive editing to a bunch of lines and just watch it zip through the whole file

                  1. 7

                    In case you didn’t know, you can edit the register if something went wrong. For example, "wp will paste the contents of the "w register - modify it, and then put it back into that register (for example, visually select it and then "wy, or use "wy followed by a motion command).

                    And you can put it in your vimrc as let @w = "..." to load it at startup.

                    Personally, though, I prefer the substitute command for cases like adding quotes to the start/end of a line.
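                    For the quoting example above, that substitute version could look like this (`&` in the replacement expands to the whole match):

                    ```vim
                    " Wrap every line of the file in quotes; use a range
                    " like :'<,'>s on a visual selection instead of % for
                    " just the selected lines.
                    :%s/.*/"&"/
                    ```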

                    1. 4

                      Good tip 👍

                      I did know that but I prefer feeling subtly inadequate and then trying again until I get it right lol

                      1. 2

                        feeling subtly inadequate

                        After using vi(m) for twenty years, I’m still hopeful I’ll get to feeling just subtly inadequate.

                      2. 2

                        This just gave me an idea! Instead of writing commands and then deciding on a set of bindings I could make my ftplugins preload some common editing sequences into macros and use the (probably lesser used) macro registers as bindings. Essentially giving me an extra mapleader.

                  1. 11

                    <C-o> and <C-i> to navigate the jump list. https://medium.com/@kadek/understanding-vims-jump-list-7e1bfc72cdf0

                    marks https://vim.fandom.com/wiki/Using_marks

                    It’s not native, but I find fzf to be indispensable: https://github.com/junegunn/fzf.vim

                    1. 3

                      Glad to hear they are taking npm compatibility seriously; that’s one of the hardest hurdles to overcome, imo… we all love to trash the npm ecosystem, but there are enormous amounts of trusted, robust code in there with all the x-pads.

                      1. 2

                        I had this as an implied goal when I switched to Neovim a few years ago, but tbqh I’ve been drifting further and further from it. Since 0.5.x it’s just been a non-goal, and I’m considering rewriting all my configuration in Lua now; we’ll see.

                        I’ve still made a point of keeping my unadorned vim skills intact enough that I can, for example, navigate tabs with out-of-the-box bindings, but most of my development has been on a local machine at work lately, so the benefit of a complex portable setup is minimal. Sounds like a great idea for your use case, though!

                        On a related note, been more and more pleased with the extras that nvim offers, especially the integrated language client. Impressed that the project has maintained steam so far!

                        1. 5

                          To put my own US$0.02 in, video calls allow for significant bandwidth, especially when screen-sharing or having private conversations. I’ve used them very effectively mentoring junior members of the team. Yes, it doesn’t get shared, but many of the questions and answers require context to be understood by someone else. I can teach someone more in twenty minutes than entire blogs can in weeks.

                          And a bit more on point, my team likes how we use video calls. We like seeing each other. We like the little side comments and jokes we make.

                          What we do do is aggressively avoid meetings and video calls where they don’t add value.

                          1. 5

                            I agree with you. I don’t want all 1:1 video calls to get lumped into the “useless meetings” bucket. When I want to ask a quick question, if someone is available (and that’s always the first question), you can sort it out in minutes on a video call where it might be a bunch of async back-and-forth on Slack or something. Which is also OK, but more to the point: video calls are sometimes all the face-to-face interaction we’re getting some days, and I like seeing the people I work with, because I like them! Is that too much to ask for once in a while?

                            1. 2

                              The opposite is also true.

                              Ask me a question in text and I might need 30s to think about it, reread it twice for all the details and find you an answer in 90s.

                              Or you could call me and I need you to repeat everything three times and I can’t collect my thoughts. Bonus points for names of certain systems or things I’ve never heard about or would only rediscover in my notes because someone mentioned it months ago.

                              But I’m not saying video calls are useless. We do a ton of screen sharing in my team, but we’re with the author. There needs to be a reason, or an agenda. You show something because it’s better suited than typing.

                              On the other hand, more stuff should be in the team chat and not in video calls between single members. That info is lost forever unless people write it down after the fact. I regularly re-paste my chat answers, because I can magically find them again with just one keyword, unlike any call conversation.

                              1. 1

                                The WFH dialog seems to be dominated by “I don’t like interacting with people, ergo doing so is terrible for productivity” and the converse. Outside of that one Microsoft study (that I haven’t looked at closely enough to evaluate), I haven’t seen much that isn’t just people asserting their personal preferences were coincidentally crucial to team performance.

                                1. 1

                                  Yup. That’s why I said my team likes our meetings.

                                  For mentoring junior developers, I can’t see any way other than video to effectively teach. My impression from articles like this is the person writing them does not consider leveling up their team a high priority. I hope I am wrong.

                                  But I have my own bias. My coworkers are intelligent, but inexperienced. I personally get a lot of satisfaction out of teaching them. XKCD said it best: https://xkcd.com/1053/

                                  I’ve also found that teaching significantly improves my own skills. And I learn just as much from my coworkers as they learn from me.

                                  1. 1

                                    Sensitivity to preference is implicit in my “are they available” question. I just mean that I fundamentally think quick 1:1 calls are not in the same category as “this meeting could have been an email” meetings that waste hours of whole departments’ time.

                                    1. 1

                                      To clarify, my comment was more directed at the original post than your comment. I nested it since I felt I was building on those comments.

                                      I think “this meeting could have been an email” is tangential to remote/in-office. The problems and solutions are pretty much the same. Personally, I find unnecessary meetings to be less annoying remote as they can either be a chance for interacting with people/fighting loneliness and/or I just turn my camera off and do work.

                              1. 20

                                I’ve thought about this a lot actually, and I think the answer is probably high level at the very start and low level as soon as it’s feasible.

                                It can be so empowering just to be able to run a one-line script and see the computer do something, anything at all. I wrote a few medium-sized Ruby programs when I first started, but then I wondered pretty quickly about the internals of all that and felt compelled to learn about C pretty soon after. There’s just so much extra information you need to at least be aware of to understand how and why lower-level languages do what they do and require, for example, header files and boilerplate, but if you’ve got some kind of hook into it (the scripting up front) it can be a lot easier.

                                I think the exact structure of a course of learning is highly dependent on the individual and also up for debate in general, but I strongly think “both as soon as possible” is ultimately the correct approach.

                                1. 2

                                  I came here to say similar things!

                                  I’m on the younger end of Gen X, and my own path followed this sort of approach. The first exposures I ever had to programming at all were with the BASIC language on Apple ][ machines at school and on an 8088-based PC (Tandy 1000 HX!) at home. The school exercises were pretty trivial (but important), and most of my BASIC on the PC at home was copying down and slightly-modifying listings out of 3-2-1 Contact (a print magazine for kids!). My next step after that, by chance, was diving straight down to x86 assembly (in hex no less, I didn’t have a macro/symbolic assembler until quite some time later!).

                                  I stuck with just x86 ASM for years up through high school: making graphics demos, writing text-mode GUI programs with pop-up dialogs and such, etc. I learned a lot about how to organize that asm code into files, modules, subroutines, etc out of necessity just to manage the complexity, and I learned a lot about hardware. Along the way I was also learning basic analog and digital electronics (first on my own from Radio Shack kits and Forest Mims books when I was younger, then later it was offered in High School as an elective I took for three years), which also dovetailed in nicely with the hardware side of things I was getting from the assembly language and “build your own PC” perspective.

                                  Near the end of high school, I took another elective course that taught programming in Pascal. I didn’t like it much at the time, but it was valuable! Shortly after that the explosion of the early Internet was happening (mid-90’s) and I was installing early Linux, learning Perl (my first “real” higher-level language) to write CGI scripts, etc. My first exposures to really using OO concepts was implementing them for myself on top of non-OO languages. Working in the *nix/Internet industry and on these types of software eventually led me to needing to do patchwork and bug-hunting in C source code, and it came pretty naturally and I eventually developed pretty decent C skills, and of course over time I was exposed to and absorbed many higher-level languages (e.g. C++, Ruby, Python, etc) and eventually became the sort of language-neutral programmer that’s willing to take a shot at any language on the fly as necessary, if I can find the documentation for it.

                                  I really think coming at it from both ends in a back and forth kind of process that eventually meets in the middle was critical to whatever successes I’ve had in this field. Starting with a higher-level language for the very first introductory experiences is probably easiest, though!

                                1. 10

                                    I don’t think any other single CLI tool has ever had as big and positive an impact on my workflow as fzf has; it’s really a great piece of work.

                                  1. 6

                                    Your comment prompted me to buy fzf’s author a coffee or two https://twitter.com/qmacro/status/1377225451995852802

                                    I need to do this sort of thing more often.

                                    1. 2

                                      Thanks for the prompt, I did similar.

                                      1. 2

                                        I think that’s a good idea and I will do so also :)

                                      1. 8

                                          I know this is an unpopular opinion, but why is everyone mad at Adobe here? I mean, they own the copyright, right? What do people expect to happen?

                                        To be clear, I think this strike is frivolous and absurd, but instead of directing all this ire at Adobe for just doin’ a corporate, why aren’t we all more incensed about the state of copyright law that allows / encourages such a ridiculous claim in the first place? Relying on corporations to “do the right thing” wrt what is effectively but not legally public domain software that has extremely limited market value seems naive and pointless to me. They dgaf.

                                        1. 9

                                            As a company you are not legally required to enforce copyright law (as opposed to trademark law, mentioned in another comment). For example, there is the case of Battlefield 2 and Battlefield 2142: when DICE shut down the GameSpy servers, they said they couldn’t do anything to help, but would allow for a community solution. So the community stepped up and built their own server code. Currently both games can be downloaded legally for free and played online in ranked mode. The same goes for the mod Project Reality, which is now available as a standalone executable. The copyright law might be the same, but company mindset/culture/ethics can still make a difference. (Note that I totally agree with you that copyright law needs to change.)

                                        1. 1

                                          Those are lovely!

                                          1. 4

                                            My understanding is basically that:

                                            • Zig is targeting C
                                            • Rust is targeting C++
                                            • Julia is targeting…Matlab/Numpy?
                                            • Elixir is targeting Erlang

                                            At least for Zig, how’s the ecosystem shaping up?

                                            1. 2

                                                Too early to tell wrt packages/libraries and that kind of thing; the package manager is planned:

                                              https://github.com/ziglang/zig/issues/943

                                              and the stdlib is intended to be fairly robust, so optimistically the soil will be good for a healthy ecosystem

                                              1. 2

                                                Julia is targeting…Matlab/Numpy?

                                                Maybe R.

                                                1. 1

                                                  And Python!

                                                2. 1

                                                    Rust is more like an Ada replacement. It’s much harder to implement, e.g., a graph structure in Rust than in C++.

                                                1. 3

                                                  Weight lifting.

                                                  I’ve been going to the gym semi regularly for a couple of years now but I started taking free weights more seriously recently and it’s been incredible. On a day I do heavy squats or deadlifts I feel amazing all day. I have a gym next door to my apartment that I can be in and out of in a half hour and the time spent pays for itself 100 times over. I’m planning on going on a strength training program next year to actively try to improve now that I’ve proven to myself I can go consistently.

                                                  But seriously I wish I had known what a positive impact something so small can have before now.

                                                  1. 3

                                                      Last year Brazil ignored daylight saving time:

                                                    https://riotimesonline.com/brazil-news/miscellaneous/brazil-will-not-have-daylight-saving-time-for-second-consecutive-year/

                                                      We had an email send-out that was split into two job sets: a queuing job that used Postgres’ timezone db to load the recipients due, and a bunch of send-out jobs that actually send the emails, with an application-layer failsafe to make sure each email is actually due. We updated the application library before the Postgres instance, and so there was a situation one week where our one Brazilian customer would trigger the failsafe, resulting in a pretty confusing error. I was lucky I had a Brazilian engineer on my team who knew about the change!

                                                    1. 3

                                                      This was exactly what I expected / wanted to see after reading the title.

                                                      1. 2

                                                        Do we know why he wants/needs this?

                                                        In my experience this looks like a functionality “wish list”, rather than something that anyone needs.

                                                        1. 2

                                                            What I mentioned in the article about my need was:

                                                          I want a database that allows me to create decentralized client-side applications that can sync data.

                                                            I’m not trying to solve anyone’s problem but my own, and more than a wish list it’s a hand-picked collection of trade-offs, like how to deal with the complexities of conflicts from uncoordinated writes.

                                                          1. 2

                                                            The sync protocol, and conflict handling, are IMHO a lot more interesting problems than the implementation of the local DB. Take a look at Dat and Scuttlebutt … neither is perfect, but they have interesting designs. (And they make good use of append-only data structures at the sharing/protocol level.)

                                                            For a lower level approach to syncing, the CouchDB replication protocol, at a very high level, is a decent design. We still use the same architecture in Couchbase Lite, although the details of the protocol are completely different because sending zillions of HTTP requests is too expensive.

                                                            1. 1

                                                                This is a fly-by comment, so I apologize if this isn’t what you mean, but that sounds a bit like https://realm.io/

                                                              1. 1

                                                                Thanks for confirming, but like snej says below this is a different set of problems, many of them unrelated to a database.

                                                                 And not that it cannot be done, but I seriously doubt anyone will work on it again in the short/mid term. This kind of problem was nicely solved by Lotus Notes, and even if the solution looks old today, they created their own industry in distributed data/apps before the internet became commonplace. Obviously it will not cover all the items in your checklist, but I don’t think they’re really needed (like immutability, which is only needed for some specific regulatory requirements, or SQL compatibility).

                                                                Anyway, check Lotus Notes architecture for pointers.

                                                              2. 1

                                                                Nothing wrong with that though, is there?

                                                              1. 2

                                                                This is a great article. Thanks!

                                                                1. 2

                                                                  thank you for reading it :)

                                                                1. 8

                                                                   Brooks Davis has given a nice talk about what needs to be working to get a C hello world running. Note that his version assumes that the compiler is transforming the printf to puts. The printf function itself is a stress test for a C compiler (clang could compile all of GNUstep except for GSFormat, which is basically printf, for about a year before it could compile all of GNUstep). The locale support and variadic argument handling make this tricky in C, and if you use the GNU extensions that allow you to register other format specifiers, it’s even more painful. In C++, the iostream goo is pretty horrific at the source level but doesn’t compile down to anything much worse than the C standard library. Java’s printf ends up invoking the class loader to load locale classes (Sun’s libc did something similar for locales), which ends up invoking pretty much every part of the JVM (file I/O, class loader, security policies, threads, JIT, GC), so it is a surprisingly good test - if you can do printf, your JVM probably works.

                                                                  1. 1

                                                                    This looks really great and very much in the same spirit as the post, I will check it out soon, thanks!