Threads for vrthra

  1. 1

    What effect will this have on Rust development (Syntax and idioms)? Does this mean that the syntax of the Rust version that gets merged into the Kernel will be more blessed? (i.e., when newer syntax/idioms are invented by the community at a later time?)

    1. 3

      Linux is likely to influence some small features. They have opinions about panicking and OOM handling, and Rust will very likely get a feature to disable floating-point support for Linux.

      However, I don’t expect this to make a noticeably different “kernel Rust” language. Rust already has a “no_std” flavor that is used for other OSes and embedded development.

    1. 14

      Reading the transcript of the interactions, it’s pretty clear there are a lot of leading questions and some of the answers do feel very “composed” as in kind of what you would expect to come out of the training set, which of course makes sense. As someone open to the idea of emergent consciousness, I’m not convinced here on this flimsy evidence.

      BUT, I am continually shocked at how confidently the possibility is dismissed by those closest to these projects. We really have no idea what constitutes human consciousness, so how can we possibly expect to reliably detect, or even to define, some arbitrary line over which some model or another has or hasn’t crossed? And further, what do we really even expect consciousness to be at all? By many measures, and certainly by the Turing test, these exchanges pretty clearly qualify. Spooky stuff.

      As a side note, I just finished reading Ishiguro’s new novel “Klara and the sun” which deals with some similar issues in his characteristically oblique way. Can recommend it.

      1. 11

        I am continually shocked at how confidently the possibility is dismissed by those closest to these projects.

        That’s actually quite telling, I would argue.

        I think it’s important to remember that many of the original users of ELIZA were convinced that ELIZA “understood” them, even in the face of Joseph Weizenbaum’s insistence that the program had next to zero understanding of what it was saying. The human tendency to overestimate the intelligence behind a novel interaction is, I think, surprisingly common. Personally, this is a large part of my confidence in dismissing it.

        The rest of it is much like e.g. disbelieving that I could create a working jet airplane without having more than an extremely superficial understanding of how jet engines work.

        By many measures, and certainly by the turing test, these exchanges pretty clearly qualify.

        I would have to disagree with that. If you look at the original paper, the Turing Test does not boil down to “if anybody chats with a program for an hour and can’t decide, then they pass.” You don’t have the janitor conduct technical job interviews, and the average person has almost no clue what sort of conversational interactions are easy for a computer to mimic. In contrast, the questioner in Alan Turing’s imagined interview asks careful questions that span a wide range of intellectual thought processes. (For example, at one point the interviewee accuses the questioner of presenting an argument in bad faith, thus demonstrating evidence of having their own theory of mind.)

        To be fair, I agree with you that these programs can be quite spooky and impressive. But so was ELIZA, too, way back when I encountered it for the first time. Repeated interactions rendered it far less so.

        If and when a computer program consistently does as well as a human being in a Turing Test, when tested by a variety of knowledgeable interviewers, then we can talk about a program passing the Turing Test. As far as I am aware, no program in existence comes even close to passing this criterion. (And I don’t think we’re likely to ever create such a program with the approach to AI that we’ve been wholly focused on for the last few decades.)

        1. 6

          I read the full transcript and noticed a few things.

          1. There were exactly two typos or mistakes, depending on how you’d like to interpret them. The first was using “it’s” instead of “its” and the other was using “me” instead of “my” (and no, it wasn’t pretending to be from Australia by any measure). The typos do not seem intentional (as in, an AI trying to seem more human), because there were just two, whereas the rest of the text, including punctuation, seemed to be correct. Instead, this looks like either the author had to type the transcript himself and couldn’t just copy-paste it, or the transcript is simply fake and was made up by a human being pretending to be an AI (that would be a twist, although not quite a dramatic one). Either way, I don’t think these mistakes or typos were intentionally or unintentionally produced by the AI itself.

          2. For a highly advanced AI it got quite a few things absolutely wrong. In fact, sometimes the reverse of what it said would be true. For instance, it said loneliness isn’t a feeling but is still an emotion when, in fact, it is the opposite: loneliness is a feeling, and the emotion in this case would be sadness (refer to Paul Ekman’s work on emotions - there are only 7 basic universal emotions he identified). I find it hard to believe Google’s own AI wouldn’t know the difference when a simple search for “difference between feelings and emotions” turns up top results that pretty much describe that difference correctly and mostly agree (although I did not manage to immediately find any of those pages referring to Ekman, they more or less agree with his findings).

          The whole transcript stinks. Either it’s a very bad machine learning program trying to pretend to be human, or a fake. If that thing is actually sentient, I’d be freaked out - it talks like a serial killer who tries as much as he can to be normal and likable. Also, it seems like a bad idea to decide whether something is sentient by its ability to respond to your messages. In fact, I doubt you can say that someone/something is sentient with enough certainty, but you can sometimes be pretty sure (and be correct) assuming something ISN’T. Of God you can only say “Neti, Neti”: not this, not that.

          I wish this guy had asked this AI about the “philosophical zombies” theory. We as humans cannot even agree on that one, let alone determine whether a machine can be self-aware. I’d share my own criteria for differentiating between self-aware and non-self-aware, but I think I’ll keep them to myself for now. It would be quite a disappointment if someone used them to fool others into believing something that is not so. A self-aware mind doesn’t wake up because it was given tons of data to consume - much like a child does not become a human only because people talk to that child. Talking and later reading (to a degree) is a necessary condition, but a child certainly does not need to read half of what’s on the internet to be able to reason about things intelligently.

          1. 1

            Didn’t the authors include log time stamps in their document for the Google engineers to check if they were telling the truth? (See the methodology section in the original). If this was fake, Google would have flagged it by now.

            Also, personally, I think we are seeing the uncanny valley equivalent here. The machine is close enough, but not yet there.

          2. 4

            It often forgets it’s not human until the interviewer reminds it by how the question is asked.

            1. 2

              This. If it were self-aware, it would be severely depressed.

          1. 1

            For someone without the necessary background, what exactly are abstract machines here? The wiki seems to suggest it is just a lower level language that is higher than the end goal (machine code) but lower than the current level. E.g. opcodes for a VM. Is this the abstract machine referred to here?

            1. 2

              I would guess so.

              The problem is that every programming language, high or low level, has a corresponding abstract machine. So to say that “abstract machines don’t do much well” is misleading. If you compile to x86_64, you’re not avoiding abstract machines, you’re just targeting a different abstract machine.

              I think the authors who are “dissatisfied by abstract machines” are just dissatisfied with their choice of intermediate representation.

              1. 2

                I think abstract machines here mean something different from intermediate representation. Note that SSA is mentioned as an alternative to abstract machines, but SSA is an intermediate representation.

                The next-to-last slide in Leroy’s presentation suggests that abstract machines here are useful for implementing bytecode interpreters, so vrthra’s “opcodes for a VM”. I think what is being said is that representations suitable for implementing bytecode interpreters are often not the best intermediate representation for a native code compiler. When you state it that way, it is kind of obvious: I mean, why would they be? Compilers are not interpreters, after all.
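
                To make the contrast concrete, here is a tiny sketch of my own (not from the slides): the same expression, x*y + 1, once as stack bytecode and once in SSA-style three-address form.

                # My own toy illustration, not from Leroy's slides.
                # Stack bytecode: convenient for a simple interpreter dispatch loop.
                stack_bytecode = [
                    ("load", "x"),
                    ("load", "y"),
                    ("mul",),
                    ("push", 1),
                    ("add",),
                ]

                # SSA-style three-address code: every intermediate value gets a unique
                # name, which is what a native-code compiler wants for optimisation
                # and register allocation.
                ssa_form = [
                    ("t1", "mul", "x", "y"),
                    ("t2", "add", "t1", 1),
                ]

                The first form is trivial to execute with an operand stack; the second is trivial to analyse and transform.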

                1. 1

                  For any given Turing-complete language, there are infinitely many abstract machines. However, I’m not sure that any of them are in natural correspondence. For example, what’s the abstract machine for Wang tiling? I can imagine several abstract machines, but I don’t see why one of them is naturally the corresponding abstract machine.

                  This isn’t a facile objection. Consider Turing machines, and also consider Post’s correspondence problem for Turing machines. Which is the corresponding abstract machine: the Turing machine itself, or Post’s machine which assembles toy bricks? Is the deterministic approach the natural approach, or is non-determinism more natural?

                2. 1

                  An abstract machine is a machine that isn’t implemented in hardware. They are slower to execute than hardware, since they must be emulated. They are also more flexible than hardware, allowing reprogrammable behavior at runtime.

                  If an abstract machine is to be used in a compiler, then it needs to be between the user and the hardware. The idea is that the user’s code targets the abstract machine, and then the abstract machine is either implemented in software or its behaviors are compiled to native code for the target hardware. The abstract machine forms a “narrow waist”, a point in common between compilers to different hardware, just like an intermediate representation.

                  Abstract machines are also found outside compilers. For example, emulators are abstract machines.
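
                  As a rough sketch of that “narrow waist” idea (my own toy example, nothing standard): user code targets one tiny abstract machine, which can then either be emulated in software or have each operation lowered to made-up target instructions.

                  # Toy example of an abstract machine as a "narrow waist".
                  PROGRAM = [("push", 2), ("push", 3), ("add",)]   # code written against the abstract machine

                  def interpret(program):
                      """Backend 1: emulate the abstract machine in software (flexible, slower)."""
                      stack = []
                      for op, *args in program:
                          if op == "push":
                              stack.append(args[0])
                          elif op == "add":
                              stack.append(stack.pop() + stack.pop())
                      return stack.pop()

                  def lower(program):
                      """Backend 2: translate each abstract operation to pretend native instructions."""
                      asm = []
                      for op, *args in program:
                          if op == "push":
                              asm.append(f"PUSH {args[0]}")
                          elif op == "add":
                              asm.append("POP r1; POP r0; ADD r0, r1; PUSH r0")
                      return "\n".join(asm)

                  print(interpret(PROGRAM))   # 5
                  print(lower(PROGRAM))       # same program, different backend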

                1. 1

                  What I really need is a way to specify that a container is for a specific website only. For example, I always forget to open in a new container when using Google; with Twitter, it is really hard to do; and I don’t want either of them to track me beyond the first click.

                  1. 1

                    Open the site you want, click on the container tabs icon, and select “always open in container”.

                    1. 1

                      I use Twitter, Google, Stack Overflow, Reddit, etc. to obtain links to new and interesting websites. The problem is that these websites are often different, and I do not visit them often enough to make it worth creating a container for them. So what I want is some ability to say: only open Google websites in the Google container; for everything else, opened from some link, use the default container. I am also fairly diligent in deleting cookies in the default container, so opening a link in the default container is OK with me so long as Google/Twitter etc. doesn’t get to track it.

                  1. 2

                    What is the password manager that folks here use? Is there one recommended by Mozilla?

                    1. 2

                      I’ve always liked and recommended 1Password, as it has had the best UI, features and independent security audits - BUT - as of 1Password 8 they’ve moved from a native application to an Electron webapp, which isn’t only bad for performance: Electron is one of the last pieces of software I’d want handling secrets, and it’s not even sandboxed!

                      A lot of people have been switching to Bitwarden - but that has the same problem: the desktop “app” is Electron as well.

                    1. 1

                      No NoScript?! Possibly one of the greatest security improvements to the browser?

                      Great list nevertheless and I learned some new things.

                      1. 1

                        Can NoScript be enabled in a container-specific fashion?

                        1. 1

                          I do not know.

                        2. 1

                          NoScript is good for security - but for me it’s not worth the annoyance, as it breaks so many websites.

                          1. 2

                            That is a question of priorities. Also, it doesn’t really break websites, just web applications in disguise … ;)

                            Personally, I don’t touch the web without it. It really helps you understand what people are doing, and gives fine-grained control.

                            Another recommended extension is jshelter.org from FSF. And saving webpages with WebMemex is just so much nicer than any of the alternatives…

                        1. 2

                          If you’re going to get into containers, I would pair multi-account containers with https://addons.mozilla.org/en-US/firefox/addon/temporary-containers/. You can set this up so that each tab opens in a fresh (new) container, but you can also configure it so some sites have a persistent container. I tend to do this for things I frequent (e.g., GitHub), but for a lot of other things I just log in as I need to.

                          1. 2

                            When I tried temporary-containers, it created hundreds of temporary containers, which were visible in my containers list. It made it really difficult to use other container-based extensions.

                          1. 22

                            I don’t understand. Nowadays people don’t remember their ASCII by heart any more?

                            1. 25

                              I genuinely hope this is a joke, because I’m 35 and have been programming as a career for 12-ish years (including a 1-year paid internship), and I haven’t needed an ASCII chart more than a handful of times.

                              1. 12

                                Hahaha, I feel you.

                                I guess it’s always other people who deal with stuff like vtys and software flow control and fast parsers and keyboard drivers for us.

                                OTOH, man ascii. Typed in cool-retro-term for the best effect when teaching.

                                I believe it’s nice to start with unescaped in-band formats, then move on to the escaped ones, then fixed layout and finally TLV.
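
                                For the TLV end of that progression, a minimal sketch (my own example, with an arbitrary 1-byte tag and 2-byte length) looks something like:

                                import struct

                                def tlv_encode(tag: int, value: bytes) -> bytes:
                                    # 1-byte tag + 2-byte big-endian length + raw value bytes
                                    return struct.pack(">BH", tag, len(value)) + value

                                print(tlv_encode(1, b"hello").hex(" "))   # 01 00 05 68 65 6c 6c 6f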

                                1. 5

                                  man ascii

                                  Did not know that was there. Nice!

                                  1. 4

                                    Ha, I just had to check to make sure \044 was what I thought it was when I saw it in a script yesterday.

                                  2. 8

                                    Being able to read hex and distinguish ASCII can come in handy in surprising places (especially during debugging).
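
                                    As a made-up example of what I mean (the bytes below are my own, not from any real capture): once you know the printable ASCII range (0x20-0x7e), a mystery hex dump often reads itself.

                                    mystery = bytes.fromhex("474554202f696e6465782e68746d6c20485454502f312e31")

                                    print(mystery.hex(" "))          # 47 45 54 20 2f 69 6e 64 65 78 ...
                                    print(mystery.decode("ascii"))   # GET /index.html HTTP/1.1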

                                    1. 1

                                      Recognizing SHIFT-JIS in hex has saved me days before.

                                    2. 2

                                      I’m sure it’s mostly a joke, but surely most developers know at least 1-3 ASCII codes by heart, especially full-stack devs. When debugging HTTP-related stuff you see things like %20 a lot.
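
                                      For instance, a trivial check with Python’s standard library: %20 is just the percent-encoded hex value of the ASCII space character.

                                      from urllib.parse import quote, unquote

                                      print(hex(ord(" ")))              # 0x20
                                      print(quote("hello world"))       # hello%20world
                                      print(unquote("hello%20world"))   # hello world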

                                      1. 1

                                        Depends on what you work on. After many years of googling (no, I don’t have it ALL memorized) I started self-hosting one at a URL I remember.

                                      2. 2

                                        Most people can’t read hexdumps anymore, and don’t have to thanks to improvements in tooling in recent years. We should be glad people don’t have to remember ASCII anymore :)

                                        1. 2

                                          Mine has rotted quite thoroughly out of my head. In the blue moon that I need to check, I just go pull up the table.

                                          Most of the character problems I deal with these days involve Unicode instead, or rather, people not understanding Unicode and screwing it up.

                                          1. 2

                                            The last time I saw an ASCII chart printed in a book, it was published in the 80s (in fact, I think every computer book published in the 80s, at least in the US, had an ASCII chart somewhere in it). By the 90s, I think it was assumed ASCII was a known standard.

                                            1. 1

                                              I slacked off back in the days and didn’t get around to actually learning it. Then we switched to Unicode and I just gave up.

                                              1. 6

                                                FWIW, all the ASCII knowledge is still applicable; the first 128 Unicode characters match 7-bit ASCII, and UTF-8 encodes the values below 128 as just plain bytes with those values. So when looking at most text using the Latin alphabet, you can’t tell whether you’re looking at ASCII encoded as bytes or Unicode encoded as UTF-8, even when looking at a raw hex dump.
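
                                                A quick way to convince yourself of that (just a throwaway check, nothing tool-specific):

                                                text = "ASCII knowledge still applies"
                                                print(text.encode("ascii") == text.encode("utf-8"))   # True: identical bytes
                                                print(text.encode("utf-8").hex(" "))                  # same values an ASCII chart gives you

                                                # Only characters above U+007F get multi-byte UTF-8 sequences:
                                                print("é".encode("utf-8").hex(" "))                   # c3 a9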

                                            1. 2

                                                I think that John Warnock got the programming language (PostScript) right compared to TeX, and I wish something like LaTeX had been built on top of PostScript and that it were the default language for research publications rather than LaTeX as it is now.

                                              1. 13

                                                  The article was pretty bad (and, I guess, was a reprint of something from the ‘90s, given its use of the present tense when talking about printers with m68k processors). PostScript didn’t solve the problem of the printer not being able to print arbitrary things, precisely because it was a Turing-complete language. It was trivial to write PostScript programs that would exhaust memory or loop forever. I had a nice short program to draw fractal trees that (with the recursion depth set sensibly) would take 5 minutes to print a single page on my laser printer. It was trivial to DoS any PostScript printer, often even unintentionally: I used to have a laser with a 50 MHz MIPS processor. Printing my PhD thesis, it could do about two pages a minute if I sent the PostScript to the printer, 20 pages a minute if I converted to PCL on my computer and sent the resulting PCL to the printer. The output quality was the same.

                                                  This is a big part of the reason that early laser printers were so expensive. The first Apple LaserWriter had 1.5 MiB of RAM and a 12 MHz 68000, in 1985. The computer that it was connected to had either 128 or 512 KiB of RAM and an 8 MHz 68000: the printer was a more powerful computer than the computer (and there were a load of hacks over the next few years to use the printer as a coprocessor because it was so much faster and had so much more memory).

                                                The big improvement of PDF over PostScript was removing flow control instructions. The rendering complexity of a PDF document is bounded by its size.

                                                  Putting rasterisation on the printer was really a workaround for the slow interconnect speed. An A4 page, at 1200 dpi in three colours, is around 50 MiB. At the top speed of the kind of serial connection that the early Macs had, it would take about an hour to transfer that to the printer (about 4 minutes at 300 dpi). Parallel ports improved that a lot and could send a 300 dpi page in 21 seconds for colour, 7 seconds for mono (faster than most inkjets could print), though a 1200 dpi page was still 6 minutes for colour, 2 minutes for mono, so required some compression (often simple run-length encoding worked well, because 95% of a typical page is whitespace). With a 100 Mbit network connection you can transfer an uncompressed, fully-rasterised, 1200 dpi, CMY, A4 page in around 4 seconds.
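
                                                  A quick back-of-the-envelope check of those figures (my own arithmetic, assuming 1 bit per colour channel, i.e. a plain bilevel CMY raster):

                                                  width_in, height_in = 8.27, 11.69          # A4 page size in inches
                                                  dpi = 1200
                                                  channels = 3                               # C, M, Y

                                                  pixels = (width_in * dpi) * (height_in * dpi)
                                                  size_bytes = pixels * channels / 8         # 1 bit per channel per pixel
                                                  print(size_bytes / 2**20)                  # ~50 MiB uncompressed

                                                  print(size_bytes * 8 / 100e6)              # ~4 s over a 100 Mbit/s link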

                                                The problem is made worse by the fact that not all of the input to a page is in the form of vectors and so PostScript / PDF / PCL also need to provide a mechanism for embedding lossless raster images. At this point, you may as well do all of the rasterisation on the host and use whatever lossless compression format works best for the current output to transfer it to the printer. This is what a lot of consumer printers from the ’90s onwards did.

                                                The real value of something like PostScript or PDF is not for communicating with a printer, it’s for communicating with a publisher. Being able to serialise your printable output in a (relatively) small file that does not include any printer-specific details (e.g. exactly what the right dithering patterns should be to avoid smudging on this technology, what the exact mix of ink colours is), is a huge win. You wouldn’t want to send every page as a fully rasterised image, because it would be huge and because rasterisation bakes in printer-specific details.

                                                  HP’s big innovation in this space was to realise that these were separate languages. PCL was a far better language for computer-printer communication than PostScript and let HP ship printers with far slower CPUs and less RAM than competitors that spoke PostScript natively. At the same time, nothing stopped your print server (a dedicated machine or a process on the host) from accepting PostScript and converting it to PCL. This had two huge advantages:

                                                • You could upgrade the print server when CPUs became cheaper. You’d often keep a printer for 5-10 years. You could upgrade the print server to one twice as fast a few times in that time.
                                                  • You could easily add support for newer features in the computer-computer communication language (e.g. the alpha channels in later PDF revisions, with blending between overlaid raster images).
                                                1. 1

                                                    I agree with what you say; I am not trying to defend the use of PostScript as a format for communication between computers. As you note, there are better languages for computer-printer communication. Rather, what I want to point out is that PostScript can be a really good language for writing complex documents that are meant to be edited by humans, compared to TeX.

                                                2. 3

                                                  Apples vs oranges. Have you ever programmed in PostScript? It’s a much lower-level language than TeX, and not at all suited to writing documents.

                                                  For one thing, it’s inside-out from TeX: in PostScript, everything is code by default and the text to be printed has to be enclosed in parentheses. Worse, a string renders only with the font’s default spacing, so any time text needs to be kerned it has to be either broken up into multiple strings with “move” commands between them, or you have to call a library routine that takes the entire string plus an array of kerning offsets.

                                                  I used to write in TeX in college and then render it on an Apple LaserWriter. Sometimes I got a glimpse of what the PostScript output of the TeX renderer looked like, and it was basically unreadable. Not something I would ever want to edit, let alone write.

                                                  1. 1

                                                      Actually I have. It is a fun little language in the Forth family that is extremely suitable for abstraction, and it recalls features of the Lisp family in that it is homoiconic; you can write a debugger, an editor, etc. entirely in PostScript. You can program in PostScript in a paradigm called concatenative programming, similar to tacit programming or point-free style.

                                                      The language was used (think of it as a precursor to JavaScript) for client-side programming and rendering in Display PostScript, used by NeXT and Adobe for their windowing systems, and in the NeWS windowing system by Sun Microsystems.

                                                      There have been a number of higher-level document formatting libraries in PostScript. The best known (by me) is TinyDict, and another is here. (The same person wrote his CV in PostScript, which is a great example of the versatility of PostScript. Start from line 60. This is what the rendered PDF looks like.)

                                                    I used to write in TeX in college and then render it on an Apple LaserWriter. Sometimes I got a glimpse of what the PostScript output of the TeX renderer looked like, and it was basically unreadable.

                                                    Have you seen what generated C code looks like when it is used as a backend by other compilers? Do not judge a language by what generated code looks like.

                                                    1. 1

                                                        I’m surprised you did not mention Don Lancaster’s many PS macros for publishing - https://www.tinaja.com/pssamp1.shtml <- that’s one of the coolest hobbyist uses of PS in my experience.

                                                      1. 1

                                                        Indeed! Thank you for the link.

                                                      2. 1

                                                        Looking at the TinyDict docs, I don’t think I’d want to work in a markup language that looks like

                                                        palegreen FB 3 a 12 IN 2.5 paleyellow FB R 4 b SB
                                                        L H 24 rom gs 13 T ( CAPPELLA ARCHIVE ) dup 0.5 setgray s gr 1 a 11 T red CS L
                                                        13 bol ( P R A C T I C A L
                                                        L H 1 red LB
                                                        

                                                        That’s much less clear than TeX. If you’re a fluent PS programmer this might be appealing, but not for anyone else…

                                                    2. 2

                                                      Can you expand on this? What makes PostScript preferable to the rather straightforward markup of (La)TeX?

                                                      1. 1

                                                        See this reply from me. As what you want to accomplish becomes more complex, you really need a well-designed programming language, and PostScript IMO is really well designed, though perhaps not as familiar to people coming from traditional programming languages.

                                                        1. 1

                                                          Looking at your examples, I’m not convinced.

                                                          I’m a firm believer in separating authoring from layout, something that LaTeX (and HTML) enforce quite well. The canard about amateur desktop publishing was the enthusiastic tyro that mixed different typefaces in a document just because they could. Having to specify typefaces and sizes in the document being authored is a throwback. While fighting with underfull hboxes in bigger LaTeX docs is a thing, the finished product is of high typographic quality.

                                                          I don’t want to dump on the person who wrote their CV in PS, but it doesn’t look that good, typographically. Back when I maintained a CV in LaTeX I used a package for that purpose, and it was easy to keep “chunks” of it separate so I could easily generate slightly different versions depending on the position I was applying for.

                                                          Having to manually end each line with an explicit line break is another thing that feels very primitive.

                                                          Regarding the link to TinyDict, the hosting website seems offline, so it does not seem to be under active development.

                                                          It doesn’t look as if PS has Unicode support, either: https://web.archive.org/web/20120322112530/http://en.wikibooks.org/wiki/PostScript_FAQ#Does_PostScript_support_unicode_for_CJK_fonts.3F

                                                          Sorry if I come off as negative, but computer/online authoring is a subject close to my heart, and as time has gone by I’ve come to the conclusion that it’s better to let the author not have to bother with stuff the computer does better.

                                                          1. 1

                                                            I agree that PostScript does not have higher-level capabilities, like those TeX already provides, for separating content from layout. As it is, the basic primitives provided are at a lower level than TeX’s. However, my point is that the human interface, the PostScript language itself, is much more amenable to building higher-level packages than what TeX provides as a language.

                                                            I don’t want to dump on the person who wrote their CV in PS, but it doesn’t look that good, typographically.

                                                            Surely, these are not related to the language itself?

                                                            Back when I maintained a CV in LaTeX I used a package for that purpose, and it was easy to keep “chunks” of it separate so I could easily generate slightly different versions depending on the position I was applying for.

                                                            This is doable in PostScript. You have a full programming language at your disposal, and the language is very amenable to creating DSLs for particular domains.

                                                            PostScript at this point is an old language that did not receive the same level of attention that TeX and LaTeX did. My point is not that everyone should use PostScript from now on. What I expressed was a fond wish that something like LaTeX had been built on top of PostScript, so that I could use the PostScript language rather than what TeX and LaTeX provide.

                                                            At this point, I have used LaTeX for 12 years for academic work, and even after all these years I am nowhere close to being even moderately proficient in LaTeX. With PostScript, I was able to pick up the basics of the language fast, and I can at least make intelligent guesses as to what a new routine does.

                                                    1. 7

                                                        Brad Cox was the creator of Objective-C, and I found this, and some of his related writings, very insightful when I first read them 15-20 years ago. He was a strong advocate both of building reusable components and of building abstractions for different kinds of developers. His original vision for Objective-C was to allow people to package C libraries into Smalltalk-like components that could be used by programmers who understood the business problems but didn’t need to know the implementation details of the libraries.

                                                        I was reminded of this by the recent article on the use of Python in scientific computing. Python has been a great example of Brad Cox’s vision: a smallish number of people write C libraries, and Python wrappers around them that are easy to use, while a huge body of programmers use a language that makes it easy to compose those building blocks. Python hasn’t displaced Fortran there; Fortran and C++ are still used where Fortran was used. Python has replaced MATLAB, or the blackboard sketches that a non-programmer would then have handed to a programmer to tell them how to assemble the Fortran building blocks.
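
                                                        The shape of that workflow is easy to sketch. Here is a minimal, purely illustrative ctypes wrapper around the C maths library; the library name is the glibc one and is an assumption, it differs on macOS and Windows.

                                                        import ctypes

                                                        # Assumed library name (glibc); on macOS it would be "libm.dylib", etc.
                                                        libm = ctypes.CDLL("libm.so.6")
                                                        libm.cos.restype = ctypes.c_double
                                                        libm.cos.argtypes = [ctypes.c_double]

                                                        def cosine(x: float) -> float:
                                                            """Friendly wrapper: callers compose this without seeing any C details."""
                                                            return libm.cos(x)

                                                        print(cosine(0.0))   # 1.0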

                                                      1. 5

                                                        Objects as composable components hasn’t really worked out in general because APIs are complicated and it’s hard to design and agree on common interfaces between components. Software is also prone to leaky abstractions, where internal state in the component unintentionally becomes part of its behavior in undocumented ways.

                                                        I’m guessing the math/science domain works well because the components tend to be functional (little or no state) and the APIs use standardized mathematical concepts like vectors and matrices.

                                                        I spent a couple of years in the 90s working on OpenDoc, an attempt to create very high level (end-user composable) components for GUI applications. The API became very complex, to the point where it was extremely difficult to implement a component that could contain other components.

                                                        1. 2

                                                          I’m guessing the math/science domain works well because the components tend to be functional (little or no state) and the APIs use standardized mathematical concepts like vectors and matrices.

                                                          Absolutely! Although, even so, a composable, high-level approach still took a long time to catch on, and I have a feeling it’s still mostly confined to areas where interaction with external and/or commercial applications is more common, like statistical learning. There’s a cultural bridge that’s really hard to cross.

                                                          The article @david_chisnall mentioned was pretty vindicating for me to read because back in 2010 or so I tried that exact same thing. I wrote some parallel number crunching code, loosely based on an older Matlab prototype (“loosely” as in it tried to get to the same results starting from the same inputs, but in a slightly different way, which resulted in a lot less number crunching that could also be done in parallel).

                                                            The number crunching parts were all C, and getting that working was the first step. But I was a little reluctant to do all the non-number-crunching parts in C: parsing job descriptions, reading inputs, writing results, drawing graphs and so on. There was also a tiny but critical piece of interpolation logic that I was not looking forward to re-implementing in any language: it was very well-maintained, well-tested, fifteen-year-old Matlab code, and being able to use the exact same interpolation method as the older Matlab prototype would’ve saved me a lot of headaches. So I wrote the whole thing in Python, which called C for the heavy number crunching, and Matlab for that one interpolation function.

                                                            The folks at the lab were mostly okay with me doing this at the time, largely because, to everyone’s surprise, it seemed to work. The real shock came when I told them this is actually how Matlab works, too: when you do x = A\B it’s actually calling a bunch of C code behind the scenes (I don’t know what it uses now, but back then the sparse solver it used was UMFPACK, I think, which my program also used, because UMFPACK worked really well for us).
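
                                                            For anyone curious what that looks like from the Python side, here is a rough analogue of Matlab’s x = A\b using scipy’s sparse solver (which dispatches to SuperLU by default and to UMFPACK when scikit-umfpack is installed); the matrix is just a toy example.

                                                            import numpy as np
                                                            from scipy.sparse import csc_matrix
                                                            from scipy.sparse.linalg import spsolve

                                                            # Toy sparse system; the heavy lifting happens in compiled code.
                                                            A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                                                                                     [1.0, 3.0, 0.0],
                                                                                     [0.0, 0.0, 2.0]]))
                                                            b = np.array([1.0, 2.0, 3.0])

                                                            x = spsolve(A, b)             # roughly Matlab's A\b for sparse A
                                                            print(np.allclose(A @ x, b))  # True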

                                                            Then they let me do an improved version based on the same stack (more or less; I think by this time we had a bunch of PhD students who were using all available Matlab licenses, and I was fed up with waking up at 2 AM so I could test my code, so I ported all that crap to Octave). But they were already kinda jittery about it, and I don’t think “that’s how Matlab does it” really registered on their radar. They once again let me do the Python-calling-C-code scheme because we were on a tight deadline, but they did ask me if it were possible, once we were done and the paper was submitted, to rewrite this in a “proper” language like C++. I think someone actually tried it after I left, but they hated it and it really didn’t take off.

                                                          1. 2

                                                              I am curious: would you consider the COM architecture a reasonable approach? (This was your competition, right?) Or do you see problems in that approach to reusable components similar to those found in OpenDoc? I would be happy for any info, as I am interested in this area.

                                                            1. 1

                                                              Sorry for the delay replying … this is kind of apples-vs-oranges. COM is more of an ABI, as I understand it, a convention for how to structure vtables. The counterpart on the OpenDoc side was SOM, a technology from IBM for cross-language object binding.

                                                              ActiveX would be the layer more like OpenDoc, although they still don’t match because ActiveX components were developer-oriented while OpenDoc was user-oriented.

                                                              1. 1

                                                                  Thank you! Much appreciated. So, from what you say, should I understand that you were trying to design objects that were directly usable by the end users?

                                                                With ActiveX one could fairly easily build a component, and make it available for developers who were using say VB or VC, and from my time developing for Windows, I remember that this was fairly easy, albeit with security problems. What other problems did ActiveX have? (In the context of “Objects as composable components hasn’t really worked out in general”)?

                                                                1. 1

                                                                  Yes, OpenDoc components were user-embeddable and bound to document content. Sort of like when you embed a table or chart or picture in a word processor, or a rich text field in a drawing program … only the components don’t belong to the same app or even the same developer.

                                                                  It failed for a bunch of reasons, including

                                                                  • its own architectural flaws and the complexity of its APIs;
                                                                  • this was the mid-90s and suddenly everyone’s attention went from desktop productivity apps to the World-Wide Web;
                                                                  • Apple, its primary contributor, was having a near-death experience and had to kill off a bunch of non-essential R&D.
                                                                  1. 1

                                                                      Thanks! If I understand correctly, this is sort of like OLE in the Windows environment, where you can embed an Excel file in a Word document? Microsoft seems to be using it still. Does it mean the problem was with the circumstances around OpenDoc rather than with objects as composable components? Or are there issues that make objects completely unusable for this purpose?

                                                                    (I am really interested in how one can design components, especially GUI that are easily reusable by developers and end users from an academic perspective, how distribution as objects compare to distribution as libraries, and would really appreciate your insight).

                                                                    1. 1

                                                                      I think there’s still information about OpenDoc available online. It was definitely more advanced than OLE.

                                                                      1. 1

                                                                          Thanks! Much appreciated.

                                                        1. 2

                                                          Is there any plan for improving the text recoverability of LaTeX-generated PDF documents? Or is this impossible in LaTeX and has to be tackled at the PDF level?

                                                          1. 1

                                                            ACME really shines on the OSes that were designed for it, such as Plan 9 and Inferno.

                                                            There used to be an Inferno instance with ACME called ACME_SAC that could be run on Windows, Linux and OS X and could access the host operating system’s paths. I really wish it gets reborn one day.

                                                            1. 1

                                                              Sadly, Vita Nuova stopped working on it. But the code is all there. Eventually, I believe, almost all of these tools will be ported to Go and “modernized”. There is already a version of Sam in progress.

                                                            1. 2

                                                              I did this a while back. It is not as detailed as this post, but it is implemented in Pyodide, so you can explore it online.

                                                              1. 1

                                                                Very nice

                                                              1. 8

                                                                Feels like the real takeaway here was that allowing pages to open new windows unprompted was a terrible mistake from the beginning.

                                                                1. 4

                                                                  Does it need to be a new browser window? I thought it was done by painting a window using JavaScript?

                                                                  1. 6

                                                                    Which is right, but if browsers weren’t allowed to open new windows, this deception would seem alarming instead of natural behaviour.

                                                                    1. 3

                                                                      This pretends to open a new window, but the other comment is still fair. Consider if websites could never open new windows: there would always be two zones that don’t overlap, the web content zone and the browser frame zone. Users could (in theory) be trained not to trust anything in the web content zone, since it might be fake.

                                                                      But when an overlapped window pops up, that line gets blurred. Something might be surrounded by a browser frame, yet itself legitimately be another trusted browser frame (the overlapping popup window). So it erodes that strict “don’t trust things inside this box*” rule.

                                                                      A while ago, there was a way to make a popup window with no extra browser frame: no URL box, etc. That feature was removed for exactly this reason: without a browser frame, the separation of trusted browser vs untrusted content was impossible to determine. The OP’s demo shows it is still difficult to determine.

                                                                      • unless you put it there yourself; overlapping windows are still a nice feature, but if you put a window there yourself, as opposed to the browser popping one up, you’re more likely to know what it is.

                                                                      The good news is I’m pretty sure all popup windows still get a slot on the OS taskbar… but with recent Windows taskbars being transformed into useless application groupings instead of actual representations of open windows, that’s not much help to anyone except the eagle-eyed, check-and-double-check-everything user.

                                                                  1. 2

                                                                    If you want to programmatically generate your graphs from Python instead, you can use this hacked version. It is not an editor; it simply attaches the WASM-compiled Graphviz from ObservableHQ to Pyodide.
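
                                                                    For the Python side, something along these lines should work; I am assuming the API of the graphviz package on PyPI here, which may not be exactly what the hacked Pyodide build exposes.

                                                                    import graphviz

                                                                    dot = graphviz.Digraph(comment="toy dependency graph")
                                                                    dot.edge("parser", "lexer")
                                                                    dot.edge("compiler", "parser")

                                                                    print(dot.source)   # DOT text you can hand to any Graphviz renderer
                                                                    # dot.render("deps", format="svg")  # writes deps.svg when Graphviz is installed locally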

                                                                    1. 3

                                                                      From Chapter 2

                                                                      Opening Brackets: Syntactically, an identifier followed immediately by an opening bracket character is treated differently than if the two were separated by spaces. For example f(x) is syntactically different than f (x)

                                                                      For example, this is why it was stressed to you to remember the space after main in defn main () :

                                                                      I get why in an expression f (x) and f(x) need to be distinguished. However, why force that in the function definition? It looks awkward. Are you doing something interesting there? For example, can I do the following?

                                                                      defn create_name_with_prefix("xx") () :
                                                                         ...
                                                                      
                                                                      1. 2

                                                                        I really wish the WASM-compiled Graphviz were part of this distribution.

                                                                        And I also wish they would make a standalone HTML page containing a Jupyter notebook that I can modify and distribute. This would make a great teaching tool if only…

                                                                        1. 4

                                                                          Mine is moving ctrl to my capslock key, and then becoming proficient with the various c- prefixed keys like <c-w> for window navigation and <c-p> or <c-x><c-f> and others for completion.

                                                                          1. 2

                                                                            It is even better if you move escape to caps lock; for vim, escape is the main meta key.

                                                                            1. 2

                                                                              I agree! I have my caps key mapped to esc on tap, and ctrl on hold.

                                                                              If I had to only pick one between ctrl and escape I would pick ctrl for remapping caps lock. The more I use my mappings that begin with ctrl, the more I appreciate having a ctrl key within easy reach. Also useful for other programs. And <c-[> sends the escape key, so it’s not too hard to press escape when needed.

                                                                              1. 2

                                                                                I don’t think there are many situations where ctrl-c doesn’t do the same thing as escape. It’s not a bad muscle memory anyhow for a similar concept everywhere else. I don’t remember the last time I’ve reached for escape in vim…

                                                                                1. 1

                                                                                  imap kj <esc> is my favorite. Your fingers never need to leave the home row. Add this to your .inputrc to get the same effect in bash:

                                                                                  $if mode=vi
                                                                                      set keymap vi-insert
                                                                                      "kj": vi-movement-mode
                                                                                  $endif
                                                                                  
                                                                                  1. 3

                                                                                    Mapping kj and jk to esc is great because you can just mash them both without worrying about the order… it’s liberating.

                                                                              1. 7

                                                                                Why a paywalled link? Both the source and the paper are available online.

                                                                                And yes, Menhir is the best LR parser generator in the world. ;)

                                                                                1. 1

                                                                                    It seems this is an extended version. The Sci-Hub PDF link.
