1. 1

    I’m trying hard to both ignore and retain all the kubernetes expertise I crammed in this week. Ignore so I can recover, retain so I never have to re-learn it.

    Let me know if you need help mounting NFS volumes in a Pod, though.

    1. 1

      I’ve been to a bunch of conferences, many good, but only a few I consider really good.

      In a really good conference, I find myself with the following pattern:

      • Attend a not-too popular talk (I have crowd issues, so I self-extract from anything packed)
      • Get exposed to something interesting in that talk
      • Digest that something interesting into actual learning, by skipping talks and talking with people (“hallway track”)
      • Repeat as much as is feasible (generally not more than once or twice per day)

      In good conferences, I still attend things, and still talk with people. But the talking is more about the technical issues at my job (or hobby, depending) than about the presented talks. Being able to talk with knowledgeable, interested people who are outside the culture and assumptions of my workplace is really helpful. And being able to provide that perspective for someone else is very satisfying.

      1. 1

        Oh, hey, I had that shepherd in undergrad, though it was before he moved to Pomona. Better at teaching some advanced topics (particularly programming languages) than the 101 stuff, but decent fellow.

        Thank you for reading my largely irrelevant comment.

        1. 1

          https://en.wikipedia.org/wiki/Electronic_Life has two parts: a glossary, and a coding tutorial with programs to type in for the Apple II. Not sure how well they translate to the IIe, though.

          1. 1

            Any Applesoft BASIC that works on the II will work on the II+, IIe, and IIc.

          1. 3

            If anyone’s got any spare 5 1⁄4-inch floppies, let me know, because now I’ve got two whole slots for them (this is actually a serious inquiry, please tweet at me!).

            I have some bad news for you: floppies degrade over time. When I was poking an Apple IIe about 15 years back, a bunch of my disks from the 80s and early 90s already had corruption.

            IIRC, there are ways to store them to minimize degradation, but the most common ways of storing them were really bad.

            edited to add: There are plenty of Apple II disk images out there (the asimov.net apple section is pretty remarkable). It may well be more profitable to get a Raspberry Pi to interface with your Disk II controller, translating from images.

            1. 2

              Most of my floppies from the 80s still work fine.

              1. 1

                My Amiga DD floppies do.

                HD floppies, on the other hand, are mostly unreadable by PC FDCs, and not usable anymore. I needed to write a bunch recently from disk images; going through my old ones, only about one in six would format, write, and verify. I have a large stack of bad floppies from those attempts.

              2. 2

                My C64 disks are in my parents’ (unconditioned) attic. Bet they’re in real pretty shape.

                1. 2

                  Hey, at least you still have them.

                  One time several years ago, my parents were cleaning out their storage room. They came across a box full of old Commodore 64 and 128 software. Knowing my soft spot for retrocomputing, they asked me what to do with it. I was pretty sure our old Commodore 128D we’d had was somewhere nearby, so I told them to keep it unless they couldn’t find the 128. They never did find the 128, so they threw it out.

                  I remained half-convinced that they still had it somewhere, though, and much more recently, I was visiting them and we found the 128 in a box in the shed in the back yard. I’m not sure I have ever felt such intense regret at being proven right.

                  1. 2

                    About 27 years ago, when I was 12, I wrote an Apple II program in assembler that I was inordinately proud of. My mother and I planned to sell it, and she made sure I kept frequent backups. It never went anywhere; after all, that was the year that the Apple IIe was discontinued (though I didn’t know that until much later).

                    Then a couple of years ago, feeling nostalgic, I wanted to look back at that project. I called my mother to ask if any of our old Apple II disks were still around. They had all been thrown out, on the assumption that we had moved on.

                    To be fair to my parents, in several of the intervening years, I didn’t care about that old stuff any more than they did. I really had moved on. But now I wish I could look back at some of that old code I wrote.

                    1. 1

                      Never, ever trust family with your old computers and media. You should have taken them with you.

                      If any of you has a bunch of floppies or computers you care about at your family’s place, ensure you get them on your next visit. You’ll be lucky if they are still there. Prioritize the data, as computers are somewhat replaceable, but data is not.

                      1. 1

                        Oof. That sucks. At least in my case the discarded disks didn’t represent the loss of anything I had created personally (I’m a bit younger than you, so my memories of the Commodore consist mainly of playing Space Taxi and Montezuma’s Revenge until we got a PC), but because the machines and all those disks were originally my dad’s, it would’ve been an interesting time capsule.

                    2. 1

                      Just ensure you imaged the important ones.

                    3. 1

                      Is it degradation of the data or of the medium? I mean, okay, over time the magnetic surface enters a high-entropy state, losing data in the process, but that’s nothing a good old reformat isn’t supposed to resolve on floppies, making them - in theory - reusable.

                      1. 1

                        I’m not an electrical engineer, so take all this with a grain of salt, but: both.

                        Apple II disks are actually more resilient against data loss than, say, the high-density 3.5” floppies that came later, simply because larger bits mean more tolerance for small changes in magnetism. But after 30 years, there’s likely to still be a fair bit of data degradation – and if your goal is to (e.g.) play vintage games, a reformat isn’t what you want to do.

                        But there is also medium degradation. Normal physical stuff (bending, dust, etc), environmental stuff (humidity, temperature fluctuations, etc), electronic stuff (magnetic fields including those from transformers and motors, static electricity, etc), and chemical stuff (oxide degradation, exacerbated by environmental factors) can all cause physical media degradation. Like any medium nowadays, there were also service lifespan issues: the more you use it, the closer to medium failure you get.

                        Some of the old disk manufacturers would say “30 year lifespan” or something similar. I suspect that was an exaggeration, or maybe a best-case scenario, since not many people expected floppies to still be in use 30 years later.

                        None of that is to say that an old disk is guaranteed to have failed. But the likelihood increases over time, same as everything else. And oxide-on-mylar is less durable than most of what we use today.

                    1. 5

                      Frustrated that the top quora answer is from somebody who hates lisp.

                      My take:

                      • Lisp had some popularity that died a hard death, killing adoption for roughly a generation
                      • Python has had a unified ecosystem from the getgo. Lisp had ecosystem fragmentation early on, with several major implementations costing $$$. ASDF-install didn’t show up til 2003ish, took years to get widespread adoption, never worked on Windows, and is no longer in use (replaced by Quicklisp). PyPI showed up around the same time, and is still the primary package repository
                      • Python has had a robust standard library from the getgo
                      • Python’s standard library had web-relevant libraries very early on
                      • Python has a syntax, and that syntax grows. In particular, Python has a pattern of seeing useful concepts in other programming languages (including Lisp), and incorporating them after refining the syntax to the most common use cases
                      1. 5

                        I have to say that I found erlang rather unwieldy as a language and environment, but I did love its binary pattern-matching. And I miss it whenever I’m parsing a binary file format.

                        1. 0

                          I agree. I am not sure why other languages do not implement this. I was trying to do the same with other languages and it is a hilariously complicated task. I mostly dislike two things about Erlang:

                          • OTP
                          • the lack of an easy way of creating a single binary from a project
                          1. 5
                            • OTP

                            Why? This is exactly what makes Erlang great.

                            • the lack of an easy way of creating a single binary from a project

                            This is rooted in Erlang’s nature and in one of its greatest features - hot upgrades.

                            1. 1

                              Maybe it is just me. I need to spend more time on it. I never built anything that required OTP, and at the time I wanted to learn it I could not learn it fast enough.

                                Hot upgrades sound amazing in telco, but in many industries people want more control over what is running in production, and they already have solutions for swapping out software by other methods.

                              1. 2

                                I never built anything that required OTP

                                  Supervisors are part of OTP, not the “Erlang core”; gen_servers as well. So virtually any Erlang application will use OTP.

                        1. 5

                          SML came out the same year as Haskell, which was a much “purer” example of a typed FP language.

                          Well, that’s just not true. “A Proposal for Standard ML” first came out in 1983, and “The Standard ML Core Language” was 1984. Implementations resulted from each.

                          Both of those are prior even to Miranda’s first release (the lazy predecessor to Haskell).

                          Even if you wait until “The Definition of Standard ML (SML ‘90)”, that was actually completed in 1989, before Haskell’s first release. And SML ’90 is closer in concept and maturity to the Haskell 98 effort – standardizing an existing language in a consistent and portable way.

                          1. 3

                            I was going by the “Definition of SML” versus the release of the first Haskell, but yeah, it seems like that’s incorrect. I’ll go ahead and do a bit more reading. If you have any resources for why SML didn’t get as popular as OCaml, I’d love to read them.

                            1. 1

                              No resources to point you to, but I can give you my take.

                              SML didn’t want to change. Milner was on the record as thinking SML was completed. It’s much harder to change than most languages, because of the formal verification requirement.

                              That resistance to change meant that it took a very long time for a standard library to materialize. It finally did with the ’97 standard (the Basis). You can see that the library is low-level and limited, especially given the rise of the web at that time. It hasn’t changed since then.

                              Compare OCaml, which had its first release in 1996. By 1997 (possibly earlier – this was the earliest doc I was able to find), its standard library included threading, graphics, a file-based DB interface, lex/yacc parsers, and more robust Unix interfacing. The libraries were messier than the Basis, but they were there, and they improved and expanded over time.

                          1. 8

                            I might be in the minority here, but I tend to:

                            • use top (well, usually htop) as a short-running, rather than long-running tool
                            • start with my terminals at 80 columns and only expand if needed

                            So I have the following questions:

                            • Do the fancy graphs populate immediately, or are they only populated over time?
                            • How well does each tool deal with smaller terminal sizes?
                            1. 6

                              I don’t think you’re in the minority. That seems like a pretty common use case. The graphs aren’t pre-populated; they start from when the tool starts, although zenith does have a persistent history feature that lets you navigate data from previous invocations.

                            1. 3

                              Why is this tagged ‘erlang’? It seems to be pure Rust, for Rust, with only a joking comparison to Erlang on the front page.

                              1. 1

                                Because it’s essentially Erlang for Rust, so people interested in Erlang might find it interesting. I’m an Erlang guy myself, and I was curious about it.

                              1. 2

                                Interesting idea, though if you cared about privacy why would you allow something like Alexa into your home in the first place?

                                1. 1

                                  Usefulness? One can care about privacy and also care about convenience, even if the two are often at odds.

                                1. 5

                                  where computation is almost free, memory size is almost unlimited (although programmers’ ingenuity in creating bloated software apparently knows no bounds)

                                  I would submit that using bignums (or the local equivalent) everywhere, when fixed size types will do, is an example of “programmers’ ingenuity in creating bloated software”.

                                  This:

                                  Compromises to minimize instructions extend so far as to make familiar-looking operators like + and < behave in unintuitive ways. If as a result a program does not work correctly in some cases, it is considered to be the programmer’s fault.

                                  should be addressed in new languages by having + and < stop the program if they’re about to do something “unintuitive” as opposed to removing bounds everywhere, IMO.
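
                                  To make that concrete, a checked + for a fixed-size integer type only needs a couple of comparisons. A sketch in Go (my own illustration, with a panic standing in for whatever “stop the program” would mean in practice):

                                    package main

                                    import "fmt"

                                    // checkedAdd is a hypothetical "+" that refuses to wrap silently:
                                    // signed overflow happened iff both operands share a sign and the
                                    // wrapped sum has the opposite sign.
                                    func checkedAdd(a, b int32) int32 {
                                        s := a + b // wraps on overflow, which Go defines rather than forbids
                                        if (a > 0 && b > 0 && s < 0) || (a < 0 && b < 0 && s >= 0) {
                                            panic("integer overflow in +")
                                        }
                                        return s
                                    }

                                    func main() {
                                        fmt.Println(checkedAdd(1, 2))          // 3
                                        fmt.Println(checkedAdd(2147483647, 1)) // panics instead of wrapping
                                    }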

                                  Maybe I’m not yet understanding the author’s argument, though; I’ve only read the page linked, not the rest of the work.

                                  1. 2

                                    I would submit that using bignums (or the local equivalent) everywhere, when fixed size types will do, is an example of “programmers’ ingenuity in creating bloated software”.

                                    FWIW, Haskell takes the other approach: every number is a Num - a very abstract type class - and might concretely be an Integer, or maybe a fraction, or a Float. The compiler infers the type based on what operations are performed, but the programmer can specify the type to ensure more efficiency.

                                    Basically, instead of getting fed up and saying “bignums everywhere”, use “numbers everywhere”. This can also lead to bloated software of course, so give programmers the tools to mitigate that.

                                    1. 1

                                      Let us consider something simple:

                                      9999999999999999.0 - 9999999999999998.0
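
                                      Both literals sit above 2^53, so each rounds to the nearest representable double before the subtraction ever happens, and the values they round to are 2 apart. A quick demonstration (Go here purely as an illustration language):

                                        package main

                                        import "fmt"

                                        func main() {
                                            // Each literal exceeds the 53-bit significand of a float64, so it
                                            // is rounded to the nearest representable value when stored.
                                            var a float64 = 9999999999999999.0 // stored as 10000000000000000
                                            var b float64 = 9999999999999998.0 // stored exactly
                                            fmt.Println(a - b) // prints 2, not 1
                                        }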
                                      

                                      What can we actually do about this?

                                      • atof() could fail since the string representation doesn’t match the resulting float, but then what should happen for 0.3?
                                      • We could use a hypothetical atonumber() which uses decimal or big float or some other representation, but what does this do to performance?
                                      • We can ignore the issue and blame the programmer for not constraining the input domain to that of our function. This is what most people do.

                                      I’m interested in other solutions. My current best idea feels extremely ambitious and would not be easily ported to other languages and programming environments, but it seems right, so is it worth being unpopular?

                                      Popularity is an important function of culture, and our “culture” is extremely resistant towards unpopular solutions, but now consider this: if the language and tooling contribute to programs that are faster, shorter, and more correct (i.e. produce the “right” answer for a larger input domain), then isn’t that better?

                                      For most programmers, the answer seems to be no: having the right amount of whitespace and the ability to use Sublime Text and Stack Overflow is more important. That’s a shame, and I think it really makes it hard to talk to other programmers about just how much better programming could be.

                                      1. 4

                                        Popularity is an important function of culture, and our “culture” is extremely resistant towards unpopular solutions, but now consider this: if the language and tooling contribute to programs that are faster, shorter, and more correct (i.e. produce the “right” answer for a larger input domain), then isn’t that better?

                                        Massive snark warning: popular programming culture is the intellectual equivalent of Medium: neither rare nor well-done. Pop culture is fashion, and gets dragged kicking and screaming to each new idea, which it then slowly accepts. The worst part is it then rewrites the oral history to argue that it always saw the brilliance of the aforementioned idea. In this way, it is never wrong about anything, and, implicitly, all the things that aren’t popular can’t be that good.

                                        1. 3

                                          We could use a hypothetical atonumber() which uses decimal or big float or some other representation, but what does this do to performance?

                                          Not only that, it just shifts problems around. With decimal, what happens when you compute 1/7? With bigfloat, what about 0.1? With rationals, what happens when you have iteration on an expression with relatively prime numbers? With any representation, what about irrational numbers?
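
                                          A small sketch of those trade-offs using Go’s math/big (my own illustration, not something from the article):

                                            package main

                                            import (
                                                "fmt"
                                                "math/big"
                                            )

                                            func main() {
                                                // Rationals keep 1/7 exact, where any fixed-precision decimal
                                                // or binary float has to truncate it somewhere.
                                                fmt.Println(big.NewRat(1, 7)) // 1/7

                                                // An arbitrary-precision binary float still can't hold 0.1
                                                // exactly: it's a repeating fraction in base 2, so the
                                                // trailing digits drift.
                                                tenth, _ := new(big.Float).SetPrec(64).SetString("0.1")
                                                fmt.Println(tenth.Text('f', 30))

                                                // And exactness has a cost of its own: iterate with relatively
                                                // prime values and the numerator and denominator keep growing.
                                                x := big.NewRat(1, 3)
                                                for i := 0; i < 5; i++ {
                                                    x.Add(x, big.NewRat(1, 7))
                                                    x.Mul(x, big.NewRat(22, 7))
                                                }
                                                fmt.Println(x) // exact, but increasingly bulky
                                            }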

                                          Fundamentally, we’re just trying to figure out how to represent an infinite amount of precision in a finite amount of space. The meaning of sufficient precision is application dependent, and there are tradeoffs.

                                          Better knobs and more types that let us pick tradeoffs may lead to more precise results, but they don’t make things any easier.

                                          And given that people don’t do error analysis today, I suspect it won’t happen in this alternate knob filled universe either.

                                          1. 1

                                            Good point. I was reading it through the lens of my own experience. That made me think of integer overflows/underflows when I read “ + and < behave in unintuitive ways”… and my answer in that case for the programs I write is that I’d like them to fail rather than just silently give incorrect results.

                                        1. 1

                                          It took a long, rambling way for me to get to the point where I asked: “wait, people spend the majority of their programming time in a code editor? I spend the majority of my programming time either offline or reading and absorbing requirements”

                                          1. 7

                                            After digging through the Go source code, we learned that Go will force a garbage collection run every 2 minutes at minimum. In other words, if garbage collection has not run for 2 minutes, regardless of heap growth, go will still force a garbage collection. We figured we could tune the garbage collector to happen more often in order to prevent large spikes, so we implemented an endpoint on the service to change the garbage collector GC Percent on the fly. Unfortunately, no matter how we configured the GC percent nothing changed. How could that be? It turns out, it was because we were not allocating memory quickly enough for it to force garbage collection to happen more often.

                                            As someone that’s not very familiar with GC design, this seems like an absurd hack. That this hardcoded 2-minute limitation is not even configurable comes across as amateurish. I have no experience with Go – do people simply live with this and not talk about it?

                                            1. 11

                                              As someone who used to work on the Go team (check my hats… on the Cloud SDK, not on language/compiler), I would say that:

                                              1. It is a mistake to believe that anything related to the garbage collector is a hack. The people I met who worked on it were far smarter than I and often had conversations that went so far over my head I may as well have walked out the room for all I could contribute. They have been working on it a very long time (see the improvements in speed version over version). If it works a particular way, it is by design, not by hack. If it didn’t meet the design needs of Discord’s use case, then maybe that is something that could be worked on (or maybe a later version of Go would have actually fixed it anyway).
                                              2. Not providing knobs for most things is a Go design decision, as mentioned by @sanxiyn. This is true for the whole language. I have generally found that Go’s design is akin to “here is a knife that’s just about sharp enough to cut your dinner, but you’ll find it fairly difficult to cut yourself”. When I worked with Java, fiddling with garbage collection was just as likely (if not more) to make things worse than it was to make them better. Additionally, the more knobs you provide across the language, the harder it is to make things better automagically. I often tell people to write simpler Go that’s a little slower than complex Go that’s a little faster algorithmically, because the compiler can probably optimize your simpler code. I would guess this also pertains to GC, but I don’t know anything about the underpinnings.
                                              1. 6

                                                One of the explicit design goals of Go’s GC is not to have configurable parameters. Their self-imposed limit is two. See https://blog.golang.org/ismmkeynote.

                                                Frankly I think it is a strange design goal, but it’s not amateurism. It’s a pretty good implementation if you assume the same design goals. It’s just that the design goals are plain weird.

                                                1. 13

                                                  I have no experience with Go – do people simply live with this and not talk about it?

                                                  My general impression is that tonnes of stuff about Go is basically “things from the 70s that Rob Pike likes”. Couple that with a closed language design team…

                                                  1. 2

                                                    It is configurable, though. You can set an environment variable to disable GC and then run it manually, or you can just compile your own go with a bigger minimum interval.

                                                    Either would be a lot less work than rewriting a whole server in rust, but maybe a rewrite was a good idea anyway for other reasons.
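
                                                    A rough sketch of the first option - switch the collector off and drive it on your own schedule (the 30-second interval is just an arbitrary example, not something from the Discord post):

                                                      package main

                                                      import (
                                                          "runtime"
                                                          "runtime/debug"
                                                          "time"
                                                      )

                                                      func main() {
                                                          // Same effect as running with GOGC=off: stop the collector
                                                          // from being triggered by heap growth.
                                                          debug.SetGCPercent(-1)

                                                          // Then collect on our own schedule instead of the runtime's.
                                                          go func() {
                                                              for range time.Tick(30 * time.Second) {
                                                                  runtime.GC()
                                                              }
                                                          }()

                                                          select {} // stand-in for the actual server
                                                      }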

                                                    1. 2

                                                      or you can just compile your own go with a bigger minimum interval.

                                                      I’m not sure “rewrite code to change constants then recompile” counts as “configurable”, nowadays.

                                                  1. 3

                                                    I’m of the impression that nothing happened for the simple reason that the only players who could have made it happen had commercial incentive not to do so.

                                                    1. 8

                                                      I clicked on this figuring it was another “use hand drawn maps, get someone to move around, add passive enemies” tutorial. I was pleasantly surprised: this is very detailed, going into enemies, stats, UI elements, town layers, economy, XP, AI, magic, a wide variety of different map generators, plus ways to combine them ….

                                                      Genuinely impressed.

                                                      1. 2

                                                        Not even sure which of the comments were intended to be snarky and which are just real developments :P

                                                        One of the biggest things for me is version control; I hardly knew anybody who used it in the year 2000.

                                                        1. 2

                                                          Absolutely the most important factor, I think. Revision control existed earlier, but it has not only become nigh-universal (even hobbyists use it), it’s also improved a lot (for instance, I use both svn and git for work, and git is so much easier to deal with because it’s not file-based). In 2000, we would have at best been using CVS.

                                                          1. 1

                                                            When I started using VCS I already used SVN for most of the things (university, work, private stuff) but PHP was still on CVS - iirc merging was a bit of a complicated thing. git was a lot easier to “repair” if someone did something weird, but I don’t remember much. CVS wasn’t horrible, actually.

                                                            1. 1

                                                              SVN is a pretty minor improvement over CVS, IMO. It removed some of the flakiness & weird corner cases. I’m still much happier merging, trusting diffs & logs, & moving around big chunks of code in git. (About the only thing I prefer from SVN & CVS is the way you could revert uncommitted changes to a file by deleting it and running update).

                                                              1. 2

                                                                Subversion was better with submodules and binary files. For games with assets, for example. (Talking small games here, not real[tm] professional[tm] game development :)

                                                                1. 2

                                                                  As someone who had to do branch merges and cleanups (not to mention flaky network connections causing broken commits) in CVS, I say SVN was a remarkable improvement. Also, and very importantly, SVN was not file-based, but commit-based, even though it used nearly the same UI as CVS did. It was honestly a remarkable feat.

                                                                  And, frankly, that UI was much easier for me to understand than git’s (which, honestly, is a UI nightmare).

                                                                  Still, when Bitbucket and GitHub launched, they ushered in the ubiquity of source control, and that wouldn’t have happened without git and hg.

                                                                  1. 1

                                                                    I managed to avoid having to do branch cleanups & merges in CVS, though I’ve heard that it’s a nightmare.

                                                                    You’re right – I’m technically wrong when I say SVN is file-based. SVN doesn’t identify identical blocks of code as they move between files, as git does. Additionally, I’ve frequently found that the svn state of the root directory of some tree won’t track with individual files, so that ‘svn log’ will give out of date information unless you do an update first – a gotcha that can be very confusing to people who are used to other RCSes.

                                                          1. 5

                                                            There are both jokes and serious observations, I like that :-)

                                                            I have a problem with two items specifically though:

                                                            Since we have much faster CPUs now, numerical calculations are done in Python which is much slower than Fortran.

                                                            Actual numerical calculations are run by vectorized C code, even if it’s called from Python. Python is there to describe the logic around them.

                                                            Unit testing has emerged as a hype and like every useful thing, its benefits were overestimated and it has inevitably turned into a religion.

                                                            It also made software orders of magnitude* more reliable. I consider testing and version control the two most important innovations in software development since I started doing it.

                                                            *) I can exaggerate things as I see fit :-)

                                                            1. 3

                                                              Actual numerical calculations are run by a vectorized C code, even if it’s called from Python. Python is there to describe the logic around.

                                                              Same nitpick, but reminder that SciPy has more Fortran in it than C, which makes the author’s statement even more confusing.

                                                            1. 13

                                                              I used BeOS as my primary OS for a year or so, eventually dual-booting with Linux and then dropping it altogether.

                                                              Many things about BeOS were sort of incredible. Booted in a couple seconds on the machines of the era, easily 5-10x more quickly than Linux. One of the “demos” was playing multiple MP3 files backwards simultaneously, a feat that nothing else could really do at the time, or multiple OpenGL applications in windows next to each other. The kernel really did multiprocessing in a highly responsive, very smooth way that made you feel like your machine was greased lightning, much faster than it felt under other OSes. This led to BeOS being used for radio stations, because nothing you were doing in the foreground stood a chance of screwing up the media playback.

                                                              BeOS had a little productivity suite, Gobe Productive. It had an interesting component embedding scheme, I guess similar to what COM was trying to be, so you just made a “document” and then fortified it with word processing sections or spreadsheet sections.

                                                              There were a lot of “funny” things about BeOS that were almost great. Applications could be “replicants,” and you could drag the app out of the window frame and directly onto your desktop. Realistically, there were only a couple for which this would be useful, like the clock, but it was sort of like what “widgets” would become in a few years with Windows and Mac OS X.

                                                              The filesystem was famous for being very fast and for having the ability to add arbitrary metadata to it. The mail client was really just a mail message viewer; the list of messages was just a Tracker window (like Finder) showing attributes for To, From, Subject, etc. Similarly, the media player was just able to play one file, if you wanted a playlist, you just used Tracker; the filetype, Title, Artist, Album, etc. were just attributes on the file. I’m not entirely sure how it parsed them out, probably through a plugin or something. You could do what we now call “smart searches” on Mac OS X by saving a search. These worked just like folders for all the apps.

                                                              The POSIX compatibility was only OK. I remember it being a little troublesome to get ports of Unix/Linux software of the era going. At the time, using a shittier browser than everyone else wasn’t really a major impediment to getting anything done, so usually I used NetPositive. There was a port of Mozilla, but it was a lot slower, and anyway, NetPositive gave you haiku if something went wrong.

                                                              There were not a huge number of applications for BeOS. I think partly it was a very obscure thing to program for. There were not a lot of great compatibility libraries you could use to easily make a cross-platform app with BeOS as a target. I wasn’t very skilled at C++ (still am not) but found trying to do a graphical app with BeOS and its libraries a pretty huge amount of work. Probably it was half or less the work of doing it in Windows, but you had to have separate threads for the app and the display and send messages between them, and it was a whole thing. Did not increase my love for C++.

                                                              All in all, it was a great OS for the time. So much of my life now is spent in Emacs, Firefox and IntelliJ that if I had those three on there I could use it today, but it was such an idiosyncratic platform I imagine it would have been quite difficult to get graphical Emacs on there, let alone the others. But perhaps it’s happening with Haiku.

                                                              1. 3

                                                                the filetype, Title, Artist, Album, etc. were just attributes on the file. I’m not entirely sure how it parsed them out, probably through a plugin or something.

                                                                Querying was built into the filesystem. There was a command-line query, too. So many applications became so much simpler with that level of support for queries, it was great.

                                                                you had to have separate threads for the app and the display and send messages between them, and it was a whole thing

                                                                Yeah, that was a downside, but it was very forward-thinking at the time.

                                                                So much of my life now is spent in Emacs, Firefox and IntelliJ that if I had those three on there I could use it today

                                                                Well, you’re almost in luck. Emacs is available – a recent version, too!

                                                                IntelliJ is there, too, but 1- only the community edition, and 2- it’s a bit further behind in versions.

                                                                Unfortunately, Firefox doesn’t have a Haiku port at this time. Rust has been ported, but there are still a boatload of dependencies that haven’t been. The included browser, WebPositive, is based on a (iirc, recent) version of webkit, fwiw, so it’s not antiquated.

                                                                1. 2

                                                                  The problem with relying on additional file metadata for functionality in a networked world is that you have to find a way to preserve the metadata across network transfers. I also used BeOS for several years for daily everything. Networking in BeOS was practically an afterthought.

                                                                  1. 2

                                                                    Sure, and you need to be able to populate metadata for untagged files from the network.

                                                                    Fortunately, most modern file types have metadata in them, so discarding the fields outgoing doesn’t hurt, and populating them incoming isn’t too hard. IIRC, that sort of thing was generally part of the application. So, e.g., the IMAP sync app would populate your email files with metadata from the email header fields, the music player app would populate metadata from the mp3 or ogg info headers, etc.

                                                                    1. 2

                                                                      But then this becomes a schema problem. Next-gen ideas like tagging files pervasively with identical metadata regardless of type, for relating and ordering, die as soon as you tar it up and pass it through a system that doesn’t know about your attributes - unless you have arbitrary in-band metadata support, and then it becomes a discoverability and a taxonomy problem, and if you have it in multiple places you have to keep it synchronised and stable with regards to shallow copies like links. You can still have the support for it as a second layer of metadata, of course, and the ability to index and query otherwise extant metadata out of band is useful as an optimisation, but once you extend the idea of the file namespace to include foreign data, you lose out on ‘smart metadata’ as a first-class foundation. A similar thing happened with multi-fork files for MacOS.

                                                                      1. 1

                                                                        A similar thing happened with multi-fork files for MacOS.

                                                                        Sure, but it’s still so useful that when Apple rewrote their filesystem a couple years ago, they included support for resource forks. NTFS supports them, too, as does (iirc) the SMB protocol.

                                                                        Apple standard practice has moved to bundle directories for fork-requiring executables, sure, and that reduces those interop problems a little bit.

                                                                        I guess what I’m saying is: file forks are still widely supported, regardless of difficulty integrating with un*x filesystems. Since they’re still incredibly useful ways of interacting with file systems, I don’t see why we should avoid them.

                                                              1. 4

                                                                I remember this. It never had a chance. Very few people were technically inclined enough to participate.

                                                                1. 4

                                                                  Not only that. I remember looking at it and throwing in the towel because the effort didn’t seem worth the result. It was widely ignored even by bloggers; that should tell you enough. There was no traction, there were no good libraries for it - and this was even before anyone had ever heard of Facebook or Twitter.

                                                                  1. 4

                                                                    I remember it too, and agree.

                                                                    If someone wonders why JSON won, take a look at the RDF spec.

                                                                    1. 4

                                                                      Now we have JSON-LD, which is basically an invasive RDF graft onto JSON…

                                                                      1. 1

                                                                        And — like with RSS before it — lots of applications don’t follow the JSON-LD spec as written, either. See Litepub, for example.

                                                                      2. 3

                                                                        RDF does something complicated, which is what the specs describe. That it happens to use XML is arbitrary; it’s just a data format.

                                                                        1. 2

                                                                          Intellectually I’m aware of this, but I’ve never seen RDF expressed in anything other than XML - I guess it was because of the Semantic Web connection.

                                                                          Out of curiosity, are there any other plain-text serialisations of RDF? (Apart from JSON-LD, mentioned in this thread)

                                                                          1. 2

                                                                            Turtle? - https://www.w3.org/TR/turtle/

                                                                            I’ve seen it in use at work (BBC Sport website).

                                                                            1. 3

                                                                              Yep, Turtle. Or N3. Apparently there’s also N-Triples and N-Quads, though I don’t remember those being available when FOAF was a going concern.