1. 1

    I am finding myself in a ridiculous dilemma.

    I like ripgrep better than the-silver-searcher, but I am finding it nearly impossible to switch my muscle memory from typing ‘ag’ to typing ‘rg’!


    1. 16

      alias ag="rg"

      1. 3

        Wow. I always typed it out fully. Ripgrep. 🤯

        1. 2

          I was in the same dilemma, and the way I overcame it was by cheating my brain.

          A simple alias ag=rg should do the trick!

        1. 11

          This raises an interesting point and one that I think browsers could address. Much like they carefully craft information display to help people recognize being on genuine/secure sites, one can imagine a browser feature where if the link text contains a link and that link doesn’t match the href, a warning is displayed.
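          A minimal sketch of such a check (hypothetical: the function name and the hostname heuristic are mine, and a real browser would also need to handle redirects and more exotic URL forms):

```python
from urllib.parse import urlparse

def _host(url: str) -> str:
    """Hostname of a URL, tolerating a missing scheme and a leading www."""
    host = urlparse(url if "://" in url else "https://" + url).hostname or ""
    return host.removeprefix("www.")

def link_text_mismatch(text: str, href: str) -> bool:
    """Warn only when the visible anchor text itself looks like a URL
    but points at a different host than the actual href."""
    text = text.strip()
    if not text.startswith(("http://", "https://", "www.")):
        return False  # plain words as link text: nothing to compare
    return _host(text) != _host(href)

print(link_text_mismatch("https://example.com/login", "https://evil.test/x"))  # True
print(link_text_mismatch("click here", "https://evil.test/x"))                 # False
```

          As the replies below note, even this simple version has false-positive traps (redirectors, typos), so any real warning UI would have to be gentle.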

          1. 3

            It’s honestly somewhat shocking that with the amount of thought that goes into other browser security features, this one was overlooked. This also feels particularly dangerous in HTML email.

            1. 1

              I like the idea. I’m somewhat concerned about false positives with URLs that don’t match but redirect or even with typos. So the warning has to take that into account and shouldn’t be too scary. Or you need to perform a request to detect redirects or implement a heuristic etc, but all that is prone to mistakes.

              This is likely more fun on mobile which doesn’t have a mouseover (do people still check that?)

              1. 1

                Facebook’s tracking links would break. Probably a good thing, but I’m not sure everyone will agree.

                1. 1

                  It could be presented in a way that just makes that more obvious and gives the user the choice to follow the displayed link or the href. Then users could chose to be tracked or not. Definitely with you that it’s a good thing and that not everyone will think that—clearly the people who make trackers think they’re okay at least.

                  1. 1

                    Same with the links in google search results, twitter (t.co), slack, …

                1. 1

                  Ambiguous parsing, undocumented escaping. Hidden complexity. My favorite topics and (imho) the shared root cause for various security bugs.

                  1. 1

                    An opinion in good company.

                    1. 1
                  1. 4

                    Well, so FLoC sucks, as we already knew for months. Luckily it doesn’t need to be implemented in Firefox, so we are good?

                    1. 4

                      We are openly opposed. So..yeah :)

                      1. 1

                        If you want de-FLoC’ed Chrome, there is ungoogled-chromium on the desktop, and Bromite on Android, both of which I use. Although I like the Firefox plugin ecosystem, I also find Firefox buggy, so it’s nice to have a Chromium-based alternative that is stripped of spyware and telemetry.

                        1. 1

                          I’m using ungoogled-chromium for testing, yes. Although I disagree with a lot of what Mozilla does, I’ll keep using it until they pry it from my cold dead hands, or something better (= more secure, better privacy) comes along that doesn’t strengthen Google’s power over the web.

                      1. 4

                        I fondly remember the times when your python2 script could start with # encoding: rot-13 and your source code could be completely rot13 encoded. More on my blog at https://frederik-braun.com/rot13-encoding-in-python.html
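                        The source-encoding trick is Python 2 only, but the codec itself survives in Python 3, so the effect is easy to demo (a sketch of mine, not taken from the linked post):

```python
import codecs

# rot_13 is still a registered codec in Python 3 (str -> str),
# it just can't be used as a source-file encoding anymore.
scrambled = codecs.encode("print('hello, world')", "rot_13")
print(scrambled)                          # cevag('uryyb, jbeyq')
exec(codecs.decode(scrambled, "rot_13"))  # prints: hello, world
```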

                        1. 1

                          A reasonable corollary is that software whose source code is actively withheld is probably exploitable. This is the security-oriented argument for Free Software, leading to Linus’ Law.

                          1. 2

                            More likely than public source code, maybe. But there aren’t quite as many eyeballs as we’d like to believe. See also the recent Linux kernel “hypocrite commits” incident.

                            1. 1

                              That’s fine for software which isn’t formally proven correct. All unproven software has to be either trivial to prove correct or have some outstanding bugs, by definition; as a result, most software is buggy. Thus, it’s enough to simply point out that Free Software can have fewer bugs, not that it has no bugs.

                          1. 34

                            For those who don’t know, Tavis Ormandy is one of Google’s infosec research people, and has been involved in discovering many high-profile vulnerabilities. People should be taking his observations here seriously.

                            1. 6

                              On the other hand, they work for Google. Google makes a web browser, and their most profitable business is leveraging user data to sell advertisements and ???

                              So this person saying “trust me, use your browser’s password manager” kinda seems like a conflict of interests…

                              1. 7

                                As someone who works on security for the competition, I can say this is far-fetched.

                                The (current) security model of web extensions is really bad and Tavis is giving really good security advice by suggesting the browser built-in password manager, mainly and firmly for the reason that it doesn’t have to subject itself to a huge class of attacks. It’s somewhat hidden behind Tavis’ main evidence link.

                                There are easily dozens of password managers implemented as extensions which have fallen into the very same trap. Really, the web extensions API is an unfortunately brittle foundation to build upon. The original author of Adblock Plus, Wladimir Palant, has written countless articles about password managers being broken due to this poor design (and poor implementations); take a look at the password-manager category on his blog.

                              2. 8

                                If this had been written anonymously, should I take it less seriously??

                                1. 27

                                  Attribution is one among many factors in considering what is worthy of your attention. Since the Internet is infinitely large in comparison to our available attention, most people only look at what they have somehow been guided to look at.

                                  In this case, you are looking at a website which is dedicated to pointing out other interesting websites, for some very weird values of “interesting”. So your first factor is “did it appear here?”. Another factor, probably weaker, is “how many votes did it get?”. A stronger factor would be “Does the subject line interest me?”.

                                  All that gets you to look at it, but the actual quality is still debatable. Since the subject is security, it is relevant that the author has some significant accomplishments in that area. If the subject was solo trekking the length of the Erie Canal, information security expertise might be much less useful in evaluating whether the author knows whereof they write.

                                  This whole process includes discovery and distribution in many different and repeated ways.

                                  So, yes, if this had been written anonymously, it is less likely that you would be seeing it and you might consider it less thoroughly. There are a lot of wackos out on the Internet.

                                  1. 2

                                    I think you’re raising a serious epistemological question, and I’m happy to chat about that over on IRC (regulars to the site should feel free to direct-message me), but I don’t think it makes sense to have the conversation here. It would be a lot of controversy and not really topical for the site.

                                    1. 2

                                      I agree that this is ad hominem but then again, ad hominem is only dangerous because it’s a good heuristic.

                                      Still, I agree that putting this here as an argument seems silly.

                                  1. 2

                                    Looks like it hasn’t been shipped in the official Fedora 34 repo yet

                                    1. 4

                                      Most, if not all, Linux distros compile their own builds. Depending on the distro, it can sometimes take a full week :(

                                      1. 3

                                        Free QA for Mozilla! And it makes sure distributions are actually providing only free software that can compile from source…

                                    1. 3

                                      HTML5 is a result of truly marvelous engineering. There is a debate about whether it is easier or harder to write a web browser compared to the past, but it is definitely, approximately infinitely easier to parse HTML now.

                                      1. 3

                                        Is there a debate? I can’t imagine arguing for the “easier” side. It’s gone from “a reasonably smart person can do it” to “it takes a billion-dollar organization, and actually, most of the billion-dollar organizations have given up.”

                                        1. 2

                                          I think you’re talking more about writing a web rendering engine? i.e. with CSS (multiple versions) and the semantic meaning of HTML (displaying forms, etc.) Not to mention JavaScript, WebAssembly, images, video, etc.

                                          Parsing HTML is just one part of that. While the spec is large, it’s at least fully specified and I’d say makes it easier to write that part of a rendering engine.
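                                          A tiny illustration of that tolerance (Python’s stdlib parser, which is not a full HTML5 tree builder but shows the never-reject-input behaviour the spec pins down precisely):

```python
from html.parser import HTMLParser

# Sketch: sloppy, unclosed markup still parses deterministically,
# with no error raised.
class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.events = []
    def handle_starttag(self, tag, attrs):
        self.events.append(("open", tag))
    def handle_data(self, data):
        if data.strip():
            self.events.append(("text", data.strip()))

p = TagLogger()
p.feed("<p>hello <b>world")   # two unclosed tags, no exception
p.close()
print(p.events)
```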

                                          1. 1

                                            I was replying to the part of the comment that said “there is a debate about whether it is easier or harder to write a web browser compared to the past”.

                                            1. 1

                                              Ah I see, I missed that. Yes I definitely agree it’s way harder to write a web browser!

                                              At least the problem of parsing is cleaned up as much as it can be though :) Too bad you can’t ditch the XHTML stuff since I think there must be old content still out there, and will be forever …

                                        2. 2

                                          The amount of oddities and backwards compatibility hacks to support existing content are a bit saddening, but at least they are specified :)

                                          1. 1

                                            Yes I like using HTML5 directly (or via Markdown which supports HTML), but I think a lot of people don’t like to use it that way.

                                            Before HTML5, there was a strong motivation to use layers of generators, tools, and frameworks on top of the web. It was less compatible, especially CSS, and it was useful to have that abstracted away.

                                            But then HTML got good, but the habit of covering it up remained!

                                            One of my favorite things about HTML5 is that I can just type <!DOCTYPE html> and not all the nonsense with HTML4 :)

                                            Also it’s funny that I remember people complaining that HTML5 won’t be ready until 2022, but now we’re almost there :) https://www.wired.com/2008/09/html-5-won-t-be-ready-until-2022dot-yes-2022dot/

                                            It’s turned out very well in the long term and the effort should be appreciated more! They really cleaned up a big mess.

                                          1. 5

                                            This is a really great intro to the concepts of regular languages and context-free grammars! But I need to disappoint you:

                                            HTML is not a context-free grammar. There is a lot of contextual parsing. If you want to gaze into the abyss, I recommend folks take a look at the parsing bits from the HTML spec.

                                            But for the love of everything that is good in this world, don’t expose those parsers to “wild” HTML from the web. I mean, if you control the web page and the templating and you know it’s strict in what kind of HTML trees it is emitting, then sure. Go ahead and use your shotgun parser.

                                            1. 2

                                              This is how my first Linux router in the late 1990s worked-ish. There was a Linux distribution called fli4l.

                                              1. 4

                                                Working on converting Lark from Python to Javascript, by writing my own transpiler.

                                                Just as an anecdote, it’s able to automatically convert this:

                                                x = [(x,y)
                                                  for x in range(10)
                                                  for y in range(x)
                                                  if x+y==5]

                                                To this:

                                                let x = [].concat(
                                                  ...range(10).map((x) =>
                                                    range(x)
                                                      .filter((y) => x + y == 5)
                                                      .map((y) => [x, y])
                                                  )
                                                );
                                                1. 3

                                                  Do you walk the ast? Which language is the transpiler implemented in? Looks like a really fun project! If you’re willing to bother with a basic test suite and a license, I’d be interested in helping. Just for fun. Totally understand if you don’t :-)

                                                  1. 2

                                                    Hi! I’m writing it in Python, using Lark to parse the Python code in a way that preserves the comments and keeps track of line ranges. I walk the resulting ast, once to make structural transformations, and another to convert the code. I’ll probably add some analysis layer soon, because the code needs to change according to the types.

                                                    I put it here on github: https://github.com/erezsh/py2js

                                                    But because I was writing it to myself, the code is a little hacky. But if needed, I’m willing to refactor it a little to make it easier to work with (for example, to use AST classes instead of Tree instances).

                                                    Let me know if you’re still interested in helping :)

                                                    1. 1

                                                      Are you not using the python ast module? Do you emit a JS ast or directly source code?

                                                      1. 3

                                                        I started using the ast module, but decided against it because it throws away the comments.

                                                        The code currently emits JS source code directly.
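                                                        The comment-dropping behaviour of the stdlib ast module is easy to demonstrate (Python 3.9+ for ast.unparse):

```python
import ast

src = "x = 1  # important comment\n"
tree = ast.parse(src)
print(ast.unparse(tree))  # x = 1  -- the comment never makes it into the AST
```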

                                                1. 3

                                                  I actually chuckled. This is seriously a self-aware wolf moment. This guy is so very, very close to realizing how to fix the problem but is skipping probably the most important step.

                                                  He mentioned single-core performance at least 5 times in the article but completely left out multi-core performance. Even the Moto E, the low-end phone of 2020, has 8 cores to play with. Granted, some of them are going to be efficiency/low-performance cores, but 8 cores nonetheless. Utilize them. WebWorkers exist. Please use them. Here’s a library that makes it really easy to use them as well.


                                                  Here’s a video that probably not enough people have watched.

                                                  The main thread is overworked and underpaid

                                                  1. 7

                                                    The article claims the main performance cost is in DOM manipulation and Workers do not have access to the DOM.

                                                    1. 1

                                                      if you’re referring to this:

                                                      Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent. React-based apps are tied to the DOM and only the main thread can touch the DOM therefore React-based apps are tied to single-core performance.

                                                      That’s pretty weak. Any javascript application that modifies the DOM is tied to the DOM. It doesn’t mean the logic is tied to the DOM. If it is then at least in react’s case it means that developers thought rendering then re-rendering then rendering again was a good application of user’s computing resources.

                                                      I haven’t seen their code and I don’t know what kinds of constraints they’re being forced to program under but react isn’t their bottleneck. Wasteful logic is.

                                                      1. 2

                                                        The author’s point is that a top of the line iPhone can mask this “wasteful logic”. Unless developers test their websites on other, less expensive, devices they may not realize that they need to implement some of your suggested fixes to achieve acceptable performance.

                                                        1. 1

                                                          You’re right. I missed the point when I read into how he was framing the problem. Excuse me.

                                                    2. 3
                                                      1. iPhones also have many cores, so that’s not going to bridge the gap.

                                                      2. From TFA: “Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent.”

                                                      3. See also: Amdahl’s Law

                                                      1. 1

                                                        Gonna fight you on all of these points because they’re a bunch of malarkey.

                                                        iPhones also have many cores, so that’s not going to bridge the gap.

                                                        If you shift the entire performance window up then everyone benefits.

                                                        From TFA: “Browsers and JavaScript engines continue to offload more work from the main thread but the main thread is still where the majority of the execution time is spent.”

                                                        This shouldn’t be the case. If it is, then people are screwing around and running computations in render() when everything should be handled before that. Async components should alleviate this and React Suspense should help a bit with this, but right now I use Redux Saga to move any significant computation to a web worker. React should only be hit when you’re hydrating and diffing. React is not your bottleneck. If anything it should have a near-constant overhead for each operation. You should also note that the exact quote you chose does not mention React but all of JavaScript. Come on.

                                                        See also: Amdahl’s Law

                                                        I did. Did you see how much performance you gain by going to 8 identical cores? It’s 6x. Would you consider that to be better than only having 1x performance? I would.

                                                        1. 1

                                                          Hmm.. if you’re going to call what I write “malarkey”, it would help if you actually had a point. You do not.

                                                          If you shift the entire performance window up then everyone benefits.

                                                          Yep, that’s what I said. If everyone benefits, it doesn’t close the gap. You seem to be arguing against something that nobody said.

                                                          Amdahl’s law … 8 identical cores? 6x speedup

                                                          Er, you seem to not understand Amdahl’s Law, because it is parameterised, and does not yield a number without that parameter, which is the portion of the work that is parallelizable. So saying Amdahl’s law says you get a speedup of 6x from 8 cores is not just wrong, it is nonsensical.

                                                          Second, you now write “8 identical cores”. I think we already covered that phones do not have 8 high performance cores, but at most something like 4/4 high/efficiency cores.

                                                          Finally, even for a near-perfectly parallelisable task, that kind of speedup compared to a non-parallel implementation is exceedingly rare, because parallelising has overhead, and on a phone other resources such as memory bandwidth typically can’t handle many cores going full tilt.
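                                                          For concreteness, here is Amdahl’s law with its parallel-fraction parameter made explicit (my own illustrative numbers, not from the thread):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n cores when a fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Reaching ~6x on 8 cores already assumes 95% of the work parallelizes:
print(amdahl_speedup(0.95, 8))  # ~5.93
# With a more typical 50% parallel fraction, 8 cores buy well under 2x:
print(amdahl_speedup(0.50, 8))  # ~1.78
```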

                                                          but the main thread is still where the majority of the execution time is spent

                                                          This shouldn’t be the case … React …

                                                          The article doesn’t talk about what you think should be the case, but about what is the case, and it’s not exclusively about React.

                                                    1. 8

                                                      I found this really interesting in the context of a blog post by a coworker of mine, about the “why don’t you just….” sentences in this world. http://exple.tive.org/blarg/2019/04/17/why-dont-you-just/

                                                      1. 5

                                                        “Why don’t you just?” is poor phrasing.

                                                        Assuming people are dumb is a poor starting point.

                                                        And the best premise is that there are reasons why your “much simpler” solution wouldn’t work.

                                                        All that said, things being much more complicated than they need to be is a real phenomenon, as is being unaware of much better alternatives. Providing a great simplification, or accepting one that’s offered to you, can be massively productive. I wouldn’t want to throw out the baby with the bathwater.

                                                        1. 3

                                                          “why don’t you just” and “I would simply” have the same energy, IMHO.

                                                          1. 1

                                                            What phrasing do you like?

                                                            1. 5

                                                              I normally go with ‘my naive approach would be to do X, what am I missing?’. It frames the suggestion properly as ‘I have no idea what I’m talking about, there’s a small chance that you missed the obvious thing because you’re too close to the problem, but a bigger chance that I don’t understand something important and I’d like to learn’.

                                                          2. 2

                                                            Back in another company when we were actually doing estimation with planning poker, a dear coworker of mine adopted the practice that whenever someone said “we just need to…” he instantly increased the number. It worked wonders: people either clarified why it was easy or we got to a better estimate.

                                                            1. 2

                                                              I want that whole post as an inspirational poster on my office wall. (A very big poster, I guess.)

                                                            1. 6

                                                              Any measurements on resource usage? My tab count on Firefox quite often exceeds 100, and with current architecture, the resident memory cost of a processes make up only a small portion of total memory usage. With Chrome, high memory usage is one of the main problems I encountered, so I fear that with this model Firefox might turn that way as well.

                                                              1. 3

                                                                I’ve also got about 100 tabs open, divided over 4 windows. Currently Firefox only has 13 processes running, of which 5 are named “FirefoxCP Isolated Web Content” on macOS, so it looks like it only creates processes for recently used tabs.

                                                                1. 2

                                                                  IIUC there is a process per domain (ie. not a process per tab, like in Chrome), which should cut down some memory use. But yeah I don’t know much of the specifics.

                                                                  1. 3

                                                                    it’s process per site

                                                                    1. 2

                                                                      Because the OS can just swap out the idle ones?

                                                                  1. 4

                                                                    Wow, pleasantly surprised that Project Fission can now be used outside of Firefox nightly. I have been waiting for years for some privilege separation in Firefox. Hopefully this will be a good start to get to the security level of Chrome as discussed here and here.

                                                                    1. 4

                                                                      The OpenBSD discussion is from 2018 and has long since become outdated. The other article was discussed (and partially debunked) on lobste.rs a few weeks ago at https://lobste.rs/s/eys36p/firefox_chromium :)

                                                                      (Obviously, you’re all allowed to perceive my opinion as heavily biased. I work on Firefox Security.)

                                                                      1. 2

                                                                        I purposely didn’t link to the discussion on lobste.rs because unfortunately that discussion didn’t focus on privilege separation. I think most points from both the OpenBSD discussion and the other one still stand for Firefox as long as Fission is not enabled (and it’s not enabled by default just yet). Although there is some separation of privileges in Firefox’s internal architecture, it never came close to the level at which Chrome separates privileges and uses this to protect one site from another by extensively using security features from the operating system. I think once Fission is enabled by default, the groundwork is ready to get seriously started on hardening each individual process and getting on par with Chrome w.r.t. software security. Only then would I say the story can be “debunked”. ;-) Or to repeat Theo de Raadt’s words from 2018:

                                                                        It is my understanding that firefox says they are catching [up], but all I see is lipstick on a pig. It now has multiple processes. That does not mean it has a well-designed privsep model. Landry’s attempt to add pledge to firefox, shows that pretty much all processes need all pledges.

                                                                        1. 9

                                                                          Landry’s attempt to add pledge to firefox, shows that pretty much all processes need all pledges

                                                                          And my fully working patch to add Capsicum to Firefox shows that this is a problem with the pledge model, not with Firefox ;)

                                                                          never came close to the level in which Chrome separates privileges and uses this to protect one site from another by extensively using security features from the Operating System

                                                                          On the mainstream OSes, Firefox literally uses the same Chromium sandbox code to use these platform features, btw.

                                                                          1. 5

                                                                            Do you know how it handles setting up the IPC channels? Chromium made a spectacularly bad design choice here: service endpoint capabilities are random identifiers, so any sandboxed process that can guess the name of an endpoint can connect to it. This means that any information leak from a privileged process (including cache side channels from prime-and-probe attacks by the renderer process) has the potential to be a sandbox escape. Every other compartmentalised program that I’ve seen uses file descriptors / handles as channel endpoints and either sets them up at process creation time or has a broker that authorises them based on identity or other attestations.

                                                                            1. 2

                                                                              Firefox does currently use legacy Chromium IPC as a transport. Are you referring to this Windows channel ID thing? This mechanism is not used on posix, it’s all SCM_RIGHTS. That’s really the only usage of randomness I could find in ipc/chromium. Well, also this macOS mach port process launching thing.
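                                                                              For readers unfamiliar with SCM_RIGHTS: the endpoint is a real kernel object handed over an existing Unix socket, not a guessable name. A minimal sketch (POSIX only; Python 3.9+ wraps the raw cmsg calls as send_fds/recv_fds):

```python
import os
import socket

# Pass a pipe's read end from one endpoint to another over a Unix
# socket pair -- the receiver gets an actual descriptor, not a name.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
read_end, write_end = os.pipe()

socket.send_fds(parent, [b"fd"], [read_end])       # hand over the read end
msg, fds, flags, addr = socket.recv_fds(child, 16, 1)

os.write(write_end, b"hello")
print(os.read(fds[0], 5))  # b'hello' -- the received fd is the same pipe
```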

                                                                            2. 3

                                                                              Very interesting! I hope to have some time at some point to look into it in more depth. :)

                                                                      1. 5

                                                                        That is because a long-term standard must be extensible. Thus the fields cannot be fixed and it must be possible to add new types.

                                                                        The new wisdom is “have one joint and keep it well oiled”. Last I heard, PGP was still having problems with the message authentication hash upgrade.

                                                                        All you have to do is directly transcribe these cases into a case statement and you are done. There is no real thinking required.

                                                                        This is only correct if out of bounds reads are not something you need to worry about, which is not correct in PGP’s favorite implementation language.

                                                                        Since this is from the public key section of the PGP published standard it might be interesting to compare with the same sort of thing from the Signal Protocol

                                                                        Wait, we were talking about packet parsing and you criticize Signal for not providing “identity information outside the program”? That’s a massive non sequitur, and I can hardly call GPG’s “human-only output” policy proper access (see, e.g., the GDPR’s position on “machine-readable” data exports). Additionally, from what I can tell this was an intentional design decision, so it’s misleading to describe it as “missing” from the standard.

                                                                        RSA with 2048 bit keys is a perfectly reasonable and conservative default.

                                                                        Uh, what year is it? 1024-bit RSA is considered too short now; 2048 bits is a perfectly reasonable minimum, not conservative at all.

                                                                        Most people prefer to keep their identity indefinitely.

                                                                        This statement demonstrates a blatant misunderstanding of the complaint.

                                                                        But apparently the author does not understand why authenticated encryption is not used or even required for the things that PGP is normally used for.

                                                                        I briefly read the author’s citation; the argument can be paraphrased as “attacks (such as oracle attacks) on the lack of authenticated encryption are impossible, because you can’t automate PGP processing.” This is a bold claim for a piece of software, frankly.

                                                                        PGP’s signatures are cryptographically strong and are indeed intended to both prove that a particular entity sent a message and that message was delivered as sent. They work very well in practice and are normally used. […] and that is the third time the fact that MDCs can be stripped is mentioned […] Quite excessive when talking about a feature that is not normally used in PGP.

                                                                        Again, this is a staggering indictment of the author’s lack of understanding of the criticisms. This could only be an acceptable response if we pretend that signature forgery via ciphertext malleability isn’t a thing that exists.

                                                                        Of course, if you allege that your software is impossible to automate, then you can try to pretend that IND-CCA2 doesn’t matter.

                                                                        16 whole bits of security.

                                                                        this statement is quite misleading […] guessing with only a one in 2^16 (65536) chance

                                                                        More reading comprehension problems. The rebuttal is reiterating the complaint in a different tone of voice. But of course, our software is impossible to automate, so 16 bits of security is plenty.
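To put the “16 whole bits of security” in numbers, here is a back-of-the-envelope sketch in plain Python (nothing PGP-specific; the queries-per-second figure is an illustrative assumption, not a measurement):

```python
bits = 16
space = 2 ** bits      # 65536 possible check values
p = 1 / space          # chance that one random forgery passes the check

# Assuming an automated decryption oracle that answers ~1000 queries
# per second, exhausting the entire space takes about a minute:
seconds = space / 1000
print(space, p, seconds)
```

Which is exactly why “you can’t automate PGP” is doing so much load-bearing work in the rebuttal: the moment an oracle exists, 16 bits is no barrier at all.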

                                                                        There is no such “reference PGP implementation”.

                                                                        This is simply being obtuse. OpenPGP and GPG are practically indistinguishable.

                                                                        If they were distinguishable, the community would be forced to recognize that the spec is insecure as soon as you automate usage, which brings in decryption oracles and enabling all the attacks that Walzer has dismissed as irrelevant.

                                                                        it forces the user to comprehend the existence of a key in a way where it is intuitively obvious

                                                                        Let me just slide this one against this earlier paragraph…

                                                                        They are usability studies. They are deliberately designed to be as difficult as possible (you hide all the documentation) in order to get results.

                                                                        Sure would be nice if you had documented evidence of that intuitiveness, but it seems the author has a grudge against usability studies.

                                                                        EFAIL turned out to be a kind of a media hoax against PGP

                                                                        EFAIL was a vulnerability in a program attempting to automate usage of PGP. This failed for predictable reasons, because you can’t automate usage of PGP, therefore any vulnerability from any such attempted automation is not the fault of PGP.

                                                                        1. 2

                                                                          Yes, I have a grudge in general against software that makes it impossible for the user to automate it.

                                                                          1. 2

                                                                            EFAIL was just programs stupidly trying to decrypt everything without parsing or considering message limits. It wasn’t because you can’t automate usage of PGP (you totally can), but because the programs doing it were written in incredibly stupid ways, dare I say without any security considerations.

                                                                            1. 2

                                                                              It’s not just the implementation that was erroneous. The EFAIL work identified various protocol bugs in GPG, too.

                                                                          1. 1

                                                                            I realize this was submitted a couple of years ago, but it seems like a super interesting target for very cheap and low-profile VMs (e.g., on a Raspberry Pi using Firecracker).

                                                                            1. 1

                                                                              I, ahem, need this hardware. For… stuff. :)

                                                                              written by someone who works on a software project that takes ~8 minutes to build on a 28-core machine.

                                                                              1. 2

                                                                                Rust was in reality just the idea of mozillians to slow down compile times to have more playtime

                                                                              1. 16

                                                                                When writing a small scale personal project, why bother with the headache of setting up yet another MySQL database, or running MongoDB in a docker container? Yes you could use SQLite, and I do find myself doing this on occasion. But SQLite is just a file, as is YAML, so where are the gains?

                                                                                Easier parsing, consistent locking, performance?

                                                                                1. 4

                                                                                  That was pretty tongue-in-cheek, but maybe it didn’t come across that way. SQLite is definitely more than just a file.

                                                                                  • Not sure what you mean by easier parsing; do you mean parsing in code?
                                                                                  • I’ve found locking file access with a mutex sufficient.
                                                                                  • Performance, agreed, is better with SQLite, but for this use case it wouldn’t realistically make a massive difference. At least, I’m not seeing anything in my apps’ performance that makes me think switching to SQLite would give me any discernible boost.
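For concreteness, here is roughly what “more than just a file” buys you. A minimal sketch using Python’s bundled sqlite3 module (the apps in question are Go, but Python keeps the example self-contained); the `notes` table is made up for illustration:

```python
import sqlite3

# Structured queries, atomic transactions, and crash recovery come with
# the library; a YAML file plus a mutex provides none of these.
conn = sqlite3.connect(":memory:")   # or a file path for a hobby app
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT NOT NULL)")

with conn:  # transaction: committed atomically, rolled back on error
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))

rows = conn.execute("SELECT id, body FROM notes").fetchall()
print(rows)
```

The transaction behavior is the part a hand-rolled file-plus-mutex scheme tends to get wrong: a crash mid-write leaves the YAML file torn, while SQLite rolls back cleanly.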
                                                                                  1. 13

                                                                                    I’d rather write SQL and get structured results back than parse out an ambiguous YAML file.

                                                                                    1. 12

                                                                                      But then it would be called OrGS, and where’s the fun in that?

                                                                                      1. 4

                                                                                        I see your distinction, but if it’s your application that is writing the YAML file, it’s not necessarily ambiguous. Likewise, with libraries that parse YAML into a KV map, it is structured (by varying degrees of the meaning of “structured”).

                                                                                        Yes, there is nothing stopping you from going in and adding data/fields to a YAML file on disk, but realistically, for most hobby projects, the same is true of the “grown-up” DB technology in use.

                                                                                        1. 2

                                                                                          Point taken. A write/read round-trip by your own application can still be ambiguous if you’re not quoting things properly (newlines, quotes, colons, etc.)
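A quick sketch of the quoting hazard. JSON stands in here because Python ships a serializer for it in the stdlib (PyYAML does not come bundled), but the round-trip discipline is the same for any YAML emitter:

```python
import json

# A value containing exactly the characters warned about above:
value = 'line one\nline "two": has a colon'

# Naive hand-rolled emission: the newline silently starts a second
# document line, and the unquoted colon can be misread as a new key.
naive = "note: " + value

# A real serializer quotes and escapes everything, so the round-trip
# recovers the value exactly:
encoded = json.dumps({"note": value})
assert json.loads(encoded)["note"] == value
```

The point being: “my own application wrote it” is only safe if that application emits through a proper serializer rather than string concatenation.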

                                                                                          1. 1

                                                                                            100% true; it’s not infallible, and yes, it lacks some of the protections that things like prepared SQL statements give you.

                                                                                        2. 3

                                                                                          This is not really YAML-specific, but I’ve found that document-style databases (MongoDB, a YAML file) tend to be much easier to handle than expected when using a typed language (Go in the OP’s case). You still need to be somewhat careful, but you get some sort of schema at the object level at least, which is good enough to prevent inconsistent data and other mistakes from sneaking in in most cases.

                                                                                          With untyped languages, though, I’ve found document stores to be a recipe for disaster no matter how careful you are, and there I would agree with your comment 10/10.
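A minimal illustration of that object-level schema, sketched in Python with a dataclass standing in for whatever struct/type the app defines (`Note` and its fields are made up for the example):

```python
from dataclasses import dataclass

# The typed shape: a malformed document fails loudly at decode time
# instead of sneaking inconsistent data into storage.
@dataclass
class Note:
    name: str
    author: str

good = Note(**{"name": "foo", "author": "Bob"})   # well-formed document

try:
    Note(**{"name": "bar"})                       # missing "author"
    rejected = False
except TypeError:
    rejected = True                               # caught at the boundary
```

In an untyped language the second document would flow straight into the store, and the missing field would only surface later, somewhere far from the bug.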

                                                                                        3. 2

                                                                                          I’ve read and written a moderate amount of YAML, yet I feel like I will never properly learn to distinguish between

                                                                                            - foo
                                                                                            - bar
                                                                                            - baz

                                                                                          and

                                                                                            - foo
                                                                                              bar
                                                                                              baz

                                                                                          This might be a personal failing, but as always, one likes to blame technology in those cases ;) Therefore, if I can, I will choose a language that makes that distinction clear.

                                                                                          1. 2

                                                                                            I suppose if you look at it with a more realistic example it makes more sense:

                                                                                            - name: foo
                                                                                            - name: bar
                                                                                            - name: baz


                                                                                            - name: foo
                                                                                              author: Bob
                                                                                            - name: bar
                                                                                              author: John
                                                                                            - name: baz
                                                                                              author: Jeff

                                                                                            If you put both of those into a YAML-to-JSON converter, you can see the difference.

                                                                                          2. 1

                                                                                            They actually have a page on those benefits. One I liked was how SQLite is more resilient during crashes. The developers’ focus on reliability means both the fast path and the error handling work better than 99+% of what developers will write themselves. The first link claims it’s faster than file I/O, too, in some cases. The library itself also loads fast.

                                                                                            SQLite is probably the best default for storage if speed, reliability, and portability are concerns.