1. 4

    What am I supposed to be seeing here that I am not?
    The page is just a perspective-skewed screenshot and the message

    Replay is an early experiment. We’ll let you know on @FirefoxDevTools when it’s ready for input.

    I can make some guesses from the screenshot (the words “Paused in Recording”), but there’s no explanation of what the “early experiment” even is. There is a link to @FirefoxDevTools on Twitter, but it doesn’t mention Replay in any recent posts.

    1. 4

      yes there was more info before but it’s a debugging replay tool:

      https://web.archive.org/web/20191128111509/https://firefox-replay.com/

      1. 3

        I think someone found a page in testing and linked it here.

        To my knowledge, Firefox Replay is rr, but for the web.

        1. 1

          It’s probably a debugging tool that records what happens in the background to replay later on and dive deeper into code execution?

          1. 2

            That’s my best-guess assumption. I was wondering why this has so many upvotes for a zero-information page that isn’t even a release announcement. Since I can’t downvote (only flag, which seems wrong), figured I would ask.

            At this point I figure perhaps they changed the page since it was posted (as of writing this, 7 hours ago)?

            1. 4

              When it was posted the site had more content, yes.

              1. 1

                Yeah, there’s an archive.org link elsewhere in the thread which represents what was actually on the page at the time it was posted.

          1. 9

            I wonder how many projects requiring these trendier build systems like meson or ninja are actually using them as intended, or to the capacity they allegedly enable. Meanwhile, make is unsexy, but it’s everywhere.

            I sort of get cmake but even that boils down to a Makefile. Do front ends like that really reduce enough boilerplate to justify making them a build requirement? They haven’t for my own projects but I’m generally not building in the large.

            1. 6

              I sort of get cmake but even that boils down to a Makefile. Do front ends like that really reduce enough boilerplate to justify making them a build requirement? They haven’t for my own projects but I’m generally not building in the large.

              My experience with cmake is that it’s a worse wrapper than autotools/configure is, which is really saying something. I tried to get an i386 program to build on x86_64 and had immense trouble just communicating to gcc what flags it should use; cmake actually lacked any architecture-specific options that would have enabled me to do that.

              1. 9

                I’m not sure what you hit specifically, but cmake does provide those kinds of options, namely CMAKE_SYSTEM_NAME, CMAKE_CROSSCOMPILING, and CMAKE_SYSTEM_PROCESSOR. I found this document really helpful in my own endeavours.
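
                For example, a minimal toolchain file for forcing a 32-bit build on an x86_64 host could look like this (a sketch only; the file name and exact flags are my own assumptions, and CMAKE_<LANG>_FLAGS_INIT needs a reasonably recent CMake):

                # i686-linux.cmake (hypothetical name): force a 32-bit build
                set(CMAKE_SYSTEM_NAME Linux)        # setting this also enables CMAKE_CROSSCOMPILING
                set(CMAKE_SYSTEM_PROCESSOR i686)
                set(CMAKE_C_FLAGS_INIT "-m32")      # ask gcc to emit 32-bit code
                set(CMAKE_CXX_FLAGS_INIT "-m32")

                You’d then configure with cmake -DCMAKE_TOOLCHAIN_FILE=i686-linux.cmake ..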

                My personal experience with CMake is that it has a bear of a learning curve, but once you’ve seen some example setups and played around with different options a bit, it starts to click. Once you are more comfortable with it, it is actually quite nice. It has its rough edges, but overall I’ve found it to work pretty smoothly in practice.

                1. 1

                  Ah, thanks! I’ll keep that in mind for the next time I end up doing that.

              2. 4

                CMake isn’t really an abstraction over Makefiles; in fact, there’s plenty you can probably do in Makefiles that would be cumbersome or perhaps impossible to do purely in CMake. It’s a cross-platform build system that just uses Makefiles as one of its targets for the actual building step.

                Where CMake tends to get its use (and what it seems to be intended for) is:

                • Providing a build system for large, complex C++ (primarily) projects where lots of external dependencies exist and the project is likely to be distributed by a distribution or generally not controlled by the project itself
                • Cross platform projects that are largely maintained for platforms where Makefiles are not well supported in a way that is compatible with GNU or BSD make, or where supporting ‘traditional’ IDEs is considered a priority (i.e. Win32/MSVC).
                1. 2

                  Unfortunately, ninja is too simple for many tasks (e.g. no pattern rules), and building a wrapper around it is a more complex solution than Make.

                  Meson is too complex for simple tasks like LaTeX, data analysis, or plot generation. Make is great for these use cases, but a few improvements are still possible:

                  • Hide output unless a command fails.
                  • Parallel by default.
                  • Do not use mtime alone, but also size, inode number, file mode, and owner uid/gid. Apenwarr explained more.
                  • Automatic “clean” and “help” commands (a hand-rolled approximation of “help” is sketched below).
                  • Changes in Makefiles implicitly trigger rebuilds where necessary.
                  • Continuous mode where it watches for file system changes.
                  • Proper solution for multi-file output.
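
                  For the “help” item, a common hand-rolled workaround today (my own sketch, not part of the wishlist) is to grep the Makefile for specially marked targets:

                  .PHONY: help
                  # note: the recipe line must start with a tab
                  help:  ## list available targets
                  	@grep -E '^[a-zA-Z0-9_-]+:.*## ' $(MAKEFILE_LIST) | awk -F':.*## ' '{printf "%-20s %s\n", $$1, $$2}'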
                  1. 2

                    CMake can generate Makefiles, but I would hardly say its value simply boils down to what Make can provide. All of the value provided by CMake is in what goes into generating those files. It also benefits from being able to generate other build-system files, e.g. for Ninja or Visual Studio projects. A lot of complexity comes in when you need to adapt to locating dependencies in different places, detecting/auto-configuring compiler flags, generating code or other files based on configuration/target, conditionally linking things in, etc., and doing those things in plain Make is a huge pain in the ass, or not even possible in practice.

                    I wouldn’t say CMake is perfect, not by a long shot, but it works pretty well in my experience.

                  1. 5

                    I’m excited to see what comes of this. The BSDs are some of my favorite free software projects but at the moment Linux is the more practical choice for a consumer desktop IMO.

                    Alpine, Void, Devuan, Adelie, Salix, and Project Trident are a cluster of exciting projects trying to achieve grandparent-usability while, to a degree, avoiding unnecessary complexity. It will be interesting to see how these projects evolve, and whether something emerges that is merely a fringe project in the arena of desktop operating systems, rather than a barely existing one.

                    1. 4

                      What new things does Devuan bring in terms of user accessibility? I’ve not heard it brought up in this context before.

                      1. 1

                        It doesn’t; did it seem like I said it does?

                      2. 3

                        While I do think BSDs can be suited for day-to-day desktop usage, I don’t think Trident (and previously TrueOS/PC-BSD) ever really has been. There have been a lot of shifts in different directions, but when one listens in on the BSD community, people there never seemed all that happy about it being sold as FreeBSD on the desktop.

                        And I don’t say that lightly. I really think they also did a lot of cool stuff. From OpenRC on FreeBSD to sndio for everything, they smoothed out some rough edges (even when it’s not 100% their work). Their community is also a great example of being both friendly and welcoming.

                        I also have to admit that I don’t really know the reason. A commonly mentioned one is too broad a focus, though. Converting everything to OpenRC and creating a new desktop environment are not bad ideas per se, but maybe a bit much when the overall goal is to essentially be a desktop-oriented FreeBSD “distribution” (as in being clearly based off FreeBSD, but with some not really typical differences, like OpenRC).

                        Nevertheless, I don’t think it’s wrong to say that most development on FreeBSD currently happens outside the realm of desktop usage. However, you can still just get a better-supported laptop and use it for everyday tasks without running into more issues than on, let’s say, Ubuntu or Arch Linux. As usual, there are strengths and weaknesses, which can change quite quickly. In the end, the only way to find out whether it suits you is to try it out.

                        1. 2

                          How is Alpine trying to achieve “grandparent-usability”? It’s pretty much an embedded/server distro. And Devuan is a project explicitly for those who don’t want evolution.

                          1. 1

                            You’re right about Alpine. I don’t see how the point about evolution is relevant.

                        1. 4

                          It includes primarily proprietary as well as open-source components.

                          What is the reason behind the proprietary components? Why not all open source?

                          1. 4

                            As near as I can tell, it’s because the developers see it as a proprietary OS with a few GPL components, notably the desktop, and that’s only GPL because of a bitter disagreement between one of the developers and the company that owned it at the time.

                            1. 4

                              The Amiga community was always seemingly hostile to having source available. That’s changed a lot over the years, but even today there’s a huge amount of Amiga software that is going to perish when the developers give up on it.

                              I think part of it is that the Amiga kept a very active cottage industry of small one- and two-person development teams since it never had really strong major software house support*.

                              This lack of major software house support meant that (a) a large proportion of Amiga developers made (or wanted to make) a living off Amiga software, with relatively easy entry to the market and an enthusiastic captive audience, and (b) software piracy was rampant.

                              * The Amiga had some support from major game developers like EA and LucasArts but only for a few golden years. It never had huge support from the really big companies, with only a few releases if any.

                              (This is all just speculation from someone who’s watched the Amiga for 30+ years.)

                              1. 1

                                What is the reason behind the proprietary components? Why not all open source?

                                While you can use and test the OS for up to 30 minutes for free before you need to reboot, there is a license cost.

                                Even if you do not put a value on your own time, developing a niche operating system is a costly endeavour, since you need to buy stockpiles of hardware to be able to test drivers. I am afraid Apple and AMD are not sending out free hardware samples so MorphOS can be made to support them.

                                1. 2

                                  There’s an argument to be made that making it free software would expand both the appeal and the ability to support a wide variety of platforms far beyond what a small team on a niche hardware platform is able to (like Haiku, for example).

                                  1. 1

                                    There’s an argument to be made that making it free software would expand both the appeal and the ability to support a wide variety of platforms far beyond what a small team on a niche hardware platform is able to (like Haiku, for example).

                                    Well, AROS is an example of an open source operating system that was inspired by the Commodore Amiga platform. I do not think I am being unfair in saying that it is not in a better position than MorphOS, despite using an open source license.

                                    Technically, AROS does support more processor architectures but the alternative ports to ARM and PowerPC are generally less complete, less stable, and have access to a smaller pool of third-party software compared to the Intel-compatible versions.

                                    Being focused on a limited number of hardware devices and processor architectures is not necessarily a downside; it can help you use your resources more effectively in order to provide a more polished end-user experience. (Think Apple macOS vs. Microsoft Windows.)

                                    In short, just making something open source does not magically make everything better. Even if you are generally an open source proponent, I think it is healthy to be able to acknowledge that.

                                    1. 1

                                      Being focused on a limited number of devices does make it easier to polish how it runs on that device for sure, but it also ties the software to the intrinsic appeal of the underlying hardware.

                                      I don’t think it’s really possible to gauge what making the source of MorphOS freely available would do to its development or focus, and whether that would be productive for its continued development, but interest and historical documentation would almost certainly benefit. Which of those counts as “success”, though, is very subjective.

                                      1. 1

                                        making something open source does not magically make everything better

                                        “better” on its own does not mean much. Yeah, it does not inherently make it better in terms of quality, but it does in terms of other things. The freedom to modify the software, the long term preservation aspect, these things are extremely valuable.

                                        1. 1

                                          The freedom to modify the software, the long term preservation aspect, these things are extremely valuable.

                                          Being able to enter and use your neighbour’s car is also technically “valuable” if you get my point. Having potential value does not equal indisputable entitlements.

                                          More to the point, MorphOS already runs in qemu so the “preservation aspect” is pretty much covered.

                                1. 8

                                  It’s probably actually significantly more than 1.2 million lines, since println! is a macro. It sounds like there are issues with extremely large function bodies regardless, perhaps related to the number of scopes created in a single function body? I’m not sure it’s worth optimising for the sake of a meme.

                                  1. 10

                                    It doesn’t use println!()

                                    1. 1

                                      I missed that (the screenshot where it shows that wasn’t readily visible in the tweet embed on my phone). I don’t really have a suggestion for why that happened - would certainly be interesting to find out!

                                    2. 5

                                      This was my first thought too.

                                      fn main() {
                                          println!("Hello, world!");
                                      }
                                      

                                      is expanded to

                                      #![feature(prelude_import)]
                                      #![no_std]
                                      #[prelude_import]
                                      use ::std::prelude::v1::*;
                                      #[macro_use]
                                      extern crate std as std;
                                      fn main() {
                                          {
                                              ::std::io::_print(::core::fmt::Arguments::new_v1(&["Hello, world!\n"],
                                                                                               &match () {
                                                                                                    () => [],
                                                                                                }));
                                          };
                                      }
                                      

                                      The interesting part is the println!(...) macro turning into its own lexical block with two function calls, a pointer to a slice containing a str literal, and then that useless match statement that would be used if there were any string formatting going on.

                                      So that 1_200_000 * 'println!("hello world");'.length == 28.8mb codebase that we thought we were compiling is actually expanded into about 120mb of minimized code. That’s quite a bit for just the textual representation of a program!

                                      Beyond just expanding and churning through those lines, I wouldn’t be surprised if there’s a lot of wasted work by the borrow and lifetime checkers as well. Each expansion has its own block, so it should have to set up and free the variables every single time. I wonder if that means it can’t lift the &["Hello, world!\n"] into a constant itself, though I assume LLVM would eventually.

                                      So it’s definitely doing a ton of work… likely more than it’s given credit for. But I’m curious whether it’s worth optimizing for. Code generation isn’t that uncommon, but I would hope in this case it’d be a little more clever than just printing a line 1.2 million times. This is the first thing I’ve seen that made me check whether loop rolling (as opposed to loop unrolling) exists. It turns out LLVM has a pass that does it in some cases. I also found some old posts hinting that some older compilers would reroll loops after early optimization passes to help later passes recognize when autovectorization should occur. A sufficiently advanced compiler could determine that this is basically an unrolled loop… but that seems like it’d be a super uncommon scenario where the extra work would rarely pay off. Especially if LLVM would eventually do its thing and optimize it anyway, which it may or may not do.
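
                                      For comparison, the non-meme version of the program is a single tiny body that the front end only has to check once (just a sketch of the obvious alternative, not what the tweet’s generator produced):

                                      fn main() {
                                          // one loop instead of 1.2 million expanded println! blocks:
                                          // a single scope to set up, borrow-check, and optimize
                                          for _ in 0..1_200_000 {
                                              println!("Hello, world!");
                                          }
                                      }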

                                      1. 2

                                        Yeah, the scope analysis part is my suspect. It’s perhaps an interesting demonstration that Go was designed around compiler implementation first, while Rust is designed more around what a conceptual compiler could optimise a chosen code pattern into in order to avoid runtime penalties, with optimisations to the compiler implementation coming later. Sort of like a production-focused research language.

                                    1. 4

                                      So the 90% probability answer is that this was thrown together quickly without time to put in all the complicated stuff. Google can afford it.

                                      I’m still wondering if maybe part of the reason might be a deliberate decision to skip the clever stuff, because all the good solutions involve a longer-lived TCP connection, and longer-lived TCP connections tend to fall afoul of shitty ISPs with buggy equipment, whereas all but the absolute shittiest ISPs in the world will probably not accidentally block dumb polling.

                                      1. 1

                                        I think 90% is probably a lowball. This is absolutely how you would implement this if you were a small team (or even a single engineer) trying to do this strictly within the existing bounds of a product. Engineering time is never cheap and I imagine that this feature was more about bringing search results up to parity with what the Search app already does.

                                      1. 8

                                        I’m pretty suspicious of the actual privacy improvements Pale Moon affords over a mainstream browser reconfigured to disable telemetry and “bonus features” and coupled with privacy extensions. Pale Moon is itself uncommon enough to be a fingerprint all of its own (not to mention that I’m not very confident in the resilience to tracking hijinks, or to exploitation, of a Gecko fork maintained by an extremely low number of people; taming it with a large group of devs seems hard enough).

                                        1. 1

                                          Isn’t it the other way around? If something is unpopular it’s easier to fingerprint, not harder.

                                          1. 1

                                            That’s what I was trying to say, sorry if it wasn’t clear!

                                        1. 11

                                          I wanted to see what Zig is like, so I ported a small C program of mine. It was very easy to do a ~1:1 port, and then zig build-exe --release-small --single-threaded --strip gave me a ~10kB, zero-dependency executable, which I really appreciated. I spent some time making it nicer, and learning more about the language, and I have to say I’m smitten. I think this could eventually replace C for me, which is not the impression I got from Rust (which is fine; it just seems to have different priorities).

                                          I updated the Nix package to 0.5.0 so I could give it a go, but unfortunately LLVM 9 is not in unstable yet, so I haven’t been able to try it. I’m looking forward to it.

                                          1. 6

                                            I tend to think of Rust as able to replace C libraries and C applications at that sort-of systems-applications boundary, sort of like a restrained C++, and similarly if you restrict the feature set of Rust you can use it for systems or embedded. Zig to me feels like an actual replacement for C code rather than for applications of C code, and not for its own sake; it brings real advantages over some things which are extremely error-prone or just painful in C, my personal favourites being sane handling of tagged unions, the ability to metaprogram without breaking out to code generation tools, and the handling of errors.

                                            1. 4

                                              In my mind, Rust is much more a replacement for C++, than it is for C.

                                              1. 2

                                                I think that’s a fair assessment to make. You definitely can make the Rust compiler target the things that C targets, but the necessary stripping back of the language makes it resemble the language and ecosystem that higher-level Rust users have rather less. Kind of like IOKit C++, I guess (though I’ve not looked into that much).

                                                The side I was more alluding to was application-level and userland libraries that have traditionally been written in C because it was the most portable language for writing native-code libraries with a C ABI (C++ being reliably portable across platforms is a fairly recent development as far as its history goes, and arguably its image regarding this is still tarnished now), and especially ones that involve protocol or file-format parsing, which has been easy to get catastrophically wrong in C in security terms.

                                              2. 2

                                                similarly if you restrict the feature set of Rust you can use it for systems or embedded.

                                                I wonder what restriction that might be?

                                                Rust can be easily used on those systems with a full feature set. What’s missing is the std (and with that, all allocating types).
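
                                                For instance, a library crate can drop std and keep the whole language (my own minimal sketch; a final binary would additionally need a panic handler and an entry point):

                                                #![no_std]

                                                // core is always available: slices, iterators, Option, etc.
                                                pub fn sum(xs: &[u32]) -> u32 {
                                                    xs.iter().sum()
                                                }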

                                            1. 0

                                              We could have used the C code. SQLite has a reputation for high quality; it has rocked for many years and keeps rocking.

                                              But using the C code did not fit the Go and Rust culture of RIIR/RIIG (rewriting everything from scratch) to be as pure as possible.

                                              1. 11

                                                This is actually the opposite of what’s happening here, though. The Go stdlib has a built-in package for interacting with SQL databases in an abstract way; this is a C SQLite binding which sacrifices the ability to use that abstraction in order to offer an API much closer to the C library’s semantics (largely by removing the failure surfaces in the abstraction layer that don’t exist for a non-networked database) while preserving the overall design. There’s no reimplementation of SQLite going on here.

                                                1. 3

                                                  Indeed, not a full rewrite. I was worried about replicating the efforts of SQLite. I read the articles without looking at the code, and there was no mention of using the existing SQLite code base.

                                                  The source makes this entirely obvious: https://github.com/crawshaw/sqlite

                                                  This is a very neat project.

                                                2. 5

                                                  It’s unclear what point you’re trying to make here.

                                                  1. 3

                                                    I was mistaken on what the author did. It is not a rewrite of the entire SQLite libraries.

                                                  2. 2

                                                        The main reason to implement sqlite in native Go is surely independence from C. With a language-native implementation you’re able to build static binaries, for example (assuming you’re not using any other C dependencies).

                                                        You’d be unable to do that with dependencies on the sqlite C libs.

                                                        But I agree that sqlite has a reputation for rock-solid code for a reason. A subset of sqlite in a new implementation might still be doable, and probably good enough for a lot of people.

                                                    1. 3

                                                          Oh, it looks like it’s still using the sqlite C source, so that’s that.

                                                      1. 2

                                                        Sorry if I provoked confusion.

                                                      2. 2

                                                            You’d be unable to do that with dependencies on the sqlite C libs.

                                                            It is easy to statically link C libs. To the linker, there’s no difference between C and Go (or C++ or D or whatever).

                                                    1. 5

                                                      I know this is largely off topic, but can we not encourage “everyone on call” as something a business should aspire to, or even consider a reasonable or fair decision? No matter how great your unicorn observability and monitoring system is, being on call is still an imposition on social and personal affairs, even if the call part happens seldom. Plus, if that side of the business is so great, then it shouldn’t be necessary anyway.

                                                      1. 4

                                                        I agree with this in principle (I’m a developer on an on-call rotation and it sucks to have to be ready to handle an alert at any time) but I think in the context of the rest of that section of the article, it’s not quite as bad as all that; the point, I think, was more about ownership of the alerting and monitoring capability. I interpreted it less as, “All employees must carry pagers 24x7,” and more as, “Developers don’t get to throw their crappy code over the wall to Ops with no concern about whether or not it spews pointless alerts all night.”

                                                        Also, when I’ve worked at online-service companies where developers weren’t formally on call, we were still on the escalation path for any issue the on-call people couldn’t figure out. It’s true there wasn’t nearly as much impact on personal life as being on call, but there was still some, e.g., we coordinated our vacations and weekend getaways such that all the engineers for service X weren’t unreachable at the same time. Maybe it’s not so horrible to say out loud that part of the job is to accept calls from SREs in the middle of the night when your stuff is blowing up.

                                                        1. 2

                                                          The author seems to heavily identify with her work and only her work. If all you can talk and think about all day is your work, you may develop the false idea that this is the norm. Good for her if she has fun doing it, but she must also learn to live with the fact that others are not like that.

                                                        1. 4

                                                          const maxInt = std.math.maxInt;

                                                          Just curious, is this considered idiomatic zig? It looks similar to a construct in other languages where a shorter alias is made for a commonly used function. However, that’s not the case here: it’s only used once. Is that just the way import notation works, or is it superfluous and could be replaced with const max_stack_size = std.math.maxInt(u8);?

                                                          1. 8

                                                            It’s superfluous and could be replaced with const max_stack_size = std.math.maxInt(u8);. It probably ended up that way because the code used to be @maxValue(u8) (a builtin function) however this builtin function was removed from the language in favor of the std lib function std.math.maxInt. The simplest way to update all the std lib code was to put const maxInt = std.math.maxInt; at the top and replace @maxValue(u8) with maxInt(u8).

                                                            1. 5

                                                              The design of Zig is such that types, functions, modules, and constants are essentially all considered first-class values, and this is just the idiomatic way to do aliases; whether you do it is a matter of personal preference. That said, it seems pretty common for Zig code I’ve seen not to use qualified lookups - maybe just to give it a more ‘C-like’ feel?

                                                            1. 9

                                                              Except that, sadly, the tooling is pretty weak.

                                                              I like the language but I don’t like the experience of developing with it. I’ve grown soft after a couple of years of Rust development.

                                                              1. 6

                                                                  I don’t know where Rust thinks it’s going, though. I can’t even update to the latest version of Bitwarden_rs, because it requires a rust-nightly compiler which won’t build on FreeBSD; it dies with an invalid memory reference:

                                                                error: process didn’t exit successfully: /wrkdirs/usr/ports/lang/rust-nightly/work/rustc-nightly-src/build/bootstrap/debug/rustc -vV (signal: 11, SIGSEGV: invalid memory reference)

                                                                1. 5

                                                                  That’s Bitwarden_rs’s fault for using nightly, imo.

                                                                    Looks like this bug has already been reported with bitwarden-rs though: https://github.com/dani-garcia/bitwarden_rs/issues/593

                                                                  1. 3

                                                                    Every non-trivial rust program I’ve tried to use so far requires a nightly compiler. This ecosystem is just a trash fire.

                                                                    1. 8

                                                                        I’ve got an 80k+ LOC Rust codebase I work with at work that doesn’t use nightly. In fact, we’ve never needed nightly… The program runs on production workloads just fine.

                                                                      1. 12

                                                                          I’m using Rust professionally and I don’t even have a nightly compiler installed on my computer. Almost all large Rust programs I see don’t require nightly compilers. Those that do tend to be OS kernels, with the exception of a few web apps like this project that use Rocket, a web framework (with many good alternatives, I might add, not to disparage Rocket) that requires syntax extensions and loudly states it requires nightly Rust (and is apparently planning to target stable Rust next release). People who use nightly are generally already writing something experimental which is explicitly not production-quality, or they’re writing something that’s working towards being ready for an upcoming feature (which lets the ecosystem develop in step with language changes, instead of waiting months or years for trickle-down as is common in other languages), and they’re targeting what is explicitly an alpha-quality compiler to do so.

                                                                        1. 3

                                                                          People who use nightly are […] and they’re targeting what is explicitly an alpha-quality compiler to do so.

                                                                          Or they just want to write benchmarks ;)

                                                                          1. 7

                                                                            criterion.rs is a better harness and works on stable Rust. I’ve been slowly migrating all my crate benchmarks to it. The only advantage of the built-in harness (other than compile times) is the convenient #[bench] annotation. But criterion will get that too, once custom test framework harnesses are stabilized. See: https://bheisler.github.io/post/criterion-rs-0-3/
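
                                                                              A minimal Criterion benchmark looks something like this (a sketch following Criterion’s documented setup; fibonacci is just a placeholder workload):

                                                                              use criterion::{black_box, criterion_group, criterion_main, Criterion};

                                                                              // placeholder workload to measure
                                                                              fn fibonacci(n: u64) -> u64 {
                                                                                  match n {
                                                                                      0 | 1 => 1,
                                                                                      _ => fibonacci(n - 1) + fibonacci(n - 2),
                                                                                  }
                                                                              }

                                                                              fn bench_fib(c: &mut Criterion) {
                                                                                  // black_box stops the compiler from constant-folding the call away
                                                                                  c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
                                                                              }

                                                                              criterion_group!(benches, bench_fib);
                                                                              criterion_main!(benches);

                                                                              It lives under benches/ with harness = false set on the bench target in Cargo.toml, so Cargo doesn’t inject the default test harness.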

                                                                            1. 6

                                                                              …and don’t want to use excellent third party tools that function on stable, like Criterion. ;)

                                                                              I admit, the fact that Criterion works great on stable and the built in cargo bench doesn’t IS pretty dismal.

                                                                  1. 5

                                                                      maybe unpopular: i can read and write pretty much anything more easily than yaml. especially things which are braced, like json, or which have similar open/close tags, like… apache config?

                                                                      even more unpopular: and i can use tabs for indentation with these formats! the character invented for indenting things! my editor from before i was born can display tabs with a width i like!

                                                                      back to topic: i think a small tcl would be a real good local optimum for configuration files. cf. Tcl the Misunderstood.

                                                                    1. 6

                                                                      Just to be pedantic, weren’t tabs intended for tabulation, rather than code indentation?

                                                                      1. 2

                                                                        TSV best SV.

                                                                        1. 4

                                                                            Let me introduce you to my friends in the ASCII table: 0x1c-0x1f, the file, group, record, and unit separators. Woefully underused.

                                                                          1. 2

                                                                            Woefully underused.

                                                                            For good reason. They are poorly supported by almost all tooling, and they don’t rigorously solve any additional problems over tabs (or any other delimiter).

                                                                            1. 1

                                                                                That sounds pretty cool. While not superior to TSV due to tooling, it’s still very nice to have explicit characters toward this end. It would be cool to have something like \fs \gs \rs \us as a way to type them, even if just supported by an editor extension.

                                                                                I will say, in response to @burntsushi, that I think they do solve certain problems over tabs, most notably the ability to specify many tables in a file, and many “files” within a file. It also means one could have tabs, whitespace, etcetera without needing to escape them. If I could open up a single document that transparently represents many text files as many text files in my editor, that would be a pretty cool feature. Similarly, I do think being able to represent many “sheets” in a csv is probably very useful.

                                                                                What would this format be called? If it doesn’t already have a name, I think “.dsv” is probably not a bad one; I’m also fond of “.gru” or “.gruf”. Sounds like a fun weekend project to make an extension that handles these gracefully and has a “save as csv/tsv/etc”.
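
                                                                                As a sketch of the idea (my own illustration, assuming the payload is guaranteed not to contain 0x1c-0x1f), two “sheets” of records can nest in one string:

                                                                                // three of the ASCII separator characters (0x1d-0x1f)
                                                                                const GS: char = '\u{1d}'; // group separator: between "sheets"
                                                                                const RS: char = '\u{1e}'; // record separator: between rows
                                                                                const US: char = '\u{1f}'; // unit separator: between fields

                                                                                fn main() {
                                                                                    // two "sheets", each a header row plus one data row
                                                                                    let doc = format!(
                                                                                        "name{u}age{r}alice{u}30{g}city{u}pop{r}berlin{u}3.7M",
                                                                                        u = US, r = RS, g = GS
                                                                                    );
                                                                                    for (i, sheet) in doc.split(GS).enumerate() {
                                                                                        println!("sheet {}:", i);
                                                                                        for record in sheet.split(RS) {
                                                                                            let fields: Vec<&str> = record.split(US).collect();
                                                                                            println!("  {:?}", fields);
                                                                                        }
                                                                                    }
                                                                                }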

                                                                              1. 1

                                                                                It also means one could have tabs, whitespace, etcetera without needing to escape it.

                                                                                Right, but you then need to escape whatever delimiter you’re using, unless you ban it from being used. That’s kind of what I was getting at.

                                                                                1. 1

                                                                                    I think the whole idea of the file, group, record, and unit separator characters is to serve as delimiters. The common use of comma and tab as punctuation characters means that we would have to escape them regularly. It’s much easier to ban the use of characters that aren’t used for any language construct.

                                                                                  1. 1

                                                                                    Yes, I understand the concept behind them. If you can really get away with banning them completely, then sure, they can solve a problem nicely that tabs/commas probably can’t (modulo the fact that tooling sucks for them). But personally, I’d be surprised if you could get away with such a ban. If you have to implement escaping even in some cases, then it pretty much drags everything down with it. Escaping is pretty much the only reason why CSV parsing is as complex as it is, and more than that, tends to put a cap on performance (depending on your parsing architecture).

                                                                                    1. 1

                                                                                        Why would you be surprised that I could “get away with such a ban”? We can do so by edict, and if you can’t handle it, then use some other format. If you are storing \fs \gs \rs \us then it’s not the format for you. If you strip out these control codes, then this is indeed the format for you.

                                                                                        It looks like there’s already a precedent for how to type these:

                                                                                      ctrl-\ File
                                                                                      ctrl-] Group
                                                                                      ctrl-^ Record
                                                                                      ctrl-_ Unit
                                                                                      
                                                                                      1. 1

                                                                                        Because we are collectively (including myself) very bad at saying “No,” especially when someone comes to you with a valid use case.

                                                                                        I’m not really interested in discussing this further. Bottom line is if you can get away with that ban, then great. Your point stands. There’s really no point in debating why I personally would be surprised if you could.

                                                                                        1. 1

                                                                                            The statement just sounded like you had an example case in mind. I was hoping you were holding off on talking about it because it takes effort to describe. Nebulous fears are valid; often there are unknowns. I just thought you had something concrete in mind.

                                                                                          1. 1

                                                                                              Ah, gotcha. Yes, mostly nebulous at this point.

                                                                      1. 3

                                                                        What does this mean? 4.2.1 is a very old version of GCC (apparently released in 2007), what purpose did it still serve? Does this have any impact on current GCC versions?

                                                                        1. 7

                                                                          GCC 4.2.1 is still used as the compiler for some tier-2/tier-3 CPU architectures like mips or sparc64. These will need to migrate to Clang, to external GCC from ports, or be removed from the tree.

                                                                          1. 2

                                                                            In addition, I believe the reason they stuck with 4.2 in particular is because it’s the last version of GCC where the whole project is licensed under GPLv2, with the additional restrictions of v3 being considered unacceptable. (Note in the Android Honeycomb source tree that 4.3 and 4.4 have a COPYING3 which 4.2 does not)

                                                                        1. 5

                                                                            Is there any source for this? We’re using Layer’s API at my workplace and we’ve not heard anything about this. Their website says nothing, and there are no sources except this article. Even on Twitter, the only person posting about it is the author of this article (who works for a direct competitor).

                                                                          1. 3

                                                                            Did you ever find out whether this is legit?

                                                                            One thing to note is that the author says Layer is only shutting down Layer Chat. But the way the email is phrased, it sounds like all of Layer is being shut down. It’s a strange post.

                                                                            1. 3

                                                                              No, I’ve not found anything to corroborate this so far.

                                                                              1. 1

                                                                                  Author here, I received a notification about this from one of our customers who switched from Layer to Stream. I’ve since had a conversation with Layer to confirm.

                                                                                  Yup, it’s real.

                                                                              2. 1

                                                                                  Separate follow-up update: we had this confirmed to us today - apparently they missed us before.

                                                                              3. 1

                                                                                Author here, I received a notification about this from one of our customers who switched from Layer to Stream. I’ve since had a conversation with Layer to confirm.

                                                                              1. 6

                                                                                  We’re currently transitioning to https://notion.so as our single source of truth on these things. It’s not perfect, but its discovery and collaboration are much better than Confluence or GDocs, from experience. This is from the perspective of a small engineering team, though.

                                                                                1. 1

                                                                                  That looks somewhat neat, but it looks like it has no Linux support and no web interface?

                                                                                  1. 3

                                                                                    Ah, I can see how you would get that impression from the landing page, but it does in fact have a web interface and I mostly just use it from the browser so that should solve the Linux support issue as well.

                                                                                    1. 2

                                                                                      It’s fully usable through a web browser, but there’s no Linux client, no.

                                                                                      1. 1

                                                                                        The native clients are basically web views. The web interface is what I use most of the time.

                                                                                        1. 1

                                                                                          It’s definitely web based and in browser, the desktop apps are just Electron wrappers. I think there may be an unofficial Linux Electron wrapper too (if you are willing to add to your collection)

                                                                                        2. 1

                                                                                          Thanks for the link. Definitely going to check it out!