1. 7

    App stores are broken package managers.

    1. 2

      Even if we assume that this is true, there still isn’t a package manager in the shared Linux platform. If you want to be a part of the Linux conversation at all, you need to produce at least two: RPM and DEB. And if you want to get more than a mere plurality of Linux installers then you need several more (Pacman, Guix, Nix, Emerge…)

      For someone that wants to distribute a package that works, this really sucks. I don’t like the Play Store’s lack of dependency management, either, but when I produce an APK, I appreciate knowing that I don’t need to produce anything else. The ideal package manager format is the same one everyone else uses.

      1.  

        For someone that wants to distribute a package that works, this really sucks.

        On the other hand, for someone who just wants to use an existing program and not have it break six months down the line, it’s fantastic.

        1.  

          I don’t understand what this has to do with having several widespread packaging formats. Having software that does “not break six months down the line” requires only that:

          • The package be OSS, or, at least, the distributor possess a copy of the source code and the legal right to do necessary integration patching.

          • The package be buildable without an internet connection, using only files that the distro has.

          • Any online services the package relies upon be available six months down the line, either because it’s not a networked application anyway, the organization behind its runtime dependencies is stable (the NTP Pool, the DNS Root, the various TLS CAs, detectportal.firefox.com), or because it relies on no mothership (BitTorrent, mDNS).

          I love the idea of distro-curated, 100%-OSS, repositories. In fact, I’d go farther and say that I like the way they’re implemented better than the app stores run by Google, Apple, and Microsoft. The fact that they are incompatible with each other, however, just seems like unnecessary friction.

      2. 1

        What do you mean?

      1. 9

        At least you aren’t using Chrome.

        1. 19

          Technically, Chrome uses you.

          1. 8

            It’s really more of a symbiote than a parasite…

        1. 10

          C++ is the only language I’ve worked in, where it is completely normal (and often preferred such as in the games industry) to completely ignore the standard library. It’s also the only language I’ve worked in where the standard library is not written in anything resembling a style you’d find nearly any normal code base written in.

          We have a committee with 300+ members. It seems that essentially every member has a feature or two that they’d like to get into the language, and many have several.

            This is the problem. I have to study this language constantly to stay up-to-date and know all the ins and outs of its bizarre rules, and in that time I could have done so much more with my life.

          If we are not careful, C++ can still fail

          It’s already failing. I believe it’s pretty much dead on the table, with current projects keeping it alive. I take a lot of flak from a lot of people I know for still being the “C++” person. I love the power of the language, but can’t honestly recommend anyone use it over any other one. C++11 was a great step in the right direction, but the burden of more and more complexity from pet features while not helping me deal with the real complexity issues (#include’s, build systems, usability issues with the standard library) has me really struggling to keep wanting to put in the effort.

          1. 2

            C++ is the only language I’ve worked in, where it is completely normal (and often preferred such as in the games industry) to completely ignore the standard library. It’s also the only language I’ve worked in where the standard library is not written in anything resembling a style you’d find nearly any normal code base written in.

            This hasn’t been my experience. Other than one place with ancient code that avoided templates, I’ve never worked on a C++ codebase that didn’t use the standard library. I think a few high profile talks at C++ cons popularized the idea that the STL was too slow, but for most use cases it’s not really true.

            1. 10

              Gamedev reasons to not use the STL include:

              • Doesn’t play nice with custom allocators
              • Insanely bad APIs. C++ committee people love the “everything is an iterator” model, but it’s just crap to use
              • Insanely bad compile times
              • Insanely bad debug perf
              • Insanely bad compile error messages
              • Insane algorithm choices. e.g. C++11 RNG stuff
              • Insane implementations
              • Spec enforced bad runtime perf. e.g. std::unordered_map is specced to be bad so all the implementations have to be bad too
              • Certain older consoles had “variable” implementation quality
              • It’s only hundreds to low thousands of lines of mostly trivial code to write your own array/string/hashtable/etc., so why not (a minimal sketch follows below)
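
              Concretely, the last point looks something like this (a minimal POD-only sketch with made-up names; a real one grows reserve(), remove(), and so on):

              #include <cassert>
              #include <cstdlib>

              // Minimal POD-only growable array: no allocator machinery, no
              // exceptions, no iterator-first API. Grows by doubling via realloc.
              template <class T>
              struct Array {
                  T*          data = nullptr;
                  std::size_t len  = 0;
                  std::size_t cap  = 0;

                  void push(const T& v) {
                      if (len == cap) {
                          cap  = cap ? cap * 2 : 16;
                          data = static_cast<T*>(std::realloc(data, cap * sizeof(T)));
                          assert(data && "out of memory");
                      }
                      data[len++] = v;
                  }
                  T& operator[](std::size_t i) { assert(i < len); return data[i]; }
                  ~Array() { std::free(data); }
              };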
              1. 1

                Doesn’t play nice with custom allocators

                What do you mean by this? Every single collection type in the STL supports custom allocators– what about their implementation is bad?

                1. 7

                  STL custom allocators are passed in as template parameters so they can’t have state, which makes them pretty much unusable.

                  1. 4

                    It also means containers with the same types but different allocators are different types.
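
                    A quick illustration of the type split (a minimal sketch; the allocator here is deliberately trivial):

                    #include <cstddef>
                    #include <vector>

                    // Deliberately trivial allocator, just enough to instantiate a vector.
                    template <class T>
                    struct ArenaAlloc {
                        using value_type = T;
                        T* allocate(std::size_t n) {
                            return static_cast<T*>(::operator new(n * sizeof(T)));
                        }
                        void deallocate(T* p, std::size_t) { ::operator delete(p); }
                    };

                    void takes_default(const std::vector<int>&) {}

                    int main() {
                        std::vector<int, ArenaAlloc<int>> arena_vec{1, 2, 3};
                        // takes_default(arena_vec);        // error: different type
                        // std::vector<int> v = arena_vec;  // error: no conversion either
                    }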

                2. 0

                  I’m not arguing that it’s a great library, just that most projects don’t bother replacing it. It might be standard practice for game developers, but they’re a small niche.

                  1. 1

                    It might be standard practice for game developers, but they’re a small niche.

                    I’d bet even money that the majority of C++ software written these days is game code.

                    1. 2

                      I’m skeptical, but would be interested to see data if you have any.

                      My own evidence is anecdotal - I’ve worked primarily with C++ for 15 years and have never worked on or even been contacted/recruited by any game companies. I haven’t even worked with anybody (to my knowledge) who’s worked on game software. On the other hand I haven’t been seeking them out, so who knows.

                      1. 1

                        Steam has a pretty good list of programs.

                        1. 1

                          Do they have any numbers on how many of those games are based on Unity? The game logic in Unity-based games is written in C#, so I’m sure that would make up a good chunk of new games code written nowadays.

                3. 8

                  In my experience, it’s a good point.

                  • When I was at EA, we started using C++ but not STL. This was a long time ago, but I’m NOT sure STL got significantly better for the console use case.
                  • At Google string is not std::string. I think the interfaces were slightly different, although I don’t remember the details. This is tens of millions of lines of code linking against a non-standard string, and it’s one of the most common types in the codebase, etc. As far as I remember, the reason was performance and maybe code bloat.
                  • The fish shell is one of the only shells written in C++, but it explicitly avoids STL. Oil might go that way too. (Right now the only thing we use is std::vector as an implementation detail)

                  STL also uses exceptions, and many codebases compile with those off, e.g. all of Google’s.

                  There was a recent CppCon talk by Herb Sutter which noted that the exceptions issue essentially bifurcates the language. He surveyed the audience, and I think at least half of the people had SOME restriction on exceptions at their company.

                  When you write all of your function signatures differently, you’re almost writing in a different language. Those two styles of code are cumbersome to bridge.
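
                  To make that concrete, here is roughly what bridging looks like at such a boundary (a sketch; render_or_throw stands in for some hypothetical exception-style library call):

                  #include <stdexcept>
                  #include <string>

                  enum class Status { Ok, ParseError };

                  // Hypothetical exception-style library function.
                  std::string render_or_throw(const std::string& tmpl) {
                      if (tmpl.empty()) throw std::runtime_error("empty template");
                      return "<html>" + tmpl + "</html>";
                  }

                  // Error-code-style wrapper: every boundary between the two styles
                  // needs a shim like this, and the signatures no longer line up.
                  Status render(const std::string& tmpl, std::string* out) {
                      try {
                          *out = render_or_throw(tmpl);
                          return Status::Ok;
                      } catch (const std::exception&) {
                          return Status::ParseError;
                      }
                  }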

                  1. 1

                    STL also uses exceptions, and many codebases compile with those off, e.g. all of Google’s.

                    I’ve never worked with C++ - what does it mean to have exceptions off? You’ve disabled the possibility of raising / catching exceptions in the language? I’ve never worked with a language where something like that would be possible. Is error handling then done through a (result, maybeErr) = someFunction(); if (maybeErr) { ...} pattern?

                    And why do people turn exceptions off? Does it mean you then can’t use any library that uses exceptions (like STL)? Doesn’t that mean lots of libraries can’t be used together with lots of other libraries?

                    1. 3

                      Yes, C++ compilers generally allow you to turn off exception support. All the code that deals with handling exceptions is discarded; any code that attempts to raise an exception instead crashes the entire process.

                      One reason people turn off exceptions is for efficiency - when an exception is thrown, all the local variables allocated up to that point must be deallocated, which means there’s a bunch of bookkeeping that always has to be done, even though it’s almost never used.

                      Another reason people turn off exceptions is for simplicity. If you’re trying to understand what a given function does, you have to understand the control-flow pattern, and more complex patterns are more difficult to understand (Cyclomatic complexity). Exceptions add an extra “exception thrown” edge to the control-flow graph from every line of code to the exit, which makes things much more complex. In garbage-collected languages like Python or Java it’s not as big a deal, because most things will get cleaned up eventually anyway, but C++ expects you to care about more details.

                      Generally C++ code designed for “no exceptions” mode will make functions return error-codes, yes. Libraries that use exceptions aren’t incompatible - they just crash the process when something goes wrong, as I said - but if a project is designed for “no exceptions” then it would probably prefer libraries designed under the same constraint.
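
                      For illustration, the error-code style usually ends up looking something like this (a sketch with invented names):

                      #include <cstdio>

                      enum class Status { Ok, NotFound, BadFormat };

                      // The return value is the error channel; results come out via pointers.
                      Status read_port(const char* path, int* out_port) {
                          std::FILE* f = std::fopen(path, "r");
                          if (!f) return Status::NotFound;
                          Status s = (std::fscanf(f, "%d", out_port) == 1) ? Status::Ok
                                                                           : Status::BadFormat;
                          std::fclose(f);
                          return s;
                      }

                      int main() {
                          int port = 0;
                          if (read_port("app.conf", &port) != Status::Ok) {
                              std::fputs("failed to read config\n", stderr);
                              return 1;
                          }
                      }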

                      1. 1

                        Thanks :)

                      2. 3

                        Yes, if you turn it off then the code just crashes. So you don’t want to use throw anywhere.

                        C and C++ error handling strategies are very diverse:

                        • errno style which is a global / thread local
                        • boolean result and out params: bool myfunc(const Input& i1, const Input& i2, Output* o1, Output* o2)
                        • Result/Maybe objects (this is more “modern C++” that depends on templates)
                        • Exceptions

                        People turn off exceptions for at least two reasons:

                        1. Because they don’t like non-local reasoning. Classic article: https://www.joelonsoftware.com/2003/10/13/13/
                        2. Because exceptions bloat the generated code with exception tables. This may matter on embedded systems.

                        To answer the other questions:

                        1. You can use STL, but not in a way that throws an exception. For example, you have to check that a key is in a map first, rather than let it raise an exception (see the sketch below).
                        2. Yes, lots of libraries can’t be used with others. But many codebases are written in a style where exceptions can be “local”. Most big codebases are a hybrid of C and C++, e.g. Firefox, Chrome, I assume Photoshop. So they use C error handling in some places but C++ in others. For example, OpenSSL and sqlite are both C, and use C-style error handling, but they’re often linked into C++ programs.
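
                        The map example from point 1, spelled out (a sketch):

                        #include <map>
                        #include <string>

                        int port_or_default(const std::map<std::string, int>& cfg) {
                            // cfg.at("port") would throw std::out_of_range on a missing key,
                            // which aborts the process when exceptions are compiled out.
                            auto it = cfg.find("port");
                            return it != cfg.end() ? it->second : 8080;
                        }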
                        1. 1

                          So, C and C++ error handling strategies are really very different.

                          errno style is only used in C, and in practice is rare outside of the standard library and other very very old libraries. It’s not used at all in C++.

                          Return codes are used in both, sure. That’s the bog-standard way to handle errors in C or in C-style C++. Result/maybe objects and exceptions obviously don’t exist in C unless you count setjmp/longjmp, which is reasonably obscure and doesn’t give you any opportunity to clean up state.

                          Result/maybe objects are not used in C++. You might be thinking of Rust. std::optional<T> isn’t a maybe<T> or a result<T, E>, it’s not for error handling but an alternative to using pointers to represent optional values. There’s nothing modern about something like this, it doesn’t ‘depend on templates’ any more than any other C++ code. Templates in C++ are pervasive. There’s nothing unusual or weird about writing or using templates. They’re as much a normal working part of the language as functions are.
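
                          Here is the pointer-replacement use I mean (a small sketch):

                          #include <optional>
                          #include <string>

                          struct Person {
                              std::string first, last;
                              std::optional<std::string> middle;  // may simply be absent; not an error
                          };

                          // Returned instead of a nullable pointer or a sentinel value.
                          std::optional<std::string> middle_name(const Person& p) { return p.middle; }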

                          1. 1

                            OK what I meant is “result/error parameterized by type”, of which there are many variations, e.g.

                            https://llvm.org/doxygen/classllvm_1_1Error.html

                            It’s more modern than error codes :) And more modern than exceptions.

                          2. 1

                            Thanks :)

                      3. 4

                        Bloomberg LP uses their own standard library, called the bde.

                        I think when Bloomberg adopted C++, there were some licensing issues surrounding the std library, so they implemented their own. I could be wrong about this – I’m not fully sure what the rationale was, but I don’t think the stl was dropped for the sake of speed. I’m a little bummed the README doesn’t have a rationale.

                        The library is largely compatible with the standard library, with a few key differences. The most important difference is that allocators are passed around as pointers, instead of being specified in templates. Here’s an example of the bsl::vector constructor taking an allocator, and here’s an example of a Bloomberg-specific bdlcc::Queue taking an allocator. All allocators implement this interface.

                        The bde also has a unique coding style (which I personally find extremely readable), and an almost absurd amount of comments. It’s one of the few C++ libraries that I think is easier to read than it was to write – which is quite an accomplishment!

                        The library also has tons of goodies in it, but isn’t as featureful as boost. My personal favorite is the use of traits to encode/decode objects to arbitrary formats, in a fashion similar to Rust’s serde.
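
                        The pointer-style allocator pattern mentioned above looks roughly like this (a simplified sketch, not the actual bde interfaces):

                        #include <cstddef>

                        // Runtime-polymorphic allocator interface, passed by pointer.
                        struct Allocator {
                            virtual void* allocate(std::size_t n) = 0;
                            virtual void  deallocate(void* p) = 0;
                            virtual ~Allocator() = default;
                        };

                        template <class T>
                        class Vector {
                            Allocator* d_alloc;  // stateful, shareable, and not part of the type
                        public:
                            explicit Vector(Allocator* a) : d_alloc(a) {}
                            // ...allocation goes through d_alloc, so Vector<int> is one type
                            // no matter which allocator instance it was constructed with.
                        };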

                    1. 4

                      The objects thing is great, but only when your programs know how to deal with those objects. “Text is the universal interface”. It’s a long-term ease-of-life solution, not a short-term ease-of-use solution.

                      1. 3

                        Indeed. .Net + COM makes that possible on Windows. It’s not possible on Unix at this time, and frankly, even if a universal object model were to be developed it would be hailed as the coming of the Antichrist by the text faithful.

                        1. 2

                          GObject or DBus are the closest analogies on the Linux side. I’m no Windows dev so I can’t be sure how similar they actually are, but it seems like something like PowerShell could be implemented on one or both of them.

                          And you’re right, any attempt to do something like that would result in wailing and gnashing of teeth the likes of which we have never heard before.

                        2. 2
                          1. In cases where it works, objects are safer. Too many one- or few-liners have broken when one of the tools in an endless pipe chain changed its output format.

                          2. The objects are always rendered as plain text on STDOUT, so every tool that cannot parse these objects natively can still parse the resulting string.

                          PowerShell is a really useful hybrid of bash and Python, figuratively speaking. It’s a bit verbose for an interactive shell, but with tab completion that’s not such a big deal. And completion in PS is rock solid.

                          1. 2

                            PS can convert streams to CSV.

                            1. 1

                              It’s actually not that hard to parse the output of programs that just speak text. Think of it this way: you probably already do this today using cut or awk or something like that (at least that’s what I used to do). Instead of figuring out how to parse that output every time I enter a command, just make one wrapper that turns the output into objects with fields with names. Then, like OP said, you can interrogate it, transform it, etc., without having to remember so much (it even has tab completion in most cases, where PowerShell can figure out what you are doing).

                              I usually end up writing a little wrapper that does this for any text-output programs I use more than 5 times a month for this reason, and so I have tab completion of arguments and can add reasonable defaults for arguments I get sick of typing all the time.

                              On the other side (pipeline stuff into a program) it already does what you would expect (just sends text to the other program) and it’s very easy to put something in a foreach to operate on each of the objects in the pipeline if you want.

                              So, PowerShell is still pretty great even at dealing with text-output programs.

                            1. 44

                              Google search results have got worse and worse over the years. It used to be that the first result of my search was nearly always what I wanted. Now Google insists on trying to be clever, and often the MOST IMPORTANT keyword in the search isn’t even there at all.

                              1. 16

                                A million times this. Google seems far less useful today than it did 10 years ago. Most of the time, I get search results for what Google thinks I’m trying to search for based on popular searches, rather than what I am actually searching for. Basically if your search query is fairly uncommon, Google won’t show you any relevant results, period.

                                There is a big gaping vacuum in the market for a search engine specifically focused on technical users looking for technical content. Who wants to start a company with me?

                                1. 6

                                  Isn’t that … almost literally what DuckDuckGo is?

                                  1. 3

                                    No, DuckDuckGo pulls from Bing, and both tend to change what you searched for to what it thinks you want instead. Even the old trick of +wanted_keyword -unwanted_keyword does not guarantee it will honor your request (they are treated as ‘suggestions’ instead of rules now), but it does help a lot.

                                  2. 3

                                    I’m sure it started with the demise of: https://www.google.com/bsd

                                    1. 3

                                      google.com/linux was literally my first contact with Google. I was attending a local Linux user group (are those still a thing?) in the city I grew up in back in South America, and someone told us we should check that out next time we were looking for Linux resources. I remember the quality and breadth of the results was mind blowing, and I immediately stopped using any other search engines. Never thought I would end up working for them about 15 years later, heh.

                                  3. 3

                                    Catering to the lowest common denominator rather than to people who actually know how to structure search queries.

                                  1. 3

                                    In my opinion, async IO makes little sense for Python; they could have instead made an async runtime and kept the code exactly the same.

                                    1. 3

                                      It’s not quite that simple, though I actually do agree with you that cooperative multitasking makes little sense for a dynamically typed, interpreted language with little concern for speed (yes, I also think Node is stupid). The problem is that changing Python to a language with a concurrent runtime will affect FFI. And Python uses FFI heavily enough that alternative implementations of the language have to implement the FFI layer the same way to be usable.

                                      1. 1

                                        Here’s a good piece on why you really want asynchronous call explicitly visible in your code: https://glyph.twistedmatrix.com/2014/02/unyielding.html

                                        (It starts out slow, but gets to the point in the end.)

                                        1. 1

                                          Following that to the logical extreme gets you effect-based programming, which is so unusable in practice that not even Haskell people try to pretend it’s ready for prime time.

                                          And certainly it’s not appropriate in Python. In theory, being able to tell where every possible effect could happen is useful. In practice, it means having to write the same function for every combination of effects, which is awful.

                                      1. 1

                                        For C projects, I use a very very simple Makefile that starts off as basically the following:

                                        ifeq ($(BUILD),release)
                                        	CFLAGS += -O3 -s -DNDEBUG
                                        else
                                        	CFLAGS += -O0 -g
                                        endif
                                        
                                        TARGET    := a.out
                                        
                                        PC_DEPS   := sdl2
                                        PC_CFLAGS := $(shell pkg-config --cflags $(PC_DEPS))
                                        PC_LIBS   := $(shell pkg-config --libs $(PC_DEPS))
                                        
                                         SRCS      := $(shell find src -name '*.c')
                                        OBJS      := $(SRCS:%=build/%.o)
                                        DEPS      := $(OBJS:%.o=%.d)
                                        
                                        INCS      := $(addprefix -I,$(shell find ./include -type d))
                                        
                                        CFLAGS    += $(PC_CFLAGS) $(INCS) -MMD -MP -pedantic -pedantic-errors -std=c89
                                        LDLIBS    += $(PC_LIBS) -lm
                                        
                                        build/$(TARGET): $(OBJS)
                                        	$(CC) $(OBJS) -o $@ $(LDFLAGS) $(LDLIBS)
                                        
                                        build/%.c.o: %.c
                                        	mkdir -p $(dir $@)
                                        	$(CC) -c $(CFLAGS) $< -o $@
                                        	@$(RM) *.d
                                        
                                        .PHONY: clean syntastic
                                        clean:
                                        	rm -f build/$(TARGET) $(OBJS) $(DEPS)
                                        
                                        syntastic:
                                        	echo $(CFLAGS) | tr ' ' '\n' > .syntastic_c_config
                                        
                                        release:
                                        	-$(MAKE) "BUILD=release"
                                        
                                        -include $(DEPS)
                                        

                                        That should be sufficient for pretty much any C project, frankly.

                                        If I need anything extra, it’s very simple to add. For example, one project needs to run a third-party assembler to generate object files, then uses a python script to convert those binary object files to hexadecimal text files that can be #included into a .c file. If the assembly file is changed, it should be handled automatically like any other dependency. For that I only need to add the following, and add $(EX_BINS) to clean:

                                         EX_SRCS   := $(shell find asm -name '*.dasm16')
                                        EX_BINS   := $(EX_SRCS:.dasm16=.bin)
                                        EX_HEXS   := $(EX_BINS:.bin=.hex)
                                        
                                        %.bin: %.dasm16
                                        	dtasm --binary $< -o $@
                                        
                                        %.hex: %.bin
                                        	python3 utils.py $< > $@
                                        

                                        Which compiler does your Makefile support?

                                        Does your Makefile support Windows at all?

                                        Is your Makefile a GNU makefile, or BSD makefile?

                                        GNU/GCC and everything that supports their most basic extensions, like -MMD -MP. A lot of GNU extensions have become the de facto standard. I have zero interest in writing a more complicated Makefile or using the overly complicated mess that is CMake to support Windows or barebones hobbyist project C compilers.

                                        Does your Makefile support out-of-source build?

                                        Yes. build/.

                                        Does your Makefile support cleaning the project from all autogenerated artifacts?

                                        Do you support a situation when the compiler/SDK will be upgraded on the system?

                                        Do you track the dependencies on the libraries installed in the system?

                                        Yes. clean.

                                        Do you support setting a Release/Debug build of your project?

                                        Does your Makefile support passing custom CFLAGS or LDFLAGS?

                                        Does your Makefile support showing the full command line used to compile a compilation unit?

                                        Yes.

                                        Are you using thirdparty libraries in your project?

                                        Yes. Add the library to PC_DEPS. pkgconfig is used, hence the name.

                                        I don’t want to use CMake, because the project is small and it’s not worth it.

                                        The example given doesn’t support third-party libraries, doesn’t support generating .syntastic_c_config so that my editor knows what cflags to compile files with to generate the correct set of syntax errors in my editor, doesn’t have any CFLAGS set, etc.

                                        What if someone just wants to use Eclipse, Xcode, Visual Studio, CodeBlocks, etc?

                                        Do they support running make? Yes if so.

                                        1. 1

                                          Do they support running make? Yes if so.

                                          Useful IDE support is not that simple.

                                          In order to perform autocomplete, the IDE needs to be able to load all of the header files that will be included from your currently open C file, so that it can figure out all functions that are in-scope.

                                          • This means it needs to be able to figure out all of the -I parameters that you are passing to your compiler, to actually find the headers.

                                          • It is also useful to know all preprocessor definitions, like the -D flags and platform-default ones like __linux (meaning the IDE needs to know the target platform), and the version of C that you’re targeting, so that it can actually parse those header files correctly.

                                          • The IDE also wants to know the full set of C source files, for jump-to-definition purposes.

                                          In other words, an IDE basically has to have access to all of the information that’s necessary to actually compile your code, in order to correctly parse your C code the same way a compiler does, in order to provide correct and complete autocomplete and jump-to-definition. If I don’t get those two features, there’s not much point in using an IDE.

                                          A naive approach would be to override PATH before running make, so that instead of calling the compiler, make would instead call a wrapper that sneaks the compiler flags over to the IDE for its use. I doubt actual IDEs do this, if only because that would prevent you from editing the project until after the code has been compiled at least once.

                                          1. 1

                                            autocomplete, jump-to-definition

                                            That’s what ctags is for.

                                            It is also useful to know all preprocessor definitions, like the -D flags and platform-default ones like __linux (meaning the IDE needs to know the target platform), and the version of C that you’re targeting, so that it can actually parse those header files correctly.

                                            Conditionally including headers is a kind of fucking awful idea that leads to people wanting to use overly complicated build systems.

                                            1. 1

                                              That’s what ctags is for.

                                              Let’s say my application wants to link to the Ruby interpreter. I would, presumably, add it to the PC_DEPS variable of the makefile. This results in -I/usr/include/x86_64-linux-gnu/ruby-2.3.0/ being passed to my compiler parameters. My code can now use #include <ruby/config.h>, and the compiler will find it. Awesome.

                                              But your makefile only makes that information available to the compiler. How’s ctags going to know to include the symbols in the ruby installation in its tagslist?

                                              Conditionally including headers is a kind of fucking awful idea that leads to people wanting to use overly complicated build systems.

                                              You got a better way to have my application use epoll on linux and kqueue on freebsd?

                                              1. 1

                                                You got a better way to have my application use epoll on linux and kqueue on freebsd?

                                                include/networking.h
                                                src/networking-linux.c
                                                src/networking-freebsd.c
                                                

                                                The whole point of header files is that they’re an interface that can have more than one implementation.

                                                But your makefile only makes that information available to the compiler. How’s ctags going to know to include the symbols in the ruby installation in its tagslist?

                                                .PHONY: ctags
                                                ctags:                                                                                                                                                     
                                                        gcc -M $(INCS) $(PC_CFLAGS) $(SRCS) | sed -e 's/[\ ]/\n/g' | \                                                                                     
                                                                sed -e '/^$$/d' -e '/\.o:[ \t]*$$/d' | \                                                                                                   
                                                                ctags -L - $(CTAGS_FLAGS)
                                                
                                                1. 1

                                                  And now you’re adding ctags support to the makefile. Which was exactly my point; either the makefile needs to support the IDE, or the IDE needs to do a lot more than “just call make.”

                                                  1. 1

                                                    ctags isn’t an IDE. It’s a universal standard, much like pkg-config. Adding a couple of lines to a Makefile to support something like ctags or pkg-config is fine to me because it’s the same for every project and it should work with every development environment, integrated into one program or not.

                                                    The real question is: does your proprietary IDE support ctags? Does it support running make? Does it support lots of other universal standards? Or does it insist on proprietary crap like Visual Studio does?

                                          2. 1

                                            I’ve no idea why you think your GNU-only UNIX-only Makefile template is better than a universal all-system 3-line CMakeLists.txt script.

                                            The example given doesn’t support third-party libraries

                                            This is a common problem in C and C++. It is handled by CMake by using add_subdirectory (concept similar to recursive make), or e.g. externalproject_add if the thirdparty library uses a different build system than cmake.

                                            doesn’t support generating .syntastic_c_config

                                            It supports generation of compile_commands.json by adding 1 line:

                                            set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
                                            

                                            It seems that syntastic supports compile_commands.json generated by CMake as well as 10 other tools… IDEs included.

                                            Does your Makefile support out-of-source build?

                                            Yes. build/.

                                              I’m not sure this counts, as it depends on generating dependency files inside the source tree and deleting them after generation.

                                            That should be sufficient for pretty much any C project, frankly.

                                            Not if you try to build multi-platform software that supports non-Unix systems.

                                            By releasing your Makefile in your projects you’re simply making it harder for people to use your project.

                                            1. 0

                                              I’ve no idea why you think your GNU-only UNIX-only Makefile template is better than a universal all-system 3-line CMakeLists.txt script.

                                              Because in practice, a GNU-only UNIX-only Makefile works on everything that isn’t Windows. And because the CMakeLists.txt script doesn’t do anything this does. It doesn’t support dependencies, multiple source files, pkg-config or anything else it needs to support.

                                              This is a common problem in C and C++. It is handled by CMake by using add_subdirectory (concept similar to recursive make), or e.g. externalproject_add if the thirdparty library uses a different build system than cmake.

                                                That requires the third-party library to support CMake: you have to have a FindXXXX.cmake file. Meanwhile, using pkg-config, which is the standard for handling dependencies, is trivial. Relying on third-party projects to support a list of random build systems is crazy, and relying on the build system and the community around it to support every library under the sun is also crazy.

                                              There’s already a standard: pkg-config. Use it.

                                              I’m not sure this counts, as it depends on generation of dependency files inside the source tree and deletes them after generation.

                                              No, it does not. The dependency files go in build/. I can show you the output of ls -lR if you’d like. You should probably learn how it works before you comment inaccurately about it.

                                              Not if you try to build multi-platform software that supports non-Unix systems.

                                                So in other words: not if you write software for Windows that for some reason can’t rely on WSL. Even Microsoft understands that Windows needs to catch up with the rest of the universe and support basic standard shit every other operating system supports.

                                              By releasing your Makefile in your projects you’re simply making it harder for people to use your project.

                                                My projects don’t support Windows anyway. Why would I care if my build script works on Windows if the bloody software it’s building doesn’t? Try to think before you comment next time.

                                              1. 1

                                                  Your lack of comprehension of this issue is troubling. You’re asserting arguments which are simply far from the truth. I’ll refrain from answering those arguments because, through the first half of your comment, literally every sentence is dead wrong. You also seem to have little idea of what Windows development looks like.

                                                It’s true that I’ve only skimmed over your build script instead of analyzing it, because I don’t want to waste time on it. It might be a surprise to you, but your script contains lots of hidden complexity in it, and lots of hidden concepts. If you fail to see this, it might suggest you simply don’t have as much experience in professional programming as you think, and I’m not saying this to try to offend you. This is the reason I prefer cleaner solutions that abstract complexity as much as possible. Tools like CMake or Meson offer many tools to do this. They’re far from perfect, but they’re a start.

                                                  The thing is, I’m not trying to convince you to switch to CMake. I’m simply not interested in what you use. I’m interested in not having to rewrite the build scripts of people who think other people should use their programming environment, and who think that introducing a mammoth dependency like WSL even resembles a sane idea. I’ll benefit if I’m able to convince a large group of people to use tools like CMake. You don’t have to be a part of this group, and that’s fine.

                                                1. 0

                                                  God you really are incredibly confused, aren’t you? You’re reminding me why I stopped commenting here a while ago: it’s full of people that can’t read. I don’t care how Windows development looks or works. I don’t do it. I don’t intend to do it. I don’t use a Makefile for Windows development, clearly and obviously. I have stated that extremely clearly, and here you are continuing to claim that it’s wrong because it doesn’t support Windows.

                                                    Well guess what, buddy, there’s no point making your Makefile more portable than the software it’s building! Not sure why you fail to understand this so spectacularly. I don’t need to know ‘what Windows development looks like’ to know that writing a perfectly portable build system for software that isn’t actually portable is pretty bloody pointless.

                                                    Literally every sentence I wrote in my entire comment was objectively correct, and if you pick one I will happily explain to you why. I’m not going to go through and defend them all unprompted, though. There’s very little point explaining why they’re correct to someone so stupid that the best thing they can come up with is simply describing them as ‘dead wrong’. If you think they’re wrong, explain to me why you think they’re wrong and it’ll be a learning experience for at least one of us. :)

                                                    If you genuinely believe that there’s value in having a build system more portable than the software it actually builds, at the cost of introducing a huge, overly complex dependency like CMake that requires lines upon lines of code to do anything remotely useful, versus just using a simple tool like make that does everything CMake does in what, 20 lines, then I don’t know what to say. You must have some serious issues. CMake does not ‘abstract complexity’ and nothing about it is ‘clean’. It litters useless shitty files all over the place, it requires every single dependency you have to also use it, it’s just a mess. A damn mess.

                                          1. 8

                                            In short, Mozilla won’t be happy with us applying patches and modifications to their trademarked language without “explicit approval”, except for non-commercial usage, so it is a freedom issue.

                                            The language is not trademarked. Only the name is. The wiki author acknowledges the option of changing the name later in this article, so they clearly know this. That makes this thing misleading.

                                            However, we would need patches to adapt all Rust-dependant applications to the modified version of Rust, since it is a programming language.

                                            IceCat, as you can read in their debranding script, still uses Firefox’s User-Agent string. This is probably allowed because it’s an interop hazard, and isn’t really user-facing.

                                            Why would analogous workarounds, like binary aliasing rustc and cargo to the rebranded ones and interpreting RUST_ env variables, not be allowed? I’m pretty sure IceCat’s extensions system still respect Firefox names in all the spots that matter. Just don’t call the package rust and don’t call yourself rust when the user passes --version. And don’t use the chainring R, of course.

                                            WARNING: I am not a lawyer.

                                            We would also need to maintain a list of nonfree cargo packages to blacklist those for your-freedom.

                                            Cargo.toml has a license field. You can fetch it, without downloading the whole package, by hitting crates.io’s API. I assume you already have a blacklist of non-Free licenses. That will cover the big issues, ensuring you don’t accidentally ship non-Free code in your distribution.

                                            I assume you want to maintain your blacklist of Free packages that are tied to non-Free servers (stuff like hubcaps that communicates with GitHub’s API), but you really don’t need that to ship Tor.

                                              If you’ve found a crate that is erroneously marked as Free (that is, where the Cargo.toml says it’s free but the LICENSE file in the repository says it’s not), then contact the crates.io team. They’re not very proactive, but they will yank packages that are at risk for getting people in legal trouble.

                                            1. 7

                                              It seems to go against most tech security narratives, but IMO, having a hard-to-crack password is of little to no real value. What’s actually valuable and important is to have a unique password for every site, no matter how trivial.

                                              It seems to me that if any attacker gets either a system interface where they can try a huge number of passwords without getting locked out, or retrieve the hash of the password to crack at their leisure, then that system is essentially already compromised beyond the hope of protecting anything on it. There’s no point in worrying whether cracking your password would take a millisecond or a month, on a Raspberry Pi or a ten thousand dollar AWS cluster.

                                              1. 2

                                                It’s not that unusual. It’s literally another xkcd comic

                                              1. 7

                                                I should point out that XKCD was popularizing a technique called diceware, repeating a proposal from 1995. He was also doing it wrong: in diceware, you were supposed to have a delimiter between words, in order to increase the difficulty for a cracker that used dictionaries as input (since, without a delimiter, the same string can be interpreted as multiple different combinations of words, multiplying the number of opportunities for the cracker to get a hit).

                                                As of 2014, the original diceware author specifically recommended moving to six words. Nothing in particular happened between 2012 and 2014 other than XKCD’s popularization of this nearly-twenty-year-old technique, so odds are that doing it with four words was not terribly secure in 2012 either.

                                                (I don’t really blame Munroe for this problem. He probably didn’t remember where he got the technique himself – just part of ambient internet security lore. His popularization probably led to marginally better passwords and less password reuse, up until registries for auto-generated passwords started becoming common features of browsers.)

                                                1. 2

                                                  without a delimiter, the same string can be interpreted as multiple different combinations of words, multiplying the number of opportunities for the cracker to get a hit

                                                  I’ve thought about this a little bit and I’m not seeing how the delimiter makes a difference. Can you provide an example?

                                                  1. 13

                                                    makedicespacespare could mean “make-dice-space-spare”, or “make-dices-paces-pare”, or a few other options. This reduces the number of possible passwords that you could have generated, reducing the entropy of your password slightly.

                                                    1. 1

                                                      Makes perfect sense. I was thinking of compound words not borrowing a letter or two from a neighboring word.

                                                    2. 3

                                                      Let’s say that somebody builds a password cracker specifically to hit diceware passwords. (Such a cracker probably exists – diceware was pretty common in the late 90s, and dictionary-based crackers like john the ripper are more elaborate versions of the same thing.) Such a cracker, if naively written, will brute force passwords the same way we generate them: pick four words, try them with a variety of delimiters and cases, & see if the thing hashes the same. With delimiters, every time it tries this, it will match if and only if all the words it chose are the same as the ones you chose – the basis for the entropy calculation. Without delimiters, there’s the possibility that two selections of words, when concatenated, will produce the same string (ex., “godisnowhere” can be read as “god is now here” or “god is nowhere”). Every such collision is an extra opportunity for the cracker to select a matching combination of words.

                                                      Let’s say you’ve got a dictionary of 2048 words, and you are using four words to produce the password, and you have no delimiters, and the cracker knows this. If there are no collisions, the cracker has a 1 in 2048 chance to pick each right word, for a 1 in 2048^4 chance to get all four right. If there are four collisions, you have a 4 in 2048^4 chance to get all four right.

                                                        Increasing that numerator enough to bring the ratio down to a reasonable number is hard for English, but if your password is in romanized Japanese or Korean (where all combinations of a relatively small set of sounds are more or less equally likely to be real words, because of sound starvation & the transliteration of weird loan words into the syllabary) or in a language like German (where compound words are often composed of large sets of regular words strung together in an arbitrary order, rather than having a distinct set of prefixes and suffixes), collisions become a lot more likely.

                                                      This matters a lot for a naive/brute force dictionary cracker. I don’t think such crude tools are used much anymore, though, & somebody more familiar with modern crypto & modern security techniques can tell you whether or not it matters for rainbow tables & other more esoteric things. My knowledge of cryptography is decidedly limited & casual.

                                                  1. 0

                                                    Error codes are easier to understand than exceptions or Result types, but they don’t carry much information. You’re trading ease of comprehension for difficulty of debugging. Exceptions carry a great deal of information but break the sequentiality of the code. Result types can carry information and preserve sequentiality, but can require a lot of “plumbing” in order to compose and handle different types of errors.

                                                    Isn’t at least this battle won? Rust showed that using Results pervasively works.
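
                                                      A minimal hand-rolled version of the idea in C++, for comparison (a sketch; C++23’s std::expected fills this niche properly):

                                                      #include <string>
                                                      #include <variant>

                                                      template <class T, class E>
                                                      struct Result {
                                                          std::variant<T, E> v;
                                                          bool ok() const { return v.index() == 0; }
                                                          T&   value()    { return std::get<0>(v); }
                                                          E&   error()    { return std::get<1>(v); }
                                                      };

                                                      Result<int, std::string> parse_port(const std::string& s) {
                                                          if (s.empty()) return {std::string("empty input")};
                                                          int n = 0;
                                                          for (char c : s) {
                                                              if (c < '0' || c > '9') return {std::string("not a number: ") + s};
                                                              n = n * 10 + (c - '0');
                                                          }
                                                          return {n};
                                                      }

                                                      The “plumbing” cost the quote mentions shows up as soon as two functions return Results with different error types and you have to compose them.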

                                                      1. 3

                                                          MLs, Haskell, partially Erlang. All of them had something like Result (in Erlang it is just a tuple, {ok, Result} or {error, Reason}). But from what I see on Wikipedia, the first language supporting ADTs was either Hope or ISWIM.

                                                    1. 1

                                                      Please turn off justified text. It looks like ass. There’s too much space between the words.

                                                      The rule of thumb for traditional typesetting is that the column of text should be wide enough to fit the whole alphabet twice, and your column isn’t quite wide enough by that rule. But since you’re not using LaTeX, but rather the dumb* text layout algorithm built into the web browser, you really need a wider column than that.

                                                      * If the LaTeX algorithm were used in a browser, the text would jump around horribly during progressive loading, so it’s probably the right call. But it does make CSS text justification rather useless, since you usually don’t want to use a wide enough column for it to look good anyway.

                                                      1. 3

                                                          This seems rather needlessly picky and nonconstructive.

                                                        1. 2

                                                            I think GP put it a bit too confrontationally, but I agree. I really liked how thoughtful the author of the article was about all kinds of things, but the text column is infuriatingly narrow, so for a better experience I would need to switch into Reader mode (which does very little to the site except make the text more readable). A shame, since the rest is very nice and could be easily improved.

                                                      1. 10

                                                        Some additional info here: https://mdsattacks.com/#ridl-ng

                                                        In reality, this is no new vulnerability. We disclosed TAA (and other issues) as part of our original RIDL submission to Intel in Sep 2018. Unfortunately, the Intel PSIRT team missed our submitted proof-of-concept exploits (PoCs), and as a result, the original MDS mitigations released in May 2019 only partially addressed RIDL.

                                                        Oof.

                                                        We are particularly worried about Intel’s mitigation plan being PoC-oriented with a complete lack of security engineering and underlying root cause analysis, with minor variations in PoCs leading to new embargoes, and these “new” vulnerabilities remaining unfixed for lengthy periods.

                                                        Double oof.

                                                          Might be time to get an AMD system sooner than I was planning. Hoping Apple does an ARM laptop or something sooner rather than later, too.

                                                        1. 2

                                                          I’m not sure if ARM or AMD would be better, really.

                                                            If I had to point at a root cause, it would be that it’s hard for software developers to detect when speculation happens. All of these systems try to do speculation invisibly, which means that when it goes wrong, it’s invisible.

                                                            A more proper fix would be an exposed-pipeline system like The Mill CPU (yes, I know it’s vaporware, but shipped-for-real exposed-pipeline systems already exist; it’s just that they aren’t being used in server or desktop workloads). You can detect when speculated loads are performed by inspecting the compiled machine code, and since the choice of when to perform speculative memory loads is entirely in software, bug fixes are just software updates.

                                                          1. 5

                                                            AMD has not implemented TSX while Intel has. TSX (Hardware Transactional Memory) is incredibly hard to get right - these vulnerabilities are the result of implementing it in hardware prematurely.

                                                            So technically AMD/ARM have it better by virtue of not releasing buggy implementations for the past three years that have to be disabled by Intel.

                                                        1. 2

                                                            One other thing you might want to do: check against a bloom filter before admitting an item to the cache. This ensures that “one-hit wonders” don’t evict more popular items.

                                                          A web object is cached only when it has been accessed at least once before, i.e., the object is cached on its second request. The use of a Bloom filter in this fashion significantly reduces the disk write workload, since one-hit-wonders are never written to the disk cache. Further, filtering out the one-hit-wonders also saves cache space on disk, increasing the cache hit rates.
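
                                                            A sketch of the admission check (a hand-rolled two-hash filter; the size and the hash mixing are arbitrary choices here):

                                                            #include <bitset>
                                                            #include <cstddef>
                                                            #include <functional>
                                                            #include <string>

                                                            // Admit an object to the cache only on its second request:
                                                            // the first request just records it in the filter.
                                                            class BloomAdmission {
                                                                static constexpr std::size_t kBits = 1u << 20;  // ~1M bits, arbitrary
                                                                std::bitset<kBits> bits_;

                                                            public:
                                                                bool seen_before(const std::string& key) {
                                                                    std::size_t h1 = std::hash<std::string>{}(key);
                                                                    std::size_t h2 = h1 * 0x9e3779b97f4a7c15ULL;  // cheap second hash
                                                                    bool seen = bits_[h1 % kBits] && bits_[h2 % kBits];
                                                                    bits_[h1 % kBits] = true;
                                                                    bits_[h2 % kBits] = true;
                                                                    return seen;  // true => seen once already => worth caching
                                                                }
                                                            };

                                                            The caller checks seen_before(key) on a cache miss and only writes the object to disk when it returns true.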

                                                          1. 23

                                                            I think Josh addresses a good point here: systemd provides features that distributions want but that other init systems actively consider non-features. That’s a classic culture clash, and it shows in the systemd debates - people either hate it or love it (FWIW, I love it). I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                                                            Still, it’s always important to have a way out of a component. But the problem here seems to be that the scope of an init system is ill-defined, and there are fundamentally different ideas about where the Linux world should move. systemd moves away from the “kernel with rather free userspace on top” model; others don’t agree with that direction.

                                                            1. 17

                                                              Since systemd is Linux-only, no one who wants to be portable to, say, BSD (which I think includes a lot of people) can depend on its features anyway.

                                                              1. 12

                                                                Which is why I wrote “Linux world” and not “Unix world”.

                                                                systemd has a vision for Linux only, and I’m okay with that. It’s a culture clash, I agree.

                                                                1. 6

                                                                  What I find so confusing - and please know this comes from a “BSD guy” and a place of admitted ignorance - is that it seems obvious that the natural conclusion of these larger processes must be that “Linux” eventually becomes something closer to a complete operating system (not a bazaar of GNU/Linux distributions). This seems to be explicitly the point.

                                                                  Not only am I making no value judgement on that outcome, but I already live in that world of coherent design and personally prefer it. I just find it baffling to watch distributions marching themselves towards it.

                                                                  1. 6

                                                                    But it does create a monoculture. What if you want to run service X on BSD or Redox or Haiku? A lot of Linux tools can be compiled on those operating systems with a little work, sometimes for free. If we start seeing hard dependencies on systemd, you’re also hurting new-OS development. Your service won’t be able to run in an Alpine Docker container either, or on distributions like Void Linux, or on default Gentoo (although Gentoo does have a systemd option; it too is in the mess of supporting both init systems).

                                                                    1. 7

                                                                      We’ve had wildly divergent Unix and Unix-like systems for years. Haiku and Mac OS have no native X11. BSDs and System V have different init systems, and OpenBSD has extended libc for security reasons. Many System V based OSes (looking at you, AIX) take POSIX to malicious-compliance levels. What do you think ./configure is supposed to do if not cope with this reality?

                                                                  2. 2

                                                                    Has anyone considered or proposed something like systemd’s feature set, but portable to more than just Linux? Are BSD distros content with SysV-style init?

                                                                    1. 11

                                                                      A couple of pedantic nits. BSDs aren’t distros. They are each distinct operating systems that share a common lineage. Some code and ideas are shared back and forth, but the big three - FreeBSD, NetBSD, and OpenBSD - diverged in the 90s. 1BSD was released in 1978. FreeBSD and NetBSD forked from 386BSD in 1993, and OpenBSD from NetBSD in 1995. So that’s about 15 years, give or take, of BSD before the modern BSDs forked.

                                                                      Since then there have been 26 years of separate evolution.

                                                                      The BSDs also use BSD init, which is different from SysV-style init. There is a brief overview here: https://en.m.wikipedia.org/wiki/Init#Research_Unix-style/BSD-style

                                                                      1. 2

                                                                        I think the answer to that is yes and no. Maybe the closest would be (Open)Solaris SMF. Or maybe GNU Shepherd or runit/daemontools.

                                                                        But IMNSHO there are no good arguments for the sprawl/feature creep of systemd - and people haven’t tried to copy it because it’s flawed.

                                                                    2. 6

                                                                      It’s true that systemd is comparatively featureful, and I’ll extend your point about shipping a software suite: some of its expansion into other aspects of system management is justified by the fact that it unifies a number of different concerns that are pretty coupled in practice.

                                                                      But, because of how this topic often goes, I feel compelled to provide the disclaimer that I mostly find systemd just fine to use on a daily basis. As I see it, the problem isn’t that it moves away from the “free userspace” model, but that its expansion into other areas seems governed more by political than by technical concerns, and with that comes an incentive to add extra friction to having a way out. I understand that there’s a lot of spurious enmity directed at Poettering, but I think the blatant contempt he’s shown towards maintaining conventions when there’s no cost in doing so, or even just his sneering at simple bug reports, is good evidence of a sort of embattled conqueror’s mindset underlying the project at its highest levels. systemd the software is mostly fine, but the ideological trajectory guiding it really worries me.

                                                                      1. 1

                                                                        I’m also highly sympathetic to systemd’s approach of shipping a software suite.

                                                                        What do you mean here? Bullying all distro maintainers until they are forced to set up your software as the default, up to the point of provoking the suicide of people who don’t want to? That’s quite heavy sarcasm you are using here.

                                                                        1. 11

                                                                          up to the point of provoking the suicide of people who don’t want to

                                                                          Link?

                                                                          1. 23

                                                                            How was anyone bullied into running systemd? For Arch Linux it meant we no longer had to maintain initscripts and could rely on systemd service files, which are a lot nicer. In the end it saved us work, and that’s exactly what systemd tries to be: a toolkit for init and related system-critical services, and now also a unifying layer across Linux distros.

                                                                            1. 0

                                                                              Huh? Red Hat and Poettering strongarmed distribution after distribution and stuffed the Debian developer ballots. This is all a matter of public record.

                                                                              1. 10

                                                                                stuffed the Debian developer ballots

                                                                                Link? This is the first time I am hearing about it.

                                                                                1. 4

                                                                                  I’m also confused. I followed the Debian process, and found it very thorough and good. The documents coming out of it are still a great reference.

                                                                            2. 2

                                                                              I don’t think skade intended to be sarcastic or combative. I personally have some gripes with systemd, but I’m curious about that quote as well.

                                                                              I read the quote as being sympathetic towards a more unified init system. Linux sometimes suffers from having too many options (a reason I like BSD). But I’m not sure if that was the point being made.

                                                                              Edit: grammar

                                                                              1. 5

                                                                                I value pieces that are intended to work well together and come from the same team, even if they are separate parts. systemd provides that. systemd has a vision and is also very active in making it happen. I highly respect that.

                                                                                I also have gripes with systemd, but in general I like to use it. And as long as no other project shows up with the drive to move the world away from systemd - by being better, and by being better at convincing people - I’ll stick with it.

                                                                              2. 2

                                                                                I interpreted it as having fewer edges where you don’t have control. Similar situations happen with omnibus packages that ship all dependencies and the idea of Docker/containers. It makes it more monolithic, but easier to not have to integrate with every logging system or mail system.

                                                                                If your philosophy of Linux is Legos, you probably feel limited by this. If your philosophy is platform, then this probably frees you. If the constraints are workable, they often prevent subtle mistakes.

                                                                            1. 8

                                                                              Well, that was worth reading.

                                                                              I’d really like to see some benchmarks that compare the overall system performance of monomorphization vs. polymorphism for generics. There are so many confounding factors (“inlining is the gateway optimization”, after all) that I’m not sure how you’d make it fair, or what it would even mean.
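
                                                                              For anyone unfamiliar with the distinction, here’s a toy Rust illustration of what such a benchmark would be comparing (my own sketch, not from the article): monomorphization stamps out a specialized, inlinable copy of a generic function per concrete type, while dynamic dispatch shares one copy behind a vtable.

                                                                                // Monomorphized generics: the compiler emits a separate copy of this
                                                                                // function for each concrete T it is used with, so calls can inline.
                                                                                fn sum_mono<T: Into<u64> + Copy>(xs: &[T]) -> u64 {
                                                                                    xs.iter().map(|&x| x.into()).sum()
                                                                                }

                                                                                // Polymorphism via dynamic dispatch: one shared copy; each call goes
                                                                                // through a vtable, which usually blocks inlining.
                                                                                fn sum_dyn(xs: &[Box<dyn Fn() -> u64>]) -> u64 {
                                                                                    xs.iter().map(|f| f()).sum()
                                                                                }

                                                                                fn main() {
                                                                                    println!("{}", sum_mono(&[1u32, 2, 3]));
                                                                                    let fs: Vec<Box<dyn Fn() -> u64>> = vec![Box::new(|| 1), Box::new(|| 2)];
                                                                                    println!("{}", sum_dyn(&fs));
                                                                                }

                                                                              Rough sketch only; a fair benchmark would also have to control for code size and instruction-cache effects, which cut the other way.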

                                                                              1. 9

                                                                                People who’ve been around the block typically hate having their senses hit with crystallized manipulation. It’s a shame to me that anyone likes any ads.

                                                                                1. 3

                                                                                  I think it’s less “like” than “are annoyed by.” I don’t particularly like print ads in the sense that I don’t seek them out, but they annoy me a lot less than the typical online ad network.

                                                                                  1. 3

                                                                                    I don’t like spending more money than I actually have, so I keep track of all my expenses separately from the bank. This makes adding new subscriptions a monthly PITA, so I’m hesitant to do it. If a piece of media is subscription-funded, I will probably only buy it if it’s really good and/or popular (like LWN, Play Music, and IntelliJ; not like ehsanakhgari.org). I am far more likely to read your blog if it’s ad-funded than if it’s behind a paywall. This low barrier to entry makes things a lot easier for new entrants in the field.

                                                                                    In other words, I like ads more than I like paywalls.

                                                                                    Malvertising broke this compromise. I would much rather pay for a subscription than be subject to fraudulent or malware-infested ads. I use an ad blocker because online ad networks have not effectively self-regulated.

                                                                                  1. 5

                                                                                    This definitely sounds great; I like the async/await paradigm a lot.

                                                                                    Now, let me try to phrase this in a way where I won’t get downvoted:

                                                                                    Is Rust going to “stabilize” at some point? Just looking at it as an outsider, I feel like Rust moves and changes very quickly, especially for non-major-version changes.

                                                                                    One of the reasons that I haven’t gotten more into Rust (which seems to tick a lot of my boxes) is that I feel (rightly or wrongly, correct me please!) that the language is constantly in flux, and there’s no point in learning how to write “idiomatic Rust” because what’s “idiomatic” now won’t be in six months.

                                                                                    Looking at the release history, it seems like every version has either a list of “Breaking Changes” or “Compatibility Notes” or both.

                                                                                    Again, that’s just the impression I get from seeing release notes every so often, with seemingly major features added or deprecated each time. Someone correct me if I’m wrong.

                                                                                    1. 5

                                                                                      async/await is a bit of an exception. It’s probably the first change since 2015 (the 1.0 release) that actually changes how a lot of Rust code will look and work. The rest of the changes were pretty minor and incremental.

                                                                                      From an insider perspective, Rust fills in gaps that were in the MVP and adds tiny amounts of syntax sugar. You couldn’t initialize many global variables; now you can (const fn). You couldn’t call certain methods in match arms; now it just works (moves in match guards). The compiler was very, very pedantic about the order of variable definitions, but now it’s less so (the smarter “NLL” borrow checker). The compiler was very pedantic about the use of & and ref in patterns, but these have been made optional, because they were annoying and the compiler knows what you mean without them. I consider this bugfixing/polishing of the language, not really a change to what Rust is.
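
                                                                                      To make the const fn and NLL points concrete, a toy sketch (my own illustration, not from the release notes):

                                                                                        // const fn: a computed value can now initialize a global; before
                                                                                        // const fn this needed a literal or runtime initialization.
                                                                                        const fn kilobytes(n: usize) -> usize {
                                                                                            n * 1024
                                                                                        }

                                                                                        static BUF_LEN: usize = kilobytes(64);

                                                                                        fn main() {
                                                                                            // NLL: `first` borrows `v`, but the borrow ends after its
                                                                                            // last use, so the later mutation is accepted. The old
                                                                                            // lexical borrow checker rejected this because the borrow
                                                                                            // lasted until the end of the scope.
                                                                                            let mut v = vec![1, 2, 3];
                                                                                            let first = &v[0];
                                                                                            println!("first = {}", first);
                                                                                            v.push(4); // fine under NLL: the borrow has already ended
                                                                                            println!("len = {}, BUF_LEN = {}", v.len(), BUF_LEN);
                                                                                        }

                                                                                      Both halves compile on current stable; under the pre-NLL borrow checker the push would have been rejected.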

                                                                                      In terms of breaking changes Rust is less of a hassle than GCC updates.

                                                                                      1. 4

                                                                                        Everything on stable Rust is stabilised. Stabilisation is a commitment that code written against that feature won’t break in the future. Code written against Rust 1.0 should still compile fine with Rust 1.39.0, although it may emit warnings now. The project takes this commitment very seriously.

                                                                                        The one exception to this guarantee is bugs/unsoundness. If these things are discovered they will be fixed, possibly breaking code. As pointed out in the post this is typically done in a phased approach and doesn’t actually impact that much code.

                                                                                        There are regular “crater runs” that build every crate on crates.io to measure the impact of a change and ensure that it does not break things.

                                                                                        1. 3

                                                                                          Everything on stable Rust is stabilised.

                                                                                          I phrased it poorly. I realize that these features are “in stable”… what I meant was “are they going to stop or slow down on pushing new major features into stable, so that I can say I write in ‘Rust’ and not ‘Rust 1.32.0’?”

                                                                                          In other words, it’s great that my 1.0 code will compile on 1.39.0, but I feel like 1.39 has a lot of differences from 1.0 and from many versions in between. I start to learn 1.39, and then 1.40 comes out and there’s a whole lot more to learn, and when I get comfortable with 1.40, 1.41 comes out with more changes, etc…

                                                                                          I’m not saying that’s what’s happening, I’m saying that’s the perception I get from the release notes. I’m happy to be proven wrong.

                                                                                          (PS: I realize Python is sorta-kinda guilty of this too, but maybe not to the same degree. Maybe I’m just old and set in my ways, I don’t know.)

                                                                                          1. 4

                                                                                            In other words, it’s great that my 1.0 code will compile on 1.39.0, but I feel like 1.39 has a lot of differences from 1.0 and from many versions in between.

                                                                                            The largest changes since 1.0 were in the Rust 2018 edition; I feel like there haven’t been any big changes since then (until async/await, if that affects you). Also, 1.0 code may look a bit strange with its use of the try! macro, etc., but I wouldn’t say that if you wrote Rust 1.0-style code now it would be unidiomatic.

                                                                                            At any rate, if you don’t like churn, I guess it is completely fine to write against Rust editions. So, write what was Rust 2018 until the next edition is out ;). I have been doing that as much as possible to support older Rust compiler versions.

                                                                                        2. 2

                                                                                          I don’t think there are a great number of such changes scheduled, other than non-lexical lifetimes. I’m an actively monitoring, keen outsider rather than a daily user, though, so take it with a grain of salt.

                                                                                          1. 3

                                                                                            other than non-lexical lifetimes

                                                                                            That’s been shipped, for both editions, since July.

                                                                                        1. 2
                                                                                          <button type="button" aria-pressed="false">
                                                                                            Unpressed
                                                                                          </button>
                                                                                          <button type="button" aria-pressed="true">
                                                                                            Pressed
                                                                                          </button>
                                                                                          

                                                                                          I didn’t know about the ARIA attributes; compare MDN.

                                                                                          But what’s the advantage of encoding a toggle button like this instead of as an <input type="checkbox">?

                                                                                          1. 2

                                                                                            I don’t know of any hard-and-fast rules around this, but I think they just have slightly different semantics, same as the visual representation of a checkbox compared to a toggle. The Aria Practices document describes a mute button as a good example of a toggle button: http://w3c.github.io/aria-practices/#button

                                                                                            Incidentally there is also a proposal for a “switch” element, which would have its own slightly different semantics: https://github.com/tkent-google/std-switch/blob/master/README.md

                                                                                            1. 2

                                                                                              https://lobste.rs/s/yvs2xp/don_t_use_checkboxes_2007 pretty much answers why the author (and I) recommend not using checkboxes.

                                                                                              Though, most of the time, you’d probably be better off with radio buttons or a <select> menu.

                                                                                              1. 1

                                                                                                Hmm, but don’t those arguments apply equally to toggle buttons? I don’t see a fundamental difference between toggle buttons and checkboxes.

                                                                                            1. 5

                                                                                              Proposal: just allow arbitrary tags already. This bikeshedding about whether a thing “deserves” a tag is so silly. Everything deserves a tag! Or two!

                                                                                              1. 6

                                                                                                How do you do this without it rapidly devolving into Instagram, where the kids just dump a dozen completely #irrelevant #tags into every #blessed #winning post, or just spamming the system in general? Tags should serve readers first, authors second.

                                                                                                1. 3

                                                                                                  Laarc supported putting in whatever tags we wanted. It worked well enough in practice. It’s a tiny site, with the advantages that brings. I don’t know what would happen if it were as large as Lobsters.

                                                                                                  I really did like how tagging wasn’t a burden. I just put in what I wanted, cross-referenced against what I saw people already doing (i.e., consistency).

                                                                                                  1. 3

                                                                                                    There was an article, which I can’t find now, about some online community that allowed any kind of tags. The beauty of the system came from the moderation team, which tried to link similar tags together, enabling better search. Seems like something worth having.

                                                                                                    1. 5
                                                                                                      1. You’re thinking of https://archiveofourown.org, I think.

                                                                                                      2. I’m guessing you found it through lobsters, because it’s right here: https://lobste.rs/s/mubgr2/fans_are_better_than_tech_at_organizing

                                                                                                      3. As you can find by looking through the comments, it is hardly a utopia. Because of course the question of whether two tags are synonymous, or whether a tag should be considered a subset of another tag, is not a value-neutral choice with a single right answer. Just off the top of my head: is #Electron a subset of the #DesktopApp tag, is #GoLang a #SystemsLanguage, should #RaspberryPi be considered #Embedded, should #LLVM be a subset of the #Apple tag, and should #RustLang be a subset of the #Mozilla tag?

                                                                                                      1. 2

                                                                                                        Yes, you’re absolutely right, that’s the thing. I couldn’t find it via Google or DuckDuckGo, tho.

                                                                                                        Also, maybe going for such binary decisions is not the right way to do it. #DesktopApp and #Electron are certainly related – over time one can see the intensity of that relationship.

                                                                                                        1. 2

                                                                                                          The more “nuanced” your system tries to be, the more difficult it will be for people who don’t spend all day using it to understand. The Lobsters model, for all its faults, is trivial for newcomers to grok and long-timers to predict; everybody can figure out what its faults are in two seconds, and we know how to work around them. On the other end of the complexity scale, you get stuff that works like general web search engines, which are basically impossible for anyone to predict even if they do work on it.

                                                                                                          1. 2

                                                                                                            Yeah, with that I can completely agree. Simple things go a long way.

                                                                                                      2. 3

                                                                                                        This sounds like Pinboard.

                                                                                                        https://blog.pinboard.in/2011/10/the_fans_are_all_right/

                                                                                                        The primary(?) use of tags here is for filtering, not finding, though.

                                                                                                        1. 2

                                                                                                          No, not Pinboard. It was some small community, but I forget which. I need to dig up the article from my history once I’m in front of my computer.

                                                                                                          1. 3

                                                                                                            An Archive of Our Own? Let us know when you find the link!

                                                                                                            1. 1

                                                                                                              Maybe LibraryThing?

                                                                                                        2. 2

                                                                                                          It’s fundamentally different though, because tags are for filtering things out rather than promoting.

                                                                                                          1. 2

                                                                                                            No, they do both. That’s why you can click a tag to find related content. Example.

                                                                                                            1. 2

                                                                                                              Ah I see, thanks for pointing that out.

                                                                                                            2. 2

                                                                                                              For you they might be for filtering stuff out; for Sarah down the block they might be for finding more about a particular topic.