1. 13

    “This is in essence how GPT-3, or for that matter, all of what you call AI works. By all means, a very complex process, but one void of magic there or signs of thought emergence. It’s a strictly definite and discrete problem, and the machines of today seem to be doing a good job of solving it.”

    A college prof of mine back in the ’80s pointed this out as a paradox of AI: as soon as we figure out how to make a computer do something difficult, we stop thinking of it as a sign of intelligence; it’s just a clever trick. In the 1950s it was chess; now it’s recognizing faces and generating high-school-level English prose (or poetry!).

    The word “magic” in the quote above is telling — implying that to be intelligence it has to be like magic. I don’t buy it.

    The rest of the arguments are similar to Searle’s old “Chinese Room” argument: that because we can’t point to some specific part of GPT3 that’s an “English recognizer” or “English generator”, it can’t be said to “know” English in any sense.

    Obviously GPT3 isn’t a true general AI. (For one thing, it’s got severe short-term memory issues!) And I don’t think this approach could simply be scaled up to produce one. But I think (as a non-AI-guru) that the way it works has some interesting similarities to the way human consciousness may have evolved. Once we came up with primitive forms of language to communicate with other people, it was a short step to using language to communicate with ourselves, a feedback loop that creates an internal stream of consciousness. So the brain is generating words and thinking about them, which triggers likely successor words, etc.

    I’m not saying our brains are doing the same thing as GPT3, just as I don’t think our visual centers do exactly what Deep Dream does. But the similarities at a high level are striking.

    1. 6

      The problem isn’t that the goalposts move, it’s that the goals end up being tractable to approaches that don’t get us as much as we expected. For example, Go AI was supposed to be a revelation, but in fact it turns out that random playouts are more than enough to get stronger than human pros… but just like with chess, that’s not how humans play or think, so we can achieve the simple goal of “win a game” but can only derive patterns/insight from that strong play with human analysis.

      It seems to me that the patterns and insight are what we’re really after, not the wins.

      1. 3

        Woah, wait a minute… professional Go players are (consistently) worse than random?!

        That seems like a very important insight, albeit much bleaker than the sort AI researchers were looking for.

        1. 3

          I think what asthasr is referring to is the way AlphaGo iteratively played against itself to gain ground. My understanding is that it started out with ~random noise vs ~random noise and improved by figuring out which side did better and repeating that process an inhuman number of times.

          It’s not entirely unlike how a (human) novice might improve at the game, taken to the limit. We got some novel game states that humans hadn’t (yet) stumbled onto, but as far as I’m aware AlphaGo provides very little insight into how (human) professionals approach the board.

          1. 1

            kavec’s comment is correct, but even later engines use random playouts, pruned by the results of playouts in similar positions, to choose their next move. It works. It’s led to some interesting analysis (by humans), but the AI in itself isn’t doing that analysis.

            1. 1

              I believe what you mean is the Monte Carlo Tree Search part. I don’t think that is uniform randomization. Reading page 3 of https://arxiv.org/pdf/1712.01815.pdf, it suggests expanding the nodes biased by the DNN’s evaluation rather than by uniform random rollout.

              1. 1

                It’s not uniform randomization. Go is too “big” for that. However, it’s essentially treating positional analysis as a function from board position to board position, without any heuristics or sector analysis. That’s not how people play or think about the game; in essence it’s very good because it can run Monte Carlo playouts fast and figure out, given the entire board position, what the next move ought to be… but it has no “why.”

                1. 1

                  Because it is not uniformly random and node expansion is biased by the DNN’s evaluation output, the heuristics or sector analysis could simply have moved into the DNN (the convolutional neural net is translation-invariant, and we can’t poke at the DNN’s internals). The heuristics from the neural nets are essential to AlphaZero’s success. I won’t discount that and say the random rollout from MCTS, which has been in use for Go since the 2000s, is as crucial. MCTS is important for exploring the state space, but the “intuition / memorization” from the neural nets is crucial.
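                  A rough sketch of that PUCT-style selection rule in Python. The move names, priors, and values below are invented stand-ins for a real policy/value network’s output; only the Q + U shape of the formula follows the AlphaZero paper:

```python
import math

# Toy sketch of PUCT selection (names and numbers are invented; only the
# Q + U form of the rule follows the AlphaZero paper).

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    # Q is the averaged playout value; U is the exploration bonus,
    # weighted by the network's prior probability for the move.
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

def select_child(children, parent_visits):
    # children: list of (move, q, prior, visits) tuples
    return max(children,
               key=lambda c: puct_score(c[1], c[2], parent_visits, c[3]))

# A barely-explored move with a strong prior can outrank a well-explored one:
children = [
    ("move_a", 0.52, 0.10, 40),  # decent Q, many visits, weak prior
    ("move_b", 0.48, 0.60, 2),   # slightly worse Q, few visits, strong prior
]
print(select_child(children, parent_visits=42)[0])
```

                  So the “uniform random” picture really is wrong: the net’s prior steers which branches get playouts at all.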

                  1. 1

                    It’s possible that generalizations can be teased out. There are people trying and I await the results eagerly. But crucially, once again, it’s not the AI that’s capable of doing it. If it’s accomplished, it will be the humans running the AI who do it.

        2. 1

          I’ve thought about the same thing. A form of (at least apparent) “consciousness”, it seems to me, could be built out of a “language generator” like GPT-3, with a feedback loop, and with a way to feed in information about the outside world.

          How much research has there been on this field? Surely someone has tried to feed GPT-3 into itself and seen what happens?
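          For what it’s worth, the loop itself is trivial to sketch. Here’s a toy stand-in for the model (a hand-written bigram table, not GPT-3; a real experiment would sample from the actual model):

```python
import random

# Toy stand-in for a language model: a hand-written bigram table. Feeding
# the model's own output back in as its next input is the feedback loop
# described above; with a real model the continuations would be far richer.
BIGRAMS = {
    "i": ["think", "am"],
    "think": ["therefore", "i"],
    "therefore": ["i"],
    "am": ["thinking"],
    "thinking": ["i"],
}

def generate(seed, steps, rng):
    words = [seed]
    for _ in range(steps):
        # The last output token becomes the next input: the feedback loop.
        words.append(rng.choice(BIGRAMS[words[-1]]))
    return " ".join(words)

print(generate("i", 6, random.Random(0)))
```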

          1. 3

            Sort of like that very creepy video that starts with a frame buffer of random noise and iteratively applies Deep Dream, zooms slightly, and repeats. After a few minutes you get an H.R. Giger nightmare of malignant dog noses; that model they used really has some deep-seated dog issues it needs to work out in therapy.

            In the messy neuro-chemical domain, dissociative psychedelics like ketamine, DMT and salvia divinorum seem to work by blocking out the sensorium and amplifying feedback in the stream of consciousness, producing very real-seeming but bonkers dream worlds.

            1. 3

              You’re right, this really does end in a nightmare of dog noses and eyeballs! Some of these are really horrifying.

              https://youtu.be/SCE-QeDfXtA

          2. 1

            Here, have a book from a (recent) prior generation of AI optimists. Hawkins’s thing didn’t quite work out like he was hoping, but it’s a good stepping stone toward current theories of embedded cognition. We’ve got a long way to go, still.

          1. 2

            The “goroutine-per-request” model and GC overhead greatly increase memory requirements in high-connection services like ours.

            I wonder if it would be possible to use one goroutine-per-core in order to avoid context-switching just like nginx does. (I’ve never used Go)

            1. 4

              Possible, but not natural in Go. Goroutines are lightweight threads, and all libraries are written assuming this, with most I/O being synchronous. You’d have to reinvent a lot to make I/O async within a single goroutine per core.

              More likely someone will write a proxy like that in Rust. I’m eagerly awaiting…

              1. 2

                Linkerd is written in Rust.

              2. 3

                Go already does one OS thread per core (ish), goroutines are entirely a userspace thing without (kernel level) context switching. Still, there’s obviously some overhead associated with them.

              1. 3

                I, for one, will keep using scp. Rsync’s interface is too different, with a lot of subtle differences that constantly catch me off guard.

                When was, for example, the last time a non-rsync tool cared about whether you included the trailing slash or not for a path to a directory on a GNU system? Why did rsync decide to completely break with the convention? For most people, I bet the behavior of rsyncing directories mostly depends on whether their tab completion happens to include a trailing slash or not.

                1. 1

                  Why did rsync decide to completely break with the convention?

                  What convention? It follows the SUS/POSIX convention; try mv’ing a file to dir or dir/ and note the difference. dir/ is the same as dir/. and implies something different. It’s also there to handle cases where symlinks may be present, to return ENOTDIR.

                  Ref: https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap04.html#tag_04_13 https://pubs.opengroup.org/onlinepubs/9699919799/xrat/V4_xbd_chap04.html#tag_21_04_13 https://pubs.opengroup.org/onlinepubs/9699919799/xrat/V4_xbd_chap03.html#tag_21_03_00_75

                  1. 4

                    I don’t see a difference between mv <file> <dir> and mv <file> <dir>/? In both cases, <file> ends up as <dir>/<file>.

                    I’m not talking about destinations though, but source. cp foo/ bar and scp foo/ bar will both copy the directory foo to bar such that bar/foo is a directory with the contents of the old foo directory, but rsync foo/ bar will copy all the contents of foo into bar, like cp foo/* bar would do.

                    1. 1

                      I don’t see a difference between mv <file> <dir> and mv <file> <dir>/? In both cases, <file> ends up as <dir>/<file>.

                      Depends on if something exists or not.

                      $ mkdir /tmp/demo
                      $ cd !$
                      cd /tmp/demo
                      $ touch foo
                      $ mkdir dir
                      $ mv foo bar/
                      mv: rename foo to bar/: No such file or directory
                      zsh: exit 1     mv foo bar/
                      

                      Specifying bar/ here indicates there should be a directory to move to. But it’s not there.

                      but rsync foo/ bar will copy all the contents of foo into bar, like cp foo/* bar would do.

                      Correct, because foo/ is the same as foo/., which means copy the contents of foo/. to the destination. Just as foo without a / indicates you want the directory itself (and the contents therein) copied to the destination. That follows cp as well.

                      cp foo/ bar and scp foo/ bar will both copy the directory foo to bar such that bar/foo is a directory with the contents of the old foo directory

                      You sure about that? Show your cards, as that’s not how cp works on any unix I’ve used. Example from my laptop:

                      $ mkdir -p input/{a,b}
                      $ mkdir output
                      $ cp input/. output
                      cp: input/. is a directory (not copied).
                      zsh: exit 1     cp input/. output
                      $ cp -r input/ output
                      $ find output
                      output
                      output/a
                      output/b
                      $ rm -fr output; mkdir output
                      $ cp -r input output
                      $ find output
                      output
                      output/input
                      output/input/a
                      output/input/b
                      
                      1. 1

                        I don’t know man. All I can say is that rsync works differently from cp and scp and it constantly messes me up. If you’re unhappy with my explanation of how it’s different, I’m sorry.

                1. 5

                  I am puzzled why these even exist. What is the point? To have the browser be an OS?

                  1. 9

                    Yes, the dream of a PWA revolution requires the browser to have access to everything the underlying OS does, but that will never happen because it’s too easy to make malicious PWAs when there’s no central store/authority to police them.

                    I want freedom too, but the world is full of idiots who still click on “your computer is infected” popups and voluntarily install malware.

                    1. 4

                      They exist to allow web pages controlled, sandboxed access to resources otherwise only available to black-box native apps, which also happen to award Apple 30% of their revenue; so me personally, I’m taking that privacy argument with a grain of salt.

                      1. 11

                        Web apps are just as black-box as native apps. It’s not like minified JavaScript or WebAssembly is in any reasonable way comprehensible.

                        1. 8

                          I would somewhat agree if Apple were the only vendor that doesn’t support these APIs, but Mozilla agrees with Apple on this issue. That indicates there’s some legitimacy to the privacy argument.

                          1. 2

                            The privacy reasoning seems not too opaque: the standard way of identifying you is creating an identifier from your browser data. If you have some hardware attached and exposed, it makes identification more reliable, doesn’t it?
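                            For illustration only (the attribute names here are invented): a fingerprint is just a hash over whatever a page can read, so each extra exposed hardware detail splits otherwise-identical users into smaller buckets:

```python
import hashlib

# Illustration only; the attribute names are invented. A fingerprint is just
# a hash over whatever attributes a page can read, so each extra exposed
# hardware detail splits otherwise-identical users into smaller buckets.

def fingerprint(attrs):
    blob = "|".join(f"{k}={v}" for k, v in sorted(attrs.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:12]

base = {"ua": "Browser/85.0", "lang": "en-US", "screen": "1920x1080"}
alice = dict(base, midi_devices=0)  # hardware APIs contribute bits like this
bob = dict(base, midi_devices=2)

print(fingerprint(alice))
print(fingerprint(alice) == fingerprint(bob))  # the hardware detail separates them
```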

                            1. 2

                              Apple led the way early on in adding APIs to make web apps work like native mobile apps — viewports and scrolling and gesture recognition — and allowing web apps to be added as icons to the home screen.

                              1. 2

                                Originally, iPhone apps were supposed to be written in HTML/JS only, but then the App Store model became a cash cow and that entire idea went down the drain in favor of letting people sharecrop on their platform.

                                1. 9

                                  I mean, too, the iOS native ecosystem is much, much, much richer and produces much better applications than even the modern web. So, maybe it’s more complicated?

                                  1. 1

                                    Agreed. I think that native apps were the plan all along; progressive-ish web app support was just a stop-gap measure until Apple finalized developer tooling. Also, given that most popular apps (not games) are free and lack in-app purchases, odds are that the App Store isn’t quite as huge of a cash cow as it is made out to be. The current top 10 free apps are TikTok, YouTube, Instagram, Facebook, Facebook Messenger, Snapchat, Cash App, Zoom, Netflix, and Google Maps. The first six make money through advertisements. Cash App (Square) uses transaction fees and charges merchants to accept money with it. Zoom makes money from paid customers. Netflix used to give Apple money but has since required that subscriptions be started from the web (if I remember correctly). Google Maps is free and ad-supported.

                            2. 1

                              The browser already is an OS. The point of these is to have it be a more capable and competitive OS. Just so happens that at present there’s only one player who really wants that… but they’re a big one, and can throw their weight around.

                            1. 28

                              To be fair, it’s not just Apple; notably, Mozilla takes a similar approach: https://twitter.com/voxpelli/status/1286230638526435329

                              1. 10

                                So, we have the browser engine vendors today: Apple with Safari/WebKit, Mozilla with Firefox/Gecko, and Google with Chrome/Blink. Isn’t it kind of weird that so many web standards are being standardized with 2/3 of vendors unwilling to implement them? What’s the process here?

                                1. 13

                                  They are drafts, and drafts don’t necessarily get passed as standards. For an example at hand, the Geolocation API is a standard (ratified in 2016). The Geolocation Sensor is not; it is a draft (last updated in 2018).

                                  1. 28

                                    Aha. From the title and article, it sounded like Apple refuses to implement standard APIs. So the real story is just that Apple and Mozilla won’t let some harmful APIs get standardized.

                                    1. 25

                                      Yes.

                                      1. 2

                                        Only it doesn’t really matter. Since Chrome is so big, whatever it does is a de facto standard: web developers are going to use those APIs, and users are going to blame other browsers for “not working,” which will maintain Chrome’s share.

                                        1. 9

                                          I would think that Safari on iPhone has enough market share to force web developers to support it. It would surprise me if a commercial website intentionally disregarded MobileSafari support.

                                          1. 3

                                            They’ll just try to push the users to their mobile apps.

                                            1. 3

                                              Market share of mobile Safari is actually quite poor. It’s usually supported despite the market share, as iPhone users are widely regarded as valuable users (e.g., more likely to spend money online).

                                              1. 3

                                                I think it might also depend on where your customers are–even if iOS is only around 15% of the worldwide smartphone market, it’s 58% of the US, 51% of North America, and 26% of Europe.

                                            2. 3

                                              So the real story is that Google is using its near-monopoly power to circumvent the standards process? There’s some kind of irony here, but I just can’t tell WHAT.

                                              1. 1

                                                WHAT was a great force when Mozilla needed to pry the Web from Microsoft. It created a standard on which Firefox and later Chrome could build better browsers than IE and win users over. But then Google got big and took the process over, so here we are.

                                                1. 2

                                                  No disagreement here! But worth pointing out that Mozilla could only make that move because Apple and Opera were backing them. I just think the important thing to keep in mind about standards organizations is that they are inherently political, and that those with a seat at the table are generally large corporations who answer only to their shareholders. As such, they should be understood as turf where players jockey for competitive advantage by forming temporary strategic alliances. I think everyone paying attention to these things understands how this works, except for some programmers, who I guess are conditioned to treat even draft standards as holy writ descended directly from the inscrutable heavens, or maybe take the rhetoric about “serving users” a little too literally.

                                                  But as consolidation erodes consumer choice, there’s less of a game to play, and thus standards become less relevant.

                                    1. 2

                                      What does it mean for things like Python asyncio / coroutines or even nodejs?

                                      1. 4

                                        Not much, because while this IS better, it is Linux-only, and Python and Node.js need to support Windows, macOS, etc. It is also a pretty different programming model, so it is hard to abstract over using a portability layer.

                                        1. 2

                                          I’m also not sure if it’s that relevant for node. From what I’m reading, it looks perfect for the likes of Go, where you want to have threads of execution which can block but where those threads of execution aren’t represented by OS threads. From what I can see, node’s programming model fits very comfortably on top of the existing epoll.

                                          1. 6

                                            FWIW I’m pretty sure the history is (roughly) that C++ programmers at Google wanted to use Go-like concurrency, hence support for user level threading. There are a bunch of CppCon videos about it that may give more color on that.

                                        2. 3

                                          My thought is that for languages that already made the big investment in userspace threading, it’s unlikely they will rip the scheduler out.

                                          I think this is most interesting for mostly automatically making “regular” threaded languages “just work”. Think: a more powerful language-agnostic gevent.monkeypatch_all()

                                          1. 3

                                            If my understanding of the GIL problem is correct, this might mean very little to nothing. As far as I understand, the problem with the main Python interpreter is that a significant number of its internal data structures and functions are not thread-safe.

                                            If such problems were solved we could as well run multithreaded python off regular pthreads.

                                          1. 16

                                            Most text editors (and even IDEs) have a surprising lack of semantic knowledge. Editing programs as flat text is brittle and prone to error. If we had better, language-aware transforms and querying systems built into our text editors, we’d be able to more easily build interactive tools/macros/linters rather than relying on the batch processing we use these days.

                                            Some cool, language-aware tools that exist today (ish) are:

                                            1. 17

                                              The problem is that plain text is proven to carry information across thousands of years, whereas custom formats rot. I can read a paper from 1965 and understand the Fortran source in it, but it’s next to impossible to read many binary formats from the 90s without custom code.

                                              I think that we need to focus on simplifying the analysis of languages: effect systems and limiting global state should make it easier to analyze the semantics of syntactic structures, and thus make structure and semantics easier to highlight. (I certainly don’t need syntax highlighting when working in Haskell, but it’s hard not to miss in C-likes.)

                                              1. 2

                                                Well, ASCII has only been around for a few decades, so I’m not sure it’s been shown to last thousands of years yet ;)

                                                Granted, there haven’t been any graph (for program ASTs) or table-like (for other program data) data structures that are as pervasive as ASCII or UTF-8 plaintext, and if you want to argue that it makes sense to keep the serialization format plaintext so it’s human readable (like JSON or graphviz or CSV), that’s fine. It still doesn’t prevent us from storing more rich and semantic information beyond just flat symbolic source code.

                                                The problem with source code is that it’s difficult to build a parser for it, and there’s only one representation for the code. For instance, if all source code were stored as an AST in JSON, think of how easy it would be to build custom tools to analyze and transform your code. (This is a terrible idea for other reasons, but it illustrates the point.)
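                                                As a rough illustration of the idea (not a serious proposal), Python’s own ast module can already turn source into a JSON-friendly structure that any tool could query or rewrite:

```python
import ast
import json

# Hypothetical illustration: serialize a Python function's AST into plain
# JSON so that any JSON-speaking tool could query or rewrite the code.
src = "def add(a, b):\n    return a + b\n"
tree = ast.parse(src)

def to_dict(node):
    # Recursively turn AST nodes into dicts, lists, and primitives.
    if isinstance(node, ast.AST):
        d = {"_type": type(node).__name__}
        for field, value in ast.iter_fields(node):
            d[field] = to_dict(value)
        return d
    if isinstance(node, list):
        return [to_dict(n) for n in node]
    return node  # str, int, None, etc.

doc = json.dumps(to_dict(tree))
print(doc[:60])
```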

                                                1. 2

                                                  True, I’m using a wider definition of “plain text” than just ASCII.

                                                  You’re right about being able to deserialize plain text into more semantically interesting structures, of course. Then, though, you’re tying visualization (or, at least, editing) to a probably-limited set of tools. I think about the XML ecosystem, which fifteen years ago probably seemed unstoppable, a sure bet for further tool development… but these days the only really powerful one of which I’m aware is Oxygen, which is dated and costs $lots for business licenses.

                                                  Other problems are possible as well, such as vulnerability to deserialization attacks, like CVE-2017-2779.

                                                  Ultimately I think that many things could be helped by plain text structures that allow more sophisticated namespacing and structuring than the usual function/class/const options we get: first class modules, as in OCaml, for example. I think these sorts of things are coming, but it’s a slow process.

                                              2. 10

                                                Editing programs as flat text is brittle and prone to error.

                                                I strongly disagree. I work on a rather large code base and nearly everyone on my team prefers to use vim or emacs. There’s something to be said for walking through a neighbourhood rather than driving through one when you want to buy a house. The vast majority of our time (99%?) is spent reading code or debugging rather than writing code. Every line of code should be thoughtful and we should ALWAYS optimize for readability. Not just the semantics of variables and objects but the design of the whole system.

                                                Languages like java are impossible to write without tool assistance. They’re aggregating large and miserable frameworks where code refers to variables in other files through inheritance and all that other stuff. Just trying to figure out which implementation of foo() an object will call can be difficult or impossible without assistance. That sort of complexity now needs to be internalized in your limited human memory banks as you try to make sense of it all.

                                                1. 2

                                                  Oh, I use Vim too – I dislike IDEs for their bloat, and I also prefer languages that are more oriented towards small, compact solutions (and even have an interest in taking it to an extreme with, e.g., APL). If the entire program can be kept in a single file (or even better, a single page of text), all the better. Spatial compactness is useful for understanding and debugging, and less code means fewer bugs.

                                                  My original point still stands though. Having better tools doesn’t mean code quality has to suffer. The fact of the matter is that we end up having larger codebases that require more complicated code transforms or linting checks. At minimum, having a syntax-aware way of doing variable or function renaming in a text editor is superior to blindly running sed over an arbitrarily[1] line-oriented character array. Even from a programmer’s perspective, I’m not convinced a purely symbolic representation of code is always superior. It’s certainly a compact and information-dense way of viewing small pieces of code, but it quickly becomes overwhelming when coming to grips with larger systems. Plus, there’s only so much info you can cram into one screenful of code.

                                                  I think, ideally, we’d have multiple ways of viewing the same code depending on the context we’re working in. For instance, when trying to jump into a new codebase to add a feature, data flow is more important than directly understanding the specific implementations of any function. It would be useful to be able to take a function, and view it in the context of a block diagram to see how it fits into the rest of the system and all code paths that lead to it. In another situation, you may want to view it from a documentation perspective that allows you to semantically tie documentation, proofs, formulas, or diagrams directly into the code, even to specific expressions (kind of like docstrings, but more structured and format rich). Or in a situation where you’re working with a protocol, rather than having an implicit finite state machine that’s only viewable from a code point of view (with a switch statement or through functions that are tail called), you could flip into a graphical view of the FSM or a tabular view of the state transitions.

                                                  Some of the things I’ve mentioned above are somewhat possible today with external tools, but the problem is they each construct their own AST and semantic knowledge of the source (sometimes incorrectly). There’s no communication between the tools, no referential integrity (if you update the source, do you have to rebuild an index for each tool from scratch?). A standardized, semantic storage format for code would help to address some of these issues.

                                                  [1]: I say arbitrarily here because sometimes the line-oriented nature of sed or grep conflicts with the true expression-oriented structure of the code. For instance, if a function signature is split across two lines, trying to search for all return types with grep -e '\w+ .*(.*).*{' wouldn’t work. Besides, most syntax structures are recursive, which regexes are inherently limited at parsing.
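                                                  A concrete version of that footnote’s example, assuming Python source and the standard ast module (the function name is made up): the line-oriented regex misses the split signature, while the parser doesn’t care about layout:

```python
import ast
import re

# Hypothetical function name; the point is only that the signature spans
# two lines, which defeats a line-oriented regex but not a real parser.
src = (
    "def frobnicate(widget,\n"
    "               count):\n"
    "    return widget * count\n"
)

# Line-by-line regex search for a complete "def name(...):" finds nothing:
line_hits = [ln for ln in src.splitlines() if re.search(r"def \w+\(.*\):", ln)]

# The parser sees the recursive structure regardless of line layout:
ast_hits = [n.name for n in ast.walk(ast.parse(src))
            if isinstance(n, ast.FunctionDef)]

print(line_hits)
print(ast_hits)
```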

                                                  1. 4

                                                    It would be useful to be able to take a function, and view it in the context of a block diagram to see how it fits into the rest of the system and all code paths that lead to it.

                                                    I think it would be more useful if a function had to consider less and less the rest of the system. Otherwise you have a poor contract and high coupling. I think code and architecture need to blend together and if you need a tool to make sense of it all then you’ve failed.

                                                    This is a perfect example of nightmarish code for me. There’s about 200 methods and maybe 6 or 7 deep on the inheritance chain. It’s barely possible to manage even with an IDE and a WPF textbook sitting on your desk. https://docs.microsoft.com/en-us/dotnet/api/system.windows.shapes.rectangle?view=netcore-3.1

                                                2. 8

                                                  I strongly agree with you. We’ve been hamstrung by the primitive editors for decades. This fixation on text cripples other tools like version control as well - semantic diffs would be an obvious improvement but it’s rarely available. (The usual counterarguments about the universality and accessibility of text don’t stack up to me.)

                                                  1. 2

                                                    The insistence on using plain text for canonical storage, API interface, and user interface is IMO the thing most holding us back (some other top contenders being the pursuit of “performance” and compilation-as-DRM).

                                                    1. 10

                                                      Looking at the current web, I would have to disagree with the idea that the pursuit of performance is holding anything or anyone back…

                                                      1. 1

                                                        If you’d seen all the node-gyp build failures I had, you might think differently. But I’m thinking more about stack busting and buffer overruns at runtime and hobbled tooling at devtime in this case.

                                                        1. 2

                                                          Native modules and the whole node-gyp system is horrible, but I don’t think that’s due to pursuing performance? Most of the time, packages with native code seem to just have taken the easiest path by creating node bindings for an existing library, and I don’t think node-gyp itself is bad due to a pursuit of performance…

                                                          AFAIK, though this could be wrong, the main reason for node’s horrible native code support is that people just use the V8 C++ API directly, and Google is institutionally incapable of writing stable interfaces which other people can depend on. They constantly rename methods, rename or remove classes, move header files around, even deprecate functionality before replacements exist. Even that isn’t just due to a pursuit of performance though, but due to a fear of tech debt and a lack of care for anyone outside of Google.

                                                  2. 2

                                                    Comby is definitely a huge upgrade from writing regexps. There’s also Retrie for Haskell and Coccinelle + coccigrep for C. I’d really love to see a semantic search/replace/patch tool for Rust…

                                                  1. 15

                                                    IPv6 is just as far away from universal adoption…as it was three years ago.

                                                    That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                    (And as an aside, it’s sort of interesting to note the obvious effect of the pandemic pushing the weekday troughs in that graph upward as so many people work from home.)

                                                    1. 7

                                                      I wouldn’t count it as “adoption” if it’s basically hit or miss whether your provider supports it or not. So they do the NATting for you?

                                                      Still haven’t worked at any company (as an employee or being sent to the customer) where there was any meaningful adoption.

                                                      My stuff is available via v4 and v6, unless I forget, since I don’t have IPv6 at home - I simply don’t need it, and when I tried it, I had problems.

                                                      Yes, I’m 100% pessimistic about this.

                                                      1. 13

                                                        I adopted IPv6 around 2006 and finally removed it from all my servers this year.

                                                        The “increase” in “adoption” is likely just more mobile traffic, and some providers have native v6 and NAT64 and… shocker… it sucks.

                                                        IPv4 will never go away and Geoff Huston is right: the future is NAT, always has been, always will be. The additional address space really isn’t needed, and every device doesn’t need its own dedicated IP for direct connections anyway. Your IP is not a telephone number; it’s not going to be permanent and it’s not even permanent for servers because of GEODNS anyway (or many servers behind load balancers, etc etc). IPs and ports are session identifiers, no more, no less.

                                                        You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                        The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

                                                        DNSSEC and IPv6 are failures. 20+ years and still not enough adoption. Put it in the bin and let’s move on and focus our efforts on better things that solve tomorrow’s problems.

                                                        1. 21

                                                          What I find so annoying about NAT is that it makes hard or impossible to send data from one machine to another, which was pretty much the point of the internet. Now you can only send data to servers. IPv6 was supposed to fix this.

                                                          1. 8

                                                            Now you can only send data to servers

                                                            It’s almost as if everyone that “counts” has a server, so there’s no need for everyone to have one. This is coherent with the growing centralisation of the Internet.

                                                            1. 18

                                                              It just bothers me that in 2020 the easiest way to share a file is to upload it to a server and send the link to someone. It’s a bit like “I have a message for you, please go to the billboard at sunshine avenue to read it.”

                                                              1. 4

                                                                There are pragmatic reasons for this. If the two machines are nearby, WiFi Direct is a better solution (though Apple’s AirDrop is the only reliable implementation I’ve seen and doesn’t work with non-Apple things). If the two machines are not near each other, they need to be both on and connected at the same time for the transfer to work. Sending to a mobile device, the receiver may prefer not to grab the file until they’re on WiFi. There are lots of reasons either endpoint may remove things. Having a server handle the delivery is more reliable. It’s more analogous to sending someone a package in a big truck that will wait outside their house until they’re home and then deliver it.

                                                                1. 3

                                                                  BitTorrent and TCP are pretty reliable. You’re right about the ‘need to be connected at the same time’ though.

                                                                  1. 2

                                                                    Apple’s AirDrop is the only reliable implementation I’ve seen and doesn’t work with non-Apple things

                                                                    Have you seen opendrop?

                                                                    Seems to work fine for me, although it’s finicky to set up.

                                                                    https://github.com/seemoo-lab/opendrop

                                                                  2. 2

                                                                    I think magic wormhole is easier for the tech crowd, but still requires both systems to be on at the same time.

                                                                    1. 1

                                                                      https://webwormhole.io/ works really well!

                                                                  3. 7

                                                                    This is coherent with the growing centralisation of the Internet.

                                                                    My instinct tells me this might not be so good.

                                                                    1. 4

                                                                      So does mine. So does mine.

                                                                    2. 2

                                                                      Plus ça change…

                                                                      On the other hand, servers have never been more affordable or generally accessible: all you need is like $5 a month and the time and effort to self-educate. You can choose from a vast range of VPS providers, free software, and knowledge sources. You can run all kinds of things in premade docker containers without having much of a clue as to how they work. No, it’s not the theoretical ideal by any means, but I don’t see any occasion for hand-wringing.

                                                                      1. 1

                                                                        I’ve always assumed the main thing holding v6 back is the middle-men of the internet not wanting to lose their power as gatekeepers.

                                                                      2. 6

                                                                        Nobody in their right mind is going to use client machines without a firewall protecting them, and no firewall is going to accept unsolicited traffic from the wider internet by default.

                                                                        Which means you need some UPnP like mechanism on the gateway anyways. Not to map a port, but to open a port to a client address.

                                                                        Btw: I’m a huge IPv6 proponent for other reasons (mainly to not give centralized control to very few very wealthy parties due to address starvation), but the not-possible-to-open-connections argument I don’t get at all.

                                                                        1. 8

                                                                          Nobody in their right mind would let a gazillion services they don’t even know about run on their machines and let those services be contacted from the outside.

                                                                          Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure. The correct solution is to remove those services, not add a firewall or NAT that requires traversing.

                                                                          Though you were talking about UPnP, so the audience there is clearly the average non-technical Windows user, who doesn’t know how to configure their router. I have no good solution for them.

                                                                          1. 8

                                                                            Why do (non-technical) people need a firewall to begin with? Mainly because they don’t trust the services that run on their machines to be secure

                                                                            Many OSes these days run services listening on all interfaces. Yes, most of them could be rebound to localhost or the local network interface, but many don’t provide easy configurability.

                                                                            Think stuff like portmap which is still required for NFS in many cases. Or your print spooler. Or indeed your printer’s print spooler.

                                                                            This stuff should absolutely not be on the internet and a firewall blanket-prevents these from being exposed. You configure one firewall instead of n devices running m services.

                                                                            1. 3

                                                                              Crap, good point, I forgot about stuff on your local network you literally cannot configure effectively. Well, we’re back to configuring the router, then.

                                                                          2. 1

                                                                            If the firewall is in the gateway at home, then you can control it, and you can decide to forward ports and allow incoming connections to whatever machine behind it. If your home NAT is behind a CGNAT you don’t control, you are pretty much out of options for incoming connections.

                                                                            IPv6 removes the need for CGNAT, fixing this issue.

                                                                            1. 2

                                                                              Of course, but I felt like my parent poster was talking from an application perspective. And for these not much changes. An application you make and deploy on somebody’s machine still won’t be able to talk to another instance of your application on another machine by default. Stuff like STUN will remain required to trick firewalls into forwarding packets.

                                                                          3. 3

                                                                            Yeah but this is not a fair statement. If we had no NAT this same complaint would exist and it would be “What I find so annoying about FIREWALLS is they make it hard or impossible to send data from one machine to another…”

                                                                            But do you really believe having IPv6 would allow arbitrary direct connections between any two devices on the internet? There will still have to be some mechanism for securely negotiating the session. NAT doesn’t really add that much more of a burden. The problem is when people have terribly designed networks with double NAT. These same people likely would end up with double firewalls…

                                                                            1. 2

                                                                              Of course, NAT has been invented for a reason, and I’d prefer having NAT over not having NAT. But for those of us that want to play around with networks, it’s a shame that we can’t do it without paying for a server anymore.

                                                                              1. 1

                                                                                I really do find it easier to make direct connections between IPv6 devices!

                                                                                Most of the devices I want to talk to each other are both behind an IPv4 NAT, so IPv6 allows them to contact each other directly without STUN servers.

                                                                                Even so, Tailscale from the linked post is even easier to set up and use than IPv6; I’m a fan.

                                                                            2. 17

                                                                              The “increase” in “adoption” is likely just more mobile traffic

                                                                              Even if so, why the scare quotes? They’re network hosts speaking Internet Protocol…do they not “count” for some reason?

                                                                              You’ll never get rid of the broken middle boxes on the Internet, so stop believing you will.

                                                                              Equipment gets phased out over time and replaced with newer units. Devices in widespread deployment, say, 10 years ago probably wouldn’t have supported IPv6 gracefully (if at all), but guess what? A lot of that stuff’s been replaced by things that do. Sure, there will continue to be shitty middleboxes needlessly breaking things on the internet, but that happens with IPv4 already (hard to think of a better example than NAT itself, actually).

                                                                              It’s uncharacteristic because I’m generally a pessimistic person (and certainly so when it comes to tech stuff), but I’d bet that we’ll eventually see IPv6 become the dominant protocol and v4 fade into “legacy” status.

                                                                              1. 4

                                                                                I participated in the first World IPv6 Day back in 2011. We begged our datacenter customers to take IPv6. Only one did. Here’s how the conversation went with every customer:

                                                                                “What is IPv6?”

                                                                                It’s a new internet protocol

                                                                                “Why do I need it?”

                                                                                It’s the future!

                                                                                “Does anyone in our state have IPv6?”

                                                                                No, none of the residential ISPs support it or have an official rollout plan. (9 years later – still nobody in my state offers IPv6)

                                                                                “So why do I need it?”

                                                                                Some people on the internet have IPv6 and you would give them access to connect to you with IPv6 natively.

                                                                                “Don’t they have IPv4 access too?”

                                                                                Yes

                                                                                “So why do I need it?”

                                                                                edit: let’s also not forget that the BCP for addressing has changed multiple times. First, customers should get assigned a /80 for a single subnet. Then we should use /64s. Then they should get a /48 so they can have their own subnets. Then they should get a /56 because maybe /48 is too big?

                                                                                Remember when we couldn’t use /127 for ptp links?

                                                                                As discussed in [RFC7421], "the notion of a /64 boundary in the
                                                                                address was introduced after the initial design of IPv6, following a
                                                                                period when it was expected to be at /80".  This evolution of the
                                                                                IPv6 addressing architecture, resulting in [RFC4291], and followed
                                                                                with the addition of /127 prefixes for point-to-point links, clearly
                                                                                demonstrates the intent for future IPv6 developments to have the
                                                                                flexibility to change this part of the architecture when justified.
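
                                                                                For a sense of what those shifting boundaries mean in practice, here’s a quick back-of-the-envelope sketch (plain Python, not from the RFCs themselves) of how many /64 subnets each of the recommended delegation sizes gives a customer:

```python
# How many /64 subnets fit in each historically recommended
# customer delegation size. A /80 is smaller than a /64, so it
# can hold at most a single (sub-/64) subnet.
for prefix in (80, 64, 56, 48):
    subnets = 1 if prefix >= 64 else 2 ** (64 - prefix)
    print(f"/{prefix}: {subnets} /64 subnet(s)")
# /80: 1, /64: 1, /56: 256, /48: 65536
```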
                                                                                
                                                                              2. 10

                                                                                I adopted IPv6 around 2006 and finally removed it from all my servers this year.

                                                                                Wait, you had support for IPv6 and you removed it? Did leaving it working cost you?

                                                                                1. 3

                                                                                  Yes, it was a constant source of failures. Dual stack is bad, and people using v6 tunnels get a terrible experience. SixXS, HE, etc. should never have offered tunneling services.

                                                                                  1. 8

                                                                                    I’m running dual stack on the edge of our production network, in the office and at my home. I have never seen any interference of one stack with another.

                                                                                    The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved. The reverse has also been true in the past (broken v4, working v6), so I wouldn’t count that against v6 in itself, though I do agree that it probably takes longer for the counter party to notice v6 issues than they would v4 ones.

                                                                                    But I absolutely cannot confirm v6 to be a “constant source of failures”.

                                                                                    1. 3

                                                                                      The only problem I have seen was that some end-users had broken v6 routing and couldn’t reach our production v6 addresses, but that was quickly resolved.

                                                                                      This is the problem we constantly experienced in the early 2010s. Broken OSes, broken transit, broken ISPs. The customer doesn’t care what the reason is, they just want it to work reliably 100% of the time. It’s also not fun when due to Happy Eyeballs and latency changes the client can switch between v4 and v6 at random.

                                                                                    2. 1

                                                                                      Is there any data on what the tunnelling services are used for though? Just asking because some friends were just using them for easier access to VMs that weren’t public per se, or devices/services in a network (with the appropriate firewall rules to only allow trusted sources)

                                                                                  2. 2

                                                                                    This is the first time I’ve downvoted a post, so I figured I’d explain why.

                                                                                    For one, you point to a future of more of the status quo: more NAT, more IPv4. But at the same time you claim the world is going to drop one of the biggest pieces of that status quo, DNS, for a wholly new name resolution service? Also, how would a decentralized networking layer be able to STUN/TURN its way through the 20+ layers of NAT we’re potentially looking at in our near future?

                                                                                    1. 1

                                                                                      Oh no, we aren’t going to drop DNS, we will just not use it for the new things. Think Tor hidden services, think IPFS (both have problems in UX and design, but are good analogues). These things are not directly tied to legacy DNS; they can exist without it. Legacy DNS will exist for a very long time, but it won’t always be an important part of new tech.

                                                                                    2. 2

                                                                                      The future is name-based addressing – separate from our archaic DNS which is too easily subverted by corporations and governments, and we will definitely be moving to a decentralized layer that runs on top of IP. We just don’t know which implementation yet. But it’s the only logical path forward.

                                                                                      So this would solve the IPv4 addressing problem? While I certainly agree with “every device doesn’t need its own dedicated IP”, the number of usable IPv4 addresses is about 3.3 billion (excluding multicast, class E, RFC1918, localhost, /8s assigned to Ford etc.), which really isn’t all that much if you want to connect the entire world. It’ll be a tight fit at best.

                                                                                      I wonder how hard it would be to start a new ISP, VPS provider, or something like that today. I would imagine it’s harder than 10 years ago; who do you ask for IP addresses?
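
                                                                                      For what it’s worth, a rough tally of the exclusions mentioned above (the exact figure depends on which legacy corporate /8s you count, so treat this as order-of-magnitude only):

```python
# Rough count of IPv4 addresses left after the standard exclusions.
total = 2 ** 32
reserved = (
    2 ** 28      # multicast, 224.0.0.0/4
    + 2 ** 28    # class E, 240.0.0.0/4
    + 2 ** 24    # loopback, 127.0.0.0/8
    + 2 ** 24    # RFC1918 10.0.0.0/8
    + 2 ** 20    # RFC1918 172.16.0.0/12
    + 2 ** 16    # RFC1918 192.168.0.0/16
)
usable = total - reserved
print(f"{usable / 1e9:.2f} billion")  # ~3.72 billion, before legacy /8s
```

Subtracting the early corporate and military /8 allocations is what brings this down toward the ~3.3 billion figure.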

                                                                                      1. 1

                                                                                        Some of the pressure on IPv4 addresses went away with SRV records. For newer protocols that baked in SRV from the start, you can run multiple (virtual) machines in a data center behind a single public IPv4 address and have the service instances run on different ports. For things like HTTP, you need a proxy because most (all?) browsers don’t look for SRV records. If you consider IP address + port to be the thing a service needs, we have a 48-bit address space, which is a bit cramped for IoT things, but ample for most server-style things.
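
                                                                                        A quick sanity check on that arithmetic (assuming one service per IPv4 address + port pair):

```python
# One IPv4 address (32 bits) combined with one TCP/UDP port
# (16 bits) identifies a service endpoint, giving a 48-bit space.
ipv4_addresses = 2 ** 32
ports = 2 ** 16
endpoints = ipv4_addresses * ports
assert endpoints == 2 ** 48
print(endpoints)  # 281474976710656 distinguishable endpoints
```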

                                                                                    3. 5

                                                                                      That graph scares me tbh. It looks consistent with an S-curve which flattens out well before 50%. I hope that’s wrong, and it’s just entering a linear phase, but you’d hope the exponential-ish growth phase would at least have lasted a lot longer.

                                                                                      1. 3

                                                                                        Perhaps there’s some poetic licence there, but 13% in 3 years isn’t exactly a blazing pace, and especially if we assume that the adoption curve is S-shaped, it’s going to take at least another couple of decades for truly universal adoption.

                                                                                        1. 7

                                                                                          It’s not 13%, it’s 65%. (13 percentage points.)
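
                                                                                            Spelling out the distinction with the ~20% → ~33% figures quoted upthread (simple arithmetic, not the graph’s raw data):

```python
old, new = 0.20, 0.33  # approximate adoption shares, 2017 vs. now
point_change = (new - old) * 100           # absolute change
relative_change = (new - old) / old * 100  # change relative to the 2017 base
print(f"{point_change:.0f} percentage points")    # 13 percentage points
print(f"{relative_change:.0f}% relative growth")  # 65% relative growth
```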

                                                                                          1. 1

                                                                                            Yup, right about two decades to get to 90% with S-curve growth. I mean, it’s not exponential growth, but it’s steady and two decades is about 2 corporate IT replacement lifecycles.

                                                                                          2. 2

                                                                                            That seems…pretty easily demonstrably untrue? While it’s of course not a definitive, be-all-end-all adoption metric, this graph has been marching pretty steadily upward for quite a while, and is significantly higher now (~33%) than it was in 2017 (~20%).

                                                                                            I think that’s too simplistic of an interpretation of that chart; if you look at the “Per-Country IPv6 adoption” you see there are vast differences between countries. Some countries like India, Germany, Vietnam, United States, and some others have a fairly significant adoption of IPv6, whereas many others have essentially no adoption.

                                                                                            It’s a really tricky situation, because it requires the entire world to cooperate. How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                            So I’d argue that “IPv6 is just as far away from universal adoption” seems pretty accurate; once you start the adoption process it seems to take at least 10-15 years, and many countries haven’t even started yet.

                                                                                            1. 1

                                                                                              How do you convince Indonesia, Mongolia, Nigeria, and many others to use IPv6?

                                                                                              By giving them too few IPv4 blocks to begin with? Unless they’re already hooked on carrier grade NAT, the scarcity of addresses could be a pretty big incentive to switch.

                                                                                              1. 1

                                                                                                I’m not sure if denying an economic resource to those kinds of countries is really fair; certainly in a bunch of cases it’s probably just a lack of resources/money (or more pressing problems, like in Syria, Afghanistan, etc.)

                                                                                                I mean, we (the Western “rich” world) shoved the problem ahead of us for over 20 years, and now suddenly the often lesser developed countries actually using the least amount of addresses need to spend a lot of resources to quickly implement IPv6? Meh.

                                                                                                1. 2

                                                                                                  My comment wasn’t normative, but descriptive. Many countries already starve for IPv4 addresses.

                                                                                                  now suddenly the often lesser developed countries actually using the least amount of addresses need to spend a lot of resources to quickly implement IPv6?

                                                                                                  If “suddenly” means they knew it would happen like 2 decades ago, and “quickly” means they’d have over 10 years to get to it… In any case, IPv6 has already been implemented in pretty much every platform out there. It’s more a matter of deployment now. The end points are already capable. We may have some routers that still aren’t IPv6 capable, but there can’t be that many by now, even in poorer countries. I don’t see anyone spending “a lot” of resources.

                                                                                            2. 1

                                                                                              Perhaps the author is going by the absolute number of hosts rather than the percentage.

                                                                                            1. 7

                                                                                              I think this is the first explanation that kinda makes sense and shows very well how the module system works.

                                                                                              One thing I’m still wondering is: Why does it work this way?

I’ve been using Rust for a while and I still haven’t seen any benefits of Rust’s module system over more “traditional” ones. Rust’s is just more complicated and that’s pretty much it, from my point of view.

                                                                                              1. 5

                                                                                                What even is a “traditional” module system? There are vast differences between Python’s module system, Java’s module system, JavaScript’s dozen unofficial module systems, Go’s module system, C’s module system and C++‘s two module systems. It’s not obvious to me that this is a “solved” problem with one “traditional” solution which everyone uses.

                                                                                                1. 2

Let’s pick an average one from Java/C#, i.e. one where there is a direct relationship between the file path, the file name, the file’s package/namespace declaration, and the type contained in that file.

Rust’s approach might feel familiar if all one knew was how C header files work, but I would have expected that the people building Rust’s module system had stronger requirements than “slightly better than C header files”.

                                                                                                  1. 8

                                                                                                    Let’s pick an average one from Java/C#, i. e. where there is direct relationship between file path, file name, the file’s package/namespace declaration and the type contained in that file.

I can’t speak much to Java, but for C# this is absolutely not the case, both in theory and in practice. A given source file may have multiple classes, a given class may be split across multiple source files (a feature usually reserved for UI-related code), a given assembly may expose types in multiple namespaces, a given namespace may have multiple assemblies that comprise it. Assemblies exist in a flat namespace; while they can have dots in their names, that doesn’t necessarily have anything to do with their location in the filesystem relative to each other.

                                                                                                    Actual C# source code does not refer to assemblies at all generally.

                                                                                                    1. 1

                                                                                                      Why Java/C#, instead of node.js or python or the new C++ module system or Go or something new? The Java solution probably makes sense if you know Java, but I don’t think Java developers are even the main target audience, given that Rust is mainly developed as an alternative to C++.

                                                                                                      1. 7

                                                                                                        C++ modules are basically its designers saying “whatever we can manage to do, after 30 years of bad decisions”, so I wouldn’t consider it to have many worthwhile design lessons that are applicable outside C++.

                                                                                                        I think Java/C# are interesting design points, because they had the requirements that the main way of distributing the artifacts could not require recompilation/linking/etc. unlike Rust or Go.

                                                                                                        It works reasonably well, the artifacts are rather small compared to other languages, and compatibility is well-understood.

I think that’s a reasonable starting point from which one can ask “why does Rust have all this complexity, which doesn’t seem to buy me anything, while having to satisfy fewer requirements than Java?”.

                                                                                                  2. 4

                                                                                                    Most of the answers to why in the rust module system are “to avoid whole-program recompilations or dependency cycles”.

                                                                                                    1. 4

I don’t see the complexity, really. Modules mostly map to files (if you keep things simple), like in Python, except you replace __init__.py (or whatever it is) with mod.rs. You can also nest modules in other modules, giving you easy namespacing. I have a hard time imagining a simpler system that would still be reasonably expressive (i.e. not “one class per file” like Java, or no namespacing at all like C).
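A minimal sketch of that file-to-module mapping (the names here are made up for illustration); an inline `mod` behaves exactly like one loaded from a file:

```rust
// In a real crate, `mod widgets;` in src/lib.rs would load
// src/widgets.rs (or src/widgets/mod.rs). Inline modules use the
// same keyword and give the same namespacing without extra files:
mod widgets {
    pub mod button {
        pub fn label() -> &'static str {
            "OK"
        }
    }
}

fn main() {
    // Items are addressed by their module path:
    println!("{}", widgets::button::label()); // prints "OK"
}
```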

                                                                                                      1. 2

                                                                                                        To me, the benefits are:

                                                                                                        • It’s self-describing and doesn’t require scanning the file system. Lack of implicitly-discovered files means you can define modules conditionally (without module-specific features in the language). Stray files from botched git merges or editor backups don’t get included accidentally.

• It is simple. People get confused because it’s different from other module systems, but by itself the rule of “mod defines a scoped item the same way as fn, struct, and enum do” makes a lot of sense. In other languages modules/packages are a meta-language on top of the language, often with their own syntax and rules. In Rust mod and struct are both “items” that behave similarly in many ways.

                                                                                                        • pub use is very neat for designing your library’s public API separate from your library’s internal structure (e.g. I may want to have one widget per file for my convenience, but for user it’d be silly to import library::widgets::foo::Foo. I can make it library::FooWidget or library::widgets::Foo.) What is also cool about it is that it’s a design pattern that comes from a logical combination of two features (pub and use), rather than being a special-case export feature.
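A sketch of that pub use pattern, reusing the comment’s own example names (widgets, Foo) purely as illustration:

```rust
// Internal layout: one widget per file/module for the author's convenience.
mod widgets {
    pub mod foo {
        pub struct Foo;
        impl Foo {
            pub fn name(&self) -> &'static str {
                "Foo"
            }
        }
    }
}

// Re-export at the crate root: users see `FooWidget`,
// not the internal `widgets::foo::Foo` path.
pub use widgets::foo::Foo as FooWidget;

fn main() {
    let w = FooWidget;
    println!("{}", w.name()); // prints "Foo"
}
```

Note that nothing here is a special export feature: it is just an ordinary `use` made public with `pub`.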

                                                                                                        1. 1

                                                                                                          It’s self-describing

                                                                                                          Having to hand-craft the public API because the library’s structure has no bearing on it – I believe that’s the opposite of self-describing.

                                                                                                          doesn’t require scanning the file system

                                                                                                          Agreed, though there are other approaches that do this just as well.

                                                                                                          you can define modules conditionally

I think there are better approaches for that. Rust’s approach tends to result in #ifdef’ed files all over the place.

                                                                                                          In my experience, for instance Scala’s approach of having a directory tree with “shared source” and then having “conditional sources” in a separate directory tree (controlled by the build system) is much cleaner and easier to understand.

                                                                                                          It is simple

                                                                                                          I wouldn’t say that. The structure of the source does not reflect the API exposed, I’d say that’s rather unintuitive.

                                                                                                          And having to make decisions between pub mod foo and pub use self::foo::{...} is not great either.

                                                                                                          I may want to have one widget per file, but for user it’d be silly to import …

I guess this is where we disagree – I don’t think this benefit (to library authors) is worth the costs (to library consumers) of the added choice and flexibility.

                                                                                                          (I’d rather have the “inconvenience” of having to write some things in the same file, than having to wonder where things are every time I read a library by someone else.)

                                                                                                          1. 2

                                                                                                            It’s self-describing in the sense that the source code describes the crate structure, not an external build system or the filesystem.

                                                                                                            The structure of the source does not reflect the API exposed, I’d say that’s rather unintuitive.

                                                                                                            Users never access files of libraries, only the public API, so there’s nothing to be confused about.

                                                                                                            And for contributors, it’s not a problem to find what is where, because the source code contains the mod and use declarations which are sort-of like waypoints or table of contents for navigating the source.

                                                                                                            1. 1

                                                                                                              It’s self-describing in the sense that the source code describes the crate structure, not an external build system or the filesystem.

                                                                                                              Other approaches also do this, just without the additional indirection Rust affords.

                                                                                                              Users never access files of libraries, only the public API, so there’s nothing to be confused about.

                                                                                                              I have found it more productive (given the state of crate documentation and rust docs in particular) to simply read the source, instead of hunting for documentation that may or may not exist at some unknown place.

                                                                                                              1. 1

                                                                                                                instead of hunting for documentation that may or may not exist at some unknown place

                                                                                                                docs.rs/<name of a crate>

                                                                                                                If something isn’t there, run cargo doc --open from the source to get the same.

                                                                                                                1. 1

                                                                                                                  That’s not what I meant – my problem is that many times the documentation just hasn’t been written, or is missing crucial aspects.

                                                                                                                  1. 1

                                                                                                                    For that I recommend clicking [src] links in rustdoc output, which take you straight to relevant implementation.

                                                                                                                    1. 1

                                                                                                                      Yes, that’s basically my recommendation – skip the docs, just read the source.

                                                                                                      1. 7

                                                                                                        and the import function downloads the file and caches it to ~/.import-cache, forever.

                                                                                                        Hmm, wouldn’t it make more sense to use ~/.cache instead of littering in ~?

                                                                                                        Anyways, I think using URL imports is way more appropriate in shell scripts than in Go.

                                                                                                        1. 6

                                                                                                          Very nice to see the fully-LLVM thing, especially libc++.

                                                                                                          1. 10

                                                                                                            Why’s that nice? I have nothing against LLVM, but I don’t feel it’s better than GNU either, so I’m curious about your reasoning.

                                                                                                            1. 12

                                                                                                              I use FreeBSD, so my main selfish reason: I want Linux people to adopt libc++ more so that they stop writing software that fails on libc++! Often that’s due to silly stuff like missing includes (relying on libstdc++’s incidental transitive includes).

                                                                                                              1. 1

                                                                                                                I thought they fixed that in gcc 10?

                                                                                                                1. 1

That’s a cool change, but I’m not sure stdexcept is the only such thing. And of course not every developer has tested everything on this version of libstdc++ yet.

                                                                                                              2. 4

I like to see software built with many different kinds of compilers, linkers, assemblers, kernels, libcs, and other libs, as a way to reveal bugs or unportable features.

Also, I like to see multiple implementations of essential components; it’s partly an indicator that a good abstraction was found, since it is easy to re-implement.

                                                                                                                1. 2

                                                                                                                  LLVM has nicer debugging tools for certain things, a C++ interpreter among them :)

                                                                                                                  1. 1

                                                                                                                    Not the OP, but the fact that there are plenty of other distributions stressing the GNU toolchain means that LLVM gets short shrift, to some degree.

                                                                                                                1. 5

                                                                                                                  This is very cool. It’s unfortunate that the very first example is the Lena image, though. :-(

                                                                                                                  1. 7

I wish people would stop using that image as an example. Not because of its content—though that part isn’t ideal either—but because the source file is so old and mediocre that it isn’t remotely representative of the images we routinely handle today. Worst of all, the extreme colour cast makes it useless for judging skin tones.

                                                                                                                    1. 4

                                                                                                                      The author doesn’t seem to be from the US, and the last commit appears to be from 2016.

                                                                                                                      1. 3
                                                                                                                        1. 3

I find it very unfortunate that there are people who find this unfortunate. You are literally spreading cancel culture.

                                                                                                                          1. 5

I agree, the whole “Losing Lena” movement was a total beat-up of what is really just a small number of people using this image essentially as an in-group meme. The cropped version (the only one I’ve ever seen used in the past 20+ years) is ridiculously tame compared to images that continuously bombard us in contemporary media, especially marketing targeted at women.

                                                                                                                            That’s not to say we should continue using the image. We shouldn’t. We should stop using it because it’s grossly unrepresentative of photographic depictions of human skin. We should stop using it because the image is horrifically poor in quality. We should also stop using it because there’s no reason to use something with even a jot of sexual content.

                                                                                                                            1. 4

                                                                                                                              No one is being cancelled, though?

                                                                                                                              1. 3

                                                                                                                                I mean… I’m pretty skeptical of a lot of this stuff, but afaik Lena has personally asked people to stop doing it. It’s not some “hypothetically she might be upset”, it’s “the subject of this picture has requested that you find an alternative”.

                                                                                                                                1. 3

                                                                                                                                  Do you have a reference on Lena asking people to stop using it? I couldn’t find anything in that Wikipedia article, and it would change the situation considerably.

                                                                                                                                  I mean, regardless of whether she actually said anything or not, I think I would be on the side of “maybe we shouldn’t be using that image everywhere”. A lot of my work involves working with raw pixel data, and I find the tool at https://rawpixels.net super useful. However, it uses the Lena image as a default/placeholder image, and looking at very obviously sensual pictures of women, who show no signs of having clothes on, on a 28” display at work, in an open office space, is pretty awkward.

                                                                                                                                  1. 10

                                                                                                                                    I’m Lena. I retired from modeling a long time ago. It’s time I retired from tech too.

                                                                                                                                    https://vimeo.com/372265771

                                                                                                                                    1. 0

                                                                                                                                      Do you have a reference on Lena asking people to stop using it? I couldn’t find anything in that Wikipedia article, and it would change the situation considerably.

Hrm. I recalled it was when she went to the conference of the Society for Imaging Science and Technology. I’ve gone and dug up what she actually said, and it would take a real effort to twist it into the interpretation I’d heard. It sounded to me more like she’s bemused by the popularity of the image than upset.

                                                                                                                                      Deliberately not reposting what she said here as I’m disinterested in a long thread of people dissecting it.

                                                                                                                                      1. 4

                                                                                                                                        She does ask for folks to stop using her photo for this purpose in the video at https://www.losinglena.com/.

                                                                                                                                        1. 0

                                                                                                                                          Ah. I didn’t watch it because I dislike getting info via video and they’ve declined to offer any other format.

                                                                                                                                          Can you suggest a timestamp to make checking easy?

                                                                                                                              1. 2

                                                                                                                                However, there are a few options for users to protect themselves from ETag tracking:

                                                                                                                                • Disable cache in the browser settings […]
                                                                                                                                • Modify headers with a browser add-on […]

                                                                                                                                Or the simplest of them all, refresh the page while holding down [Shift].
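To make the mechanism concrete, here is a toy simulation of ETag tracking (no real HTTP involved; the Server type is invented for illustration): the server hands each new visitor a unique ETag, and the browser echoing it back in If-None-Match re-identifies the visitor. An uncached (Shift-)reload omits the header and breaks the link:

```rust
use std::collections::HashMap;

struct Server {
    next_id: u64,
    seen: HashMap<String, u64>, // etag -> visitor id
}

impl Server {
    // Returns (etag sent to the client, recognised visitor id if any).
    fn respond(&mut self, if_none_match: Option<&str>) -> (String, Option<u64>) {
        if let Some(etag) = if_none_match {
            if let Some(&id) = self.seen.get(etag) {
                // 304 Not Modified: visitor recognised without any cookie.
                return (etag.to_string(), Some(id));
            }
        }
        // First (or uncached) visit: mint a fresh, unique ETag.
        self.next_id += 1;
        let etag = format!("\"user-{}\"", self.next_id);
        self.seen.insert(etag.clone(), self.next_id);
        (etag, None)
    }
}

fn main() {
    let mut server = Server { next_id: 0, seen: HashMap::new() };
    let (etag, first) = server.respond(None);      // first visit: not recognised
    let (_, second) = server.respond(Some(&etag)); // cached revisit: recognised
    assert_eq!(first, None);
    assert_eq!(second, Some(1));
    // A Shift-reload sends no If-None-Match, so the visits can't be linked:
    let (_, third) = server.respond(None);
    assert_eq!(third, None);
}
```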

                                                                                                                                1. 1

                                                                                                                                  That protects from one isolated tracking incident. If you want to generally be protected from this kind of tracking, you should find some way to automate refreshing/loading without cache.

                                                                                                                                1. 35

                                                                                                                                  Cookies Are On Their Way Out

                                                                                                                                  No, not really.

                                                                                                                                  It’s true many front end devs are relying more and more on JWTs, Paseto, etc, but cookies are in many cases the best option.

                                                                                                                                  If you use these options when creating the cookie you will prevent most security problems:

                                                                                                                                  • secure: the cookie will only be sent via https
• sameSite: the cookie will only be sent with requests originating from the site that set it. This prevents most cross-site request forgery (CSRF) attacks.
• httpOnly: the cookie will not be available to JS, so XSS can no longer steal the JWT.
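Concretely, those three options translate into attributes on the Set-Cookie response header; this helper is just an illustrative sketch (real web frameworks build the header for you):

```rust
fn set_cookie(name: &str, value: &str) -> String {
    format!(
        "Set-Cookie: {}={}; Secure; HttpOnly; SameSite=Strict",
        name, value
    )
}

fn main() {
    // Secure: sent over HTTPS only.
    // HttpOnly: invisible to JavaScript, so XSS can't read it.
    // SameSite=Strict: not attached to cross-site requests.
    println!("{}", set_cookie("session", "abc123"));
    // prints: Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Strict
}
```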

                                                                                                                                  Also, if you read the EU GDPR documentation on cookies, you do not need to show the “accept cookies” button on authentication cookies:

                                                                                                                                  Strictly necessary cookies — These cookies are essential for you to browse the website and use its features, such as accessing secure areas of the site. Cookies that allow web shops to hold your items in your cart while you are shopping online are an example of strictly necessary cookies. These cookies will generally be first-party session cookies.

                                                                                                                                  And later on:

                                                                                                                                  To comply with the regulations governing cookies under the GDPR and the ePrivacy Directive you must: Receive users’ consent before you use any cookies except strictly necessary cookies.

                                                                                                                                  1. 55

                                                                                                                                    Also privacy laws are about data collection and data processing in general. They don’t care which HTTP header you use and how cleverly you obtain the data.

                                                                                                                                    If you’re collecting non-essential information, you need consent, whether that’s cookie consent, etag-tracking consent, or consent to be followed by an RFC 1149 homing pigeon.

                                                                                                                                    1. 21

                                                                                                                                      One of the great successes of the advertising and privacy invasion industry is convincing people that the EU cookie law is ridiculous and forces every website to present annoying pop-ups. The same has been tried (though less successfully in my experience) with the GDPR and its annoying pop-ups.

                                                                                                                                      More people need to know that actually, no website has to show GDPR or cookie banners. Websites only have to show those banners and pop-ups when they actually violate your privacy in really creepy ways.

                                                                                                                                      1. 9

                                                                                                                                        many front end devs are relying more and more on JWTs, Paseto, etc

That’s very apples to oranges. JWT is how you generate a token; cookies are how you store/transfer one — you could put a JWT into a cookie. It’s kinda annoying when people say “JWTs etc” to really mean “localStorage”.

                                                                                                                                        1. 3

                                                                                                                                          And a cookie, with the flags mentioned in the root comment (secure, httpOnly, sameSite), is the best way to store a JWT in the browser. localStorage is insecure and should be avoided; it’s like using cookies without those options:

                                                                                                                                          https://snyk.io/blog/is-localstorage-safe-to-use/

                                                                                                                                          1. 1

                                                                                                                                            You’re absolutely right, but I think we can at least agree that storing JWTs in localStorage and sending them as an auth header is by far the most popular way of using JWTs.

                                                                                                                                          2. 7

                                                                                                                                            Also, if you read the EU GDPR documentation on cookies, you do not need to show the “accept cookies” button on authentication cookies:

                                                                                                                                            I always supposed it was the case, so I’m glad it’s actually in the official language, thanks for quoting it here!

                                                                                                                                            It almost looks like commercial companies are deliberately hiding behind a neutral technological term “cookies” to not scare off customers by calling them “personal tracking markers from surveillance 3rd parties”.

                                                                                                                                            1. 2

                                                                                                                                              This is great info, thanks for sharing!

                                                                                                                                            1. 4

                                                                                                                                              Because Go doesn’t support operator overloading or define operators in terms of methods, there’s no way to use interface constraints to specify that a type must support the < operator (as an example). In the proposal, this is done using a new feature called “type lists”, an example of which is shown below:

                                                                                                                                              // Ordered is a type constraint that matches any ordered type.
                                                                                                                                              // An ordered type is one that supports the <, <=, >, and >= operators.
                                                                                                                                              type Ordered interface {
                                                                                                                                                  type int, int8, int16, int32, int64,
                                                                                                                                                      uint, uint8, uint16, uint32, uint64, uintptr,
                                                                                                                                                      float32, float64,
                                                                                                                                                      string
                                                                                                                                              }
                                                                                                                                              

                                                                                                                                              In practice, a constraints package would probably be added to the standard library which pre-defined common constraints like Ordered. Type lists allow developers to write generic functions that use built-in operators:

                                                                                                                                              // Smallest returns the smallest element in a slice of "Ordered" values.
                                                                                                                                              func Smallest(type T Ordered)(s []T) T {
                                                                                                                                                  r := s[0]
                                                                                                                                                  for _, v := range s[1:] {
                                                                                                                                                      if v < r { // works due to the "Ordered" constraint
                                                                                                                                                          r = v
                                                                                                                                                      }
                                                                                                                                                  }
                                                                                                                                                  return r
                                                                                                                                              }
                                                                                                                                              

                                                                                                                                              Hmm, to me this looks like the Smallest function would only accept the built-in types with a less-than operator <. Don’t you lose a whole lot of usefulness if you’re not allowed to write your own Ordered type which can be used with a Smallest function?

                                                                                                                                              1. 3

                                                                                                                                                Yes.

                                                                                                                                                People will be forced to decide whether their ordering-based data structures (TreeMaps, RB-trees, …) will use Ordered and therefore only work on a small set of “blessed” types, or whether they define their own, which … well … just imagine how great it will be to have two dozen replacement types for Ordered-for-my-own-types.

                                                                                                                                                (All modulo embedding, as far as I remember.)

                                                                                                                                                1. 1

                                                                                                                                                  This is not true. Using the type syntax with builtin types allows operator usage on any type (both builtin and user-defined) that either appears in the constraint list itself or has an underlying type that appears there.

                                                                                                                                                  https://go2goplay.golang.org/p/59oHxvIEp0A

                                                                                                                                                  This doesn’t work for struct types, but Go never had operator overloading in the first place, so no struct type would support these operators anyway.

                                                                                                                                                  1. 7

                                                                                                                                                    Yeah, but soc’s point stands: if you’re building an ordering-based function like Smallest() or data structure like a binary tree, you’ll either:

                                                                                                                                                    1. Choose the much more limiting Ordered constraint and use <, but then your function or tree will only be able to use built-in ordered types (or types with an underlying type that’s a built-in ordered type). That’ll work great for Tree(int) or Tree(MyInt), but won’t work for Tree(MyStruct).
                                                                                                                                                    2. Or you’ll have the constraint be an interface with a Less method, and you won’t be able to use the built-in types like Tree(int) at all … you’d be required to define a MyInt type that implements Less, and convert everything to MyInt before putting it into the tree.

                                                                                                                                                    Option #2 is definitely more flexible, as it at least allows any type to go into the container, but with the conversions it’ll be more boilerplate than you want for simple types.

                                                                                                                                                    The latest generics draft handles this by passing in a “compare function”, for example see Map in the Containers section. That kind of skirts the constraint/interface issue … but maybe that’s okay?

                                                                                                                                                    Here’s a Go2Go Playground example that shows what I’m talking about: https://go2goplay.golang.org/p/Rbs374BqPWw
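For what it’s worth, the same trade-off can be sketched in the generics syntax Go eventually shipped in 1.18, which differs from the 2020 draft quoted above (square brackets instead of parentheses, `~`-unions instead of type lists). The names `Ordered`, `Lesser`, `Smallest1`, and `Smallest2` here are illustrative, not from any proposal, and the union is abbreviated:

```go
package main

import "fmt"

// Option 1: a union constraint. Works for built-ins and types whose
// underlying type is listed, but never for structs.
type Ordered interface {
	~int | ~int64 | ~float64 | ~string // abbreviated; the real list is longer
}

func Smallest1[T Ordered](s []T) T {
	r := s[0]
	for _, v := range s[1:] {
		if v < r {
			r = v
		}
	}
	return r
}

// Option 2: a method constraint. Works for any type that implements Less,
// but excludes built-ins like int without a wrapper type.
type Lesser[T any] interface {
	Less(T) bool
}

func Smallest2[T Lesser[T]](s []T) T {
	r := s[0]
	for _, v := range s[1:] {
		if v.Less(r) {
			r = v
		}
	}
	return r
}

type Point struct{ X, Y int }

// Less orders points by squared distance from the origin.
func (p Point) Less(q Point) bool { return p.X*p.X+p.Y*p.Y < q.X*q.X+q.Y*q.Y }

func main() {
	fmt.Println(Smallest1([]int{3, 1, 2}))          // built-ins work with option 1
	fmt.Println(Smallest2([]Point{{3, 4}, {1, 1}})) // structs work with option 2
}
```

Neither function accepts the other's types, which is exactly the fork in the road being discussed.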

                                                                                                                                                    1. 1

                                                                                                                                                      This is not true. Using the type syntax with builtin types allows for operator usage on all types (both builtin and user-defined) where the either the builtin type appears in the constraint list, or a custom type with the same underlying type.

                                                                                                                                                      Exactly as I said:

                                                                                                                                                      (All modulo embedding, as far as I remember.)

                                                                                                                                                      1. 1

                                                                                                                                                        I see. I misunderstood what you meant there because that is not the correct Go terminology. Embedding is only possible when defining struct and interface types. In the type newtype oldtype definition, oldtype is referred to as the “base type” and doesn’t inherit any of its methods, but does gain its operators.

                                                                                                                                                1. 4

                                                                                                                                                  Will be interesting whether they stick with (type T) – I’d assume so, but this further decreases Go’s readability.

                                                                                                                                                  I think the alternatives don’t have a lot of chances here:

                                                                                                                                                  • <> – Bad choice, also NIH.
                                                                                                                                                  • [] – But but fAmiLiArItY, also hard to pull off given the existing misuse of [] in Go.

                                                                                                                                                  Nice to see, though, that Phil Wadler could talk some sense into them.


                                                                                                                                                  Here are my cliff notes from the last time I read the current design proposal:

                                                                                                                                                  Although methods of a generic type may use the type’s parameters, methods may not themselves have additional type parameters. Where it would be useful to add type arguments to a method, people will have to write a suitably parameterized top-level function.

                                                                                                                                                  This is not a fundamental restriction but it complicates the language specification and the implementation.

                                                                                                                                                  Preferring the convenience of a dozen compiler writers over solving users’ pains sounds like a bad idea.

                                                                                                                                                  Especially considering that if it were added later, it would be a breaking change for every user who wants to clean up his code and make use of this feature.
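To illustrate the restriction, here is a sketch in the bracketed syntax Go later shipped (1.18+), not the draft’s; `Set` and `MapSet` are hypothetical names of mine:

```go
package main

import "fmt"

// Set is a generic set. Its methods may use T freely.
type Set[T comparable] map[T]struct{}

func (s Set[T]) Add(v T) { s[v] = struct{}{} } // fine: uses only T

// Not allowed: a method cannot introduce its own type parameter U.
//   func (s Set[T]) Map[U comparable](f func(T) U) Set[U]

// The prescribed workaround: a suitably parameterized top-level function.
func MapSet[T, U comparable](s Set[T], f func(T) U) Set[U] {
	out := make(Set[U], len(s))
	for v := range s {
		out[f(v)] = struct{}{}
	}
	return out
}

func main() {
	s := make(Set[int])
	s.Add(2)
	s.Add(3)
	doubled := MapSet(s, func(i int) int { return i * 2 })
	_, ok := doubled[4]
	fmt.Println(ok)
}
```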

                                                                                                                                                  we introduce a new predeclared type constraint: comparable

                                                                                                                                                  Not sure why this one is written in lower case. (Skipping comment on the poor naming.)

                                                                                                                                                  It’s also kinda inconsistent with the approach to comparisons earlier on, which don’t get their own special builtin constraint.

                                                                                                                                                  Both ==/!= and the </<=/… earlier could have been handled with less special-casing by defining a suitable constraint for each, with the corresponding methods, and – if desired – restricting the set of types that are allowed to satisfy them.

                                                                                                                                                  The rule is that if a type constraint has a single type parameter, and it is used in a function’s type parameter list without an explicit type argument, then the type argument is the type parameter being constrained.

                                                                                                                                                  Unnecessary.

                                                                                                                                                  Therefore, we propose that the language change so that func(x(T)) now means a single parameter of type x(T). This will potentially break some existing programs, but the fix will be to simply run gofmt.

                                                                                                                                                  Good.

                                                                                                                                                  Values of type parameters are not boxed

                                                                                                                                                  Good.

                                                                                                                                                  It’s impossible for non-generic code to refer to generic code without instantiating it, so there is no reflection information for uninstantiated generic types or functions.

                                                                                                                                                  Good.

                                                                                                                                                  No covariance or contravariance of function parameters.

                                                                                                                                                  Will be interesting to see how this works out in practice.

                                                                                                                                                  No operator methods. You can write a generic container that is compile-time type-safe, but you can only access it with ordinary methods, not with syntax like c[k].

                                                                                                                                                  Makes sense – this kind of syntax sugar has been a mistake in most languages.

                                                                                                                                                  And, of course, there is no way to write a constraint to support either return nil or return 0.

                                                                                                                                                  One could use a Zero constraint that is automatically implemented (and cannot be implemented manually) for all types.
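For comparison: no Zero constraint was ever added, and the idiom that covers both “return nil” and “return 0” in the generics syntax Go later shipped is simply declaring a zero-valued variable of the type parameter. The function name `Zero` below is mine:

```go
package main

import "fmt"

// Zero returns the zero value of any type parameter, with no constraint
// needed: nil for pointers/slices/maps, 0 for numbers, "" for strings.
func Zero[T any]() T {
	var zero T
	return zero
}

func main() {
	fmt.Println(Zero[int](), Zero[string]() == "", Zero[*int]() == nil) // 0 true true
}
```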

                                                                                                                                                  Lots of Irritating Silly Parentheses

                                                                                                                                                  Agreed. Go is already looking … less-than-structured with its ident Type syntax, and having () double for types only makes it worse. If the language hadn’t added special builtin [] operators, it could have gone with [] for generics, which is generally the optimal design in bracket-based languages.

                                                                                                                                                  The design has no way to express convertability between two different type parameters.

                                                                                                                                                  Sounds like a job for a function from T1 to T2.
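That approach can be sketched as follows, in the released Go 1.18+ syntax rather than the draft’s; the name `Convert` is illustrative:

```go
package main

import "fmt"

// Convert maps a slice of T1 to a slice of T2. Rather than a convertibility
// constraint between type parameters, the caller supplies the conversion
// function explicitly.
func Convert[T1, T2 any](s []T1, conv func(T1) T2) []T2 {
	out := make([]T2, len(s))
	for i, v := range s {
		out[i] = conv(v)
	}
	return out
}

func main() {
	floats := Convert([]int{1, 2, 3}, func(i int) float64 { return float64(i) })
	fmt.Println(floats) // [1 2 3]
}
```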

                                                                                                                                                  We would need an untyped boolean type for operations such as ==(T) untyped bool.

                                                                                                                                                  This is unclear to me.

                                                                                                                                                  … untyped constants …

                                                                                                                                                  Seems like they have caused nothing but trouble – was probably a mistake to add them in the first place.

                                                                                                                                                  Map keys must be comparable, so key has the predeclared constraint comparable.

                                                                                                                                                  This doesn’t seem to resolve the issues around floating point numbers, but they existed like this before. Could have been an opportunity to do better.


                                                                                                                                                  Thanks for reading, and yes, it’s normal that the comment gets flagged almost immediately.

                                                                                                                                                  1. 2

                                                                                                                                                    Will be interesting whether they stick with (type T) – I’d assume so, but this further decreases Go’s readability.

                                                                                                                                                    I think the alternatives don’t have a lot of chances here:

                                                                                                                                                    • <> – Bad choice, also NIH.
                                                                                                                                                    • [] – But but fAmiLiArItY, also hard to pull off given the existing misuse of [] in Go.

                                                                                                                                                    Are you arguing only about the syntax or about some semantic issues as well? If just syntax, then why in your view is [] better than <>?

                                                                                                                                                    1. 5

                                                                                                                                                      In general, using <> is really hard to lex. What stream of tokens do you generate for Foo<Bar<10>>? Until C++11, C++ lexers would generate the stream Foo, ‘<’, Bar, ‘<’, 10, ‘>>’. The last >> was parsed as one right shift token rather than two closing ’>’s.

                                                                                                                                                      You can either make the language really awkward to write by requiring a space between the ’>’s, as C++ did before C++11, or you can build a rather complex system that feeds parse information back to the lexer so it emits “>>” as one or two tokens depending on context. Neither is an awesome solution.
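The ambiguity is easy to demonstrate with a toy maximal-munch tokenizer (a deliberately naive sketch of mine, not how any real compiler is written):

```go
package main

import (
	"fmt"
	"strings"
)

// naiveLex applies maximal munch: ">>" is always greedily taken as a single
// shift token, exactly the pre-C++11 behavior described above.
func naiveLex(src string) []string {
	var toks []string
	for i := 0; i < len(src); {
		switch {
		case strings.HasPrefix(src[i:], ">>"):
			toks = append(toks, ">>")
			i += 2
		case src[i] == '<' || src[i] == '>':
			toks = append(toks, string(src[i]))
			i++
		default:
			j := i
			for j < len(src) && src[j] != '<' && src[j] != '>' {
				j++
			}
			toks = append(toks, src[i:j])
			i = j
		}
	}
	return toks
}

func main() {
	fmt.Println(naiveLex("Foo<Bar<10>>")) // [Foo < Bar < 10 >>]
}
```

The two closing brackets fuse into one `>>` token, and only the parser knows whether a shift or two closing brackets was meant.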

                                                                                                                                                      1. 2

                                                                                                                                                        why in your view is [] better than <>?

                                                                                                                                                        1. <> is hard to read for humans
                                                                                                                                                        2. <> is hard to parse for compilers
                                                                                                                                                        3. It allows [] to be (ab)used for syntax “conveniences”

                                                                                                                                                        I wrote about it here.

                                                                                                                                                        1. 1

                                                                                                                                                          So, given a readable font, and a language where [] is used for arrays and/or can be overloaded for something else (like in Rust), there wouldn’t be a difference between [] and <>?

                                                                                                                                                          1. 4

                                                                                                                                                            If only proper angle brackets such as ⟨ ⟩ were usable (if they were easily typed, had universal font coverage, or ideally were in ASCII), I feel the programming world would be a much better place. Trying to imbue three different types of brackets (()[]{}) with many more than three meanings, often based on context, just makes things more difficult for everybody, language writers and users included.

                                                                                                                                                            So, given a readable font, and a language where [] is used for arrays and/or can be overloaded for something else (like in Rust), there wouldn’t be a difference between [] and <>?

                                                                                                                                                            I think the big difference is that even when abused for multiple different purposes, [ & ] still tend to come in matched pairs, which makes parsing them simple. < & > are comparison operators, which don’t usually appear in matched pairs, so if you have something like this < x > y > z, it’s much much harder for the reader, compiler, ide or vim plugin to tell whether this should be understood as ⟨x⟩ y > z or ⟨x > y⟩ z or whatever (That example is meaningless, but that’s part of the point: you can do a lot of sensible stuff with matched pairs of brackets without having the slightest idea what they mean).

                                                                                                                                                            1. 2

                                                                                                                                                              You still have the issue of bitshifts (<<, >>) and binary comparison operators (<, >, …).

                                                                                                                                                              You are still using something that is not a bracket – both as not-a-bracket and as a bracket. That’s not good.

                                                                                                                                                              Using () brackets for terms, [] brackets for types, and not using <> as brackets at all but as the comparison operators people have learned to read since kindergarten: that’s good.

                                                                                                                                                      1. 33

                                                                                                                                                        Someone on Reddit put it best:

                                                                                                                                                        Java devs never fail to be parodies of themselves.

                                                                                                                                                        This is so spot on.

                                                                                                                                                        Java is actually a good language; it’s the ecosystem that kills it for me. By “the ecosystem”, I don’t just mean the tooling (e.g. Java build tools are universally awful, AFAICT), but the developers themselves. So much Java I read is just “magic”: magic annotations, magic dependency injection, interfaces over classes that there is only one of, etc.

                                                                                                                                                        The author points out very real failures in the way it’s been architected, but the code in the OP isn’t all that strange-looking to me as Java, and that’s a pretty damning statement.

                                                                                                                                                        I wish Java had a better audience around it. The Kotlin ecosystem seems better, but I’ve never used it.

                                                                                                                                                        1. 24

                                                                                                                                                          (edit: my first pass at this came off a little overly negative in a way that I think betrays the seriousness of my point)

                                                                                                                                                          I’m not sure Java actually is a good language. My first introduction to Java was as the primary language used in college, and even as a pretty green programmer then (I’d started writing C and C++ a few years earlier in high school, but I was definitely not a good programmer), I found it awfully questionable. In the intervening 15 years I’ve managed to avoid it, until quite recently. It’s been really eye-opening to see how the language has evolved in that time, but not really in a good way.

                                                                                                                                                          Perhaps it’s because I’ve largely written OOP off as a bad idea that shouldn’t ever have taken off like it did, and is certainly outstaying its welcome, but I find that Java, and even the JVM itself, is a master class in solving the wrong problem in the most complex possible way. The complexity in Java seems to be like glitter: you can’t touch anything without getting covered in it, and once it’s on you, you’ll never get it off. Even working with people that I generally hold in high regard as developers, I see that the ecosystem has forced them into patterns and architecture that I think are questionable; except it’s not really their fault, because to do anything better would be to work against every single design decision in the language and ecosystem. There’s simply no reasonable way to write good Java; the best you can reasonably hope for is to write as little Java as possible, and hope the absurd complexity glitter doesn’t spread to all of your connected services by way of the blind “the whole world is Java” assumptions that the JVM ecosystem wants to make on your behalf.

                                                                                                                                                          I say Java here, but realistically I think that all JVM languages end up falling into the same gravitational well. I’ve been using Kotlin lately, and from what I’ve seen of Scala and Clojure, they are all infected by the same inescapable, fractally wrong view of the world imposed by the JVM, by way of the JVM itself being born from the primordial ooze of bad decisions and OOP Kool-Aid that led to Java in the first place. Kotlin in particular suffers from being not only unable to escape the Java ecosystem, but also from generally being a poorly designed language. Everything it adds to Java, it adds in such a superficial and impotent way that a slight breeze knocks over the facade and you realize you’re stuck back in the Kingdom of the Nouns all over again.

                                                                                                                                                          1. 7

                                                                                                                                                            I tend to agree about the Java part. The constructs at your disposal require you to write very, very verbose code, even for simple things. But I disagree about the JVM bit: I find it a pretty good runtime system. Although it tends to eat its fair share of RAM, the GCs and the JIT are first class. Overall, you get pretty decent perf without too much thought. Also, having written a lot of Clojure, I can say it’s vastly different from Java- it couldn’t be further from the Kingdom of the Nouns.

                                                                                                                                                            1. 14

                                                                                                                                                              The JVM feels to me like it was written for a world that never really happened. It promised cross-platform compatibility that never really materialized, since there are only two meaningful places where the JVM is heavily used these days (x86 Linux servers and ARM Linux phones). Even where the JVM itself is running on multiple platforms, it’s not running the same workloads across them. We would have been every bit as well off with a toolset that made native cross-compilation feasible (Go and Rust), and probably would have been no worse off even with the old C and C++ cross-compilation story. Love it or hate it, JavaScript is what actually fulfilled the promises that Java made and was never able to keep.

                                                                                                                                                              Other promises the JVM made either never made sense or haven’t held up- language interoperability existed before Java, and exists outside of it now. All the JVM did was fracture the environment by making it nearly impossible to produce native code; it’s a vampire if you look at it in terms of interoperability, unless you want to use GCC to compile your Java code. The isolation and runtime management is, consistently, 90% of the work involved in deploying any Java application I’ve used, and at the end of the day everyone does that work twice now, because most workloads are getting deployed in cloud-native containers anyway- so the JVM is superfluous there. GC is a pain in the JVM and has been done as well elsewhere without requiring the rest of the baggage of its runtime and JIT.

                                                                                                                                                              Looking at performance, I’m dubious that it has much going for it. It’s still not a contender in the same space as C or C++, and in many cases the pain of native interop makes it slower than even Python, because Python can rely on native code for a lot of the heavy lifting. I’ve even seen fairly well-optimized JVM code fail to keep up with reasonably (perf-)naive Haskell.

                                                                                                                                                              Even with instrumentation, the supposed killer feature of the jvm, I have yet to see anything I can’t get out of a native application with native tooling and instrumentation, and the case is getting weaker by the day as more and more application telemetry moves up and down the stack away from the application itself and into either tracing layers in front of services, or tooling built around things like ebpf that live very low down in the system and allow you to instrument everything.

                                                                                                                                                              The JVM is at best a middle-of-the-road performance platform with a largely superfluous ecosystem. It might have been a good idea when it was created, and I have no doubt a lot of smart engineering went into its implementation, but it’s time we give it up and realize it’s a sunk cost that we need to leave to the history books as a quirky artifact of the peculiar compute and business environment of the early 90s.

                                                                                                                                                              1. 3

                                                                                                                                                                Clojure’s okay (and still way better than Java, IMO) but suffers from pretty poor error handling compared to other Lisp environments.

                                                                                                                                                              2. 4

                                                                                                                                                                I really don’t like how Java makes optimization of its runtime overcomplicated, and then has the gall to make you deal with the complexity. There is no reason to be manually tuning GC and heap sizes when every other runtime, including CLR implementations, can deal with this efficiently and automatically. Those runtimes may be complex internally, but unlike the JVM, they don’t make you deal with that complexity.

                                                                                                                                                                1. 3

                                                                                                                                                                  Just curious what you dislike about Kotlin’s design? It seems like you make two points: that Kotlin can’t escape Java and, separately, that it’s poorly designed. I agree with the former, but in light of the former, I find Kotlin to be pretty well-designed. They fixed Java’s horrible nullness issues and even kludged in free functions to the JVM, which is neat. Data classes are a band-aid, but still help for the 80% of cases where they can apply. Same with sealed classes (I’d much prefer pattern matching akin to Rust, Swift, OCaml).

                                                                                                                                                                  1. 13

                                                                                                                                                                    My biggest issue is that everything feels like a kludge. Null tracking at the type level is fine, but they didn’t go far enough with the syntax to make it as useful as it could have been- Rust does better here by letting you lift values out of an error context inside a function. The language tries to push you toward immutability with val and var, but it’s superficial, because you’re getting immutable references to largely mutable data structures without even a convenient deep copy. Extension methods are a fine way of adding capabilities to an object, but you can’t use them to fulfill an interface à la Go, or outright extend a class with an interface implementation à la Haskell typeclasses, so you’re left with basically a pile of functions that swap an explicit argument for a this reference, and in the process you conceptually add a lot of complexity to the interface of an object with no good abstraction mechanism to be explicit about it. Even the nature of the language as a cross-platform language that can target the JVM, LLVM, and WebAssembly seems fundamentally flawed, because in practice the language lacks enough of a standalone ecosystem to ever be viable when it’s not being backed by the JVM- and even if you did have a native or web ecosystem, the design choices they made seem to be, as far as I can tell, about the worst approach I’ve ever seen to cross-platform interoperability.

                                                                                                                                                                    Ultimately, the features they’ve added all follow this pattern of pulling a good idea from elsewhere but implementing it in a way that doesn’t fulfill the deeper reason for the feature. The only underlying principle seems to be “make Java suck less”. That is, of course, a bar buried so low in the ground it’s in danger of being melted by the earth’s core, and I would say they did cross that bar- Kotlin does suck less than Java- but what’s astonishing to me is how, for such a low bar, they still seem to have cleared it only barely.

                                                                                                                                                                    1. 4

                                                                                                                                                                      I share every single one of those sentiments, but I’ve excused many of them specifically because of the limitations of being a JVM language. (interfaces at class definition, no const, no clones)

                                                                                                                                                                      I’ve almost taken it upon myself to periodically go and correct people in the Kotlin subreddit that val does not make things immutable, and that Kotlin has not cured the need for defensive copies in getters.
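
The point about val is the same one Java programmers hit with final: both pin the reference, not the object behind it. A minimal Java sketch of the pitfall (class and variable names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class FinalPitfall {
    public static void main(String[] args) {
        // `final` (like Kotlin's `val`) only makes the *reference* immutable;
        // the list it points at is still freely mutable.
        final List<Integer> xs = new ArrayList<>(List.of(1, 2, 3));
        xs.add(4); // compiles and runs fine
        System.out.println(xs); // [1, 2, 3, 4]
    }
}
```

This is exactly why a getter that hands out the list directly still needs a defensive copy (e.g. returning List.copyOf(xs)) if callers must not mutate the underlying state.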

                                                                                                                                                                      I think the idea of making Kotlin cross-platform is totally stupid for those same reasons you point out. All of that is a limitation of wanting to be on the JVM and/or close to Java semantics. Why the hell would you want to export that to non-JVM platforms?

                                                                                                                                                                      Thanks for the response.

                                                                                                                                                                2. 12

                                                                                                                                                                  I have a completely opposite opinion. Java is not the best language out there- I prefer Scala and Kotlin- but the selling point for me is the ecosystem: great tooling (tools simply work in lots of cases), great platform (lots of platforms are covered, really awesome backward compatibility, stability), and a great API (it might be far-fetched, but I have a feeling that Java’s stdlib is one of the most feature-packed runtimes out there, if not the most). The “magic” is the same problem as everywhere else; it’s magic until you know the details. Also, just because there’s a dependency injection trend in the backend development world, it doesn’t mean that you should use DI in every project. Interfaces over classes are a Java thing; they wouldn’t exist if the language were more sophisticated.

                                                                                                                                                                  Maybe I’m comparing Java’s ecosystem to C++’s- because with C/C++, the tooling is in an appalling state, the standard library is awful, and I’m not sure what it’s trying to achieve at times. So I guess I have very low standards to compare against :P

                                                                                                                                                                  1. 3

                                                                                                                                                                    Java has an incredibly rich ecosystem, that’s true. What annoys me though, is that every single tool in the Java ecosystem is written in Java, meaning you have a ton of CLI programs which take a couple of seconds just to heat up the JVM. Once the JVM is hot and ready to actually do work, the task is over and all that JIT work is lost.

                                                                                                                                                                    At least C/C++ tooling is fast :p

                                                                                                                                                                    1. 2

                                                                                                                                                                      That’s true, JVM startup time is a pain point. But there are several workarounds for that:

                                                                                                                                                                      • some tools use a build-server approach (Gradle), so that startup time is less slow ;)
                                                                                                                                                                      • some tools like sbt (the Scala build tool) use a build server + shell approach, and it’s possible to use a thin client to invoke a command on this build server (e.g. ‘sbt-client’, written in Rust). This can make Scala compilation take less time than compiling a C++ application.
                                                                                                                                                                      • GraalVM native-image is being pushed right now, which allows compiling a JVM (Java, Kotlin, Scala) application to native code that runs without a JRE. This allows writing tools with unnoticeable startup time, just like tools written in e.g. Go. I was testing some of my small tools with it, and it was able to compile a small Clojure app to native code with the same startup speed as a C++ application. Unfortunately, GraalVM can’t compile every app yet, but they’re working on it ;)

                                                                                                                                                                      Also, C/C++ tooling is fast, but C++ compilation is nowhere near fast. Changing one header file often means recompiling half the project. A bad build system (e.g. manually written Makefiles) that doesn’t track dependencies properly sometimes produces invalid binaries that fail at runtime, because some compilation units weren’t recompiled when they should have been. It can be a real mess.

                                                                                                                                                                  2. 10

                                                                                                                                                                    I wish Java had a better audience around it.

                                                                                                                                                                    That doesn’t seem terribly likely to happen.

                                                                                                                                                                    1. Java was never aimed at programmers who value power, succinctness, and simplicity - or programmers who want to explore paradigms other than OO (although newer versions of the language seem to be somewhat relaxing the Kingdom of Nouns[1] restrictions). It was intended to improve the lives of C++ programmers and their ilk[2].

                                                                                                                                                                    2. Java is frequently used in large, corporate, environments where programmers are considered (and treated as) fungible. “The new COBOL”, as it were[3].

                                                                                                                                                                    3. The JVM itself allows programmers not falling into (1) and (2) to abandon the Java language itself - Clojure, Scala, Kotlin, and Armed Bear Common Lisp spring (heh) to mind. Most of the best JVM programmers I know aren’t actually using Java. Most of the ‘Java shops’ I’ve worked with in the past decade are now, really, ‘JVM shops’.

                                                                                                                                                                    My observation is that most - to be clear, not all - people who continue using Java in 2020 are forced to do so by legacy codebases, and / or companies that won’t let them adopt new languages, even JVM languages. I honestly believe this is the proximate cause of the audience problem you describe. (Not the root cause, mind you).

                                                                                                                                                                    Edited: I’m amused by the fact that the first two, nearly concurrent, replies both reference Yegge’s nouns blog post :)

                                                                                                                                                                    [1] http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom-of-nouns.html

                                                                                                                                                                    [2] “We were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp.” - Gosling. http://www.paulgraham.com/icad.html

                                                                                                                                                                    [3] https://www.infoworld.com/article/3438158/is-java-the-next-cobol.html

                                                                                                                                                                    1. 5

                                                                                                                                                                      Java is a horrible language. No mortal can mentally hold on to a class hierarchy where inheritance is more than a few levels deep. Furthermore, inheritance is a bad way to add “another layer of abstraction”, because it just paints you further and further into a corner.

                                                                                                                                                                      (Clojure is a great language: you can add layers of abstraction to solve problems without just digging yourself deeper.)

                                                                                                                                                                      1. 3

                                                                                                                                                                        But one can write Java programs without abusing inheritance, and even pretty much without inheritance.
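
As a sketch of what inheritance-free Java looks like, composition with delegation (the IntStack class is illustrative, not from the thread):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Composition instead of inheritance: IntStack *has* a deque rather than
// extending a collection class. Contrast java.util.Stack, which extends
// Vector and therefore inherits Vector's entire API.
public class IntStack {
    private final Deque<Integer> items = new ArrayDeque<>();

    public void push(int x) { items.push(x); }
    public int pop() { return items.pop(); }
    public boolean isEmpty() { return items.isEmpty(); }
}
```

Only the stack operations are exposed; the delegate can be swapped out later without breaking callers, which is the usual argument for preferring composition.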

                                                                                                                                                                        1. 1

                                                                                                                                                                          Yes. I agree that Java is a horrible language, but class inheritance doesn’t even make the list of things I find poor about it.

                                                                                                                                                                      2. 3

                                                                                                                                                                        I don’t agree that Java is a good language at all, but I wanted to hard-agree with the distaste for magic annotations and DI frameworks.

                                                                                                                                                                        1. 1

                                                                                                                                                                          “Java is actually a good language, it’s the ecosystem that kills it for me. By “the ecosystem”, I don’t just mean the tooling (e.g. Java build tools are universally awful AFAICT), but the developers themselves. So much Java I read is just “magic”. Magic annotations, magic dependency injection, interfaces over classes that there is only one of etc. etc.”

                                                                                                                                                                          I don’t agree with this - in my experience, “magic” is “code that integrates my application-specific functionality with a massively feature-rich general purpose framework”. It’s magic in the sense that you need to understand the enclosing framework to understand why those annotations are there and their semantics, but they do real work. Work I’d have to do myself if I didn’t use them.

                                                                                                                                                                          You don’t see this much in other languages, but it’s because “massively feature-rich general purpose frameworks” aren’t common outside of Java. The ones that do exist seem to have punted on important architectural decisions - you don’t need a dependency injection framework if your data layer objects are just global values (I’m looking at you, Django).

                                                                                                                                                                          I’ve definitely felt this urge before - why do I need all this spring crap? Then I end up re-implementing half of that magic myself and not as well.
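
The hand-rolled half of that magic is usually just constructor injection; frameworks automate the wiring. A minimal sketch of DI by hand (Clock and Greeter are illustrative names, not any framework’s API):

```java
import java.time.LocalTime;

// The dependency is an interface, so tests can substitute a fake.
interface Clock { LocalTime now(); }

class Greeter {
    private final Clock clock; // handed in via the constructor, not looked up

    Greeter(Clock clock) { this.clock = clock; }

    String greet() {
        return clock.now().getHour() < 12 ? "Good morning" : "Hello";
    }
}

public class Wiring {
    public static void main(String[] args) {
        // The "container" here is just us calling the constructor. A DI
        // framework does this wiring for a large object graph, driven by
        // annotations - which is the real work the comment above describes.
        Greeter greeter = new Greeter(() -> LocalTime.of(9, 0));
        System.out.println(greeter.greet()); // Good morning
    }
}
```

With two classes this is trivial; the framework’s value shows up when the graph has hundreds of nodes, scopes, and lifecycles to manage.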

                                                                                                                                                                          1. 1

                                                                                                                                                                            What language has tooling that you like? Curious what you are comparing the Java build tools with

                                                                                                                                                                            1. 2

                                                                                                                                                                              Rust and Go (at least since Go modules) I find both intuitive and fast. I am still not sure how to properly build a Java application without an IDE.

                                                                                                                                                                              1. 1

                                                                                                                                                                                $ ./gradlew build

                                                                                                                                                                            1. 2

                                                                                                                                                                              It seems to imply that it is 66.94%, with mainly Safari doing its own thing. Am I reading it wrong?

                                                                                                                                                                              1. 1

                                                                                                                                                                                It’s 66.94% of users, but 35% of the browsers.

                                                                                                                                                                                1. 2

                                                                                                                                                                                  But is that really a useful distinction when many of those browsers are at less than 1% market share (and possibly no longer receive updates)? This is not a feature critical to the functioning of a website, so adopting it won’t break anything other than the presence of an icon for a minority of users. It’s still a choice to be made of course, I just think it’s a perfectly valid choice either way. It’s very different to adopting some new JavaScript syntax with the potential to completely break your site for many users, or CSS changes without a fallback that break the layout when not supported.

                                                                                                                                                                            1. 2

                                                                                                                                                                              I know that floating-point arithmetic is a bit crazy on modern computers. For example, floating-point numbers are not associative

                                                                                                                                                                              Interestingly, integer arithmetic isn’t associative either: (a + b) - c might give a different result than a + (b - c). Specifically, for some values of a, b and c, a + b might overflow, while b - c might not. (a=INT_MAX, b=1, c=1 is a trivial set of values where, in C, the first expression is undefined behavior while the second is well-defined.)
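
Both points can be checked in a few lines of Java (note that Java, unlike C, defines int overflow as two’s-complement wrapping, so the integer example happens to agree there rather than being undefined):

```java
public class Assoc {
    public static void main(String[] args) {
        // Floating point: grouping changes where rounding happens.
        // (0.1 + 0.2) + 0.3 rounds to 0.6000000000000001,
        // 0.1 + (0.2 + 0.3) rounds to 0.6.
        System.out.println((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)); // false

        // Integers: Integer.MAX_VALUE + 1 wraps to Integer.MIN_VALUE, and
        // subtracting 1 wraps back, so both groupings agree in Java.
        // In C the intermediate overflow would be undefined behavior.
        System.out.println((Integer.MAX_VALUE + 1) - 1 == Integer.MAX_VALUE); // true
    }
}
```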

                                                                                                                                                                              Basically, computers are weird.