Threads for massemanet

    1. 18

      I have made plenty of terrible technical decisions in my 25 years, but I am so, so, so happy to have completely disassociated myself from web stuff before all this node/Mongo business took off.

      1. 7

        I have a counter-point: I worked on some terrific MongoDB/Node apps. All of those had one thing in common: “easy developer setup” was never the reason for choosing them. One of them was a high-performance system that made great use of capped collections, and crash safety was not a concern: if the system crashed, all the data would be “old” anyways and not needed anymore. Node doesn’t get as much credit as it deserves: it’s the PHP of the new age, and both have undeniably moved the expected level of technology forward.

        The biggest problem with both systems is that they have become a kind of FOSS Oracle: no one ever gets fired for choosing them.

        1. 6

          “It’s the PHP of the new age” - apparently you mean that as praise?

          1. 16

            I actually do. PHP did bring web development with easy deployment to the masses.

            There’s a lot to not like about the language, but I also think it’s often undersold.

            1. 12

              There is a lot to like also. A lot of people jumped on the hate bandwagon just because it’s popular, but…

              • Closures were there all along. It’s basically Scheme (not really of course :D)
              • Nice integration of async/await into the whole thing (especially compared to the mess with event loops in Python 3)
                • most APIs were always asynchronous (with callbacks), so there’s no “we have to reinvent the whole world in async” problem
              • Template literals are nice (examples: SQL on the server, HTML in the browser)
              • I like the syntax in general, it’s pretty clean — automatic semicolon insertion, powerful object literals, no dollar signs…
              • Multiple great runtimes with fast JIT compilers, quick start time, relatively small memory footprint and decent portability

              As for the downsides… well, there’s nothing nearly as offensive as PHP’s “errors are not exceptions” and “there’s an operator that silences errors but you can override that in global settings but you can override that in the current state of the interpreter…”. Just the regular weak dynamic typing stuff. I like strong static and very expressive type systems, but I also like taking a break from them sometimes :)

              1. 2

                no dollar signs

                Are you talking about PHP?

                1. 3

                  JS of course, compared to PHP (and Perl in this case, though Perl has even more signs :D)

          2. 11

            While PHP looks like a big hairy mess now, you could see things rather differently in the 90s. Instead of comparing it to today’s popular scripting languages and how web development has since evolved, consider the competition back in the 90s: C/C++, Perl, and shell. Compared to how error-prone and detail-oriented the C family of languages is, and how much messier and less organized Perl and shell were back then, PHP looked like a savior.

            1. 2

              And ColdFusion if one had money.

              1. 3

                ColdFusion, please no more… Once upon a dark day in hell I ended up moving a primeval web app from ColdFusion to Perl to get around the atrocious limitations ColdFusion posed (at least back then, no idea if it evolved later), e.g. no nested queries. Doing the same thing in Perl was a breeze compared to the mess that was ColdFusion. This made it all the more clear that the ’net was ‘owned’ by free software and that all those commercial wanna-be’s had no place there. This project ended up being the start of a rather big move from commercial to free software at a former state-monopolist Telco in Europe, at least for where it concerned internet operations.

                1. 4

                  I wrote a Cold Fusion web app in 1999 that had loads of frames, because it was 1999 and frames were cool and the simplest way to get a layout with various panels updating independently. It was a shopping app and so it had a shopping cart, and it used a server-side session to manage the cart.

                  At certain points in the app, seemingly somehow connected but never actually pin-down-able, it would just crash - and I don’t mean the page or the app, I mean the Cold Fusion server would throw hairy black-and-white error pages with C++ stack traces warning about scary-sounding things that didn’t seem like things a Cold Fusion developer should be having to worry about, semaphores and mutexes and ritual sacrifice.

                  We had a Cold Fusion support contract and so I spent a while trying to get them to explain what was going on. It took a lot less time than I expected; they came pretty much straight out and said “oh, yah, that’s because the session implementation’s not thread-safe; the server uses threads to manage connections but there’s no real locking or anything in the session code. We did that on purpose to make it faster - it was really slow when we tried it with locking. Just don’t use sessions in frames and it’ll be fine”.

                  It was a shame because at that time it seemed like there were some good things about CF; in some senses the dev experience was quite polished, and there was a lot that came out of the box. Also it was neat that it had all this stuff designed to make it easy & familiar for web devs just using tags, but you could still drop into CFSCRIPT and use the same objects in an ECMA style. But of course all that’s for nothing if you do stuff like the above.

                2. 1

                  Glad yall were able to get off it haha. The software was a waste of a perfectly-good name, too. Maybe something using metaprogramming that fused many styles at once, with zero-cost abstractions and an optimizing compiler that kept your CPU from running hot for no reason. Missed opportunity.

        2. 2

          I don’t doubt it! But I’ve never had to touch Javascript or Mongo or <insert web tech stack thing here> and have still been able to footgun myself over and over. I shudder to think what enormities I could have committed.

        3. 1

          I read this as satire and irony. Surely you’re not serious?

          1. 3

            I am.

      2. 10

        MongoDB turned out to be an expensive decision (thanks to needing to keep so much of the index in RAM) for one project I worked on, but I stand by the choice at the time. In 2011, MongoDB was the only product that met our needs, which were mainly automated unattended failover and high availability, so I could run ops by myself (and by “by myself” I mean ops would run themselves so I could focus on the app backend) and still get some sleep at night.

        The product is shut down now, and I haven’t used MongoDB for a big project since then, but those automatic elections and remastering saved us many times.

    2. 5

      While functional programming languages like Haskell are conducive to modularity and otherwise generally good software engineering practices, they are unfit as implementation languages for what I will call interactive systems. These are systems that are heavily IO bound and must provide some sort of guarantee with regard to response time after certain inputs. I would argue that the vast majority of software engineering is the engineering of interactive systems, be it operating systems, GUI applications, control systems, high frequency trading, embedded applications, databases, or video games. Thus Haskell is unfit for these use cases. Haskell, on the other hand, is a fine implementation language for batch processing, i.e. non-interactive programs where completion time requirements aren’t strict and there isn’t much IO.

      This isn’t a dig at Haskell; it’s an intentional design decision. While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program. These are design trade-offs, not strict wins.

      1. 5

        While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program.

        Haskell makes it necessary to explicitly mark code which must be performed in sequence, which, really, is a friendlier way of doing things than what C effectively mandates: In C, you have to second-guess the optimizer to ensure your sequential code stays sequential, and doesn’t get reordered or removed entirely in the name of optimization. When the IO monad is in play, the Haskell compiler knows a lot of its usual tricks are off-limits, and behaves itself. It’s been explicitly told as much.

        Rust made ownership, previously a concept which got hand-waved away, explicit and language-level. Haskell does the same for “code which must not be optimized as aggressively”, which we really don’t have an accepted term for right now, even though we need one.
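
        To make that concrete, here is a minimal sketch of my own (not something from the thread) showing how IO sequencing falls out of data dependencies:

          import Data.IORef

          main :: IO ()
          main = do
            ref <- newIORef (0 :: Int)  -- later actions depend on ref
            writeIORef ref 42           -- must happen before the read below
            x <- readIORef ref          -- the data dependency pins the order
            print x                     -- always prints 42, never a stale 0

        Each bind feeds a result into the next action, so the compiler cannot reorder or drop these the way it might with pure code.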

        1. 8

          The optimiser in a C implementation absolutely won’t change the order in which your statements execute unless you can’t observe the effect of such changes anyway. The definition of ‘observe’ is a little complex, but crucially ‘my program is faster’ isn’t an observation that counts. Your code will only be reordered or removed in the name of optimisation if such a change is unobservable. The only way you could observe an unobservable change is by doing things that have no defined behaviour. Undefined behaviour exists in Haskell and Rust too, in every language.

          So I don’t really see what this has to do with the concept being discussed. Haskell really isn’t a good language for expressing imperative logic. You wouldn’t want to write a lot of imperative logic in Haskell. It’s very nice that you can do so expressively when you need to, but it’s not Haskell’s strength at all. And it has nothing to do with optimisation.

          1. 3

            What if you do it using a DSL in Haskell like Galois does with Ivory? Looks like Haskell made their job easier in some ways.

            1. 1

              Still part of Haskell and thus still uses Haskell’s awful syntax. Nobody wants to write “a <- local (ival 0)”, “b' <- deref b; store a b'”, or “n `times` \i -> do” when they could write “int a = 0;”, “a = *b;”, or “for (int i = 0; i < n; i++)”.

              1. 8

                “Nobody wants to”

                You’re projecting your wishes onto everybody else. There’s piles of Haskell code out there, many DSL’s, and some in production. Clearly, some people want to even if some or most of us don’t.

                1. 1

                  There are not ‘piles of Haskell code out there’, at least not compared to any mainstream programming language. Don’t get confused by its popularity amongst people on Lobsters, Hacker News, and proggit. It’s an experimental research language, not a mainstream programming language. It has piles of code out there compared to Racket or Idris or Pony, but compared to Python or C or C++ or Ruby or Java or C# or god forbid Javascript? It might as well not exist at all.

                  1. 2

                    I’m not confused. Almost all languages fail, getting virtually no use past their authors. The next step up gets a few handfuls of code. Haskell has had piles of it in comparison, plus corporate backing and use at small scale. Then there are larger-scale backings like Rust or Go. Then there are companies with big market share throwing massive investments into things like .NET or Java. There are also FOSS languages that got lucky enough to get similarly high numbers.

                    So, yeah, “piles of code” is an understatement given most efforts didn’t go that far, and a pile of paper with source might not cover the Haskell out there.

                    1. 1

                      I don’t care how popular Haskell is compared to the vast majority of languages that are used only by their authors. That’s completely irrelevant to the discussion at hand.

                      Haskell is not a good language for expressing imperative concepts. That’s plainly and obviously true. Defending it on the basis that it’s widely used ignores that firstly languages aren’t better simply because they’re widely used, secondly that languages can be widely used without necessarily being good at expressing imperative concepts, and thirdly that Haskell isn’t widely used.

              2. 4

                int a = 0 is okay, but not great. a = *b is complete gobbledygook that doesn’t look like anything unless you already know C, but at least it’s not needlessly verbose.

                for (int i = 0; i < n; i++) is needlessly verbose and it looks like line noise to anyone who doesn’t already know C. It’s a very poor substitute for actual iteration support, whether it’s n.times |i| or for i in 0..n or something else to express your intent directly. It’s kind of ridiculous that C has special syntax for “increment variable by one and evaluate to the previous value”, but doesn’t have special syntax for “iterate from 0 to N”.

                All of that is kind of a minor nit pick. The real point is that C’s syntax is not objectively good.
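
                For what it’s worth, Haskell (the other language in this thread) can also state that intent directly. A tiny sketch of my own, not anyone’s quoted code:

                  import Control.Monad (forM_)

                  main :: IO ()
                  main = forM_ [0 .. 9] print  -- "for i from 0 to 9, print i", stated directly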

                1. 2

                  How in the world are people unfamiliar with ruby expected to intuit that n.times|i| means replace i with iterative values up to n and not multiply n times i?

                  1. 2

                    A more explicit translation would be 0.upto(n) do |i|.

                2. 0

                  You do know C. I know C. Lots of people know C. C is well known, and its syntax is good for what it’s for. a = *b is not ‘gobbledygook’, it’s a terse way of expressing assignment and a terse way of expressing dereferencing. Both are very common in C, so they have short syntax. Incrementing a variable is common, so it has short syntax.

                  That’s not ridiculous. What I am saying is that Haskell is monstrously verbose when you want to express simple imperative concepts that require a single character of syntax in a language actually designed around those concepts, so you should use C instead of Haskell’s weird, overly verbose and syntactically poor emulation of C.

        2. 3

          How does Haskell allow you to explicitly mark code that must be performed in sequence? Are you referring to seq? If you’re referring to the IO Monad, it’s a fair point, but I think generally it’s considered bad practice to default to using the IO monad. This sort of thing creates a burden when programming Haskell, at least for me. I don’t want to have to constantly wonder if I’ll need to port my elegant functional code into sequential IO Monad form in the future. C++/Rust address this sort of decision paralysis via “zero-cost abstractions,” which make them both more fit to be implementation languages, according to my line of reasoning above.

          1. 5

            Personally, I dislike discussions involving “the IO Monad”. The key point is that Haskell uses data flow for control flow (i.e. it’s lazy). We can sequence one thing after another by adding a data dependency (e.g. making bar depend on the result of foo will ensure that it runs afterwards).

            Since Haskell is pure, compilers can understand and optimise expressions more thoroughly, which might remove ‘spurious’ data dependencies (and therefore sequencing). If we want to prevent that, we can use an abstract datatype, which is opaque to the compiler and hence can’t be altered by optimisations. There’s a built-in datatype called IO which works well for this (note: none of this depends at all on monads).
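
            A small sketch of that idea (my example, nothing beyond the base library assumed): bar consumes foo’s result, so it can only run afterwards.

              foo :: IO Int
              foo = do
                putStrLn "foo runs first"
                pure 41

              bar :: Int -> IO Int           -- bar needs foo's result...
              bar x = do
                putStrLn "bar runs second"   -- ...so it cannot run before foo
                pure (x + 1)

              main :: IO ()
              main = foo >>= bar >>= print   -- the data dependency fixes the order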

            1. 3

              The trouble is that oftentimes when you’re building time-sensitive software (which is almost always), it’s really inconvenient if the point at which a function is evaluated is not clear from the source code. Since values are lazy, it’s not uncommon to quickly build up an entire tree of lazy values, and then spend 1-2 seconds waiting for the evaluation to complete right before the value is printed out or displayed on the screen.

              You could argue that it’s a matter of setting correct expectations, and you’d be right, but I think it defeats the spirit of the language to have to carefully annotate how values should be evaluated. Functional programming should be about functions and pure computation, and there is no implicit notion of time in function evaluation.

              1. 4

                I agree that Haskell seems unsuitable for what is generally called “systems programming” (I’m currently debugging some Haskell code that’s been over-complicated in order to become streaming). Although it can support DSLs to generate suitable code (I have no experience with that, though).

                I was just commenting on using phrases like “the IO Monad” w.r.t. evaluation order, etc. which is a common source of confusion and hand-waving for those new to Haskell, or reading about it in passing (since it seems like (a) there might be something special about IO and (b) that this might have something to do with Monads, neither of which are the case).

              2. 2

                building time-sensitive software (which is almost always)

                Much mission-critical software is running in GC’d languages whose non-determinism can kick in at any point. There’s also companies using Haskell in production apps that can’t be slow. At least one was using it specifically due to its concurrency mechanisms. So, I don’t think your “almost always” argument holds. The slower, less-predictable languages have way too much deployment for that at this point.

                Even “time-sensitive” doesn’t mean what it seems to mean outside of real-time, since users and customers often tolerate occasional delays or downtime. Those they don’t tolerate might also be fixed with some optimization of the relevant modules. Letting things be a bit broken and fixing them later is the default in mainstream software. So it’s no surprise that it happens in lots of deployments that are supposedly time-critical.

                In short, I don’t think the upper bounds you’ve established on usefulness match what most industry and FOSS are doing with software in general or timing-sensitive (but not real-time).

                1. 2

                  Yeah, it’s a good point. There certainly are people building acceptably responsive apps with Haskell. It can be done (just like people are running Go deployments successfully). I was mostly speaking from personal experience on various Haskell projects across the gamut of applications. Depends on cost/benefit, I suppose. For some, the state-of-the-art type system might be worth the extra cycles dealing with the occasional latency surprise.

                  1. 2

                    The finance people liked it because it was closer to their problem statements (math-heavy), the apps had fewer defects/surprises vs Java/.NET/C, and the concurrency was safer. That’s what I recall from a case study.

          2. 1

            If you’re referring to the IO Monad, it’s a fair point, but I think generally it’s considered bad practice to default to using the IO monad

            Lmao, what? You can define >>= for any data type, effectively allowing you to create a DSL in which you can very precisely specify how the elements of the sequence combine, with neat do notation.

            1. 2

              Yes, that’s exactly the problem to which I’m referring: do notation considered harmful. Also, do notation isn’t enough to specify evaluation sequencing, since values are lazy. You must also carefully use seq.
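
              A quick sketch of the pitfall (my example, not the commenter’s): inside do notation, a let binding is still just a thunk until something forces it.

                main :: IO ()
                main = do
                  let total = sum [1 .. 1000000 :: Int]  -- bound, but not evaluated yet
                  total `seq` putStrLn "forced here"     -- seq evaluates it to WHNF first
                  print total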

              1. 1

                Ah well, I use a Haskell-like language that has strict-by-default evaluation and seems to be able to address a lot of those other concerns, at least by my cursory glance :)

                Either way, the benefits of do, in separating the logic and execution of procedures, look great to me. But I may be confusing them with the benefits of dependent typing; nevertheless, the former facilitates the latter when it comes to being able to express various constraints on a stateful system.

      2. 3

        For systems Haskell, you might like Habit from the people behind House, a Haskell OS. I just found some answers to the timing part that I’ll submit in the morning.

        1. 1

          The House website seems incredibly out of date!

          1. 3

            Oh yeah. It’s mostly historical. They dropped the work for the next project. Then dropped that for an even better one. We got some papers and demos out of it.

          2. 2

            But so damn cool.

            1. 2

              Exactly! Even more so, there’s a lot of discussion of how to balance the low-level access against Haskell’s high-level features. They did this using the H Layer they describe in some of their papers. It’s basically like unsafe in Rust where they do the lowest-level stuff in one way, wrap it where it can be called by higher-level Haskell, and then do what they can of the rest in Haskell. I figured the concepts in H Layer might be reusable in other projects, esp safe and low-level. The concepts in Habit might be reusable in other Haskell or non-Haskell projects.

              It being old doesn’t change that. A good example is how linear logic appeared in the 1980s. That got used in ML first, I think, years later; then linear plus singleton types showed up in some safer C’s in the 2000s; and an affine variant of one of them landed in Rust, which made a huge splash with its “no GC” claim. Now, linear and affine types are being adapted to many languages. The logic is decades old, with people talking about using it for language safety for 10-20 years. Then someone finds it useful in a modern project with major results.

              Lots of things work that way. It’s why I submit older, detailed works even if they have broken or no code.

      3. 1

        none of the examples of “interactive systems” you mention are normally io bound. sub-second response time guarantees, otoh, are only possible by giving up gc and using a real-time kernel. your conclusion that Haskell is unusable for “these use cases” seems entirely unfounded. of course, using Haskell for real time programming is a bad idea, but no less bad than anything that’s, essentially, not C.

        1. 2

          I’ve had a few personal experiences writing large Haskell applications where it was more trouble than I thought it was worth. I regularly had to deal with memory leaks due to laziness, and with 1-5 second stalls at IO points where large trees of lazy values were evaluated at the last minute. I said this in another thread: it can be done, it just requires a bit more effort and awareness. In any case, I think it violates the spirit of Haskell programming to have to carefully consider latency issues, GC times, or lazy value evaluation when crafting pure algorithms. Having to trade off abstraction for performance is wasteful IMO; I think Rust and C++ nail this with their “zero cost abstractions.”

          I would label most of those systems IO bound. My word processor is normally waiting on IO; so is my kernel, so is my web app, so is my database, so is my Raspberry Pi, etc.

          1. 1

            I guess I’m picking nits here, but using lots of working memory is not “memory leaks”, and a program that is idling due to having no work to perform is not “io bound”. Having “to carefully consider latency issues, GC times, [other tradeoffs]” is something you have to do in every language. I’d venture that the ability to do so on a subconscious level is what distinguishes a skilled developer from a noob. This also, I think, plays a large part in why it’s hard for innovative/weird languages to find adoption; they throw off your sense of how things should be done.

            1. 1

              Yes you have to consider those things in all languages which is precisely my point. Haskell seeks to abstract away those details but if you want to use Haskell in any sort of “time-sensitive” way, you have to litter your pure, lazy functional code with annotations. That defeats the purpose of the language being pure and lazy.

              And yes, waiting on user input does make your program IO bound. If your program is spending more time waiting on IO and less time churning the CPU, it is IO bound. IO bound doesn’t simply mean churning the disk.

          2. 1

            I brought that up before as a counterpoint to using Haskell. A Haskeller gave me this link which is a setting for making it strict by default. Might have helped you out. As a non-Haskeller, I can’t say if it makes the language harder to use or negates its benefits. Worth looking into, though, since it was specifically designed to address things like bang patterns that were cluttering code.
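
            My guess is that the link was to GHC’s Strict language extension (an assumption on my part; the comment doesn’t say which setting it was). If so, a minimal sketch:

              {-# LANGUAGE Strict #-}  -- GHC 8.0+: bindings in this module become strict

              main :: IO ()
              main = do
                let total = sum [1 .. 1000000 :: Int]  -- evaluated at the binding
                print total                            -- no bang patterns needed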

    3. 1

      My impressions are as follows.

      The interface is bad. The email notifications are useless and don’t distinguish between ‘hey, someone sent you a direct message/mentioned you on a channel’ and ‘here is a dump of messages from last week’.

      Handling e2e encryption keys and device verification is terrible, including tying the device key to the browser user agent - I had to re-authenticate my browser after Chrome UA changed on OpenBSD.

      There are some messages that nothing except my phone can decrypt, and ‘requesting’ keys doesn’t help with it at all.

      The interface and service feel sluggish.

      e2e encryption is not enabled by default.

      I love the idea of Matrix & the Riot client - it lacks a lot of polish at this point in time. It’s annoying enough that I do not use it daily; I take a look at the OpenBSD Riot channel every few weeks - that’s all.

      1. 1

        e2e encryption is not enabled by default.

        I agree with a lot of what you said (although I disagree with the degree to which it is a problem). For this one, I’m not sure it is a negative. E2E encryption is still in beta, so turning it on by default would probably produce the opposite complaint from a lot of people, possibly even you, given your earlier statements on its quality. It also cannot be undone, so having it on for public channels would be annoying. I also don’t really think public channels need e2e, given anyone can join them. Maybe direct chats should be e2e by default once it’s ready, I’m not sure. But I do believe there is a valid argument for e2e encryption being off by default.

        1. 1

          For group channels - sure. I do believe, however, that e2e for direct messages should be on by default. Especially if they consider e2e encryption still a beta - it needs huge usage exposure before people start relying on it in the real world for serious stuff.

          1. 1

            I find your statement kind of confusing. You are suggesting we opt people into e2e encryption by default, but at the same time it’s not ready for serious stuff. IMO, letting people opt themselves in and slowly work out bugs and eventually transition people into it by default sounds like a more pleasant user experience than dropping everyone into a buggy solution. I can see merits to your suggestion, but my values prefer a slower solution.

            1. 2

              IMO, letting people opt themselves in and slowly work out bugs and eventually transition people into it by default sounds like a more pleasant user experience than dropping everyone into a buggy solution.

              I think it will lead to it remaining non-default forever and people sending messages without turning e2e encryption on. Defaults matter.

              I also believe it’s better to expose as many users as possible to the e2e feature now - people using matrix today are most likely technical already. It’s harder to change defaults when things go mainstream.

              1. 1

                I think it will lead to it remaining non-default forever and people sending messages without turning e2e encryption on

                Maybe! It’s hard to tell the future. At least anyone who is sufficiently motivated can write a client which does default to e2e encryption or can make a PR to Riot that defaults to it, etc etc (it’s a client decision not a server decision). I feel like you’re being overly pessimistic, but we’ll find out!

            2. 2

              IMO, letting people opt themselves in and slowly work out bugs and eventually transition people

              They need to just fix the bugs so we don’t have to slowly opt people in. Most of the private or FOSS alternatives to proprietary software fail due to user experience. Those developing them should’ve learned by now. I’d hold off on new features where possible to just fix everything people have reported. Then, do iterations as follows: build some stuff with good diagnostics or logging built-in; fix the bugs people report; build some more stuff; fix some more stuff; maybe trim anything that turned out unnecessary. Just rinse and repeat, maintaining a good user experience with core functionality that works well. If there are bugs, they should be in rarely-used features.

              1. 4

                They need to just fix the bugs so we don’t have to slowly opt people in.

                This statement is ridiculous. It’s an open source project with limited resources. Yes, it would be nice if they could just fix the bugs. Wouldn’t life be great in every project if that could just happen.

                Those developing them should’ve learned by now.

                It’s new people developing every project; it’s not Ocean’s 11, where the same crew gets together on every project. The number of people who can program is growing at an insane rate, and most of them are green.

                Then, do iterations as follows: …

                Feel free to run an open source project like this. But this isn’t a company with top-down management, it’s a bunch of actors in the world doing whatever they are doing and things happen. There is no-one in control.

                1. 2

                  This statement is ridiculous. It’s an open source project with limited resources. Yes, it would be nice if they could just fix the bugs. Wouldn’t life be great in every project if that could just happen.

                  There’s open source projects that fix their bugs. There’s others that ignore the bugs to work on other parts of the project like new features. So, it’s not ridiculous: it’s a course of action proven by other projects that focus on quality and polishing what they have. Many projects and products do ignore that approach, though, for endless addition of features.

                  Now, it might be acceptable to ignore bugs if users love the core functionality enough to work around them. Maybe the new features would be justified. Happens with a lot of software. However, bugs in the basic use of a chat client that is not in wide demand - bugs its competitors don’t have - are going to be a deal-breaker for a wide audience. It’s already a hard, uphill sell to get people to use private, encrypted clients like Signal that work. People mostly cite network effects of existing ecosystems, but also things like visuals and breakage of some features. Really petty given the benefits and developers available, but you gotta play to the market’s perception. Leaving the alternatives broken in whatever ways you were noticing just makes that hard sell worse, both for that project and any others that get mentally associated with that experience down the line. As in, people stop wanting to try encrypted chat programs when the last two or three were buggy as hell or had poor UI. It can even hurt the credibility of people recommending them.

                  “Feel free to run an open source project like this.”

                  There’s groups that do. They have fewer contributors but higher quality. Another alternative is one person who does care spending extra time on fixing bugs or QA-checking contributions. I’m usually that guy at my job, doing a mix of the stuff people overlook and the normal stuff. There’s people doing it in FOSS projects. This one clearly needs at least one person doing that. Maybe one more if some people are already doing it but are overloaded.

                  When it comes down to it, though, I said the group wanting a lot of people to switch to their chat client should fix the problems in it. Your counter implies they shouldn’t fix the problems in it. I’m assuming you meant they should keep doing more features or whatever they’re doing while ignoring the problems. I think for chat clients, fixing problems that would reduce or block adoption should be one of the highest priorities. Even a layperson would tell you they want their new tech to work about as well on its main functions as the old tech it’s replacing. The old ones work really well. So, the new one needs to. It’s that simple to them.

                  1. 1

                    There’s open source projects that fix their bugs.

                    Your counter implies they shouldn’t fix the problems in it.

                    Ok, I think we are talking about different things then, because that is not what I meant at all. I’m not saying they don’t fix their bugs; I’m saying they are slowly working a new feature out. Maybe it’s a language barrier, but that is what I meant here:

                    and slowly work out bugs and eventually transition

                    I think it’s better to give people a new feature they can opt into than force them into something broken.

                    1. 2

                      Maybe a misunderstanding. Your original writeup suggested they had bugs in quite a few things, including E2E messaging. E2E should be on by default due to its importance. So, I’m just saying that fixing esp. E2E messaging bugs should be high priority since it’s important and should stay on by default. Plus anything else causing problems in daily use.

                      1. 1

                        But that depends on what problem you think Matrix is solving. Currently it’s replacing Slack and IRC, both of which mostly focus on public rooms that anyone can join. E2E encryption doesn’t do much for you in those places. For direct messages, yeah it probably should be on by default. For the private rooms I’m in, we turned it on.

                        So if one thinks Matrix is the next step in IRC or replacing Slack, then E2E encryption isn’t a high priority for you.

                        So, I’m just saying that fixing esp. E2E messaging bugs should be high priority since it’s important and should stay on by default. Plus anything else causing problems in daily use.

                        It’s easy to dictate project priorities from an armchair.

                        1. 1

                          Currently it’s replacing Slack and IRC, both of which mostly focus on public rooms that anyone can join. E2E encryption doesn’t do much for you in those places.

                          That makes more sense. I assumed it had a privacy focus, since someone mentions it in every thread on stuff like Signal, and given the homepage line. If it’s just a Slack replacement, E2E wouldn’t make sense by default.

                          “It’s easy to dictate project priorities from an armchair.”

                          It really isn’t. There’s always lots of debate that follows that consumes time and energy. ;)

              2. 3

                Totally agree! Leaving bugs in the code is just stupid. You’d think they should’ve learnt that by now.

    4. 6

      This article is yet another indication that the Clang/LLVM developer culture is seriously broken. The level of reasoning and technical accuracy would be noticeably bad in an HN rant. The basic premise is nutty: nobody invested billions of dollars and huge amounts of effort and ingenuity to cater to the delusions of C programmers. Processor development is extensively data driven by benchmarks and simulations that include code written in multiple languages. Just to mention one claim that caught my eye, caches are not a recent invention.

      Consider another core part of the C abstract machine’s memory model: flat memory. This hasn’t been true for more than two decades. A modern processor often has three levels of cache in between registers and main memory, which attempt to hide latency.

      Wow! “More than two decades” is right. In fact, caches were even present on the PDP-11s - and they were not a new idea back in 1980. Poor Dennis and Ken, developing a programming language in ignorance of the effects of cache memory. The rest of it is scarcely better.

      The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware.

      WTF? Where is the editorial function on ACM Queue? Or consider this explanation of ILP in processor design.

      so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism). They inspect adjacent operations and issue independent ones in parallel. This adds a significant amount of complexity (and power consumption) to allow programmers to write mostly sequential code. In contrast, GPUs achieve very high performance without any of this logic, at the expense of requiring explicitly parallel programs.

      Who knew that pipelining was introduced to spare the feelings of C coders who lack the insight to do GPU coding?

      1. 4

        nobody invested billions of dollars and huge amounts of effort and ingenuity to cater to the delusions of C programmers

        People that worked in hardware often said the opposite was true. Mostly due to historical circumstances combined with demand. We had Windows and the UNIX’s in C. Mission-critical code went into legacy mode more often than it was highly-optimized. Then, optimization-oriented workloads like HPC and especially gaming demanded improvements for their code. In games, that was largely in C/C++ with HPC a mix of it and Fortran. Processor vendors responded by making their CPU’s really good at running those things with speed doubling every 18 months without work by software folks. Compiler vendors were doing the same thing for the same reasons.

        So yeah, because people were using C for whatever reasons, vendors optimized for those workloads and C’s style. Weren’t the benchmark apps in C/C++, too, in most cases? That would just further encourage improving C/C++ style, along with whatever patterns were in those workloads.

        “Who knew that pipelining was introduced to spare the feelings of C coders who lack the insight to do GPU coding?”

        There were a lot of models tried. The big bets by CPU vendors on alternatives were disasters because nobody wanted to rewrite the code or learn new approaches. Intel lost a fortune on stuff like BiiN. Backward compatibility with existing languages and libraries won over everything else. Those are written in C/C++ that people are mostly not optimizing: just adding new features to. So, vendors introduced other ways to speed up those kinds of applications without their developers having to use alternative methods. This didn’t stop companies from trying all kinds of things that did boost numbers. They just ended up fighting bankruptcy despite technical success (Ambric), scraping by in a niche (Moore’s chips), or pricing too high to recover NRE (e.g. FPGA’s w/ HLS or Venray CPU’s). It all just reinforced why people keep boosting legacy and high-demand systems written in stuff like C.

        1. 5

          You are not going to find anyone who does processor design who says that.

          1. Processors are highly optimized for existing commercial workloads - which include significant C and Java - true
          2. “processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11. This is essential because it allows C programmers to continue in the belief that their language is close to the underlying hardware.” - not even close.

          The first is a true statement (obvious, too). The second is a mix of false (PDP-11??) and absurd - I’m 100% sure that nobody designing processors cares about the feelings of C programmers, and it’s also clear that the author doesn’t know the slightest thing about the PDP-11 architecture (which utilized caches, a not-very-flat memory model, ILP, etc.).

          Caches, ILP, pipelining, OoO execution - all of those pre-date C. SPEC benchmarks have included measurements of Java workloads for decades, and Fortran workloads forever. The claim “so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism)” is comprehensively ignorant. ILP works with the processor instruction set fundamentals, not at the language level. To keep a 3GHz conventional processor busy on pure Erlang, Rust, Swift, Java, or Javascript loads, you’d need ILP, branch prediction, etc. as well. It’s also clear that processor designers have been happy to mutate the instruction set to expose parallelism whenever they could.

          “The key idea behind these designs is that with enough high-level parallelism, you can suspend the threads that are waiting for data from memory and fill your execution units with instructions from others. The problem with such designs is that C programs tend to have few busy threads.”

          Erlang’s not going to make your thread switching processor magically work. The problem is at the algorithmic level, not the programming language level. Which is why, on workloads suited for GPUs, people have no problem writing C code or compiling C code.

          Caches are large, but their size isn’t the only reason for their complexity. The cache coherency protocol is one of the hardest parts of a modern CPU to make both fast and correct. Most of the complexity involved comes from supporting a language in which data is expected to be both shared and mutable as a matter of course.

          Again, the author is blaming the poor C language for a difficult algorithm design issue. C doesn’t say much at all about concurrency. Only in the C11 standard is there an introduction of atomic variables and threading (it’s not very good either), but this has had zero effect on processor design. It’s correct that large coherent caches are a design bottleneck, but that has nothing to do with C. In fact, shared-memory multithreaded Java applications are super common.

          etc. etc. He doesn’t understand algorithms or processor design, but has a kind of trendy psycho-babble hostility to C.

          1. 3

            Good counterpoints. :)

          2. 2

            The article argues that modern CPU architecture spends vast amounts of die space supporting a model of sequential execution and flat memory. Near as I can tell, he’s absolutely correct. You, otoh, seem to have not understood that, and moved straight to ad hominem. “He doesn’t understand algorithms or processor design”; please.

            1. 5
              1. The claim that “modern CPU architecture spends vast amounts of die space supporting a model of sequential execution and flat memory” is totally uncontroversial - if you have sensible definitions of both sequential execution and flat memory.

              2. The claim that “The features that led to these vulnerabilities [Spectre and Meltdown], along with several others, were added to let C programmers continue to believe they were programming in a low-level language” is absurd, indicates a lack of knowledge about processor design, and offers a nutty theory about the motivations of processor architects.

              3. The claim “The root cause of the Spectre and Meltdown vulnerabilities was that processor architects were trying to build not just fast processors, but fast processors that expose the same abstract machine as a PDP-11” is similarly absurd, and further comments indicate that the author believes the PDP-11 architecture predated the use of features such as caches and instruction-level parallelism, which is an elementary and egregious error.

              4. The claim “Creating a new thread [in C] is a library operation known to be expensive, so processors wishing to keep their execution units busy running C code rely on ILP (instruction-level parallelism)” involves both a basic error about C programming and a basic misunderstanding of the motivations for ILP in computer architecture. It precedes a claim showing that the author doesn’t understand that C-based code is widely used in GPU programming, which is another elementary and egregious error.

              5. Those are not the only errors in the essay.

              6. “Ad hominem” involves attempting to dismiss an argument based on claims about the character of the person making the argument. If I had argued that Dave Chisnall’s personal attributes invalidate his arguments, that would be ad hominem. I did the opposite: I argued that the gross technical errors in Chisnall’s argument indicate that he does not understand computer architecture. That’s not ad hominem; it is directed at the argument, not the person.

              Thanks.

              vy

              1. 2

                The claim that “The features that led to these vulnerabilities [Spectre and Meltdown] , along with several others, were added to let C programmers continue to believe they were programming in a low-level language,”

                Of course, only the author could say for sure, but my interpretation of this, and other similar sentences in the article, is not that modern processors are somehow trying to satisfy a bunch of mentally unstable programmers, but rather that they are implementing an API that hides a vast amount of complexity and heuristics. As far as I understand, this claim is not in question in this thread.

                The second point, which in my opinion is a bit too hidden in somewhat confusing prose, is that a lower level of programming could be possible, by allowing programs to control things like branch prediction and cache placement for example. That could also simplify processors by freeing them from implementing heuristics without having the full context of what the application may be trying to achieve, and grant full control to the software layer so that better optimisation levels could be reached. I think that is a valid point to make.

                I don’t really like the connection to Spectre, which I think is purely anecdotal, and I think that framing the exposition as a discussion about whether C is or is not a low-level language, and what C programmers believe, muddies what I think is the underlying idea of this article. Most of the article would be equally valid if it talked about assembly code.

                1. 1

                  I think it would be a really good idea for processor designers to allow lower level programming, but if that’s what the author is attempting to argue, I missed it. In fact, his lauding of Erlang kind of points the other way. What I got out of it is the sort of generic hostility to C that seems common in the LLVM community.

                  1. 1

                    I think (one of) his point(s) is that if CPU designers abandoned the C abstract machine/x86 semantics straightjacket, they could build devices that utilize the silicon much more effectively. The reason they don’t do that is of course not that they are afraid of C programmers with pitchforks, but that such a device would not be marketable (because existing software would not run on it). I don’t understand your Erlang remark, though. I believe a CPU that did not do speculative execution, branch prediction, etc., but instead, say, exposed thousands of sequential processing units, would be ideal for Erlang.

                    1. 4

                      This was a very interesting machine: https://en.wikipedia.org/wiki/Cray_MTA. It did a thread switch on every memory load.

                      The success of GPUs shows that 1) if there are compelling applications that fit a non-orthodox processor design, software will be developed to take advantage of it, and 2) for many parallel computer designs, C can easily be adapted to the environment - in fact, C is very widely used on GPUs. Also, both AMD and Intel are rushing to extend vector processing in their processors. It is curious that Erlang has had so little uptake in GPUs. I find it super awkward, but maybe that’s just me.

                      Obviously, there are annoying limitations to both current processor designs (see https://lobste.rs/s/cnw9ta/synchronous_processors) and to C. My objection to this essay was that it confirmed my feeling that there are many people working on LLVM who just dislike C for not very coherent reasons. This is a problem because LLVM/Clang keep making “optimizations” that make C even more difficult to use.

    5. 5

      This post is full of bullshit. The author just says things and claims they are true. Zero evidence. So Kotlin code is shorter than Java code; that could mean a lot of things. The author even provides a formula, plugs his arbitrary numbers into it, and comes up with Kotlin being 1.8 times more productive than Java. He also calls what he’s done “research”, but it’s presented more as “I did this thing and I think it had this impact, so let’s use this number”.

      The author very well might be more productive in Kotlin, great. But I think they are falling into the trap of “I feel good, so I must construct a narrative that validates my feeling”. As someone who is a user of OCaml and has tried to get others to use it, I’ve had this beaten out of me. I like OCaml, and I believe I am more productive in it, but I’m not going to put some numbers into a formula to convince others that it’s true, because those numbers will be bullshit. There are tons of articles on the internet about how productive a language Go is despite it requiring tons of duplicate code and other weird things. These arguments just don’t work; the programming world is not objective enough.

      1. 3

        “I was honestly shocked at my productivity after becoming familiar with [kotlin] as I didn’t expect it to have a measurable impact”. Seems likely this has little to do with Kotlin, and a lot to do with Java.

      2. 2

        I absolutely agree that this flunks the sniff test. I have spent positively enormous amounts of time writing Java; my best estimate is around 25000 productive hours. I have seen Java grow and change and be used in ways we never expected (Android). I spent about 200 hours migrating a particularly ugly Android app to Kotlin; I have blown another 250 into attempting to prototype a high availability, high throughput back end app in it. I have researched the evolving best practices religiously.

        In the end, I cannot see any measurable, tangible differences that don’t boil down to “force everyone to be idiomatic”. If I were to start a new team and new codebase in a void, I would say Java and Kotlin were equal contenders and to choose whatever best suited the paradigms under which the team had learnt to code. But in the real world, there is nothing to outweigh the benefit of being able to call in the thousands of competent Java peers I have worked with and have them immediately understand the finest nuance of what code means against the language spec.

        Before Kotlin can support huge claims like this article supposes, it needs to go even further. I love some of the features, but so far none of them are killer. None of them feel like a revolution.

      3. -2

        This post is full of bullshit. The author just says things and claims they are true. Zero evidence.

        How is that different from arguments in favor of static typing again?

        1. 5

          I don’t know? My arguments for static typing revolve around explaining my values and why static typing fits into them, not some objective claim. Does everything come back to trying to shoot down static typing with you?

          1. 2

            I’ve noted that the preference is subjective in many of the discussions we’ve had, and you’ve continued to argue that you believe there are tangible benefits absent any empirical evidence to support that. So, I find it odd that you take offence at the author using similar style of argument here.

            1. 2

              you’ve continued to argue that you believe there are tangible benefits absent any empirical evidence to support that

              In those cases I’ve explicitly said that that is my “feeling” or “in my experience”, and that my perspective is not meant to imply evidence or an objective reality. As I said in the very first comment in this thread, I don’t make up some productivity numbers, plug them into some silly formula, and make claims. I don’t make assertions about what other programmers would benefit from; the author consistently makes claims as if they apply to everyone.

              For example, in looking through my threads in conversations with you I found:

              I say:

              I’m not saying static types are objectively superior, just that I tend to find them superior

              https://lobste.rs/s/jlkr3r/when_types_are_eating_away_your_sanity#c_toplhk

              In another one you talk about how you never found static types helped and my response was:

              My experience has definitely not been this.

              https://lobste.rs/s/jlkr3r/when_types_are_eating_away_your_sanity#c_ztg4zm

              1. 2

                Yet, you feel strongly enough about the topic to have prolonged arguments, even though I too qualify my statements as also being rooted in personal experience. You are making assertions about what other programmers would benefit from, and you’re just more careful about qualifying them than the author of the article. Meanwhile, many proponents of static typing don’t bother with any qualification, and state their claims as being self evident.

                1. 2

                  Yet, you feel strongly enough about the topic to have prolonged arguments

                  Yes, I find discussing type systems fun and interesting. I also have long discussions with people about musical taste, which is clearly entirely subjective. So what?

                  You are making assertions about what other programmers would benefit from

                  Am I? As far as I have read myself, I am saying that “my experience is X, I believe others could benefit from that”. Obviously I’m not a great judge of what I am saying though as I have a lot more context in my head than a reader does. But I do not believe that is the same statement as “Everyone will benefit from that”. I also believe in high taxes and social safety nets but understand other people have different perspectives. So what?

                  Meanwhile, many proponents of static typing don’t bother with any qualification, and state their claims as being self evident

                  What’s that got to do with me?

                  Look, I don’t actually know what you’re trying to say. Are you disagreeing with my comment that started this thread that had nothing to do with static typing? Do you just want to argue about static typing? Are you trying to call me out for saying something about static typing that you haven’t actually shown I’ve said? What is your goal in this discussion?

                  1. 1

                    My original point was simply that the argument the article makes is about as well founded as any argument I’ve seen in favor of static typing, and it didn’t have anything to do with you specifically. Since you decided to make it about you, I’ve simply related the impression I got from our discussions. I’m sorry if I’ve offended you by that or if I misunderstood the nature of your argument.

                    1. 1

                      My original point was simply that the argument the article makes is about as well founded as any argument I’ve seen in favor of static typing

                      Ok, so were you agreeing with my critique of it? Because it reads, to me, as sort of a drive-by whataboutism, and it’s really unclear what you’re trying to say.

                      Since you decided to make it about you

                      Well, you did ask me how it was different and I responded with my perspective. I’m not sure how else I should have responded; I cannot take ownership of a whole community, nor can I take responsibility for what it says.

                      1. 1

                        I was agreeing with your critique of it, and pointing out that it’s a common line of argument. I’m not sure what exactly you found unclear to be honest.

                        1. 1

                          What you said was:

                          How is that different from arguments in favor of static typing again?

                          To me, it’s not obvious whether you were agreeing or disagreeing.

    6. 20

      Look, here’s the thing. If you’re holding 30 million dollars in 250 lines of code that you haven’t audited, then it’s on you. Seriously. It takes any half-decent appsec guy less than one man-day to fleece those 250 lines. At most, that would cost them a few thousand dollars. They didn’t do it because they wanted it all for free. They didn’t do it because they’re greedy and cheap. They absolutely deserve this.

      I kinda agree with this, honestly. :-\

      1. 2

        I kinda agree with this, honestly. :-\

        That’s because, as your post history on Lobsters has established, you need to get you some ethics and morals.

        I kinda agree with the top comment in the article:

        “ Look, here’s the thing. If you’re holding 30 million dollars in 250 lines of code that you haven’t audited, then it’s on you.”

        Look here’s the thing. If you’ve parked your car on the street like a pleb instead of buying a house with a garage, then it’s on you.

        Look here’s the thing. If you’re holding a PC and a TV and a washing machine in a house with single glazing on the rear windows, then it’s on you.

        Whilst this was an extremely interesting read and I’m sure awesome fun to pull off, theft is theft. The rule of law is the rule of law. You know that these ETH belong to other people and you have taken them for yourself. That’s theft, and I hope the law catches up with you.

        1. 13

          But the entire point of “smart” contracts is that the code IS the contract, right? Your analogy is flawed. It’s not like stealing a car, it’s like finding a loophole in an agreement (or “dumb” contract) and exploiting it in the courts. That happens literally every day, and it is perfectly legal.

          The difference is that when you have actual humans making the decisions instead of computers you can make more subtle arguments about what was intended instead of being beholden to the most pedantic possible interpretation of the contract.

          1. 14

            This is the correct interpretation. The “smart contract” hype is built around the concept that the blockchain is the judge and the jury: it’s all built on the assumption that the blockchain is incorruptible and perfect. To quote from Gavin Wood’s paper “Ethereum: A Secure Decentralised Generalised Transaction Ledger”:

            [Ethereum has attributes] not often found in the real world. The incorruptibility of judgment, often difficult to find, comes naturally from a disinterested algorithmic interpreter.

            Further:

            …natural language is necessarily vague, information is often lacking, and plain old prejudices are difficult to shake.

            Most ominously, perhaps:

            …the future of law would be heavily affected by [smart contract] systems… Ethereum may be seen as a general implementation of such a crypto-law system.

            Based on these concepts, the idea that they’re building a perfect replacement for law, they implemented a Turing-complete language with no concept of or provision for proofs, and run it on a distributed VM from which no malicious programs can be purged. Brilliant!

            1. 4

              Is it brilliant? I’m not so sure: what sovereign citizens and computer geeks alike seem to believe is that the law is a sequence of perfectly defined rules - which is why the former love to look for the magical series of words that exempts them from it.

              But in reality the law is often about intent and judgment. If I found a bank that let me put my name on everyone’s account and I did so with the purpose of withdrawing their savings, the court would take a dim view of me saying “but they let me do it!”

              1. 4

                That was sarcasm. :)

                1. 3

                  thank god. but like the best sarcasm - and I say this with complete sincerity - it’s indistinguishable from what people are claiming both here and in the article.

                  1. 1

                    Well note, only the “Brilliant” part was sarcasm. The rest was literally quoting a seminal paper in the space.

            2. 2

              hopefully the interest in contract languages on blockchains will encourage more folks to get involved in formal verification.

          2. 3

            But the entire point of “smart” contracts is that the code IS the contract

            Agreed. The analogies given above were ridiculous:

            Look here’s the thing. If you’ve parked your car on the street like a pleb instead of buying a house with a garage, then it’s on you.

            This is not a comparison. Try this instead:

            Look here’s the thing. If you’ve parked your limited edition McLaren F1 on the street instead of in your garage, then yeah that was dumb

            But this is still a rubbish analogy, because in Ethereum, Code is Law.

            1. 8

              The correct analogy would be to leave the thing unlocked, with the keys in a plastic box inside, and with a notarized affidavit that reads, ‘I, goodger, hereby transfer ownership of this vehicle and its contents to whomsoever may open this box’.

        2. 19

          That’s because, as your post history on Lobsters has established, you need to get you some ethics and morals.

          Says the guy who posted 9/11 truther conspiracies from his blog. Angersock has ethics and morals, and I’m a little disheartened that your ad hominem attack got upvoted.

          1. 6

            There are certain types of stories regarding politics and cryptocurrencies that seem to bring out a group of extremely angry and aggressive posters who don’t seem to want anything but traditional internet yelling. “Get morals” has been yelled at me any time the US government is brought up, and it always seems to be heavily upvoted.

            1. 2

              Damn, I’m a sock puppet after all… Also ad hominem.

              1. 2

                me too! #sockpuppet

                1. 5

                  It must be very hard living a life where you think every time someone disagrees with you it’s because of a huge conspiracy.

                  I encourage you to talk to a mental health professional.

                2. 2

                  I know that this is futile and I’m shouting into the void, but why would you assume that everyone who disagrees with you is a sock puppet? These aren’t fake votes; I think people are disagreeing with your aggressiveness. There is no reason for this to be a psy-ops campaign just to mess with you.

    7. 1

      time to learn f#?

      1. 3

        Actually, it was Alex Jones and colleagues who made up that insane story and pushed it on naive and gullible people. http://thehill.com/homenews/325761-infowars-alex-jones-apologizes-for-pushing-pizzagate-conspiracy-theory

        1. 7

          Alex Jones is a con man, and people who take him seriously need help. When he is in trouble, he calls his work “performance art”. There is no pizzagate conspiracy, just a money-raising hoax for people who like to be frightened.

          1. 0

            I support precise use of language, but I also support common-sense reading comprehension. “There is no conspiracy” usually means “the conspiracy theory, which exists, bears no relationship to reality”. Your literal interpretation is unhelpful.

            1. 1

              So let’s see if I got this right. 9/11 was an inside job, and the “MSM” is covering it up, hence “pizzagate” is also covered up. Although “pizzagate” might be false.

              1. -1

                I’m dismayed that you think I’m a sock puppet. I would think the lack of spelling errors should make it obvious I’m not @apy. Also, trolling?

                1. 0

                  I’m dismayed that you think I’m a sock puppet.

                  Invited to Lobsters by @apy with 3 comments in your life, two of which are in this thread. Yeah, I think you’re either a sock puppet or a stooge.

                  1. -1

                    I’m not @apy, although I did once work with him IRL. You seem pretty conspiratorial… But be that as it may, I’m in all honesty interested in your line of reasoning. Especially these quotes: “[Since some] conspiracy theories are true (e.g. the factually-proven-beyond-all-reasonable-doubt “9/11 was an inside job” conspiracy), then you’ll realize that the MSM is the government’s mouthpiece” and “Now I am not saying [pizzagate] is true, but I am saying that you have been misled about what pizzagate is”. So if this is true (the government controls mass media), how come you can find the Truth on the internet? Seems to me the government is doing a pretty good job of controlling e.g. Google.

                    1. 0

                      So if this is true (the Government controls mass media), how come you can find the Truth on the internet?

                      How do you ask me such dumb questions? Honestly?

                      This is why I downvote you as trolling, because I can think of no other explanation.

                      Where on Earth did you get the idea that MSM includes the entire Internet?

                      And what on Earth does that have to do with the actual content that I’ve linked to?

                      Troll harder!

                      1. 1

                        I honestly don’t think the question is dumb. I might be dumb, but I don’t understand your mental model of how the world works. Like if the government controls the “MSM”, how come it can’t control Google/Youtube the same way? Or the entire internet, for that matter? I mean, if you can pull off a false flag 9/11, getting some videos taken down from Youtube should be a piece of cake?

                        1. 1

                          You know, those are much better questions, and they actually do have very good answers (although if you think videos aren’t taken down from YT… you haven’t been paying attention), but I unfortunately have spent more than enough energy answering people’s questions here and pointing them down roads that they are free to explore on their own.

                          If I had all the time in the world I would gladly answer. But the answer is (a) complicated, (b) I don’t have all of it, and (c) I have to run. Sorry.

  1. 0

    Could you give me 5 factual things he’s been right about that the MSM hasn’t?

    (I take it we’re just ignoring his snake oil and pills business that he profits from?)

  • 4

    takeaway from this should be: process_info/1 is not what you want. the problem is that it loops over (most of) the process_info/2 variants. a minimal sketch of the cheaper alternative is below.
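
    the item list here is only illustrative; ask for whatever you actually need:

        %% erlang:process_info/2 accepts a list of items and fetches only
        %% those, in a single call; process_info/1 effectively walks over
        %% (most of) these item variants whether you need them or not.
        info(Pid) ->
            erlang:process_info(Pid, [memory, message_queue_len, status]).

    e.g. info(whereis(code_server)) gives something like [{memory,_},{message_queue_len,0},{status,waiting}].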