1. 9

    I’m not as involved as I used to be, but I’m still on the core team, so feel free to ask me questions if you’ve got any.

    1. 3

      Would you recommend Factor for production use given that it seems to be reaching a sort of plateau in support and community?

      It’s a beautiful language, by the way. Thank you for your work.

      1. 5

        I have Factor running in production: Factor runs tinyvid.tv and has for the past few years, although I don’t really maintain the web app much – it just ticks along. I originally wrote it to test HTML 5 video implementations in browsers back when I worked on the Firefox implementation of video.

        1. 5

          As always, it depends on what you’re doing—I’d definitely be nervous if you told me you were shoving Factor into an automobile, for example—but Factor the VM and Factor the language are both quite stable and reliable. On top of doublec’s comment, the main Factor website runs Factor (and its code is distributed as part of Factor itself for your perusal), and it’s been quite stable. (We do occasionally have to log in and kick either Factor or nginx, but it’s more common that the box needs to be rebooted for kernel updates.) I likewise ran most of my own stuff on Factor for a very long time, including some…maybe not mission-critical, but mission-important internal tooling at Fog Creek. And finally, we know others in the community who are building real-world things with Factor, including a backup/mirroring tool which I believe is being written for commercial sale.

          The two main pain-points I tend to hit when using Factor in prod are that I need a vocabulary no one has written, or that I need to take an existing vocabulary in a new direction and have to fix/extend it myself. Examples are our previous lack of libsodium bindings (since added by another contributor) and our ORM lacking foreign key support (not a huge deal, just annoying). Both of these classes of issues are increasingly rare, but if you live in a world where everything’s just a dependency away, you’ll need to be ready for a bit of a change.

          You can take a look at our current vocab list if you’re curious whether either of the above issues would impact anything in particular you have in mind.

        2. 1

          What would you say is Factor’s best application domain, the kind of problem it solves best? I met Slava many years ago when he was presenting early versions of Factor to a local Lisp UG, and am curious to see where the language fits now, both in theory and practice.

          1. 4

            My non-breezy answer is “anything you enjoy using it for.” There are vocabularies for all kinds of things, ranging from 3D sound to web servers to building GUIs to command-line scripts to encryption libraries to dozens of other things. Most of those were written because people were trying to do something that needed a library, so they wrote one. I think the breadth of subjects covered speaks well to the flexibility of the language.

            That all said, there are two main areas where I think Factor really excels. The first is when I’m not really sure how to approach something. Factor’s interactive development environment is right up there with Smalltalk and the better Common Lisps, so it’s absolutely wonderful for exploring systems, poking around, and iterating on various approaches until you find one that actually seems to fit the problem domain. In that capacity, I frequently use it for reverse-engineering/working with binary data streams, exploring web APIs, playing with new data structures/exploring what high-level design seems likely to yield good real-world performance, and so on.

            The second area I think Factor excels is DSLs. Factor’s syntax is almost ridiculously flexible, to the point that we’ve chatted on and off about making the syntax extension points a bit more uniform. (I believe this branch is the current experimental dive in that direction.) But that flexibility means that you can trivially extend the language to handle whatever you need to. Two silly/extreme examples of that would be Smalltalk and our deprecated alternative Lisp syntax (both done as libraries!), but two real examples would be regular expressions, which are done as just a normal library, despite having full syntax support, or strongly typed Factor, which again is done at the library level, not the language level. I have some code lying around somewhere where I needed to draft up an elaborate state machine, and I quickly realized the best path forward was to write a little DSL so I could just describe the state machine directly. So that’s exactly what I did. Lisps can do that, but few other languages can.
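            A state-machine DSL of the kind described is easy to picture even outside Factor. As an illustration only (Python rather than Factor, with invented states and events; this is not the actual code referred to above), the core idea is describing the machine directly as data:

```python
# Hypothetical sketch: a miniature state-machine "DSL" as a transition table.
# The states and events here are invented purely for illustration.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Advance the machine; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = step(state, event)
print(state)  # -> idle
```

            In Factor, the same table can be wrapped in parsing words so the description reads as native syntax rather than as a literal, which is the kind of extension being described here.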

          2. 1

            Were native threads added in this release, or are there plans to? And did anything ever come to fruition with the tree shaker that Slava was working on way back when?

            Major props on the release. It’s really nice to see the language survive Slava disappearing into Google.

            1. 5

              The threads are still green threads, if that’s what you’re asking, but we’ve got a really solid IPC story (around mailboxes, pattern matching, Erlang-style message passing, etc.), so it’s not a big deal to fire up a VM per meaningful parallel task and kick objects back and forth when you genuinely need to.
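              For a rough picture of that pattern (sketched in Python rather than Factor; the worker and values are invented, and Factor’s actual mailbox words differ), it amounts to one process per parallel task with queues as mailboxes:

```python
# Illustrative only: Erlang-style message passing approximated with one OS
# process per task and queues as mailboxes (not Factor's actual IPC API).
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Receive a message, do the "parallel task", send the result back.
    n = inbox.get()
    outbox.put(n * n)

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(7)          # kick an object over to the other "VM"
    print(outbox.get())   # -> 49
    p.join()
```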

              In terms of future directions, I don’t know that we’ve got anything concrete. What I’d like to do is make sure the VM is reentrant, allow launching multiple VMs in the same address space, and then make the IPC style more efficient. That’d make it a lot easier to keep multithreaded code safe while allowing real use of multiple cores. But that’s just an idea right now; we’ve not done anything concrete in that direction, as far as I know.

              1. 1

                Really off-topic, but isn’t Slava at Apple?

                1. 1

                  He is now. Works on Swift.

              2. 1

                Where does the core Factor team typically communicate these days? #concatenative on freenode seems kinda dead. Is there a mailing list, or is it the Yahoo group?

              1. 2

                I tried several DSP texts, and (by far) liked this one the most: http://dspguide.com/

                There’s a newer version (https://www.amazon.com/Digital-Signal-Processing-Practical-Scientists/dp/B00KEVJG2S/), but I haven’t read it.

                1. 6

                  This news caused the public release for XSA-267 / CVE-2018-3665 (Speculative register leakage from lazy FPU context switching) to be moved to today.

                  1. 16

                    These embargoed and NDA’d vulnerabilities need to die. The system is broken.

                    edit: Looks like cperciva of FreeBSD wrote a working exploit and then emailed Intel and demanded they end embargo ASAP https://twitter.com/cperciva/status/1007010583244230656?s=21

                    1. 8

                      Prgmr.com is on the pre-disclosure list for Xen. When a vulnerability is discovered, and the discoverer uses the responsible disclosure process, and the process works, we’re given time to patch our hosts before the vulnerability is disclosed to the public. On balance I believe participating in the responsible disclosure process is better for my customers.

                      Pre-disclosure gives us time to build new packages, run through our testing process, and let our users know we’ll be performing maintenance. Last year we found a showstopping bug during a pre-disclosure period: it takes time and effort to verify a patch can go to production. With full disclosure, we would have to do so reactively, with significantly more time pressure. That would lead to more mistakes and lower quality fixes.

                      1. 2

                        This is a bad response to the issue. The bad guys probably already have knowledge of it and can use it. A few players deemed important should not get advance notice.

                        1. 15

                          Prgmr.com qualifies for being on the Xen pre-disclosure list by a) being a vendor of a Xen-based system, b) being willing and able to maintain confidentiality, and c) asking. We’re one of six dozen organizations on that list – the criteria for membership are technical and needs-based.

                          If you discover a vulnerability you are not obligated to use responsible disclosure. If you run Xen you are not obligated to participate in the pre-disclosure list. The process consists of voluntary coordination to discover, report, and resolve security issues. It is for the people and organizations with a shared goal: removing security defects from computer systems.

                          By maintaining confidentiality we are given the ability, and usually the means, to have security issues resolved before they are announced. Our customers benefit via reduced exposure to these bugs. The act of keeping information temporarily confidential provides that reduced exposure.

                          You have described a voluntary process with articulable benefits as “needing to die,” along with my response being “bad.” As far as I can tell from your comments you claim “the system is broken” because some people “should not get advanced notice.” I’ve described what I do with that knowledge, and why it benefits my users. I’m thankful the security community tells me when my users are vulnerable and works with me to make them safer.

                          Can you improve this process for us? Have I misunderstood you?

                          1. 11

                            Some bad guys might already have knowledge of it. Once it’s been disclosed, many bad guys definitely have knowledge of it, and they can deploy exploits far, far faster than maintainers, administrators and users can deploy fixes.

                            1. 8

                              You’re treating “the bad guys” like they’re all one thing. In actuality, there’s a spectrum of bad guys, from people who will use a free attack tool, to people who will pay a few grand for one, to people who can customize a kit if it’s just a sploit, to people who can build a sploit from a description, to the rare people who had it already. There’s also a range in attacker intent, from DoS to data integrity to leaking secrets. The folks who had it already often just leak secrets in a stealthy way instead of doing actual damage. They also use the secrets in a limited way compared to the average black hat. They’re always weighing use against detection of their access.

                              The process probably shuts down quite a range of attackers even if it makes no difference for the best ones who act the sneakiest.

                              1. 4

                                The process probably shuts down quite a range of attackers even if it makes no difference for the best ones who act the sneakiest.

                                I believe the process is so effective at shutting down “quite a range of attackers” that it works despite: a) accidental leaks [need for improvement of process] b) intentional leaks [abuse] c) black hats on the pre-disclosure list reverse engineering an exploit from a patch. [fraud] In aggregate, the benefit from following the process exceeds the gain a black hat would have from subverting it.

                          2. 9

                            Well, it’s complicated. (Disclosure: we were under the embargo.)

                            When a microprocessor has a vulnerability of this nature, those who write operating systems (or worse, provide them to others!) need time to implement and test a fix. I think Intel was actually doing an admirable job, honestly – and we were fighting for them to broaden their disclosure to other operating systems that didn’t have clear corporate or foundation backing (e.g., OpenBSD, Dragonfly, NetBSD, etc). That discussion was ongoing when OpenBSD caught wind of this – presumably because someone who was embargoed felt that OpenBSD deserved to know – and then fixed it in the worst possible way. (Namely, by snarkily indicating that it was to address a CPU vulnerability.) This was then compounded by Theo’s caustic presentation at BSDCan, which was honestly irresponsible: he clearly didn’t pull eager FPU out of thin air (“post-Spectre rumors”), and should have considered himself part of the embargo in spirit if not in letter.

                            For myself, I will continue to advocate that Intel broaden their disclosure to include more operating systems – but if those endeavoring to write those systems refuse to honor the necessary secrecy that responsible disclosure demands (and yes, this means “embargoed and NDA’d vulnerabilities”), they will make such inclusion impossible.

                            1. 18

                              We could also argue Theo’s talk was helpful in that the CVE was finally made public.

                              Colin Percival tweeted in his thread overview about the vulnerability that he learned enough from Theo’s talk to write an exploit in 5 hours.

                              If Theo and the OpenBSD developers pieced enough together from rumors to make a presentation that Colin could turn into an exploit in hours, how long have others (i.e., bad guys) who also heard rumors had working exploits?

                              Theo alone knows whether he picked up eager FPU from developers under NDA. Even if he did, there’s zero possibility outside of the law he lives under (or contracts he might’ve signed) that he’s part of the embargo. As to the “spirit” of the embargo, his decision to discuss what he knew might hurt him or OpenBSD in the future. That was his call to make. He made it.

                              Lastly, I was at Theo’s talk. Caustic is not how I would describe it, nor would I categorize it as irresponsible. Theo was frustrated that OpenBSD developers who had contributed meaningfully to Spectre and Meltdown mitigation had been excluded. He vented some of that frustration in the talk. I’ve heard more (and harsher) venting about Linux in a 30 minute podcast than all the venting in Theo’s talk.

                              On the whole Theo’s talk was interesting and informative, with a sideshow of drama. And it may have been what was needed to get the vulnerability disclosed and more systems patched.


                              Disclosure: I’m an OpenBSD user, occasional port submitter, BSDCan speaker and workshop tutor, FreeNAS user and recommender, and have enjoyed many podcasts, some of which may have included venting.

                              1. 4

                                If Theo and the OpenBSD developers pieced enough together from rumors to make a presentation that Colin could turn into an exploit in hours, how long have others (i.e., bad guys) who also heard rumors had working exploits?

                                It was clear to me the day Spectre / Meltdown were disclosed that there would be future additional vulnerabilities of the same class based on that discovery. I think there is circumstantial evidence suggesting the discovery was productive for the people who knew about it in the second half of 2017 before it was publicly disclosed. One can safely assume black hats have had the ability to find and use novel variations in this class of vulnerability for at least six months.

                                If Theo did pick up eager FPU from a developer under embargo that demonstrates just how costly it is to break embargo. Five hours, third hand.

                                1. 4

                                  If Theo did pick up eager FPU from a developer under embargo that demonstrates just how costly it is to break embargo. Five hours, third hand.

                                  I have absolutely no idea what point you’re trying to make. Certainly, everyone under the embargo knew that this would be easy to exploit; in that regard, Theo showed people what they already knew. The only new information here is that Theo is every bit as irresponsible as his detractors have claimed – and those detractors would (of course) point out that that information is not new at all…

                                  1. 1

                                    With respect, how is Theo irresponsible for reducing the time the users of his OS are vulnerable?

                                    Like, the embargo thing sounds a lot to the ill-informed like some kind of super-secret clubhouse.

                                2. 4

                                  Theo definitely wasn’t part of the embargo, but it’s also unquestionable that Theo was relying on information that came (ultimately) from someone who was under the embargo. OpenBSD either obtained that information via espionage or via someone trying to help OpenBSD out; either way, what Theo did was emphatically irresponsible. Of course, it was ultimately his call – but he is not the only user of OpenBSD, and it is unfortunate that he has effectively elected to isolate the community to serve his own narcissism.

                                  As for the conjecture that Theo served any helpful role here: sorry, that’s false. (Again, I was under the embargo.) The CVE was absolutely going public; all Theo did was marginally accelerate the timeline, which in turn has resulted in systems not being as prepared as they otherwise could be. At the same time, his irresponsible behavior has made it much more difficult for those of us who were advocating for broader inclusion – and unfortunately it will be the OpenBSD community that suffers the ramifications of any future limited disclosure.

                                  1. 6

                                    Espionage? You’re suggesting one of:

                                    1. Someone stole the exploit information, leaked it to the OpenBSD team, a team known for proactively securing their code, on the off-chance Theo would then further leak it (likely with mitigation code), causing the embargoed details to be released sooner than expected,

                                    2. OpenBSD developers stole the exploit information, then leaked it (while committing mitigation code), causing the embargoed details to be released sooner than expected.

                                    The first doesn’t seem plausible. The second isn’t worthy of you or any of the developers on the OpenBSD team.

                                    I’m sure you’ve read Colin’s thread. He contacted folks under embargo after he wrote his exploit code based on Theo’s presentation. The release timeline moved forward. OSs that had no knowledge of the vulnerability now have patches in place. Perhaps those users view “helpful” in a different light.


                                    Edit: Still boggling over the espionage comment. Had to flesh that out more.

                                    1. 8

                                      Theo has replied:

                                      In some forums, Bryan Cantrill is crafting a fiction.

                                      He is saying the FPU problem (and other problems) were received as a leak.

                                      He is not being truthful, inventing a storyline, and has not asked me for the facts.

                                      This was discovered by guessing Intel made a mistake.

                                      We are doing the best for OpenBSD. Our commit is best effort for our user community when Intel didn’t reply to mails asking for us to be included. But we were not included, there was no reply. End of story. That leaves us to figure things out ourselves.

                                      Bryan is just upset we guessed right. It is called science.

                                      He’s also offered to discuss the details with Bryan by phone.

                                      1. 4

                                        Intel still has 7 more mistakes in the Embargo Execution Pipeline™️ according to a report^Wspeculation by Heise on May 3rd.

                                        https://www.heise.de/ct/artikel/Exclusive-Spectre-NG-Multiple-new-Intel-CPU-flaws-revealed-several-serious-4040648.html

                                        Let the games begin! 🍿

                                        1. 1

                                          What’s (far) more likely: that Theo coincidentally guessed now, or that he received a hint from someone else? Add Theo’s history, and his case is even weaker.

                                          1. 13

                                            While everyone is talking about Theo, the smart guys figuring this stuff out are Philip Guenther and Mike Larkin. Meet them over beer and discuss topics like ACPI, VMM, and Meltdown with them and you won’t doubt anymore that they can figure this stuff out.

                                            1. 6

                                              In another reply you claim your approach is applied Bayesian reasoning, so let’s go with that.

                                              Which is more likely:

                                              1. A group of people skilled in the art, who read the relevant literature, have contributed meaningful patches to their own OS kernel and helped others with theirs, knowing that others besides themselves suspected there were other similar issues, took all that skill, experience and knowledge, and found the issue,

                                              or

                                              2. Theo lied.

                                              Show me the observed distribution you based your assessment on. Show me all the times Theo lied about how he came to know something.

                                              Absent meaningful data, I’ll go with team of smart people knowing their business.

                                              1. 4

                                                Absent meaningful data

                                                Your “meaningful data” is 11 minutes and 5 seconds into Theo’s BSDCan talk: “We heard a rumor that this is broken.” That is not guessing and that is not science – that is (somehow) coming into undisclosed information, putting some reasonable inferences around it and then irresponsibly sharing those inferences. But at the root is the undisclosed information. And to be clear, I am not accusing Theo of lying; I am accusing him of acting irresponsibly with respect to the information that came into his possession.

                                                1. 3

                                                  Here is at least one developer’s comment on the matter. He points to the heise.de article about Spectre-NG as an example of the rumors that were floating around. That article is a long way from “lazy FPU is broken”.

                                                  Theo has offered to discuss your concerns, what you think you know, what he knew, when and how. He’s made a good-faith effort to get his cellphone number to you. If you don’t have it, ask.

                                                  If you do have his number, call him. Ask him what he meant by “We heard a rumor that this is broken.” Ask him what rumor they heard. Ask him whether he was referring to the Spectre-NG article.

                                                  Seriously, how hard does this have to be? You engaged productively with me when I called you out. You’ve called Theo out. Talk to him.

                                                  And yes, I get it. Your chief criticism at this point is responsible disclosure. But as witnessed by the broader discussion in the security community, there’s no single agreed-upon solution.

                                                  While you’ve got Theo on the phone you can discuss responsible disclosure. Frankly, I suggest beer for that part of the discussion.


                                                  Edit: Clarify that Florian wasn’t saying he knew heise.de was the source.

                                                2. 0

                                                  Reread the second sentence in my reply you linked.

                                                3. 2

                                                  This is plain libel, pure and simple.

                                                  1. -2

                                                    It is Bayesian reasoning, pure and simple.

                                                    That said, this is a tempest in a teacup, so call it whatever you want; I’m gonna go floss my cat.

                                              2. 6

                                                Sorry – I’m not accusing anyone of espionage; apologies if I came across that way.

                                                What I am saying is that however Theo obtained information – and indeed, even if that information didn’t originate with the leak but rather by “guessing” as he is now apparently claiming – how he handled it was not responsible. And I am also saying that Theo’s irresponsibility has made the job of including OpenBSD more difficult.

                                                1. 9

                                                  The Spectre paper made it abundantly clear that additional side channels will be found in the speculative execution design.

                                                  This FPU problem is just one additional bug of this kind. What I’d like to learn from you is:

                                                  1. What was the original planned public disclosure date before it was moved ahead to today?

                                                  2. Do you really expect that a process with long embargo windows has a chance of working for future spectre-style bugs when a lot of research is now happening in parallel on this class of bugs?

                                                  1. 5
                                                    1. The original date for CVE-2018-3665 was July 10th. After the OpenBSD commit, there was preparation for an earlier disclosure. After Theo’s talk and after Colin developed his POC, the date was moved up from July 10th to June 26th, with preparations being made to go much earlier as needed. After the media attention today, the determination was made that the embargo was having little effect and that there was no point in further delay.

                                                    2. Yes, I expect that long embargo windows can work with Spectre-style bugs. Researchers have been responsible and very accommodating of the acute challenges of multi-party disclosure when those parties include potentially hypervisors, operating systems and higher-level runtimes.

                                                    1. 10

                                                      Thanks for disclosing the date. I must say I am happy that my systems are already patched now, rather than in one month from now.

                                                      I’ll add that some new patches with the goal of mitigating spectre-class bugs are being developed in public without any coordinated disclosure:

                                                      http://gitweb.dragonflybsd.org/dragonfly.git/commitdiff/9474cbef7fcb61cd268019694d94db6a75af7dbe

                                                      https://patchwork.kernel.org/patch/10202865/

                                                  2. 5

                                                    Thanks for the clarification.

                                                    I don’t think early disclosure is always irresponsible (the details of what and when matter). Others think it’s never irresponsible; and some that it’s always irresponsible. Good arguments can be made for each position that reasonable people can disagree about and debate.

                                                    One thing I hope we can all agree on is that we need clear rules for how embargoes work (probably by industry). We need clear, public criteria covering who, what, when and how long. And how to get in the program, ideally with little or no cost.

                                                    It’s a given that large companies like Microsoft will be involved. Open-source representatives should have a seat at the table as well. But “open source” can’t just mean Red Hat and a few large foundations. OSs like OpenBSD have a presence in the ecosystem. We can’t just write the rules with a “You must be this high to ride” sign at the door.

                                                    And yeah, Theo’s talk might make this more difficult going forward. Hopefully both sides will use this event as an opportunity to open a dialog and discuss working together.

                                                    1. 6

                                                      Right, I completely agree: I’m the person that’s been advocating for that. I was furious with Intel over Spectre/Meltdown (despite our significant exposure, we learned about it when everyone else did), and I was very grateful for the work that OpenBSD and illumos did together to implement KPTI. This time around, I was working from inside the embargo to get OpenBSD included. We hadn’t been able to get to where we needed to get, but I also felt that progress was being made – and I remained optimistic that we could get OpenBSD disclosure under embargo.

                                                      All of this is why I’m so frustrated: the way Theo has done this has made it much more difficult to advocate this position – it has strengthened the argument of those who believe that OpenBSD should not be included because they cannot be trusted. And that, in my opinion, is a shame.

                                                      1. 11

                                                        Look at it from OpenBSD’s perspective though. They (apparently) tried emailing Intel to find out more, and were told “no”. What were they supposed to do? Just wait on the hope that someone, somewhere, was lobbying on their behalf to be included, with no knowledge of that lobbying?

                                      1. 5

                                        I’m curious what book people would recommend for someone to pick up C++. I already can program, but I’ve avoided C++ because of its reputation and also the syntax, but it’s something I’d really like to get at least comfortable in.

                                        1. 5

                                          A Tour of C++ isn’t bad, particularly if you’re already familiar with C. After that it’s mostly practice – write a ray-tracer or some such.

                                          The second edition comes out in a month.

                                          1. 4

                                            I think it’s more than practice. There’s no way I was going to learn all the wrinkles in C++ without reading Scott Meyers’ Effective C++ series.

                                            1. 3

                                              Sure, something like Effective Modern C++ is a fine choice after becoming competent at C++. That’s advanced material though, more for the kind of people who set coding guidelines for teams.

                                              1. 3

                                                IME, without a detailed understanding of C++ ownership semantics you are going to hit some utterly impenetrable bugs pretty quickly.

                                            2. 1

                                              thanks for posting this, I’ll definitely check out the book, especially if a new edition is right around the corner.

                                            3. 3

                                              I like C++ Primer. It’s a whole lotta book, but it’s a whole lotta language and the book does an excellent job running you through a relatively recent version of the language. I’m currently working through Introduction to Design Patterns in C++ with QT. It’s a little dated but I’ve heard good things. Accelerated C++ is another I’ve picked up recently that seems to be well regarded. I’ve worked a bit with older C++98 style code in the past, but things have changed a bit with the advent of C++11 and especially later…

                                              1. 1

                                                thanks for taking the time to reply. One of the reasons I’ve avoided the language so far is just the massive size of it in comparison to my other languages.

                                            1. 7

                                              I always laugh when people come up with convoluted defenses for C and the effort that goes into that (even writing papers). Their attachment to this language has caused billions if not trillions worth of damages to society.

                                              All of the defenses that I’ve seen, including this one, boil down to nonsense. Like others, the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift, and, for the things C is not needed for, yes, even JavaScript is better than C (if you’re not doing systems-programming).

                                              1. 31

                                                Their attachment to this language has caused billions if not trillions worth of damages to society.

                                                Their attachment to a language with known but manageable defects has created trillions if not more in value for society. Don’t be absurd.

                                                1. 4

                                                  [citation needed] on the defects of memory unsafety being manageable. To a first approximation every large C/C++ codebase overfloweth with exploitable vulnerabilities, even after decades of attempting to resolve them (Windows, Linux, Firefox, Chrome, Edge, to take a few examples.)

                                                  1. 2

                                                    Compared to the widely used large codebase in which language for which application that accepts and parses external data and yet has no exploitable vulnerabilities? BTW: http://cr.yp.to/qmail/guarantee.html

                                                    1. 6

                                                      Your counterexample is a smaller, low-featured mail server written by a math and coding genius. I could cite Dean Karnazes doing ultramarathons on how far people can run. That doesn’t change that almost all runners would drop before 50 miles, especially before 300. Likewise with C code, citing the best of the secure coders doesn’t change what most will do or have done. I took the author’s statement “to first approximation every” to mean “almost all” but not “every one.” It’s still true.

                                                      Whereas, Ada and Rust code have done a lot better on memory-safety even when non-experts are using them. Might be something to that.

                                                      1. 2

                                                        I’m still asking for the non C widely used large scale system with significant parsing that has no errors.

                                                        1. 3

                                                          That’s cheating, saying “non-C” and “widely used.” Most of the no-error parsing systems I’ve seen use a formal grammar with autogeneration. They usually extract to OCaml. Some also generate C just to plug into the ecosystem, since it’s a C/C++-based ecosystem. It’s incidental in those cases: it could be any language, since the real programming is in the grammar and generator. An example of that is the parser in the Mongrel server, which was doing a solid job when I was following it. I’m not sure if they found vulnerabilities in it later.

                                                      2. 5

                                                        At the bottom of the page you linked:

                                                        I’ve mostly given up on the standard C library. Many of its facilities, particularly stdio, seem designed to encourage bugs.

                                                        Not great support for your claim.

                                                        1. 2

                                                          There was an integer overflow reported in qmail in 2005. Bernstein does not consider this a vulnerability.

                                                      3. 3

                                                        That’s not what I meant by attachment. Their interest in C certainly created much value.

                                                      4. 9

                                                        Their attachment to this language has caused billions if not trillions worth of damages to society.

                                                        Inflammatory much? I’m highly skeptical that the damages have reached trillions, especially when you consider what wouldn’t have been built without C.

                                                        1. 12

                                                          Tony Hoare, null’s creator, regrets its invention and says that inserting that one idea has cost billions. He mentions it in talks. It’s interesting that language creators even think the mistakes they’ve made have caused billions in damages.

                                                          “I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.”

                                                          If the billion dollar mistake was the null pointer, the C gets function is a multi-billion dollar mistake that created the opportunity for malware and viruses to thrive.

                                                          1. 2

                                                            He’s deluded. You want a billion dollar mistake: try CSP/Occam plus Hoare Logic. Null is a necessary byproduct of implementing total functions that approximate partial ones. See, for example, McCarthy in 1958 defining a LISP search function with a null return on failure. http://www.softwarepreservation.org/projects/LISP/MIT/AIM-001.pdf

                                                            1. 3

                                                              “ try CSP/Occam plus Hoare Logic”

                                                              I think you meant formal verification, which is arguable. They could’ve wasted a hundred million easily on the useless stuff. Two out of three are bad examples, though.

                                                              Spin has had a ton of industrial success, easily knocking out problems in protocols and hardware that are hard to find via other methods. With hardware, the defects could’ve caused recalls like the Pentium bug. Likewise, Hoare-style logic has been doing its job in Design-by-Contract, which knocks time off the debugging and maintenance phases, the most expensive ones. If anything, not using tech like this can add up to a billion-dollar mistake over time.

                                                              Occam looks like it was a large waste of money, esp in the Transputer.

                                                              1. 1

                                                                No. I meant what I wrote. I like spin.

                                                            2. 1

                                                              Note what he does not claim is that the net result of C’s continued existence is negative. Something can have massive defects and still be an improvement over the alternatives.

                                                            3. 7

                                                              “especially when you consider what wouldn’t have been built without C.”

                                                              I just countered that. The language didn’t have to be built the way it was or persist that way. We could be building new stuff in a C-compatible language with many benefits of HLL’s like Smalltalk, LISP, Ada, or Rust with the legacy C getting gradually rewritten over time. If that started in the 90’s, we could have equivalent of a LISP machine for C code, OS, and browser by now.

                                                              1. 1

                                                                It didn’t have to, but it was, and it was then used to create tremendous value. Although I concur with the numerous shortcomings of C, and it’s past time to move on, I also prefer the concrete over the hypothetical.

                                                                The world is a messy place, and what actually happens is more interesting (and more realistic, obviously) than what people think could have happened. There are plenty of examples of this inside and outside of engineering.

                                                                1. 3

                                                                  The major problem I see with this “concrete” winners-take-all mindset is that it encourages whig history which can’t distinguish the merely victorious from the inevitable. In order to learn from the past, we need to understand what alternatives were present before we can hope to discern what may have caused some to succeed and others to fail.

                                                                  1. 2

                                                                    Imagine if someone created Car2 which crashed 10% of the time that Car did, but Car just happened to win. Sure, Car created tremendous value. Do you really think people you’re arguing with think that most systems software, which is written in C, is not extremely valuable?

                                                                    It would be valuable even if C was twice as bad. Because no one is arguing about absolute value, that’s a silly thing to impute. This is about opportunity cost.

                                                                    Now we can debate whether this opportunity cost is an issue. Whether C is really comparatively bad. But that’s a different discussion, one where it doesn’t matter that C created value absolutely.

                                                              2. 8

                                                                C is still much more widely used than those safer alternatives; I don’t see how laughing off a fact is better than researching its causes.

                                                                1. 10

                                                                  Billions of lines of COBOL run mission-critical services of the top 500 companies in America. Better to research the causes of this than to laugh it off. Are you ready to give up C for COBOL on mainframes, or do you think both languages’ popularity was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.

                                                                  1. 7

                                                                    Are you ready to give up C for COBOL on mainframes, or do you think both languages’ popularity was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.

                                                                    Researching the causes of something doesn’t imply taking a stance on it; if anything, taking a stance on something should hopefully imply you’ve researched it. Even with your comment, I still don’t see how laughing off a fact is better than researching its causes.

                                                                    You might be interested in laughing about all the cobol still in use, or in research that looks into the causes of that. I’m in the latter camp.

                                                                    1. 5

                                                                      I think you might be confused at what I’m laughing at. If someone wrote up a paper about how we should continue to use COBOL for reasons X, Y, Z, I would laugh at that too.

                                                                      1. 3

                                                                        Cobol has some interesting features(!) that make it very “safe”. Referring to the 85 standard:

                                                                        X. No runtime stack, no stack overflow vulnerabilities
                                                                        Y. No dynamic memory allocation, impossible to consume heap
                                                                        Z. All memory statically allocated (see Y); no buffer overflows
                                                                        
                                                                        1. 3

                                                                          We should use COBOL with contracts for transactions on the blockchains. The reasons are:

                                                                          X. It’s already got compilers big businesses are willing to bet their future on.

                                                                          Y. It supports decimal math instead of floating point. No real-world to fake, computer-math conversions needed.

                                                                          Z. It’s been used in transaction-processing systems that have run for decades with no major downtime or financial losses disclosed to investors.

                                                                          λ. It can be mathematically verified by some people who understand the letter on the left.

                                                                          You can laugh. You’d still be missing out on a potentially $25+ million opportunity for IBM. Your call.
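
                                                                          As an aside, the decimal-math point (Y) is easy to demonstrate. A quick Python sketch, with Python’s `decimal` module standing in for COBOL’s native decimal types:

                                                                          ```python
                                                                          # Binary floating point cannot represent 0.1 exactly, so money math drifts;
                                                                          # decimal arithmetic (native in COBOL, via the decimal module here) does not.
                                                                          from decimal import Decimal

                                                                          print(0.1 + 0.2 == 0.3)                                   # False
                                                                          print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
                                                                          ```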

                                                                          1. 1

                                                                            Your call.

                                                                            I believe you just made it your call, Nick. $25+ million opportunity, according to you. What are you waiting for?

                                                                            1. 4

                                                                              You’re right! I’ll pitch IBM’s senior executives on it the first chance I get. I’ll even put on a $600 suit so they know I have more business acumen than most coin pitchers. I’ll use phrases like vertical integration of the coin stack. Haha.

                                                                        2. 4

                                                                          That makes sense. I did do the C research. I’ll be posting about that in a reply later tonight.

                                                                          1. 10

                                                                            I’ll be posting about that in a reply later tonight.

                                                                            Good god man, get a blog already.

                                                                            Like, seriously, do we need to pass a hat around or something? :P

                                                                            1. 5

                                                                              Haha. Someone actually built me a prototype a while back. Makes me feel guilty that I don’t have one, instead of the usual lazy or overloaded.

                                                                                1. 2

                                                                                  That’s cool. Setting one up isn’t the hard part. The hard part is doing a presentable design, organizing the complex activities I do, moving my write-ups into it, adding metadata, and so on. I’m still not sure how much I should worry about the design. One’s site can be considered a marketing tool for people who might offer jobs and such. I’d go into more detail but you’d tell me “that might be a better fit for Barnacles.” :P

                                                                                  1. 3

                                                                                    Skip the presentable design. Dan Luu’s blog does pretty well and it’s not working hard to be easy on the eyes. The rest of that stuff you can add as you go – remember, perfect is the enemy of good.

                                                                                    1. 0

                                                                                      This.

                                                                                      Hell, Charles Bloom’s blog is basically an append-only textfile.

                                                                                    2. 1

                                                                                      ugh okay next Christmas I’ll add all the metadata, how does that sound

                                                                                      1. 1

                                                                                        Making me feel guilty again. Nah, I’ll build it myself, likely on a VPS.

                                                                                        And damn, time has been flying. Doesn’t feel like several months have passed on my end.

                                                                              1. 1

                                                                                Looking forward to reading it :)

                                                                        3. 4

                                                                          Well, we have those already, and they’re called Rust, Swift, ….

                                                                          And D maybe too. D’s “better-c” is pretty interesting, in my mind.

                                                                          1. 3

                                                                            Last I checked, D’s “better-c” was a prototype.

                                                                          2. 5

                                                                            If you had actually made a serious effort at understanding the article, you might have come away with an understanding of what Rust, Swift, etc. are lacking to be a better C. By laughing at it, you learned nothing.

                                                                            1. 2

                                                                              the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift

                                                                              Those (and Ada, and others) don’t translate to assembly well. And they’re harder to implement than, say, C90.

                                                                              1. 3

                                                                                Is there a reason why you believe that other languages don’t translate to assembly well?

                                                                                It’s true those other languages are harder to implement, but it seems to be a moot point to me when compilers for them already exist.

                                                                                1. 1

                                                                                  Some users of C need an assembly-level understanding of what their code does. With most other languages that isn’t really achievable. It is also increasingly less possible with modern C compilers, and said users aren’t very happy about it (see various rants by Torvalds about braindamaged compilers etc.)

                                                                                  1. 4

                                                                                    “Some users of C need an assembly-level understanding of what their code does.”

                                                                                    Which C doesn’t give them, due to compiler differences and the effects of optimization. Aside from spotting errors, it’s why folks in safety-critical fields are required to check the assembly against the code. The C language is certainly closer to assembly behavior but doesn’t by itself give assembly-level understanding.

                                                                              2. 2

                                                                                So true. Every time I use the internet, the solid engineering of the Java/Jscript components just blows me away.

                                                                                1. 1

                                                                                  Everyone prefers the smell of their own … software stack. I can only judge by what I can use now based on the merits I can measure. I don’t write new services in C, but the best operating systems are still written in it.

                                                                                  1. 5

                                                                                    “but the best operating systems are still written in it.”

                                                                                    That’s an incidental part of history, though. People who are writing, say, a new x86 OS with a language balancing safety, maintenance, performance, and so on might not choose C. At least three chose Rust, one Ada, one SPARK, several Java, several C#, one LISP, one Haskell, one Go, and many C++. Plenty of choices are being explored, including languages C coders might say aren’t good for OS’s.

                                                                                    Additionally, many choosing C or C++ say it’s for existing tooling, tutorials, talent, or libraries. Those are also incidental to its history rather than advantages of its language design. Definitely worthwhile reasons to choose a language for a project, but they shift the language argument itself, implying they had better things in mind that weren’t usable yet for that project.

                                                                                    1. 4

                                                                                      I think you misinterpreted what I meant. I don’t think the best operating systems are written in C because of C. I am just stating that the best current operating system I can run a website from is written in C, I’ll switch as soon as it is practical and beneficial to switch.

                                                                                      1. 2

                                                                                        Oh OK. My bad. That’s a reasonable position.

                                                                                        1. 3

                                                                                          I worded it poorly; I won’t edit it, though, for context.

                                                                                1. 4

                                                                                  While I enjoyed the post, the comparison at the end is unfair. The author compares ZFS with a 475GB NVMe drive as a cache to XFS without an equivalent cache.

                                                                                  1. 2

                                                                                    The initial comparison with XFS is somewhat unfair as well, though: does XFS provide the same data integrity features that ZFS does? It’s hard, really, to compare file systems with vastly different design centres and feature sets – which feels like the point they’re trying to make, really.

                                                                                    1. 2

                                                                                      Was the comparison looking at data integrity though? I didn’t see any mention of that anywhere – everything I saw was entirely about performance. If you’re doing a performance comparison of two filesystems, comparing them on (very) different hardware doesn’t seem real meaningful.

                                                                                      The author mentions the possibility of comparing against something like bcache (which would then be a zfs vs. xfs+bcache comparison rather than strictly a filesystem comparison), but then handwaves it away as “exotic” and concludes, essentially, that “zfs plus additional fancy hardware and a bunch of manual tuning outperforms xfs”. Well…big deal.

                                                                                      1. 2

                                                                                        At what point do you need to assume integrity as a baseline though? This is a database blog we’re talking here.

                                                                                        Unrelated observation: it’s tragic that most production databases out there aren’t running on ZFS, and says a lot about the priorities (and less charitably the general ability) of our industry.

                                                                                  1. 6

                                                                                    Knuth vs McIlroy, round 2: now with parallelism.

                                                                                    1. 1

                                                                                      I feel like this isn’t quite fair to Knuth…

                                                                                    1. 1

                                                                                      Unfortunately, GraalVM’s massive size and memory consumption make it a lot less interesting than it could be for language implementation (e.g. via Truffle). :(

                                                                                      1. 4

                                                                                        Pretty neat. The other nearly-universal technique is to attach gdb to the process, then repeatedly stop it and ask for a backtrace. Works with many interpreted languages like Python or Perl, plus SQL, C, etc. If you collect 5-10 samples, that’s enough to start with.

                                                                                        And strace is a quick way to see if it’s blocked on a system call or e.g. repeatedly opening the same file.
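
                                                                                        For what it’s worth, the same sampling idea can be sketched in-process with Python’s standard library (no gdb needed for a thread in your own program; the helper names `sample_stacks` and `busy` are made up for illustration):

                                                                                        ```python
                                                                                        # A rough in-process analogue of "attach gdb and grab backtraces":
                                                                                        # snapshot one thread's stack a few times and count the hot frames.
                                                                                        import collections
                                                                                        import sys
                                                                                        import threading
                                                                                        import time
                                                                                        import traceback

                                                                                        def sample_stacks(thread_id, samples=10, interval=0.01):
                                                                                            """Take `samples` stack snapshots of one thread; tally innermost frames."""
                                                                                            counts = collections.Counter()
                                                                                            for _ in range(samples):
                                                                                                frame = sys._current_frames().get(thread_id)
                                                                                                if frame is not None:
                                                                                                    stack = traceback.extract_stack(frame)
                                                                                                    counts[stack[-1].name] += 1  # innermost function this sample
                                                                                                time.sleep(interval)
                                                                                            return counts

                                                                                        def busy():  # toy workload to profile
                                                                                            deadline = time.time() + 0.3
                                                                                            while time.time() < deadline:
                                                                                                sum(range(1000))

                                                                                        worker = threading.Thread(target=busy)
                                                                                        worker.start()
                                                                                        hot = sample_stacks(worker.ident)
                                                                                        worker.join()
                                                                                        print(hot.most_common(3))  # busy() should dominate the samples
                                                                                        ```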

                                                                                        1. 9

                                                                                          If you have access to DTrace, something like this will save you time:

                                                                                          dtrace -n 'profile-5 /pid==475/ { ustack(8) }'

                                                                                          Prints the top eight frames of the stack of PID 475 at 5 Hz. Adjust to taste.

                                                                                          1. 6

                                                                                            The gdb technique even has a website: http://poormansprofiler.org/ =D

                                                                                            ( https://gist.github.com/unhammer/4e91821075c2485999eb has some handy tweaks on that for OCaml programs)

                                                                                          1. 2

                                                                                            At the same time, they probably have to re-roll a bunch of libraries useful for gaming, since the Rust ecosystem is still quite thin in that regard. Beating your own path comes with its own costs.

                                                                                            I’m most interested in what libraries on crates.io they used.

                                                                                            1. 7

                                                                                              I don’t think they heavily use lots of libraries in C++ either. They’re making 2D games, and they’re famous for the quality of their art and atmosphere rather than for their tech (so Rust can make the tech side less painful and labor-intensive for them, as languages like C# do for other indie game-dev companies). Outside of large monolithic frameworks designed primarily for 3D shooters, like Unreal Engine, I don’t think the C++ ecosystem offers much for game dev.

                                                                                              Yes, it’s interesting what libraries they use, considering that they were experimenting with FRP on Haskell before.

                                                                                              According to their GitHub account, they are making Lua bindings and contributing to Rust itself and to the SDL bindings.

                                                                                              1. 13

                                                                                                rlua (Chucklefish’s Lua binding for Rust) is an amazing work. The author wrote a long comment about its design on Reddit.

                                                                                                In fact, rlua is actually the only general high level bindings system to the Lua C API I’ve actually ever seen in any language that even might be safe.

                                                                                            1. 21

                                                                                              All the stack based ones: Forth, Cat, PostScript, etc

                                                                                              1. 2

                                                                                                I only recently saw my first example of a stack-based language. My thought was that it seems terribly difficult, as a programmer, to keep in mind what the stack contains at any given point in time. Is that something one gets used to over time?

                                                                                                1. 8

                                                                                                  I found it fun, when learning Forth, to actually work things out using a physical stack of index cards and a pencil. But yeah, you get used to it pretty quick.

                                                                                                  1. 2

                                                                                                    In my experience, words rarely cause more than 7 changes to the stack (rot, for example, pops 3 items and pushes them back in a different order, for 6 changes, while dup pops once and pushes the same thing twice, for a net change of one), so if you get used to chunking in terms of only what a word pops and pushes, you can almost treat it like imperative code with implicit arguments.
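
                                                                                                    A toy sketch of that chunking, in Python rather than Forth (the word definitions here are my own illustration, not a real Forth implementation):

```python
# Each "word" only pops and pushes at the top of the stack, so you can
# reason about it locally, like a function with implicit arguments.

def dup(s):
    """( a -- a a ): pop one item, push it back twice."""
    s.append(s[-1])

def rot(s):
    """( a b c -- b c a ): rotate the top three items."""
    a, b, c = s[-3], s[-2], s[-1]
    s[-3:] = [b, c, a]

stack = [1, 2, 3]
rot(stack)   # stack is now [2, 3, 1]
dup(stack)   # stack is now [2, 3, 1, 1]
```

                                                                                                    Reading a word then becomes a matter of tracking small pop/push deltas rather than the whole stack.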

                                                                                                  2. 1

                                                                                                    I think it is difficult at first, but once you get used to using combinator words (common in e.g. Factor), it probably becomes just as natural as using map and fold in functional languages.

                                                                                                  3. 1

                                                                                                    Forth may omit variable names, but makes up for it with many word names.

                                                                                                  1. 7

                                                                                                    Laziness is neat, but just not worth it: it makes debugging harder and makes reasoning about code harder. It was the one change in Python 2→3 that I truly hate. I wish there were an eager-evaluating Haskell. In Haskell, at least, thanks to monadic IO, laziness is tolerable and doesn’t leave you with tricky bugs (like trying to consume an iterator twice in Python).
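
                                                                                                    The iterator pitfall mentioned above, as a minimal Python 3 sketch:

```python
# In Python 3, map() returns a lazy iterator; in Python 2 it returned a list.
squares = map(lambda x: x * x, [1, 2, 3])

first = list(squares)    # [1, 4, 9]
second = list(squares)   # [] - the iterator is exhausted; no error is raised
```

                                                                                                    The second consumption silently yields nothing, which is exactly the kind of bug that’s easy to miss.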

                                                                                                    1. 6

                                                                                                      I had a much longer reply written out but my browser crashed towards the end (get your shit together, Apple) so here’s the abridged version:

                                                                                                      • Lazy debugging is only harder if your debugging approach is “printfs everywhere”. Haskell does actually allow this, but strongly discourages it to great societal benefit.

                                                                                                      • Laziness by default meant Haskellers never relied on the strict-sequencing-as-IO hack that strict functional languages mostly fell victim to, again to great societal benefit. The result is code that’s almost always more referentially transparent, leading to vastly easier testing, easier composition, and fewer bugs in the first place.

                                                                                                      • It’s impossible to appreciate laziness if your primary exposure to it is the piecemeal, inconsistent, and opaque laziness sprinkled in a few places in Python 3.

                                                                                                      • You almost never need IO to deal with laziness and its effects. The fact that you are conflating the two suggests that you may have a bit of a wrong idea about how laziness works in practice.

                                                                                                      • Haskell has the Strict language extension, which turns on strict evaluation by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.

                                                                                                      1. 1

                                                                                                        Haskell has the Strict language extension, which turns on strict evaluation by default. It’s very rarely used, because most people experienced enough with Haskell to know about it prefer laziness by default. This is experimental evidence that laziness by default may actually be a good idea, once you’ve been forced to grok how it’s used in practice.

                                                                                                        I am not quite sure whether this is really evidence. I actually never tried to switch it on. I wonder whether that option plays nicely with existing libraries; I guess not many are tested to ensure they don’t depend on lazy evaluation for efficiency. If you use Haskell and Hackage, I guess you are stuck rolling with the default.

                                                                                                        1. 2

                                                                                                          It works on a per-module basis. All your modules will be compiled with strict semantics, and any libraries will be compiled with the semantics they chose.

                                                                                                      2. 3

                                                                                                        Idris has strict evaluation. It also has dependent types, which are amazing, but strict evaluation is a pretty good perk too.

                                                                                                        1. 2

                                                                                                          I thought there were annotations for strictness in Haskell.

                                                                                                          1. 3

                                                                                                            Yes, but I consider it to be the wrong default; I’d prefer having an annotation for lazy evaluation instead. I just remember too many cases where I have been bitten by lazy-evaluation behaviour. It makes code so much more complicated to reason about.

                                                                                                            1. 1

                                                                                                              Do you happen to remember more detail? I enjoy writing Haskell, but I don’t have a strong opinion on laziness. I’ve seen some benefits and rarely been bitten, so I’d like to know more.

                                                                                                              1. 1

                                                                                                                I only have vague memories, to be honest. I’m pretty sure some were errors due to non-total functions, which I then started to avoid by using a prelude that only provides total ones. But when these occurred, it was hard to find the exact code path that provoked them. Or rather: harder than it should be.

                                                                                                                Then, from the tooling side, I started using Intero (or vim-intero); see https://github.com/commercialhaskell/intero/issues/84#issuecomment-353744900. I’m fairly certain that this is hard to debug because of laziness. In that thread there are a few experienced Haskell devs reporting the problem, so I’d consider this evidence that laziness is not only an issue for beginners who haven’t yet understood Haskell.

                                                                                                                PS: As a side remark, although I enjoy Haskell, it is kind of tiring that the Haskell community seems to conveniently shift between “anyone can understand monads and write Haskell” and “if it doesn’t work for you, you aren’t experienced enough”.

                                                                                                          2. 2

                                                                                                            Eager-evaluating Haskell? At a high level, OCaml is (more or less) an example of that.

                                                                                                            It hits a sweet spot between high abstraction and high mechanical sympathy. That’s a big reason why OCaml has quite good performance despite a relatively simple optimizing compiler. As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                                                                                                            Haskell has paid a high price for default laziness.

                                                                                                            1. 2

                                                                                                              As a side effect of that simple optimizing compiler (read: few transformations), it’s also easy to predict performance and do low-level debugging.

                                                                                                              That was used to good effect by Esterel when they did source-to-object code verification of their code generator for aerospace. I can’t find that paper right now for some reason. I did find this one on the overall project.

                                                                                                              1. 1

                                                                                                                Yes, but I would like to have typeclasses and monads, I guess, and that’s not OCaml’s playing field.

                                                                                                                1. 1

                                                                                                                  OCaml should Someday™ get modular implicits, which should provide some of the same niceties as typeclasses.

                                                                                                                  1. 1

                                                                                                                    OCaml has monads, so I’m really not sure what you mean by this. Typeclasses are a big convenience, but as F# has shown, they are by no means required for statically typed functional programming. You can get close by abusing a language feature or two, but you’re better off just using existing language features to accomplish the same end that typeclasses provide. I do think F# is working on adding typeclasses, and I think the struggle is of course interoperability with .NET; here’s an abundantly long GitHub issue on the topic: https://github.com/fsharp/fslang-suggestions/issues/243

                                                                                                                  2. 1

                                                                                                                    F#, an open-source (MIT) sister language, is currently beating or matching OCaml in the for-fun benchmarks :). Admittedly that’s almost entirely due to the ease of parallelism in F#.
                                                                                                                    https://benchmarksgame.alioth.debian.org/u64q/fsharp.html

                                                                                                                  3. 1

                                                                                                                    Doesn’t lazy IO make your program even more inscrutable?

                                                                                                                    1. 1

                                                                                                                      Well, Haskell’s type system makes you aware of many side effects, so it is a better situation than in, for example, Python.

                                                                                                                      Again, I still prefer eager evaluation as a default, and lazy evaluation as an opt-in.

                                                                                                                    2. 1

                                                                                                                      PureScript is very close to what you want, then - it’s basically “Haskell with fewer warts, and also strict” - strict mainly so that it can output clean JavaScript without a runtime.

                                                                                                                    1. 6

                                                                                                                      Very surprising that the BSDs weren’t given a heads-up by the researchers. Feels like there would be a list at this point of people who could rely on this kind of heads-up.

                                                                                                                      1. 13

                                                                                                                        The more information and statements that come out, the more it looks like Intel gave the details to nobody beyond Apple, Microsoft and the Linux Foundation.

                                                                                                                        Admittedly, macOS, Windows, and Linux covers almost all of the user and server space. Still a bit of a dick move; this is what CERT is for.

                                                                                                                        1. 5

                                                                                                                          Plus, the various BSD projects have security officers and secure, confidential ways to communicate. It’s not significantly more effort.

                                                                                                                          1. 7

                                                                                                                            Right.

                                                                                                            And it’s worse than that when looking at the bigger picture: it seems the exploits and their details were released publicly before most server farms were given any heads-up. You simply can’t reboot whole datacenters overnight, even if the patches are available and you completely skip the vetting. Unfortunately, Meltdown is significant enough that it might be necessary, which is just brutal; there have to be a lot of pissed ops out there, not just OS devs.

                                                                                                                            To add insult to injury, you can see Intel PR trying to spin Meltdown as some minor thing. They seem to be trying to conflate Meltdown (the most impactful Intel bug ever, well beyond f00f) with Spectre (a new category of vulnerability) so they can say that everybody else has the same problem. Even their docs say everything is working as designed, which is totally missing the point…

                                                                                                                        2. 7

                                                                                                                          Wasn’t there a post on here not long ago about Theo breaking embargos?

                                                                                                                          https://www.krackattacks.com/#openbsd

                                                                                                                          1. 12

                                                                                                                            Note that I wrote and included a suggested diff for OpenBSD already, and that at the time the tentative disclosure deadline was around the end of August. As a compromise, I allowed them to silently patch the vulnerability.

                                                                                                                            He agreed to the patch on an already extended embargo date. He may regret that but there was no embargo date actually broken.

                                                                                                                            @stsp explained that in detail here on lobste.rs.

                                                                                                                            1. 10

                                                                                                                              So I assume Linux developers will no longer receive any advance notice since they were posting patches before the meltdown embargo was over?

                                                                                                                              1. 3

                                                                                                                                I expect there’s some kind of risk/benefit assessment. Linux has lots of users so I suspect it would take some pretty overt embargo breaking to harm their access to this kind of information.

                                                                                                                                OpenBSD has (relatively) few users and a history of disrespect for embargoes. One might imagine that Intel et al thought that the risk to the majority of their users (not on OpenBSD) of OpenBSD leaking such a vulnerability wasn’t worth it.

                                                                                                                                1. 5

                                                                                                                                  Even if, institutionally, Linux were not being included in embargos, I imagine they’d have been included here: this was discovered by Google Project Zero, and Google has a large investment in Linux.

                                                                                                                            2. 2

                                                                                                                              Actually, it looks like FreeBSD was notified last year: https://www.freebsd.org/news/newsflash.html#event20180104:01

                                                                                                                              1. 3

                                                                                                                By “last year” you mean “late December 2017” - I’m going to guess this is much later than the other parties were notified.

                                                                                                                macOS 10.13.2 had some fixes related to Meltdown and was released on December 6th. My guess is that vendors with tighter business relationships with Intel (Apple, Microsoft) started getting info on it around October or November - possibly earlier, considering the bug was initially found by Google back in the summer.

                                                                                                                                1. 2

                                                                                                                                  Windows had a fix for it in November according to this: https://twitter.com/aionescu/status/930412525111296000

                                                                                                                              2. 1

                                                                                                                                A sincere but hopefully not too rude question: Are there any large-scale non-hobbyist uses of the BSDs that are impacted by these bugs? The immediate concern is for situations where an attacker can run untrusted code like in an end user’s web browser or in a shared hosting service that hosts custom applications. Are any of the BSDs widely deployed like that?

                                                                                                                                Of course given application bugs these attacks could be used to escalate privileges, but that’s less of a sudden shock.

                                                                                                                                1. 1

                                                                                                                                  DigitalOcean and AWS both offer FreeBSD images.

                                                                                                                                  1. 1

                                                                                                                    There are/were some large-scale deployments of BSDs and BSD-derived code: Apple AirPort Extreme, Dell Force10, Junos, etc.

                                                                                                                    People don’t always keep track of them, but sometimes a company shows up and then uses one for a very large number of devices.

                                                                                                                                    1. 1

                                                                                                                                      Presumably these don’t all have a cron job doing cvsup; make world; reboot against upstream *BSD. I think I understand how the Linux kernel updates end up on customer devices but I guess I don’t know how a patch in the FreeBSD or OpenBSD kernel would make it to customers with derived products. As a (sophisticated) customer I can update the Linux kernel on my OpenWRT based wireless router but I imagine Apple doesn’t distribute the Airport Extreme firmware under a BSD license.

                                                                                                                                1. 4

                                                                                                                                  This is going to get expensive for companies pretty quickly.

                                                                                                                                  1. 7

                                                                                                                                    So that means buying more servers, with… Intel processors in them!

                                                                                                                                    1. 4

                                                                                                                                      Maybe. With the slowdown that KPTI incurs, it makes EPYC even more attractive.

                                                                                                                                      Now whether AMD can fab enough to keep up with demand is another question.

                                                                                                                                      1. 1

                                                                                                                        Unfortunately, AMD historically hasn’t had the management and the stockholder returns to take on Fortress Intel, so Intel’s board hires weasel CEOs to exploit the situation. Ironically, the tech is more than good enough.

                                                                                                                                    2. 2

                                                                                                                      It already is. An across-the-board 30% hit is fairly common on cloud services. So the hit is worse than, say, Apple and its battery/clock-down issue, but clearly the Intel weasels think they can outlast it - what are you going to do, not buy more Intel?

                                                                                                                                    1. 12

                                                                                                                      Docker has not been very good software for my team at all. We’ve managed to trigger non-stop kernel semaphore leak bugs as well as LVM filesystem bugs, some of them persisting through multiple different attempts at a fix. And any attempt to figure it out yourself by reading their code is stymied by the weird Moby/Docker disconnect that seems to be there.

                                                                                                                      If you are thinking about running Docker yourself, rather than in someone else’s managed Docker solution, then beware: it’s very sensitive to the kernel you are running and the filesystem drivers you use with it. As far as I can tell, if you aren’t running in Amazon’s or Google’s hosted Docker solutions, you are in for a bad time. And only Amazon is actually running Docker; Google just sidestepped the whole issue by using their own container technology under the hood.

                                                                                                                                      The whole experience has soured me on Docker as a deployment solution. It’s wonderful for the developer but it’s a nightmare for whoever has to manage the docker hosts.

                                                                                                                                      1. 11

                                                                                                                                        A few things that bit me:

                                                                                                                        • containers don’t report real memory limits. Running top will report all 32GB of system memory even if the container is limited to 2GB. Scala/Java and other JVM apps aren’t aware of the limit, so you have to wrap the Java process with -Xmx memory-limit flags; otherwise your container will get killed (you don’t even get an OutOfMemoryError) and marathon/k8s/whatever scheduler will start a new one. Eventually most runtimes (Python, Ruby, the JVM, etc.) will have built-in support to check cgroup memory limits, but for now it’s a pain.
                                                                                                                        • Not enough tooling in the container. I don’t want to have to apt-get install nc each time I rebuild a container just to see if my network connections work. I’ve heard good things about sysdig bridging this gap, though.
                                                                                                                                        • Tons of specific Kernel flags (really only matters if you use Gentoo or you compile your own kernel).
                                                                                                                                        • Weird network establishment issues. If you expose a port on the host, it will be available before it’s available to a linked container. So if you want to do a check to see if something like a database is ready, you have to do it in a container.
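
                                                                                                                        For the first point, a common stopgap is to read the cgroup limit yourself before launching the process. A rough sketch (the helper name is my own; the default path is cgroup v1, and sentinel values vary by kernel and cgroup version):

```python
def cgroup_memory_limit(path="/sys/fs/cgroup/memory/memory.limit_in_bytes"):
    """Return the container's memory limit in bytes, or None if unset/unlimited."""
    try:
        with open(path) as f:
            raw = f.read().strip()
    except OSError:
        return None          # not in a cgroup, or the path differs (e.g. cgroup v2)
    if raw == "max":         # cgroup v2 spelling of "no limit"
        return None
    limit = int(raw)
    if limit >= 1 << 60:     # cgroup v1 reports a huge sentinel when unlimited
        return None
    return limit
```

                                                                                                                        A launcher script can then turn the result into a -Xmx flag for the JVM instead of trusting what top reports.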

                                                                                                                        I’m sure there are more. Overall I actually do like Docker, despite some of the weirdness. However, I hate how we have k8s/marathon/nomad/swarm… there’s no single scheduler or scheduler format, and if you switch from one to the other, you’re redoing a lot of tooling, labels, and config to get all your services to connect together. Consul makes me want to stab myself. DC/OS uses up 2GB~4GB of RAM just for the fucking scheduler on each node! k8s is a nightmare to configure without a team of at least three and really ten. None of these solutions scale up from one node to a ton easily (minikube is a hack).

                                                                                                                                        Containers are nice. The scheduling systems around them can go die in a fire.

                                                                                                                                        1. 4
                                                                                                                          containers don’t report real memory limits

                                                                                                                          [X] we’ve been bitten by this. It also has implications for monitoring, so you get double the fun.

                                                                                                                          Not enough tooling in the container.

                                                                                                                          [X] we’ve established our own baseline container images.

                                                                                                                          Weird network establishment issues.

                                                                                                                          [X] container and k8s networking was, at least until a few months ago, a mess.

                                                                                                                          Consul makes me want to stab myself.

                                                                                                                          [X] we hacked our own.

                                                                                                                          without a team of at least three and really ten.

                                                                                                                          [X] confirmed, we’re throwing money and people at it.

                                                                                                                          None of these solutions scale up from one node to a ton easily (minikube is a hack).

                                                                                                                          [X] I’ve thrown up my hands on having a working developer environment without running it on a cloud provider. We can’t trust minikube to behave sufficiently similarly to staging and production.

                                                                                                                                          Containers are nice. The scheduling systems around them can go die in a fire.

                                                                                                                                          I’m not even sure containers are that nice; the idea of containers is nice, but the execution is still half-baked.

                                                                                                                                          1. 2

                                                                                                                                            Why do you need so many people to operate kubernetes well? And what is it enabling, to make that kind of expenditure worth it?

                                                                                                                                            1. 2

                                                                                                                                              We’re developing a commercial turn-key, provider-independent platform based on it. Dog-fooding our own stuff has exposed many sharp bits and rough edges.

                                                                                                                                              1. 1

                                                                                                                                                Thanks.

                                                                                                                                        2. 7

                                                                                                                                          I’ve had a positive experience with Triton. It doesn’t support all of Docker’s features, since like Google they opted for emulating Docker and apparently decided some things weren’t worth having, but for the features Triton does support, it Just Works.

                                                                                                                                          Of course, that means getting used to administering a different ecosystem.

                                                                                                                                          1. 1

                                                                                                                                            I love the idea of Triton, but having rolled it out at a past position, I can honestly say that I would not recommend it. There is no high availability for many of the internal services by default (you need to roll your own replicas, etc.), and there is no routing across networks (static routes and additional interfaces in every instance are not a good solution). I love Joyent as a company, and their products have a great hypothetical appeal to me as a technologist, but there are just too many “buts” to justify spending the kind of money they charge for the solution they offer.

                                                                                                                                            1. 2

                                                                                                                                              I’m just curious how old the version of Triton was, because it has had software-defined networking for ~3 years or so. Was there a limitation with it?

                                                                                                                                          2. 2

                                                                                                                                            That stinks, but sounds more like a critique of the Linux kernel? Are you running anything custom?

                                                                                                                                            Newer Docker defaults to overlayfs (no more aufs), and runs fine for us on stock Debian 9 kernels (without the extra modules package, or any dkms modules). This is both on bare metal and the AMIs Debian provides. Though we run on plain ext4, without LVM.

                                                                                                                                            1. 4

                                                                                                                                              My experience is purely anecdotal so shouldn’t be taken as more than that.

                                                                                                                                              However, we aren’t on anything custom. We’re running the latest CentOS kernels for everything, and we keep them patched. The bugs aren’t in the Linux kernel; it’s the way Docker does things when it sets up the cgroups and manages them. My early experimentation with other container runtimes seems to indicate that they don’t have the same problems.

                                                                                                                                              Just searching for the word “hang” in the moby project shows 171 open bugs and 521 closed. Most of them, from a cursory examination, look very similar to our issues. For us they tend to manifest as a deadlock in the docker engine, which then causes the managed containers to go unhealthy and start a reboot loop. We’ve had to have cronjobs run and kill the docker daemons periodically in the past to keep things up and running.

                                                                                                                                              1. 2

                                                                                                                                                Maybe there are bugs in the way Docker sets up cgroups too, but you mentioned kernel semaphore leaks and LVM bugs which seem to be squarely in the kernel? Which seems to track to me - I know when systemd started exposing all this Linux kernel-specific stuff, they were the first really big consumer so they also exposed lots of kernel bugs.

                                                                                                                                          1. 4

                                                                                                                                            As far as I can tell, underneath the actor model Pony is feeding multiple native threads using a work stealing algorithm? Basically: does Pony have parallelism, and not just green threads?

                                                                                                                                            And what is the story with frame pointers? I’d like to know if DTrace can be used on it.

                                                                                                                                            1. 5

                                                                                                                                              There was actually a pretty interesting article posted the other day about dtrace and Pony.

                                                                                                                                              https://lobste.rs/s/d5ndrg/dynamic_tracing_pony_python_program_with

                                                                                                                                              1. 3

                                                                                                                                                Yes, multiple native threads with a work stealing algorithm. Pony has parallelism.
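
The policy being confirmed here can be illustrated with a toy, single-threaded sketch (Pony’s runtime does this across real native threads; the names and shapes below are illustrative, not the runtime’s actual API): each scheduler owns a deque, pops work from its own end, and steals from the opposite end of a busy peer when it runs dry.

```python
from collections import deque
import random

def run(queues):
    """Drain all queues work-stealing style; return the completed items."""
    done = []
    while any(queues):
        for q in queues:
            if q:
                done.append(q.pop())  # pop from own tail (LIFO keeps hot work local)
            else:
                victims = [v for v in queues if v]
                if victims:
                    # steal from the *head* of a busy peer's deque,
                    # minimizing contention with the peer's own tail pops
                    q.appendleft(random.choice(victims).popleft())
    return done

# three "schedulers", one starting empty — it survives by stealing
items = run([deque(range(4)), deque(), deque(range(4, 8))])
```

The own-tail/peer-head asymmetry is the point of the design: in the real multi-threaded version it lets most pops proceed without touching the end a thief is stealing from.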

                                                                                                                                                1. 3

                                                                                                                                                  To build on doublec’s comment… there are no green threads in the Pony runtime.

                                                                                                                                                  1. 1

                                                                                                                                                    A few more questions, if you don’t mind. I wasn’t able to find answers to these with preliminary Googling:

                                                                                                                                                    Are floats boxed in Pony? Does Pony support multidimensional arrays? Is the GC a copying collector, or do objects stay where they were allocated? How difficult would it be to create objects over existing data in memory (i.e. a mmapped file)? And does Pony use %rbp as a base pointer, or rely entirely on DWARF to figure out a stack frame?

                                                                                                                                                    1. 2
                                                                                                                                                      • Are floats boxed in Pony?

                                                                                                                                                      It depends. A float could end up boxed, but F32 and F64 are not boxed by default. https://www.ponylang.org/reference/pony-performance-cheatsheet/#boxing-machine-words

                                                                                                                                                      • Does Pony support multidimensional arrays?

                                                                                                                                                      You could design a class to do it, but there’s no builtin type for them. There’s an RFC process that can be used to add new features. https://github.com/ponylang/rfcs/

                                                                                                                                                      • How difficult would it be to create objects over existing data in memory (i.e. a mmapped file)?

                                                                                                                                                      It depends.

                                                                                                                                                      You might need to be able to operate on pointers to do it. This is currently limited to some builtin classes like String and Array. We are planning on adding a capability that would allow non-builtin classes to use Pointers within Pony (as compared to C-FFI).

                                                                                                                                                      OTOH, you might be able to do it now. Really it depends on what you would need to do. If you could represent the memory-mapped file as an Array (which you probably can), then you should be able to leverage existing functionality to do what you want.

                                                                                                                                                      • And does Pony use %rbp as a base pointer, or rely entirely on DWARF to figure out a stack frame?

                                                                                                                                                      That I can’t answer; I’ve never gone looking. Pony uses LLVM, and I’ve never had a need to check what it’s emitting.
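
On the multidimensional-array question above: the suggested class-based approach is straightforward. This Python sketch mirrors the shape such a Pony class might take — row-major indexing over a flat backing store, the same layout a Pony class wrapping a flat Array would use (names here are illustrative):

```python
class Matrix:
    """Row-major 2-D array over a flat backing list."""

    def __init__(self, rows: int, cols: int, fill: float = 0.0):
        self.rows, self.cols = rows, cols
        self._data = [fill] * (rows * cols)

    def _index(self, r: int, c: int) -> int:
        # Bounds-check, then map (r, c) to the flat offset; row stride is `cols`.
        if not (0 <= r < self.rows and 0 <= c < self.cols):
            raise IndexError((r, c))
        return r * self.cols + c

    def __getitem__(self, rc):
        r, c = rc
        return self._data[self._index(r, c)]

    def __setitem__(self, rc, value):
        r, c = rc
        self._data[self._index(r, c)] = value
```

Because the backing store is a single flat array of machine-word floats, the elements stay unboxed in the Pony version — which ties back to the boxing answer above.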

                                                                                                                                              1. 4

                                                                                                                                                They didn’t want to use a certain type of database because they decided it was immature. That’s a good idea: data is often the lifeblood of a business, so you don’t play games with it.

                                                                                                                                                Then they decided to go with Rust in 2015. I am looking forward to using Rust for professional projects in a few years, but the ecosystem still has a lot of bleeding to do in 2017. Ecosystem matters.

                                                                                                                                                So I think their reasoning has a whiff of rationalization to it.

                                                                                                                                                 But someone has to blaze that trail for the rest of us, so hey… have at it, TiDB dudes.

                                                                                                                                                1. 4

                                                                                                                                                  What aspects of the ecosystem need to be more mature to implement a storage engine? It seems so domain-specific that one would be writing a ton of their own code anyway.

                                                                                                                                                1. 4

                                                                                                                                                  Microsoft’s newest update to Windows 10, called The Creators Update, will contain Windows Subsystem for Linux, a tool that could make Windows 10 much more appealing to the increasing number of developers considering a move from Mac OS because they find the MacBook Pro underpowered for their needs.

                                                                                                                                                  So, just curious. I’ve been rather less impressed lately with Apple’s offerings myself, but very likely for different reasons than most people. Do developers find the current MBP underpowered?

                                                                                                                                                  1. 4

                                                                                                                                                    I use VMs for development, thus I need more than 16GB RAM to do my work properly.

                                                                                                                                                    1. 12

                                                                                                                                                      I use (many!) VMs too, and don’t need more than 16GB of memory for them.

                                                                                                                                                      However, I need more than 16 GB for Google Chrome.

                                                                                                                                                      1. 5

                                                                                                                                                        Chrome is basically just a hypervisor anyway

                                                                                                                                                  1. 4

                                                                                                                                                    By far my favourite package manager. It’s the way package management should be done: portably.

                                                                                                                                                    Major props to the pkgsrc devs, who have made my job more consistent and significantly more pleasant. You guys are fighting the good fight.

                                                                                                                                                    1. 6

                                                                                                                                                      I’m reading shitty Space Opera, and I’m not afraid to admit it!