1. 4

    You can translate almost any algorithmic problem into a graph-based model. Often this lets you find more efficient algorithms (mostly already written by other people).
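
    A toy illustration of the idea (my own sketch, nothing standard assumed): treat "fewest +1 / ×2 steps to turn a into b" as a shortest-path question over an implicit graph, then hand it to plain BFS.

```typescript
// Model "fewest +1 / *2 steps from a to b" as shortest path in an implicit
// graph: nodes are numbers, edges go x -> x+1 and x -> 2x. Plain BFS solves it.
function minSteps(a: number, b: number): number {
  const seen = new Set<number>([a]);
  let frontier: number[] = [a];
  let steps = 0;
  while (frontier.length > 0) {
    if (frontier.includes(b)) return steps;
    const next: number[] = [];
    for (const x of frontier) {
      for (const y of [x + 1, x * 2]) {
        // Cap the search space so BFS terminates even when b is unreachable.
        if (y <= 2 * b && !seen.has(y)) {
          seen.add(y);
          next.push(y);
        }
      }
    }
    frontier = next;
    steps++;
  }
  return -1; // unreachable
}
```

    Once the problem is phrased as a graph, the "algorithm" is just textbook BFS — which is exactly the point.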

    1. 1

      I think the best docs can do is give you an intuition. Oftentimes there's a mix of datatypes and optimizations in play that can make it super unintuitive. If you want fast code, eventually you're going to have to reach for a profiler and actually measure with differently sized inputs.
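
      In the spirit of "actually measure", a crude sketch (my own, not a real profiler) that times the same operation at several input sizes — often the quickest way to see the growth behaviour the docs only hint at:

```typescript
// Crude measurement sketch: time the same operation at several input sizes.
// A real profiler tells you *where* time goes; this only shows how runtime
// grows with n, which is often the first thing you want to know.
function timeMs(f: () => void): number {
  const start = Date.now();
  f();
  return Date.now() - start;
}

const timings: Array<[number, number]> = [];
for (const n of [1_000, 10_000, 100_000]) {
  const xs = Array.from({ length: n }, (_, i) => i);
  timings.push([n, timeMs(() => xs.reduce((a, b) => a + b, 0))]);
}
for (const [n, ms] of timings) {
  console.log(`n=${n}: ${ms} ms`);
}
```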

      1. 1

        This looks really cool, the quality of the talks looks amazing. I kinda want to submit but I’m afraid I won’t be able to pull together something awesome enough.

        1. 1

          I encourage you to submit something anyway! Some of my favorite talks have been about something that seemed trivial to the speaker, but that I’d never even heard of before :)

        1. 13

          This is more like architect or engineering choices, not really what you’d normally consider CTO stuff.

          1. 7

            Depends on the size of the startup. CTO can also amount to basically a team lead of a few devs, or possibly the entire dev team.

            1. 3

              Yeah, just another case of words meaning different things to different people. 👍

              1. 2

                Then why call yourself CTO?

                1. 4

                  When the team is 1 person, you're the CTO of yourself!

            1. 1

              Makes me feel frustrated to see the facebook patents license showing up again on something I’d really like to use.

              1. 7

                Seems like a pretty language. The whole "anything divided by zero is zero" thing means it's probably not great for anything math heavy, as it will pave over legitimate errors.

                1. 7

                  That seems like such a bizarre design decision - I wonder what motivated it. It’s definitely something that would’ve caught me off guard.

                  1. 3

                    Division by zero should at least be inf if it's not going to be an error :P

                    1. 2

                      You could easily define a division function that checks if you're dividing by zero and emits an "exception". (I put it in quotes because exceptions must be handled at some point in the stack.)

                      So I guess if you want to do heavy math you’d do that?
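
                      A sketch of the two styles under discussion (illustrative TypeScript, not Pony syntax; the function names are made up):

```typescript
// Total division, in the spirit of Pony's choice: never fails, x / 0 is 0.
function divTotal(a: number, b: number): number {
  return b === 0 ? 0 : a / b;
}

// Partial (checked) division: surface the error instead of paving over it,
// which is what you'd want for math-heavy code.
function divChecked(a: number, b: number): number {
  if (b === 0) throw new RangeError("division by zero");
  return a / b;
}
```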

                      1. 2

                        The official tutorial has a “gotcha” page for division by zero that explains their thinking a bit. It originally used a partial function that would throw an exception if you divided by zero. They later changed it to a total function that returns zero in the case of division by zero.

                        From a mathematical standpoint, it is very much insane. From a practical standpoint, it is very much not.

                        From a practical perspective, having division as a partial function is awful. You end up with code littered with trys attempting to deal with the possibility of division by zero. Even if you had asserted that your denominator was not zero, you’d still need to protect against divide by zero because, at this time, the compiler can’t detect that value dependent typing.

                        Here’s a recent discussion on Reddit of this decision.

                        To me it seems like the language makes you do something with all possible errors but when it came to actually programming that way, the authors found it exhausting. There are a lot of people that find Go’s error handling or Swift’s optional handling similarly tiresome.

                        1. 2

                          Oh geez, this really grinds my gears. I’m trying not to be nasty about this, but the only explanation of this is in a sentence that’s not finished:

                          Even if you had asserted that your denominator was not zero, you’d still need to protect against divide by zero because, at this time, the compiler can’t detect that value dependent typing.

                          It’s literally “blah blah dependent typing”!

                          I’m sure that they can correct this sentence to at least be grammatically correct. However, the more serious error is a philosophical one.

                          I call this “the map is not the territory”, which is an old idea in philosophy [1]. The type system is a MAP to the territory. The territory is your PROGRAM – i.e. its runtime behavior.

                          The map is supposed to HELP you with your program. In this case, they’ve got it precisely backwards: the map is making your program wrong.

                          You should not make your program wrong to satisfy the type system. That's like demolishing a house because someone misprinted a map of your town. No – someone lives there, and there are real world consequences to it.

                          Also, somebody else mentioned stack overflow and heap allocation failures on Reddit, which are two other things that can happen everywhere (at least in C programs).

                          [1] https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation

                          1. 1

                            Couldn't there be an option for the compiler to insert the checks automatically, with it crashing and dumping the state into a log on a divide by zero? Anything that keeps it from running on with potentially-bogus data.

                          2. 1

                            I think it's an interesting idea. I wonder whether a separate coalescing division operator would be good? Or, keep Pony's semantics for division by zero but supply a separate operator with the traditional semantics.
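
                            One way such a coalescing operator could look, sketched as a plain function (the name and semantics are my guess at the suggestion, not anything Pony actually provides):

```typescript
// Hypothetical "coalescing division": the caller names the fallback, so the
// zero case is explicit at the call site instead of silently becoming 0.
function divOr(a: number, b: number, fallback: number): number {
  return b === 0 ? fallback : a / b;
}

const ratio = divOr(10, 0, Infinity); // caller opts into the IEEE-style answer
```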

                            1. 1

                              Thanks! We’d love to hear more of your thoughts. There are a lot of directions we can take the project and we want to know what the community finds interesting and useful.

                            1. 9

                              I read it a few months ago, but just a recommendation: The Phoenix Project is super good, both as a novel and as a learn-about-how-to-do-enterprise-IT book.

                              1. 2

                                Looks like fun, I just picked it up. Thanks!

                              1. 5

                                It kind of sucks that you can’t upgrade basic things like RAM or the SSD, but I guess I kind of expect it now from laptops.

                                1. 32

                                  When you think about it, what's replaceable and what's repairable is kind of arbitrary. You used to be able to replace L2 cache (and I had a 386 computer where it was bad, subsequently replaced), but I don't think many people would be happy with the performance of a system where L2 was two inches away from the CPU, and the signal timing constraints that would impose. Hell, you used to be able to replace the ferrite rings in your core memory one by one.

                                  What do you do if you have bad RAM? You toss the whole $100 stick, right? But that’s only one bad chip out of 16. You’re throwing away $95 in perfectly good RAM! But rarely do you see complaints about that.

                                  I speculate that people establish a kind of baseline based on the level of integration on their first computer, and then everything after that is damn kids on my lawn. But the irreparable integrated components in that first computer are just the natural order of things, the atomic building blocks of our computing universe.

                                  1. 7

                                    What do you do if you have bad RAM? You toss the whole $100 stick, right? But that’s only one bad chip out of 16. You’re throwing away $95 in perfectly good RAM! But rarely do you see complaints about that.

                                    But my screwdriver kit is at the other end of the house, so I would probably just resort to mapping it away (Windows can do this too, it seems).

                                    Maybe I am just lazy, but a kernel argument seems like less effort :-)

                                    1. 2

                                      Oh that’s pretty neat. I didn’t know this was possible in windows. Thanks for sharing.

                                    2. 5

                                      I think there's something to what you're saying, but I do think IC design/integration and outer packaging design are somewhat different topics, and the latter doesn't go as monotonically in the direction of more integration. The early '80s Apples were notorious for having "no user-serviceable parts", for example, but then later Macs went back to having user-serviceable parts (before going back, once again, to not having any), for reasons that didn't have much to do with chip design in either case. And the trend of glued-together, unserviceable outer packaging goes far beyond computers, to even things like toasters, which used to be more easily repairable because you could take out the screws, fix stuff, and then put the screws back in, but now aren't, simply because of how the final assembly is done. Lots of stuff across many market segments is now stamped or glued shut in a way not designed to be non-destructively opened.

                                      Which is not to say there aren’t good reasons for that trend too, I just think whether cases (computer or toaster) are glued/crimped vs. screwed shut isn’t quite the same question as whether L2 cache should be an independently replaceable module.

                                      1. 3

                                        I speculate that people establish a kind of baseline based on the level of integration on their first computer, and then everything after that is damn kids on my lawn.

                                        No, people very reasonably compare what can be easily upgraded on a modern desktop computer: CPU, RAM, and storage. One can convincingly make the argument that the cooling needs of the CPU are too complex to allow someone to just pop in whatever processor they want in the tight form factor of a laptop, but there are literally no justifications for DRAM and SSDs being chip-down except to save physical space.

                                        What do you do if you have bad RAM? You toss the whole $100 stick, right? But that’s only one bad chip out of 16. You’re throwing away $95 in perfectly good RAM! But rarely do you see complaints about that.

                                        No, you toss out the $2K computer unless you bought the extended warranty. We’re talking about design decisions that make the problem you’re describing an order of magnitude worse.

                                        You’re also comparing soldered components on an industry-standard form factor to components soldered to a proprietary motherboard. You can’t go to company B and buy a replacement MLB for the laptop from company A. However, you can choose whatever DIMM vendor you want when you’re replacing your faulty DIMM.

                                        1. 2

                                          No, people very reasonably compare what can be easily upgraded on a modern desktop computer: CPU, RAM, and storage.

                                          But what makes those the things that can be reasonably upgraded, except for the fact that they already are? Again, you're establishing a baseline that's simply a snapshot in time. Why isn't it reasonable for me to expect that I can upgrade my hard drive by opening it up and dropping a new platter onto the spindle?

                                          1. 3

                                            Why isn’t it reasonable for me to expect that I can upgrade my hard drive by opening it up and dropping a new platter onto the spindle?

                                            Because the inside of the hard drive has to be dust-free for safe operation and most consumers do not have a clean room?

                                            Or, as in the earlier example, the L2 cache has to be close, i.e. integrated into the CPU?

                                            These are actual physical limitations of the hardware, not "we glued the case shut because we wanna sell a new laptop every 2 years, when before we had newer and faster CPUs to drive sales".

                                            1. 2

                                              But what makes those the things that can be reasonably upgraded, except for the fact that they already are?

                                              They inherently can be easily upgraded since the technological ability to do that at low costs exists and has been proven in actual products. A vendor doing those in a way that can’t be upgraded easily should be assumed to be doing planned obsolescence or some other predatory behavior unless they have convincing argument for why it’s beneficial to consumers. I see that argument to a degree in something ultra-small with significant energy and cost issues like mobiles. Usually in anything decent size they’re doing it for predatory reasons despite the fact they could do it differently with interchangeable stuff. Most of the different form factors that became available to many suppliers each started with a company or group of them doing the latter.

                                              1. 1

                                                Are we talking in circles? At any given time, the market has proven that all the not-yet-integrated parts are viable by the very fact that they're not yet integrated. Once upon a time this included L2 cache. Before that, when your disk drive was the size of a dishwasher, it included the platters in the drive. The market proved that upgradeable hard drive platters could exist because they did exist.

                                                If you make a list of ok to integrate and not ok to integrate parts, and then I go back and ask someone from 1997 to make the same list, and then someone from 1977, I find it very improbable that I’m going to get three identical lists. So what makes the 2017 list a natural law? Why not the 1997 list? Could the 2037 list also be different, or have we reached final enlightenment and uncovered universal truth?

                                                1. 3

                                                  L2 cache is on die to make the CPU run faster.

                                                  A single speck of dust would ruin a modern hard drive platter, so a normal living room isn’t a suitable place to handle and replace them.

                                                  The technologies in use in 1977 and 1997 were different from what we have today, so the design decisions change.

                                                  Why are socketed RAM and storage not present in some laptops? Because the manufacturer (and apparently many consumers) want the laptops to be lighter and thinner. There are no other benefits but many costs to this decision. Those costs ruffle prosumer feathers. You can call it “arbitrary” all day, but someone is paying those costs.

                                                  Why is the fuel port not on the roof of my car? Because that would be a huge pain in the ass – like throwing away an entire laptop because my chip-down SSD ran out of usable blocks faster than expected.

                                              2. 2

                                                you’re establishing a baseline that’s simply a snapshot in time

                                                Yes, I am. Today we can easily swap RAM and storage modules in laptops. We are quickly losing that ability, and the only justification is form factor.

                                            2. 2

                                              I speculate that people establish a kind of baseline based on the level of integration on their first computer

                                              I don’t know, I got my first computer in 1990 and it was certainly a different world. I still build my own computers for gaming, but my laptops for the past 15+ years have been pretty much 2-3 year things, maybe a RAM upgrade or HDD/SSD swap, then buy a new one. I think I’ve adjusted to how they’re built more than anything.

                                          1. 6

                                            The C# team has, for now, chosen to do the opposite of marking a variable as non-nullable; all reference types will become non-nullable by default, and you can mark the type of a variable as nullable by decorating it with ?, similar to nullable value types. Using a non-nullable variable that might be null (because you didn't check it yet) will result in a warning, as will assigning the value of a nullable variable to one that is non-nullable.

                                            This is really exciting!
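
                                            TypeScript's strictNullChecks works analogously, and can sketch the idea (this is TypeScript, not the C# feature itself):

```typescript
// TypeScript analogue with "strictNullChecks": 'string' excludes null by
// default, and you widen with '| null' where null is actually possible.
function greet(name: string): string {
  return "hello, " + name;
}

function firstUser(users: string[]): string | null {
  return users.length > 0 ? users[0] : null;
}

const u = firstUser(["ada", "grace"]);
// greet(u);               // compile-time error: 'string | null' is not 'string'
let greeting = "nobody here";
if (u !== null) {
  greeting = greet(u);     // narrowed to 'string', so this compiles and runs
}
```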

                                            1. 5

                                              Holy moley, I wouldn’t think they would make a breaking change like that.

                                            1. 14

                                              I'm trying to think of protocols or standards that have been implemented by many parties, even competitors, that have been regularly updated and improved over years or decades, and I'm drawing a blank. I'm curious to find strategies that have worked. It feels like a lot of "successful" standards are immediately trapped into stagnation by their popularity. Even standards designed with extensibility in mind inevitably want some vital change that's backwards incompatible, and we end up with the next version stuffing everything into a comment (early inline js), tolerance of invalid data (unknown html tags to add stylesheet links), special constructions (docstrings, js pragmas), etc. The alternative seems to be forming a committee and drafting an RFC or other standard that is considered fast-moving if a new version comes out once per decade (and that's if committee members don't destroy it by using it as a stalking horse or proxy war).

                                              HTML has been a nightmare. IP stalled for decades at IPv4. DNS stalled for longer. CSS had a rough start and suffers fits and starts, but the “core and modules” system of CSS3 (and “CSS4”) has done well over the last decade. Email has deliberate extensibility in headers and protocols… but it took about 15y after basic client-server encryption was an obvious necessity for it to become ubiquitous, end-to-end encryption will probably never happen, and IMAP makes me miss the 90s browser wars. Jabber replaced a mess of proprietary protocols for a few years before fracturing back into walled gardens.

                                              I feel like I’m missing something obvious. or totally ignorant of some industrial CNC standard or avionics protocol or something, can someone point out a good example of a long-lived protocol with regular improvement? Or maybe my scale is wrong - it takes 5+ decades to replace hardware standards (adding grounding pins to the NEMA electrical outlet standard), maybe I should be thrilled it only takes 1 or 2 in software.

                                              1. 20

                                                I think the problem is that you’re thinking about this the wrong way, and that a lot of other people are too.

                                                The fundamental question is what is the benefit of a protocol or standard that constantly improves?

                                                I think an answer to this is “not a hell of a lot”.

                                                IPv4 has worked and worked well enough for decades. IPv6 has failed in a lot of ways because people kept piling on shit to make it spiffier and more academic and awesome, and in so doing kept it from ever being easy to roll out or quite finished. Likewise, something isn't "stalled" if it is continuing to deliver value.

                                                HTML isn’t that gross, especially once CSS came out. It’s as good as it ever was for displaying documents. It’s a reasonable approximation at a 2D scenegraph with automatic layout capabilities. Certain implementations were terrible, but that’s not the fault of HTML but instead the vendors.

                                                The main takeaway here is that both worked and worked well enough, and it was worth more to freeze them than to keep updating them. Protocols are centered around conversations, and if the subject matter of a conversation doesn't change (e.g., how to send and receive byte buffers with KV metadata, as in HTTP) there is no reason to continually add on things that are outside of that.

                                                1. 8

                                                  Thank you, I appreciate this response. To unpack what I meant by "stalled" in the case of CSS: at points, obvious "next features" that people wanted had to be kludged around for years (flexbox addresses most of the missing layout/grid features), and support was painfully uneven, especially for the first ten years or so.

                                                  1. 2

                                                    Ah, thank you for your clarification!

                                                2. [Comment removed by author]

                                                  1. 9

                                                    WiFi too. Lots of vendors, lots of versions over time (802.11: a, b, g, n, ac).

                                                    1. 3

                                                      Ethernet and WiFi are really good examples.

                                                      Maybe hardware standards (like Ethernet and WiFi) make progress faster than software standards (like IP, DNS, HTML and CSS) because hardware vendors have a strong incentive to converge and agree on new features and new specifications: they need them to justify customers buying their new products?

                                                      I think big players like Google, Facebook, Twitter, Microsoft and others agreed on HTTP 2.0 because it’s a net win for them in terms of network/server usage and user experience.

                                                    2. 10

                                                      USB is a reasonably good example of standards that are well thought-out and long-lived, and yet manage to productively evolve, often with impressive backwards compatibility.

                                                      The set of OS semantics we broadly call Unix have lasted a long time

                                                      PostScript

                                                      The versioning schtick of TeX and Metafont is aimed at answering this question; whether it can be said to be a big success is a different question though.

                                                      …?

                                                      1. 4

                                                        IP stalled for decades at IPv4

                                                        I think that's more "if it's not broke…". TCP has had extensions and options and development. IPv6 was standardised long in advance of its actual need (maybe that's the problem…)

                                                        Email has deliberate extensibility in headers and protocols…

                                                        They were retrofitted in a back compatible way. RFC821 doesn’t know about EHLO, RFC822 doesn’t know about MIME or charsets in headers.

                                                        IMAP makes me miss the 90s browser wars.

                                                        Interesting. The protocol is opinionated, but I’ve not followed recent developments - what’s the problem here?

                                                        […email…] end-to-end encryption will probably never happen

                                                        S/MIME and PGP have been standardised for a long time. I think that’s not a protocol failure but an incentive/commercial/UX failure. (One can argue that the protocol forces poor UX, which is perhaps fair but I’m not sure I understand that well enough).

                                                        On balance, I'd say the RFC approach has worked well. I don't know how healthy the current IETF RFC system is, but in the past lots of people put the effort in to build interoperable systems which could run at "internet scale".

                                                        I actually think the problem is that since google search demonstrated you can scale a “single website” to “internet scale”, the assumption that you need to implement scalable, interoperable protocols to do big things on the internet was broken, perhaps reducing the incentive and importance of standardisation efforts.

                                                        1. 2

                                                          SCSI

                                                          1. 1

                                                            We should just stick with Gopher

                                                            1. 1

                                                              USB

                                                              SCSI

                                                              SAS

                                                              ATA

                                                              PCI bus

                                                              x86 instruction set

                                                              C and C++ languages

                                                              POSIX

                                                            1. 4

                                                              What about cache misses though?

                                                              1. 2

                                                                I remember the cuckoo hashing papers going into more depth about cache friendliness, and this paper assumes you're familiar with those.
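
                                                                For readers who haven't seen those papers, a minimal sketch of the core idea (my own toy code, not the papers' implementation): every key has exactly two candidate slots, so a lookup touches at most two locations — i.e. at most two cache lines.

```typescript
// Minimal cuckoo hash sketch: two tables, two hash functions. A lookup checks
// exactly two slots; inserts evict the occupant into its alternate slot.
class CuckooTable {
  private t1: (number | null)[];
  private t2: (number | null)[];

  constructor(private size: number) {
    this.t1 = new Array(size).fill(null);
    this.t2 = new Array(size).fill(null);
  }

  // Two cheap, deliberately different hash functions (toy quality only).
  private h1(k: number): number { return k % this.size; }
  private h2(k: number): number { return Math.floor(k / this.size) % this.size; }

  contains(k: number): boolean {
    // At most two probes, no matter how full the table is.
    return this.t1[this.h1(k)] === k || this.t2[this.h2(k)] === k;
  }

  insert(k: number, depth = 0): boolean {
    if (this.contains(k)) return true;
    if (depth > 32) return false; // give up; a real table would rehash/grow
    const i = this.h1(k);
    if (this.t1[i] === null) { this.t1[i] = k; return true; }
    // Evict the occupant of t1 into its t2 slot, cascading if needed.
    const evicted = this.t1[i]!;
    this.t1[i] = k;
    const j = this.h2(evicted);
    if (this.t2[j] === null) { this.t2[j] = evicted; return true; }
    const evicted2 = this.t2[j]!;
    this.t2[j] = evicted;
    return this.insert(evicted2, depth + 1);
  }
}
```

                                                                The cache-friendliness argument is exactly that two-probe bound: a chained table can follow an unbounded number of pointers, each a potential cache miss.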

                                                              1. 1

                                                                I bet they do this with their other components as well.

                                                                1. 1

                                                                  Those big chunks of thick C++ with no code coloring made my brain scream out in pain.

                                                                  1. 3

                                                                    I'm @Rickasaurus. I write/speak about functional programming and run some F# related events. My site hasn't been updated much lately, but it's mostly because I'm embarrassed about my antiquated theme. Hopefully I'll find the time to get it fixed soon.