1. 6

    Smalltalk has a tool where methods can be found using example values. And Smalltalk has many other tools for discovery. Other languages could provide similar capabilities, but rarely do.

    1. 6

      Smalltalk proper doesn’t, though Squeak does (and I assume Pharo, also). But that’s largely a toy: MethodFinder only goes through a list of pre-selected methods to avoid having it accidentally run e.g. Object>>halt or Smalltalk class>>saveAndQuit:, so it’s generally only something I suggest to people who are very new to Squeak. And even there, I hesitate. It’s a pre-screened list, so you may not discover everything and end up reimplementing a method that already exists. And some things are just not discoverable through that interface: there is no conceivable way of describing “I want six random numbers between 17 and 34,” even though Squeak has methods for that.

      The real strength of Smalltalk is the same as the one hwayne identified in the context of Python: the standard library is well-organized, even grouping methods within a given class into useful categories. Combined with its excellent IDE, you have a real likelihood of discovering what functionality exists and finding an example of practical usage within the image.

      MethodFinder can be a part of that discoverability, in a very narrow way, but I really feel as if it’s more an antipattern and a crutch than a genuinely useful tool.

      1. -7

        Tedious Lobster debate straight ahead. I’d rather quit Lobsters than go through such a thing one more time. Bye.

        1. 11

          I honestly wasn’t looking for a debate; I was just assuming that most people here wouldn’t have used MethodFinder or known how it worked, and I didn’t want to paint it as more than it was. I’m sorry that you’ve decided to leave the community, and I hope you return.

          1. 7

            Don’t blame yourself too much; people who leave like that have likely been on the edge of leaving for quite some time. This was probably just the straw that broke the camel’s back.

            1. 7

              His positions were outliers in a lot of the discussions, ones people either disagreed with or lacked the historical context to fully understand. That he brought those here was a good thing. That they’d get argued with a lot was inevitable, since he was going against the flow. Watching it, I worried some time ago that he might be thinking of leaving, since his comments rarely got much positive feedback. It definitely built up over time.

            2. 2

              Don’t mind him. That was a good comment. I haven’t touched Squeak in years, but it was one of the very first apps I made an RPM package for, back around Red Hat 6.0. In fact, there’s even still a dead link to it on their wiki.

              I didn’t play around with it much, but I do remember it did have that API browser. Comments like yours are important because you break down the thing being discussed, talk about your experiences with it, its strengths and weaknesses, etc. It’s not just semantics either, as Squeak and Smalltalk are two different things.

            3. 5

              Well, I’ll miss having you here. Thanks for the things you taught me about Smalltalk.

        1. 3

          “OOP was concerned with representing reality”

          This is a widely held belief, but it’s neither the most useful definition nor the original intent. Likewise, classification through hierarchical taxonomies is not an essential aspect of OOP, but an unfortunate offshoot of the popular “representing reality” mindset.

          The original intent and most useful concept of OOP is essentially message passing. Smalltalk’s early (ca. 1972) evolution and Hewitt’s actors influenced each other through first-person exchanges between Kay and Hewitt.

          1. 3

            The original intent and most useful concept of OOP is essentially message passing. Smalltalk’s early (ca. 1972) evolution and Hewitt’s actors influenced each other through first-person exchanges between Kay and Hewitt.

            I think there are a couple of common misconceptions here. The first is that Smalltalk was the “original intent” of OOP. The first OOP language was Simula 67 (ca. 1967), which had polymorphic dispatch and inheritance. IIRC Smalltalk and Simula were developed independently and each contributed ideas to “modern” OOP.

            The second is that there is an “essential aspect” of OOP at all. There is no “essential” OOP in the same way there is no “essential” FP: Lisp, APL, and ML are all considered FP languages despite being wildly different from each other. I’d argue that there are a few common “pedigrees” of OOP that are all influential and all contribute ideas that most modern OOP languages consider “essential”:

            • Simula: Modularization, Dynamic Dispatch
            • Smalltalk: Message Passing
            • CLU: Abstract Data Types, Generics
            • Eiffel: Contracts, Class Invariants*

            I think part of the reason we assume Smalltalk is the “foundation” of OOP is because the other foundational languages, for the most part, aren’t well-known today.

            *I’ve read that CLU had contracts first, but I can’t find a primary source on this.

            1. 4

              Alan Kay, The Early History of Smalltalk…

              “The whole point of OOP is not to have to worry about what is inside an object… data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic any desired behavior.”

              “Though it has noble ancestors indeed, Smalltalk’s contribution is a new design paradigm—which I called object-oriented—for attacking large problems of the professional programmer, and making small ones possible for the novice user.”

              “[Dedicated] To SKETCHPAD, JOSS, LISP, and SIMULA, the 4 great programming conceptions of the sixties.”

              “It is not too much of an exaggeration to say that most of my ideas from then on took their roots from Simula—but not as an attempt to improve it. It was the promise of an entirely new way to structure computations that took my fancy.”

              “In 1966, SIMULA was the “better old thing,” which if looked at as “almost a new thing” became the precursor of Smalltalk in 1972.”

              1. 2

                Is Haskell missing the whole point of Functional Programming because it’s not a Lisplike?

                1. 2

                  The way I see it, if you run through all the combinations of features in OO/Actor and then all the combinations of features in FP, the elemental thing seems to be ‘tell’ in OO/Actor and ‘ask’ in FP. ‘Tell’ enables asynchrony and ‘ask’ enables laziness. They are quite high-level, but different, mechanisms for modularity.

                  1. 2

                    I’m sorry. Who is claiming Haskell is “missing the whole point of functional programming”?

                    1. 1

                      It’s a point of comparison. While AK was very important in the foundations of OOP, he doesn’t have grounds to claim he’s the founder or even necessarily the most significant contributor. And just because modern OOP diverges from his vision does not mean that it’s in some way diminished because of that.

                      Similar to how Haskell is very different from lisp but still counts as an FP language.

                      1. 1

                        If the history you’ve got is not the history you want, just make it up as you go along. I have no interest in a discussion like this.

                        1. 1

                          What I’m saying is that equating OOP with just Alan Kay is historically incorrect. He played a big role, yes. But so did Kristen Nygaard, David Parnas, and Barbara Liskov. Their contributions matter just as much and ignoring them is historical revisionism.

                          1. 1

                            I never equated OOP with “just Alan Kay”. I equated him correctly with coining the term and inventing the first (completely) OOP language, Smalltalk. Parnas and Liskov played roles, certainly in modularity, information hiding, and abstract data types. Later on in the history of OOP, around 1986-7, I recall Luca Cardelli published a paper which was intended to provide a taxonomy of OOP-related terms. He defined languages like CLU (abstract data types, parametric polymorphism, but no message passing / runtime dispatch) as “object-based”, reserving “object-oriented” for languages with message passing and runtime dispatch.

                            Certainly Kay never gave Nygaard short shrift, emphasizing in countless forums Simula’s huge role in OOP.

                  2. 2

                    That’s indeed what Alan Kay thinks, but he’s not exactly a neutral historian here, since he’s obviously a pretty strong partisan on behalf of the Smalltalk portion of that lineage.

                    1. 2

                      “The Early History Of Smalltalk” is a reviewed, published (ACM HOPL) history of the early years of OOP by the person who originally used the term. If you wish to refer to that as being a “strong partisan” then I guess that’s your privilege.

                      1. 2

                        If we’re going to use the ACM as an authority here they officially recognize Dahl and Nygaard as the inventors of OOP.

                        1. 1

                          The ACM is correct, and this coincides exactly with Kay’s attribution of Simula as a “better old thing” and “almost a new thing”. While Simula introduced all the key concepts of OOP, Kay coined the term to describe the new thing (“everything is an object”), not Simula, the almost-new thing (ALGOL plus objects for some things).

                          It’s a fine line distinguishing the two, which Kay more than acknowledges and which does not disagree with the ACM’s recognition of Nygaard and Simula in the history of OOP.

              1. 4

                I dislike these kinds of posts because instead of discussing effective uses of Go they discuss how to imitate language X in Go. That’s just not an appealing way to use a programming language.

                1. 6

                  Many developers learning LISP and functional languages have said it changed how they think about some problems, with their coding style picking up on that. Some people also imitate useful idioms to get their benefits. So, with no claim about this one, I think it’s always worth considering in general how one might expand a language’s capabilities.

                  Double true if it has clean metaprogramming. :)

                  1. 2

                    I don’t entirely disagree. Maybe it’s just the quality of most of these posts that leaves something to be desired.

                  2. 6

                    The goal of the post was to show how you would solve problems in Go that you would commonly use sum types for in other languages, not how to “get” sum types in Go.

                    I agree that the first two approaches are trying to imitate sum types, and there are disadvantages to that. But I would argue that using a visitor pattern is quite different, and is the “Go way” (as in it’s the only way that works harmoniously with the type system).
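                    For readers who haven’t seen the pattern, here’s a minimal sketch of that visitor shape in Go. The Shape/Circle/Rect names and the area operation are my own for illustration, not from the article:

                    package main

                    import (
                        "fmt"
                        "math"
                    )

                    // ShapeVisitor has one method per variant, so the set of variants is closed.
                    type ShapeVisitor interface {
                        VisitCircle(c Circle)
                        VisitRect(r Rect)
                    }

                    // Shape values admit operations only via double dispatch through Accept.
                    type Shape interface {
                        Accept(v ShapeVisitor)
                    }

                    type Circle struct{ Radius float64 }
                    type Rect struct{ W, H float64 }

                    func (c Circle) Accept(v ShapeVisitor) { v.VisitCircle(c) }
                    func (r Rect) Accept(v ShapeVisitor)   { v.VisitRect(r) }

                    // area is one operation over the closed set; adding an operation
                    // means adding a new visitor, not touching the variants.
                    type area struct{ result float64 }

                    func (a *area) VisitCircle(c Circle) { a.result = math.Pi * c.Radius * c.Radius }
                    func (a *area) VisitRect(r Rect)     { a.result = r.W * r.H }

                    func main() {
                        for _, s := range []Shape{Circle{Radius: 1}, Rect{W: 2, H: 3}} {
                            var a area
                            s.Accept(&a)
                            fmt.Println(a.result)
                        }
                    }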

                  1. 1

                    “When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. And since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.”

                    Simultaneously a straw man and a false dichotomy. Not written by someone who understands logic?

                    1. 2

                      The author is Leslie Lamport, who won the 2013 Turing Award for his work on distributed algorithms.

                      1. 1

                        I’m aware of that. My question is rhetorical.

                        1. 1

                          What he may have meant is that programmers using the biological approach, with things like information hiding, guard functions, and testing, built complex programs that usually work as intended. That’s without knowing anything about formal logic or mathematical aspects. Writers covering things like LISP used to compare it to biological approaches, arguing it was more adaptable, whereas the formalized stuff failed due to rigidity and slowness. Just reading Leslie’s remark, someone might assume all biologically-inspired approaches were barely comprehensible or failures, whereas the formal or logical methods kept outperforming them. Most of the latter actually failed.

                          I still enjoyed reading it despite that inaccuracy. Leslie’s mind is interesting to watch in action, with its down-to-earth style. This reminded me of a computer scientist who thought like a biologist to overcome limitations CompSci folks were facing. It led him to do everything from inventing massively-parallel processing to using evolution to try to outperform human designers. He always claimed biology was better. A lot of the better write-ups are paywalled or disappearing with the Old Web, but I can try to dig some out this week if you’re interested.

                          1. 2

                            Please do dig it up; I’m quite intrigued to see where their solutions worked well, and where they didn’t.

                            1. 2

                              With the way ML/AI is going, it’s quite possible many future systems could be much closer to biology than human design. AI system-design software will just do whatever works as long as its optimization function says it’s good.

                              1. 2

                                I am in no way questioning Lamport’s brilliance or contributions in general. However, most people, brilliant or otherwise, have blind spots. I believe he’s betrayed some of his here, and that in itself is interesting and worth reading.

                          1. 3

                            “It would be difficult to reproduce this in a strict language because how could you write a function that produces all natural numbers without looping forever?”

                            Of course there are multiple ways of doing this in strict languages. The expressiveness of the solution depends on the capabilities of the specific language. Whether it’s better to be strict by default or lazy by default seems to be a matter of preference.
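                            As one minimal sketch (my example, not from the article), a Go closure can produce the natural numbers on demand, one per call, without looping forever:

                            package main

                            import "fmt"

                            // naturals returns a generator: each call yields the next natural
                            // number, so the "infinite list" exists only one element at a time.
                            func naturals() func() int {
                                n := 0
                                return func() int {
                                    v := n
                                    n++
                                    return v
                                }
                            }

                            func main() {
                                next := naturals()
                                for i := 0; i < 5; i++ {
                                    fmt.Println(next()) // 0 1 2 3 4
                                }
                            }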

                            1. 3

                              Unless you need to know time or space behavior ahead of time. Examples would be real-time segments or suppressing covert channels. So far, strict and low-level languages seem to be inherently easier to check for that.

                              1. 2

                                Yes, agreed, there are certain situations like these that would lean more toward strict evaluation. The general case is closer to a toss-up.

                              2. 3

                                I agree it’s possible in a strict language, but I encourage you to keep reading the article. Java has Iterator, Scala has Stream, Python has generators, etc. The point I make in the article is that approximating a lazy list with an Iterator (or Stream, or generator) is less natural and incurs its own complexities.

                                1. 4

                                  I disagree with the statement I originally quoted, above. I don’t believe anything in the article justifies that claim. As I wrote, in some strict languages the expressiveness may be cumbersome. But that is due to the specifics of those languages, not due to strict evaluation per se.

                                  For example, any language with a reasonable macro capability is going to accommodate explicit lazy evaluation pretty well. See, for example…

                                  https://srfi.schemers.org/srfi-45/srfi-45.html
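                                  For a feel of that delay/force style outside Scheme (and without macros), here is a hedged Go sketch; Delay is my own name, loosely mirroring SRFI-45’s memoized promises:

                                  package main

                                  import (
                                      "fmt"
                                      "sync"
                                  )

                                  // Delay wraps a computation in a thunk; calling the returned function
                                  // forces it at most once and memoizes the result, like delay/force.
                                  func Delay(f func() int) func() int {
                                      var once sync.Once
                                      var v int
                                      return func() int {
                                          once.Do(func() { v = f() })
                                          return v
                                      }
                                  }

                                  func main() {
                                      p := Delay(func() int {
                                          fmt.Println("computing")
                                          return 42
                                      })
                                      fmt.Println(p()) // forces: prints "computing", then 42
                                      fmt.Println(p()) // memoized: prints only 42
                                  }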

                              1. 3

                                Hey @patrickdlogan, @GeoffWozniak, @nickpsecurity, and all the others: thanks to your valuable feedback I finally improved the article by adding a section about Grid computing and improving the section dedicated to virtualization.

                                Let me know if they sound fair enough in your opinion :)

                                Thanks again for the support!

                                1. 3

                                  Ok, maybe I need to try to write up a thorough history for new researchers to draw on, since there’s a lot of it. For one, there’s nothing new about the main concepts of cloud computing: mainframes were already doing them. They had big machines running virtualized workloads with I/O accelerators that users connected to with dumb terminals. They were centrally managed. The machines were leased, charging for usage by the minute or something like that (can’t recall). Here’s both a description of mainframe virtualization features that are cloudlike and a rant on how the wheel is reinvented:

                                  http://www.clipper.com/research/TCG2014009.pdf

                                  http://www.winestockwebdesign.com/Essays/Eternal_Mainframe.html

                                  MULTICS also tried to make computing a utility but it cost too much ($7+ mil per installation). It had better security and usability than a lot of systems.

                                  http://multicians.org/history.html

                                  https://www.acsac.org/2002/papers/classic-multics.pdf

                                  Another factor in the concept of moving workloads onto multiple, external machines was distributed operating systems. They were mainly used for distributing workloads among commodity computers in one location (i.e. a server room). That’s what grid computing does, though. A lot of the tech for Single-System Image in clusters is similar to capabilities cloud vendors developed. These two fields stayed really siloed for some reason, though.

                                  https://en.wikipedia.org/wiki/Distributed_operating_system

                                  https://en.wikipedia.org/wiki/Amoeba_(operating_system)

                                  https://en.wikipedia.org/wiki/MOSIX

                                  https://en.wikipedia.org/wiki/Convergent_Technologies_Operating_System

                                  Then you have the grid computing platforms. For availability and harnessing various supercomputers, the concept of metacomputing was born. I’m not sure how far back it goes since I didn’t research it. I didn’t think it would be easy to get a lot of raw performance or low latency out of the concept when I studied Beowulf Clusters and grids. I’ll leave that one to you. :)

                                  Later, people got tired of the complexity and costs that came with the freedom of managing their own infrastructure. There were solutions to different problems that basically simplified things with a bunch of automation. The industry didn’t adopt them, for politics as usual, outside a few companies here and there trying to play it smart. The result was a move back to the mainframe model, but on commodity hardware and FOSS tech. Definitely an improvement in cost, flexibility, and avoiding lock-in, so long as one keeps their code portable and data onsite. Like mainframe vendors, cloud vendors encouraged an all-in approach, sneakily letting incoming data be free with outgoing data costing money. (wink) Mainframes still have security advantages for LPARs, with availability advantages on top of that. That they change very carefully and slowly, with lots of testing, helps a lot. There are mainframes that haven’t accidentally gone down for decades. That, plus total backward compatibility with new stuff incrementally added, is why big businesses love them.

                                  So, you might want to research mainframes, distributed OSes, and single-system-image tech before putting together your final concept of what happened. The big picture will be spread among them.

                                  1. 2

                                    That’s another awesome insight, @nickpsecurity; I’ll make sure to research a bit more about this! It’s funny how big inventions are pretty often old things redesigned to address more specific problems :)

                                  2. 2

                                    Nice work, putting this together.

                                  1. 18

                                    This history fails to acknowledge the grid computing era, which predates cloud computing by several years. One of the goals of grid computing was much like “serverless” functions, i.e. the ability to have a function run on demand, on any available node in the grid.

                                    1. 22

                                      History begins with the Internet in the world of computing these days. It is an inconvenient truth that virtualization has existed in mainframes since the 1960s.

                                      1. 3

                                        Definitely true. Maybe I can mention this as well :)

                                        1. 2

                                          Really?

                                          1. 7

                                            CP-40 was a research project in 1964 that ran on the 360. IBM released a product from that called VM in 1972. I wouldn’t doubt you could still run it on a z machine. This eventually turned into z/VM, which has a long line of predecessor products.

                                            Edit: Here’s an article from 2009 about it, interviewing one of the people who worked on it.

                                          2. 2

                                            Any good books/sites/anything to read about this? That’s absolutely fascinating!

                                          3. 3

                                            Thank you for your comment, @patrickdlogan. This is definitely a good hint for improving the article; maybe I can add an extra section to provide this bit of history. I will start to dig up some info, so feel free to send me any links you think are relevant for this section :)

                                            1. 3

                                              Probably as good a place as any is this Wikipedia article.

                                              https://en.m.wikipedia.org/wiki/Grid_computing

                                              1. 1

                                                thanks!

                                              2. 3

                                                Here’s a link to one of the old ones that was easy to acquire:

                                                http://toolkit.globus.org/toolkit/

                                                Click What Is Globus at the bottom left to see some familiar-looking concepts in a chart.

                                                1. 2

                                                  BOINC is a similar thing, also open source.

                                                  1. 2

                                                    Thanks :)

                                              1. 4

                                                “could have used a lot more than 128k bytes of RAM.” – Alan Kay

                                                Still astounding after all these years.

                                                1. 5

                                                  “They’re not apps!”

                                                  Key point.

                                                  1. 4

                                                    Yeah. Bret Victor briefly reiterated this a few months ago: https://twitter.com/worrydream/status/881021457593057280 .

                                                  1. 5

                                                    “The Python SPAKE2 code has a bunch of assertions to make sure that one method isn’t called before another”

                                                    This is not a good OOP design; it has nothing to do with Python per se. Yes, in Haskell you use types that can enforce the state transition. In OOP you use objects: not a 100% isomorphism with the type-based solution, but simple, clear, and effective in an OOP setting.

                                                    1. 3

                                                      I don’t think that’s true; the essence of OOP is that you have objects that you send messages to and have no visibility of their internal state, which means there’s always the possibility of sending the wrong kind of message for the current state. Haskell-style programming really does eliminate a broad class of potential errors here.

                                                      1. 3

                                                        Trivially, if something must be started before it can be finished, have the start message take a function whose argument is an object that can receive a finish message. That’s just continuation passing to linearize starting before finishing.

                                                        s start: [ :f | f finish ]

                                                        1. 1

                                                          At which point you’ve shifted away from OO style and into a more functional style.

                                                          1. 1
                                                            1. Smalltalk has had code along these lines for more than 40 years.

                                                            2. That’s just one way I chose to appeal to an FP frame of mind. Here’s another…

                                                            f := s start. f finish
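                                                            For comparison with the type-enforced approach mentioned upthread, the same protocol can also be made misuse-proof statically. A minimal Go sketch, with Session/Started names of my own choosing (nothing here is from the actual SPAKE2 code):

                                                            package main

                                                            import "fmt"

                                                            // Only the Started value returned by Start has a Finish method,
                                                            // so "finish before start" cannot type-check. This is the
                                                            // static-typing analogue of: f := s start. f finish
                                                            type Session struct{}

                                                            type Started struct{ result string }

                                                            func (s *Session) Start() *Started {
                                                                return &Started{result: "negotiated key"}
                                                            }

                                                            func (st *Started) Finish() string {
                                                                return st.result
                                                            }

                                                            func main() {
                                                                s := &Session{}
                                                                f := s.Start()
                                                                fmt.Println(f.Finish())
                                                            }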

                                                    1. 4

                                                      They had a choice?

                                                      1. 2

                                                        Interesting. I like how the message protocol was extended for creating classes and methods with DbC capabilities.

                                                        I posted a related article… https://lobste.rs/s/hwayqa/wrappers_rescue which describes alternatives for implementing asserts and other capabilities in Smalltalk by wrapping classes, instances, and methods.

                                                        1. 2

                                                          I need an assistant that is at least as capable as I am. Every item on your list appears to me to be an activity where working with the assistant will cost you more time, and yield a lower-quality result, than actually doing the work yourself. Or hire someone with much greater capabilities and share the time between you to perform these tasks at a significantly lower overhead.

                                                          Or try to develop automated assistants for some of these. At least that would be more fun.

                                                          1. 3

                                                            Sounds like you should be their assistant.

                                                            1. 3

                                                              Sounds like they’re unwilling to pay for the true value of the assistance they seek.

                                                            2. 2

                                                              Every item on your list appears to me to be an activity where working with the assistant will cost you more time, and yield a lower-quality result, than actually doing the work yourself.

                                                              Of course. Mentoring is supposed to benefit the mentee, not the mentor. This is by design. Look at the problem OP is trying to solve:

                                                              My motivation for asking this is there’s an oversupply of junior devs in Chicago right now. People don’t like hiring them as engineers because they can’t just jump in and be productive (see mythical man month), so I’ve been curious if they’d be more productive (and get experience) as paraprogrammers of sorts.

                                                              1. 4

                                                                Perhaps the question should have been, “What are the best ways for junior developers to gain experience?”

                                                                Rather the question was, “What would you do with a programming assistant?”

                                                                My answer remains: I would not hire such a very junior developer to be an assistant with the activities listed in the original approach. If I actually wanted assistance with those activities, I would hire someone more capable.

                                                                People fresh out of such boot camps are the greenest of beans. I would hire them for other activities than the ones listed if that made sense for the business. I would pay for their education as software developers if that made financial sense for the business. I would not hire them as development assistants as listed in the OP.

                                                                1. 3

                                                                  Perhaps the question should have been, “What are the best ways for junior developers to gain experience?”

                                                                  That’s not the question I’m interested in asking. It’s been asked many, many times before.

                                                                  My answer remains, I would not hire such a very junior developer to be an assistant with the activities listed in the original approach.

                                                                  And that’s completely fair. I strongly disagree, but it’s still a good answer.

                                                            1. 2

                                                              What I read in the original article was about productivity. The author claimed to be more productive in javascript without the typescript compiler, and to be more productive in vue than react.

                                                              1. 1

                                                                The quote about FORTRAN is from C.A.R. Hoare.

                                                                1. 1

                                                                  I recall the Orbit compiler for T uses Y at the source-to-source transformation level, then recovers efficiency in later optimizations. I could be off base.

                                                                  1. 3

                                                                    The following prints true.

                                                                    var pnil P = (*T)(nil); fmt.Println(thing.P == pnil)

                                                                    However, the real problem is that this smells bad; avoid having to inspect interface implementations.

                                                                    The simple solution is having lint warn about comparing an interface via ==

                                                                    1. 7

                                                                      This is just bad semantics. You’d expect that coercing a type to its supertype preserves all properties shared with the supertype, including nil. For “implementation reasons” this isn’t happening.

                                                                      1. 2

                                                                        Not so. Printing all the details…

                                                                        fmt.Printf("%#v %#v %#v %v", t, t2, thing, thing.P == nil)

                                                                        Provides…

                                                                        &main.T{}

                                                                        (*main.T)(nil)

                                                                        &main.Thing{P:(*main.T)(nil)}

                                                                        false

                                                                        Clearly the interface P in the third position is not nil. Rather, it is implemented by a nil *T.

                                                                        It’s fair to dislike the Go interface mechanism, but it is not the case that this is a type/supertype relationship.

                                                                        1. 1

                                                                          Hm, maybe I’m misunderstanding then, but if isNil is a property of all types in Go and I can silently convert a value to an instance of an interface, then I’d expect isNil to be preserved. That silent coercion is the subtyping relationship.

                                                                          Now also clearly this doesn’t quite work from a memory perspective since coercing to an interface means we need to reference the methods somehow or another, but that’s why I think this is a convenient-but-wrong kind of behavior.

                                                                          1. 1

                                                                            Think of Go interfaces as a pair of a type tag and a value of that type. In the original code the call…

                                                                            factory(t2)

                                                                            Go implicitly creates a P interface instance as the pair (*T, nil)

                                                                            And the original predicate asks…

                                                                            Given a pair (*T, nil) and the constant nil are they == ?

                                                                            And the answer is false.

                                                                            In another comment on this page I modified the example to…

                                                                            var pnil P = (*T)(nil); fmt.Println(thing.P == pnil)

                                                                            In this case Go implicitly creates a P interface instance as the pair (*T, nil) and the variable pnil is assigned that value.

                                                                            In this example the predicate asks…

                                                                            Given a pair (*T, nil) and another pair (*T, nil) are they == ?

                                                                            In this case the answer is true.
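                                                                            Putting the whole thing together as a runnable sketch (reconstructing Thing, T, P, and factory from the snippets in this thread; the method M is my assumption, since the interface body was never shown):

                                                                            package main

                                                                            import "fmt"

                                                                            // P's method M is assumed for illustration.
                                                                            type P interface{ M() }

                                                                            type T struct{}

                                                                            func (t *T) M() {}

                                                                            type Thing struct{ P P }

                                                                            func factory(p P) *Thing { return &Thing{P: p} }

                                                                            func main() {
                                                                                t := &T{}
                                                                                var t2 *T            // a typed nil
                                                                                thing := factory(t2) // thing.P holds the pair (*T, nil)

                                                                                fmt.Printf("%#v %#v %#v %v\n", t, t2, thing, thing.P == nil) // ... false

                                                                                var pnil P = (*T)(nil)
                                                                                fmt.Println(thing.P == pnil) // true: (*T, nil) == (*T, nil)
                                                                            }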

                                                                      1. 9

                                                                        Yet another “Go needs generics because Go does not have generics” post that is shallow on analysis, pro and con. The actual Go 2 process will be much more in depth. I see no need to address this topic until then unless someone takes the time to go deeper than every other post over the last several years.

                                                                        1. 1

                                                                          Does the Go 2 process already have some kind of result, apart from the blog post announcing it?

                                                                          1. 4

                                                                            OP: “I want to hear more about the kinds of tricks that would be hard or impossible to replicate in static type systems.”

                                                                            That certainly doesn’t apply to the web, as far as I can tell. Maybe you’re right in some respect, but your two-word post doesn’t give me a lot to go off of.

                                                                            1. 3

                                                                              Certainly parts of the web can be, and are, statically checked prior to their deployment. There is no static guarantee of this for any given component. Nor is there a guarantee, at runtime, the corresponding components will agree on the correctness of any interactions.

                                                                              Moreover, the web in its entirety is a dynamic system. In fact it has all of the characteristics of an “ultra-large-scale system”, including the following characteristics that make “statically checking the web” impossible at this point, if ever…

                                                                              • Have decentralized data, development, evolution, and operational control
                                                                              • Address inherently conflicting, unknowable, and diverse requirements
                                                                              • Evolve continuously while operating, with different capabilities being deployed and removed
                                                                              • Contain heterogeneous, inconsistent, and changing elements
                                                                              • Erode the people/system boundary. People will not just be users, but elements of the system, affecting its overall emergent behavior.
                                                                              • Encounter failure as the norm, rather than the exception, with it being extremely unlikely that all components are functioning at any one time
                                                                              • Require new paradigms for acquisition and policy, and new methods for control

                                                                              https://en.m.wikipedia.org/wiki/Ultra-large-scale_systems

                                                                              1. 2

                                                                                You said a lot of business procurement/marketing mumbo jumbo, but you haven’t explained why you think the web is inherently dynamically typed, which is the question. It’s also a technically specific question, which your answer seems to ignore entirely.

                                                                                Here’s why I think it’s not: there are nice strongly typed frameworks for any kind of web interaction. QED “web stuff” can be nicely expressed in a strongly typed way.

                                                                                1. 2

                                                                                  As I wrote, specific components can be statically checked. No component can rely on any other distributed component being similarly checked or completely compliant.

                                                                                  I guess if you consider any of what I listed in my previous message above as “marketing mumbo jumbo” then we have little further to discuss.

                                                                                2. 1

                                                                                  I see buzzwords, but not much meaning – and definitely no explanation of why these buzzwords imply dynamic typing is necessary.

                                                                                  1. 2

                                                                                    Choose one item from the list you see as particularly buzzwordy and not meaningfully describing the dynamism of the web and let’s discuss it.

                                                                                    1. 1

                                                                                      “Dynamism” isn’t the same thing as “dynamically typed”.

                                                                                      This is incredibly buzzwordy: “Require new paradigms for acquisition and policy”. That literally doesn’t mean anything.

                                                                                      1. 1

                                                                                        Well that definitely came out of the aerospace / military industry as they grappled with what became known as ultra-large-scale systems that far exceeded the traditional contracting, design, and implementation they had been used to for decades.

                                                                                        This definitely characterizes the web vs. the software industry prior to the web. Networked systems prior typically had tight control, if not ownership, of clients, servers, and the protocols between them. Acquisition of almost every aspect of the web is different from the ground up: networking, hardware, software, client and server. I’d be hard pressed to name anything significant that remains as it was. As for policies, the same. (Think about the changes in protocols, security, reliability, privacy, etc.) Maybe you weren’t around for what passed for large scale computing prior to the web. I can go into detail if desired.

                                                                                        But what do you mean by “dynamism is not the same thing as dynamically typed”? Dynamism is essentially accommodating change that had not been planned a priori, i.e. the types are not known or are not significant because they can be accounted for dynamically, through some kind of dynamism.

                                                                              2. 3

                                                                                You must consider HTML and/or HTTP to be dynamically typed. I haven’t thought about it hard. When I did web stuff, they had specific tags/actions requiring specific types of data, in specific ranges in some fields and open-ended in others. That seemed statically typed to me even if you didn’t have to write an annotation. You always know what kinds of data the various tags or headers will contain, even down to the bounds of some.

                                                                                1. 4

                                                                                  HTML is interesting (having not thought about this before) by being typed in the sense that you have to specify the types of the tags, and there’s a bunch of reasonable default behavior that comes with that. But then those types (and the runtime environment, the browser) are amenable to all kinds of overloading, partly-correct or outdated usage, and just making any particular tag behave however you want (like in post-table, pre-HTML5 days when it was divs all the way down). So it’s like… pointlessly typed?