1. 6

    Unfortunately, the reality is that most websites are not designed for slow connections. The same goes for users on outdated browsers and for those who disable JavaScript. When devs tell managers that X doesn’t work in Y browser, or doesn’t have full compatibility, managers will see that it will take ZZZ hours to fix an issue that affects 2% of users.

    1. 3

      I don’t have much pity left for outdated browsers… though there are always exceptions.

    1. 3

      I find this more palatable than the other thread. I’m definitely for college degrees that focus on software engineering, keeping only the courses from CompSci that are considered valuable. Students should learn proper abstraction, interface design, verification strategies, debugging third-party libraries, the balancing act that is safety/security, how compilers can improve/break source code, the basics of distributed systems, and the importance of software configuration management. These come to mind.

      The early material on basic programming should go in an Associate’s degree so they can get jobs quickly. Then their work experience and Bachelor’s studies fuse into a larger, overall lesson.

      1. 3

        This is exactly what a software engineering degree in Canada offers. In Canada, Software Engineering is a discipline for which you can get a Professional Engineering accreditation. That accreditation puts you on the same level as any other engineer (e.g., you legally can, and have a duty to, tell managers to back off if a decision leads to safety concerns).

        Unfortunately, in my experience verification strategies aren’t taught unless you go looking for them. There is more emphasis on project management, legal responsibilities, and the economics of software, as well as OOP design/architecture.

      1. 17

        My current mantra at work is “eliminate dependencies”. I think a lot of developers think something like: “Oh, I need to do X. There is a library for X. It’s free. Surely I will be better off using that library and those library authors are experts at X and I can focus on delivering business value”. For some values of X that’s a pretty reasonable assumption. I use lots of libraries every day. I use an open source Kafka client that I could write myself but probably never would want to.

        In fact, the core competency of my team is serving web requests, and we rely hugely on Go and the net/http library to provide keepalive, transparent HTTP/2 support, a scalable epoll-based network handling stack, etc.

        But a dependency is never free. There’s no such thing. A dependency is something you have to understand and debug. A library likely tries to accommodate a very generic set of use cases for maximum reusability, and it’s quite possible you need only a very small subset of that functionality. So writing those one or two functions yourself may well be a better choice than the ongoing work of importing a 3rd-party framework, staying up to date, managing the risk of the framework disappearing, and auditing it to ensure that it’s safe to run on your production servers. That framework may very well do its job in a very non-optimal way because it has to work in so many different environments and use cases (running on Windows, or supporting Oracle, etc.).

        As an extreme example, it’s unlikely that the left-pad library was worth the risk.
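
        Just to make that concrete, a left-pad is roughly this much code (a TypeScript sketch for illustration, not the npm package’s actual source):

            // Pad `input` on the left with `padChar` until it is `length` characters long.
            function leftPad(input: string, length: number, padChar: string = ' '): string {
              return input.length >= length
                ? input
                : padChar.repeat(length - input.length) + input;
            }

            leftPad('42', 5, '0'); // => '00042'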

        Finally, and this is specific to my experience, a lot of 3rd-party services are just bad. I really hate having to tell someone that their site is down but there’s nothing I can do. I don’t like to pass the buck or throw up my hands and say: “Sorry, your site is down until $provider X fixes their stuff”.

        1. 13

          Surely I will be better off using that library and those library authors are experts at X

          Maybe it’s because I do JS, but lately I’ve been questioning this more and more. Cynical, but a lot of library authors aren’t experts at X. I’ve switched to trusting a small list of reputable Node/JS developers (which is vague and in my head) and evaluating the need for a dependency for the rest. Maybe not fair, but it saves me trouble.

          1. 4

            I reached the same conclusion after evaluating my company’s dependency tree and finding problems with most of the third party libraries our code depended on. Some of them are described here:

            https://kev.inburke.com/kevin/dont-use-sails-or-waterline/

            I focus more on certain areas… I don’t want to rewrite a bcrypt hasher or a Postgres driver. But I’m definitely going to rewrite an API client for your API. In some cases I’ll steal only the parts of a third-party library that I need, put them in my source tree, and remove the rest.

            I also found most uses of lodash to be totally unnecessary; they just increased the complexity of the code.
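
            As a hypothetical illustration (not a snippet from our codebase), many lodash calls map one-to-one onto built-in array methods:

                import * as _ from 'lodash';

                const users = [
                  { name: 'Ada', active: true },
                  { name: 'Bob', active: false },
                ];

                // With lodash:
                const activeNames = _.map(_.filter(users, (u) => u.active), (u) => u.name);

                // The same thing with built-ins, no dependency needed:
                const activeNames2 = users.filter((u) => u.active).map((u) => u.name);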

            1. 1

              I wonder if you’d be willing to share that list? I’m outside of the JS modules community. However, merely publishing the list could cause drama, which I wouldn’t want.

              1. 5

                Like I said, it’s really vague and all in my head. It’s probably not even a list, just people I recognize from my 3-4 years doing Node. Sindre Sorhus, TJ, Dominic Tarr, Max Ogden, Mafintosh, among many others come to mind right now.

                I also give more “points” when the project is in a company’s GitHub, because then I can a) have some confidence more than one person looked at the code and b) learn about the company—they could be a top Node agency or respected in their field.

                If you want to summarize my method, it’s really just “don’t install too many dependencies that have little activity and were made by random people”. Another point is that most people (including me) don’t live on GitHub the way prolific authors sometimes do, and so they ignore or forget about their projects’ issues and pull requests.

            2. 1

              Dependencies are definitely a risk as sources of bugs, but I do not believe we should eliminate them. Instead of eliminating dependencies, you can structure your code in a way that makes switching out libraries/dependencies easy. Dependency Injection would allow you to modify or replace the object that’s creating the issue.
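
              A minimal sketch of that idea in TypeScript (the Hasher interface and the bcryptjs import are just illustrative assumptions, echoing the bcrypt example upthread): depend on a narrow interface you own and inject the library-backed implementation, so replacing the library touches a single class.

                  import * as bcrypt from 'bcryptjs'; // assumed third-party hashing library

                  // The narrow interface our own code depends on.
                  interface Hasher {
                    hash(plaintext: string): Promise<string>;
                  }

                  // One implementation wraps the third-party library.
                  class BcryptHasher implements Hasher {
                    hash(plaintext: string): Promise<string> {
                      return bcrypt.hash(plaintext, 10);
                    }
                  }

                  // Business code receives a Hasher and never imports the library directly,
                  // so swapping the library means writing one new Hasher implementation.
                  class UserService {
                    constructor(private readonly hasher: Hasher) {}

                    async register(name: string, password: string): Promise<void> {
                      const digest = await this.hasher.hash(password);
                      console.log(`registered ${name} with digest ${digest}`); // stand-in for persistence
                    }
                  }

                  // The choice of implementation is made once, at the composition root.
                  const service = new UserService(new BcryptHasher());
                  service.register('ada', 'hunter2');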

            1. 3

              If you read both of these papers, you get something profound:

              Notes on Postmodern Programming : http://www.mcs.vuw.ac.nz/comp/Publications/CS-TR-02-9.abs.html

              Design Beyond Human Abilities (by Richard Gabriel, author of Worse is Better): https://scholar.google.com/scholar?cluster=5397162041763930663&hl=en&as_sdt=0,5&sciodt=0,5

              The theme here is to give up on thinking of software as a well-formed and engineered crystal, i.e. a “modern” structure. I notice this tendency among programmers to want to make a perfect world within a single model that you understand. Everything is in Java in my IDE; everything is modelled with strong types in Haskell, code and data in a single uniform Lisp syntax, single language operating systems, etc.

              Through tremendous effort, you can make your software a crystal. But that betrays your small point of view more than anything. All you have to do is zoom out to the larger system and you’ll see that it’s wildly inconsistent – made of software in different languages, and of different ages.

              For some reason I feel the need to defend the word “post-modern”. Certain types of programmers are allergic to this term.

              Here it means something very concrete:

              “Modern” – we have a model of things, we understand it, and we can change it.

              “Postmodern” – there are multiple models of things – local models that fit specific circumstances. We don’t understand everything, and we can’t change some things either.

              Even though humans created everything in computing, no one person understands it all. For example, even when you make a single language OS like Mirage in OCaml, you still have an entire world of Xen and most likely the Linux kernel below you.

              Another example: a type system is literally a model, and this point seems beyond obvious to me, but lots of people don’t recognize it: you need different type systems for different applications. People seem to be searching for the ultimate type system, but it not only doesn’t exist, it’s mathematically impossible.

              1. 2

                I find this to be a refreshing perspective on software engineering as it does have interesting points. I agree with the sentiment for large software systems. After a certain size, a single individual can only really have a mental model of a small portion of the overall system.

                But for small to mid-sized systems I think this is incorrect. You can definitely have a grand narrative for mid-sized systems, for example by using “post-modern” techniques like design patterns. Most payment processing systems at banks have an architecture team which will gather requirements and create high-level (architecture) diagrams and mid-level diagrams (object diagrams). Essentially the Software Requirements Specification lets us translate the system into a grand narrative that can be understood visually. Additionally, this gets easier the closer you go to bare metal.

                1. 2

                  Maybe a typo, but to clarify: design patterns are a modern technique, not a post-modern one. They are a model within OOP, which is itself a modern idea, if you think that “everything is an object”. In the post-modern perspective, OOP is an appropriate language and abstraction for some systems.

                  I haven’t worked on payment processing systems, but they seem like large and long-lived systems. I’m sure there is decades-old code in almost all current production systems. I thought there were a lot of banks with systems in COBOL?

                  The point is that if you look from a large enough perspective, they will be composed of heterogeneous parts, each with different models.

                  I briefly worked at an insurance company when I was young, and that was definitely true. You had a data warehouse, probably on a mainframe, and they were connecting to it with Microsoft Access. If you think that Microsoft Access “was” the system, then you are sorely mistaken. There are decades of legacy code behind the scenes, and it absolutely affects the way you do your work. If you try to “wall off” or “abstract” the old system, you end up with poorly working and inefficient systems.

                  Based on my customer experience, payment systems don’t appear to be gracefully adapting to adversaries.

                  That’s not to say that modern techniques are bad. They work; they’re just limited to a domain. If you read the first article, they explicitly make the point that post-modern techniques encompass modern ones. They just pick and choose whatever is useful, rather than claiming to have the one solution and grand narrative.

                2. 2

                  a type system is literally a model,

                  Eh? Yes, you may choose to use it to model things…. but I think you’re on to a pretty sticky wicket as soon as you try.

                  Type systems to me are purely axiomatic rules for a mathematical game.

                  Just symbols and declarations and definitions and references in a digraph with rules for what is allowed and what isn’t.

                  Model? Really?

                  Ye Olde 1980’s-era Object Oriented Analysis and Design texts were full of that ….

                  But I rapidly gave up on that as totally disconnected from reality.

                  You work out what outputs you want from the inputs…. and then you play the type game to produce the simplest thing that will do what you want.

                  Model?

                  Nah.

                  Just a game with rules that you can pile up. But because the rules are consistent, you can make very big piles and know what they will do. (So long as you don’t cheat).

                  1. 2

                    When I say model, I mean it’s a map that lets you reason at compile time about what the program does at runtime.

                    But the map is not the territory. I wrote this comment a few years ago and it still reflects my position: https://news.ycombinator.com/item?id=7913684

                    I don’t think the use of the word “model” is very controversial. On that page:

                    http://blog.metaobject.com/2014/06/the-safyness-of-static-typing.html

                    “I think it’s most helpful to consider a modern static type system (like those in ML or Haskell) to be a lightweight formal modelling tool integrated with the compiler.”

                    Models and maps are useful. But they can be applied inappropriately, and they don’t apply in all situations. That’s all I’m saying, which I think should be relatively uncontroversial. And that’s the “postmodern” point of view, although I admit that this is perhaps a bad term because it inflames the debate with other connotations.

                    Wikipedia has a pretty good explanation:

                    The map–territory relation describes the relationship between an object and a representation of that object, as in the relation between a geographical territory and a map of it. Polish-American scientist and philosopher Alfred Korzybski remarked that “the map is not the territory” and that “the word is not the thing”, encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. Korzybski held that many people do confuse maps with territories, that is, confuse models of reality with reality itself.

                    https://en.wikipedia.org/wiki/Map%E2%80%93territory_relation
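
                    To make the map/territory point concrete in code, here is a contrived TypeScript sketch: the types are a compile-time model, and runtime data is free to disagree with them.

                        // The interface is the map: a compile-time model of the data.
                        interface User {
                          name: string;
                          age: number;
                        }

                        // JSON.parse returns `any`; the cast asserts the model without checking it.
                        const user = JSON.parse('{"name": "Ada"}') as User;

                        // The compiler is satisfied (the map), but at runtime `age` is undefined
                        // (the territory), so this logs NaN.
                        console.log(user.age + 1);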

                    1. 2

                      Ye Olde Object Oriented Design folk were very keen on modelling reality by types…..

                      Aha! We have a Car in our problem domain, we’re going to have a Car type and a Wheel type and …. and these are going to be the symbols we draw on our maps to represent the territory.

                      Over the decades of dealing with large systems I have completely divorced from that mindset.

                      I’m only interested in invariants and state and constraints and control flow and dependencies.

                      Models of reality? I don’t see that in the lines of code. I just see the digraph of dependencies, encapsulated state, invariants enforced, and ultimately inputs and outputs.

                      Models of reality? If reality says X and class invariant says Y, I know all kinds of buggy things will happen if I violate the class invariant…

                      But if I violate reality…. It’s an enhancement request, not a bug.

                      Part of our difference is that you’re talking about static typing; I’m talking about typing.

                      Smalltalk / Ruby / … are typed, just dynamically typed.

                      When you invoke a method on a variable, you can’t know until run time which implementation of that method will actually get invoked.

                      But you can know exactly the algorithm by which it decides which one will get invoked.

                      With a statically typed language like C, you can know which function will be called at compile time, because you can evaluate its rules at compile time.

                      I.e., for both, it’s an axiomatic mathematical game with fixed rules.
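
                      To sketch the dynamic half in TypeScript terms (an illustration of the same rules, not Smalltalk or Ruby): which implementation runs is only decided at the call, but the lookup rule that decides it is fixed.

                          class Animal {
                            speak(): string { return 'generic noise'; }
                          }

                          class Dog extends Animal {
                            speak(): string { return 'woof'; }
                          }

                          // Which `speak` runs is resolved at runtime by walking the prototype chain
                          // of whatever object `a` holds; the outcome varies, the rule does not.
                          function describe(a: Animal): string {
                            return a.speak();
                          }

                          console.log(describe(new Dog()));    // 'woof'
                          console.log(describe(new Animal())); // 'generic noise'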

                      If you show me a class “Car”, I ignore completely the similarity of the word “Car” to the name of the thing I drive.

                      I look at its state, its methods, its invariants, its dependencies.

                      I might after a while, as an afterthought, mutter that it’s badly named…

                      To me we only map to some semblance of reality at a feature level, not a type.

                      1. 2

                        Yeah, I know what you are saying – there was this naive way of thinking where you take the nouns and verbs in your domain and try to “model” reality with objects. That is not how I think either, but it’s not what I meant by the word “model”. I meant model in a more mathematical sense, like “formal modelling”.

                        I think of that as naive top-down design. I tend to work from the bottom up… write the simplest thing that does what you need to do. Then, does it have the structure of an object? Factor it out. I don’t try to start with the classes; if you do, you get a structure that doesn’t match what your code actually does.

                1. 6

                  Web development is in a shabby state. I find it displeasing to work on many web applications. The reason for this, I think, is that frameworks like Rails set the barrier to entry/learning curve too low. What I mean specifically is that, in my experience, web dev projects do not require well-thought-out design decisions. Instead all you need is a couple individuals out of a bootcamp and maybe a senior installing gems/extensions with a couple mods. This will allow most projects to fulfill their requirements and in maybe 4-5 years a new website will be built to replace it.

                  1. 4

                    I agree, even though I am new to the industry (2-3 years) and work on frontend. The low barrier doesn’t just apply to the actual web programmers (fairly low, but not as bad as it seems); almost every aspect of the project’s planning, resource management, and execution is rushed and done with little thought for maintenance (to be “agile”). It doesn’t matter, though, because 3-4 years later it’ll be rewritten and your old code will effectively cease to exist.

                    That said, I’d rather work on a Rails project with fellow newbies than a Node/React one which becomes an absolute mess because of the lack of convention.

                    1. 4

                      Web development is in a shabby state. I find it displeasing to work on many web applications

                      The entire platform seems smothered between lots of people changing careers (good on them!) and the alpha nerds of the platform parroting clichés about the “open web” ad nauseam. It’s like the hype of making money on the platform overrides all quality concerns.

                      Things are held together by duct tape, but we should be proud of this thing because we’ve worked so hard on making it somewhat performant on Haswell i7s. Sunk cost fallacy all the way down.

                      Also, I really resent the self-justification that the easiest platform for users is the one that is the most important. This cedes control to people who don’t know any better. We should be framing how users use technology, and I’ll be the first to admit we’ve never prized accessibility as much as we should have.

                      1. 2

                        I was discussing this with two people earlier in the week, in the context of visiting a local code school’s graduation showcase.

                        All three of us had “grown up” with the web. We remembered when CSS came to be, when DHTML was still a thing, and when JavaScript was only used to make the website snow during December. None of us could possibly imagine being in the shoes of someone learning the web now. All of us were trying to get a sampling of that from the various code school grads. Having a gradual history of the technology in our heads, we all felt it was easier to navigate new technologies as they come about, and to not let the new shiny distract from core software engineering principles.

                        On one hand, I definitely want more people to be able to code, to understand the digital world, and to grow intellectually or professionally. But when the most experienced fellow among us said “Anyone who can write 2 lines of JavaScript thinks they’re the god of the web,” I had a mental flash of agreement before opening my mouth to push back. The resulting conversation was around how and when someone matures out of that.

                        My current thinking revolves around the first time you realize you’ve added to a mess rather than having fixed it.

                        I meet a lot of ambitious junior developers who go into one of their early-career jobs with a platonic ideal of clean code in their heads. They behold the vast sprawl of legacy code around them and think “This is a swamp! A huge pile of mud! I guess I’ll be the one to build real structure and bring sanity to this place.”

                        Among those who are lucky and are given the chance to do that, the majority will fail, and the best among them will look back at what they built and see that all of their scaffolding was just heaping more mud onto the pile. Then the healing can begin. Then they have some perspective on how the mess comes to be in the first place: well-intentioned people just like them.

                        1. 1

                          all you need is a couple individuals out of a bootcamp and maybe a senior installing gems/extensions with a couple mods. This will allow most projects to fulfill their requirements and in maybe 4-5 years a new website will be built to replace it.

                          We who care about quality and medium- to long-term maintenance might not like it, but this is a positive if you are on the other side of the table: a junior team can crank out something that works in short order.

                          1. 1

                            I’m still sad every time I think about Opa failing to gain mindshare. It really should have been the next Rails, and it would have advanced the state of web development marvellously. Let’s see what Elm and Phoenix can do.

                          1. 9

                            The new design is so awesome and 2.0 it doesn’t even render on the Wayback Machine.

                            Sadly, the usual go-to in such cases, archive.is, makes a pretty jumble of it too.

                            In the browser though, the new layout looks pretty neat. Compared with the previous design it gives you a quicker idea of how many different things the platform is capable of and looks less like a generic landing page.

                            1. 4

                              To be honest, that’s a deficiency in those websites, though you can’t fault them for it: downloading an entire HTML site is hard, and embedding one HTML page inside another is hard as well. But the Racket site works perfectly without JavaScript and works reasonably without CSS. If the archives can’t handle that, don’t blame the Racket site.

                              1. 3

                                I was impressed they have a layout that works with NoScript on. Then it fails hard in the Wayback Machine. Harder than most sites I see from CompSci people. Any web developers know why it does that?

                                1. 8

                                  Looks like this is just a specific bug in the Wayback Machine’s implementation. Usually the crawler downloads the assets and CSS, and I believe the server rewrites all links to refer to the new locations. In this case that didn’t happen properly, so the CSS isn’t loaded.

                                  archive.is is a bit more interesting. It looks like their implementation moves all the CSS into the HTML as inline styling and serves it as one static file. But when they translate it, it seems they translate it for old browsers like IE8. The new design doesn’t support old browsers at all, so it looks jumbled when the page hits CSS3 properties like flex columns. The previous design used a CSS framework, which usually handles boring tasks like browser compatibility for you.

                                  1. 1

                                    Thanks for the explanation. At least I know not to put the problem on the Racket site now.