1. 2

    I personally don’t like standups but we do them so that designers know what devs do and vice-versa. It’s an opportunity to keep everyone in sync (we’re about 10 people). We do it once a week and keep it short. The article seems to skip over the fact that teams are not always performing well, or even just functional – processes and methodologies are necessary tools to help smooth out changes in mood and composition of the team. That being said, it’s good to see people questioning now established practices.

    1. 3

      This is just the traditional weekly team meeting.

    1. 1

      This is a curated list of resources, not the ultimate guide to learning CSS. No doubt, most of those resources lead to further recommended resources, and so on.

      In my opinion such “lists” simply result in a never ending todo list of things I’m told are required reading to be proficient in CSS.

      A less misleading title would be “My recommended CSS reading list”.

      1. 2

        The advancement of mankind for many seems to be focused on rocket ships, self-driving cars, and mechanisms to know more about you in order to influence your actions and spending.

        Perhaps it’s just infatuation with celebrity, consumerism, and the startup. The age of discovery driven by solving “big and meaningful” problems seems over. At least we will get self driving things, cheaper rockets, better algorithms to tell you what to buy, and new ways to share selfies with others.

        1. 0

          I agree with the mood, but rocket ships are not in the same group as self-driving cars.

          Exploring the universe might be the best use these apes can make of the expensive brains they carry around on their shoulders.

        1. 2

          For a very good perspective on modeling complex problems outside the domain of informatics and computing infrastructure, it is worth reading the (informational) parts of the manual for the Beta language, the successor to Simula. It can be found on the web.

          It provides insight into the “programming is modeling” perspectives of the inventors of object-orientation. Interestingly, this is the same (unpopular) perspective of Domain Driven Design as promoted by its original author.

          1. 2

            If the problem or abstraction that I need to represent in code is of interest, then yes I enjoy the result. Particularly if the problem or abstraction is outside the domains of informatics and computing infrastructure.

            I think most programmers would describe their job as long, seemingly never-ending periods of pain (how do I do that?), followed by brief moments of elation (I got it to work!). It comes with the job.

            Your feelings are entirely representative of the majority of programmers. Don’t succumb to the nonsense that you’re supposed to always be enjoying your work, and that our industry is full of those who do.

            1. 8

              Following any prescriptive approach (agile or otherwise) doesn’t work in my experience. People seem to be completely seduced by the idea that there is a magic methodology that works for all development teams, appropriating models from other industries (Lean Manufacturing, Kaizen) and selling them as cookie-cutter solutions.

              It’s a shame the agile manifesto ever became more than an enlightened discussion-starter. Maybe that happens when you set down a manifesto!

              Organizing the work of a typical development team (of 3-8 people) is not that hard when you don’t fixate on process between deliverables - i.e. trusting competent, motivated people to do what they’re good at.

              The “methodology” should not go beyond:

              1. Define what you need to do - one sentence per feature/capability
              2. Roughly size the tasks and choose a deadline
              3. Prioritize
              4. Do it :)
              5. Assess
              6. Go to step 1

              Sorry if this comes across as arrogant and dismissive, not intended as such. I just think a large portion of the software engineering industry is deluded.

              1. 5

                competent, motivated people

                Hints at your unstated step 0:

                1. Find competent motivated people who focus on solving business problems, build pragmatic maintainable systems, and choose technology based on merit, not resume bingo.
                1. 2

                  What of a complex product, where domain knowledge is outside of the development team? Consider the outcome of your approach for, say, a Payroll System.

                  Let’s say I want you to write a simple CLI-based double-entry general ledger program. I suggest you could not do either step one or step two of your methodology with success.

                  1. 2

                    I’m not saying to write one sentence to define all that you need to do to build a general ledger program. Defining what you need to do could be a series of conversations with stakeholders, team brainstorms, requirements gathering - getting a shared understanding of the problem in whatever way works best for the project team. My point is that you don’t need to prescribe how that task definition is reached (appreciate the irony of me prescribing it :) You don’t need a mental model to follow for every project.

                    Regarding estimation, this falls out of defining the problem as a series of discrete capabilities of the system. In your example, an accountant could help define the acceptance criteria for the simplest possible application that could be considered a general ledger. From that, a team that understands the domain should be able to break it down into features or capabilities that can be easily described. That may result in more than one iteration’s worth of work, but some experienced heads should be able to give estimates within 20% accuracy for each capability.

                    Of course, I’m also not implying that everything is simple. I threw out that 1-6 cycle only as an argument against process-heavy methodologies. In a complex system, that cycle could be very short and only cover a fraction of the final product.

                    Hell, much of what I’m rabbiting on about is agile stuff; my only point is that I think agile is only useful as a starting point for a team to find their optimal way of working together. As far as success goes, I’d say it’s far more important to have the right people and attitudes within a team than to follow the 10 Commandments of Agile. A human project team isn’t a machine to be oiled.

                1. 1

                  Always useful to see how real problems can be solved, particularly in this case through the use of the Observable pattern. Found the link to the tc39 observable discussion invaluable - many thanks.

                  1. 1

                    I’m glad. Thanks for reading!

                  1. 14

                    I believe that OO affords building applications of anthropomorphic, polymorphic, loosely-coupled, role-playing, factory-created objects which communicate by sending messages.

                    It seems to me that we should just stop trying to model data structures and algorithms as real-world things. Like hammering a square peg into a round hole.

                    1. 3

                      Why does it seem that way to you?

                      1. 5

                        Most professional code bases I’ve come across are objects all the way down. I blame universities for teaching OO as the one true way. C# and Java code bases are naturally the worst offenders.

                        1. 5

                          I mostly agree, but feel part of the trouble is that we have to work against language, to fight past the baggage inherent in the word “object”. Even Alan Kay regrets having chosen “object” and wishes he could have emphasized “messaging” instead. The phrase object-oriented leads people to first, as you point out, model physical things, as that is a natural linguistic analog to “object”.

                          In my undergraduate days, I encountered a required class with a project specifically intended to disabuse students of that notion. The project specifically tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts or just directly naming classes after interactions.

                          I suppose I have taken that “Aha!” moment for granted and can see how, in the absence of such an explicit lesson, it might be hard to discover the notion on your own. It is definitely a problem if OO concepts are presented as universally good or without pitfalls.

                          1. 4

                            I encountered a required class with a project specifically intended to disabuse students of that notion. The project specifically tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts or just directly naming classes after interactions.

                            Can you remember some of the specifics of this? Sounds fascinating.

                            1. 3

                              My memory is a bit fuzzy on it, but the project was about simulating a bank. Your bank program would be initialized with N walk-in windows, M drive-through windows and T tellers working that day. There might’ve been a second type of employee? The bank would be subjected to a stream of customers wanting to do some heterogeneous varieties of transactions, taking differing amounts of time.

                              There did not need to be a teller at the drive-through window at all times if there was not a customer there, and there were some precedence rules: if a customer was at the drive-through and no teller was at the window, the next available teller had to go there.

                              The goal was to produce a correct order of customers served, and order of transactions made, across a day.

                              The neat part (pedagogically speaking) was the project description/spec. It went through so much effort to slowly describe and model the situation for you, full of distracting details (though very real-world ones), that it all but asked you to subclass things needlessly, much to your detriment. Are the multiple types of employees completely separate classes, or both subclasses of an Employee? Should Customer and Employee both be subclasses of a Person class? After all, they share the property of having a name to output later. What about DriveThroughWindow vs WalkInWindow? They share some behaviors, but aren’t quite the same.

                              Most people here would realize those are the wrong questions to ask. Even for a new programmer, the true challenge was gaining your first understanding of concurrency and following a spec’s rules for resource allocation. But said new programmer had just gone through a week or two on interfaces, inheritance and composition, and oh look, now there’s this project spec begging you to use them!

                          2. 2

                            Java and C# are the worst offenders and, for the most part, are not object-oriented in the way you would infer that concept from, for example, the Xerox or ParcPlace use of the term. They are C in which you can call your C functions “methods”.

                            1. 4

                              At some point you have to just let go and accept the fact that the term has evolved into something different from the way it was originally intended. Language changes with time, and even Kay himself has said “message-oriented” is a better word for what he meant.

                              1. 2

                                Yeah, I’ve seen that argument used over the years. I might as well call it the no-true-Scotsman argument. Yes, they are multi-paradigm languages, and I think that’s what made them more useful (my whole argument was that OOP isn’t for everything). Funnily enough, I’ve seen a lot of modern C# and Java that decided message passing is the only way to do things and that multi-thread/process/service is the way to go for even simple problems.

                                1. 4

                                  The opposite of No True Scotsman is Humpty-Dumptyism; you can always find a logical fallacy to discount an argument you want to ignore :)

                          3. 2
                            Square peg;  
                            Round hole;  
                            Hammer hammer;  
                            hammer.Hit(peg, hole);
                            
                            1. 4

                              A common mistake.

                              In object-orientation, an object knows how to do things itself. A peg knows how to be hit, i.e. peg.hit(…). In your example, you’re setting up your hammer to be constantly changed and modified as it needs to be extended to handle different ways to hit new and different things. In other words, you’re breaking encapsulation by requiring your hammer to know about other objects’ internals.
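
                              A tiny sketch of the difference (plain C, since the thread spans languages; all names here are my own invention): the peg carries its own response to being hit, and the hammer merely delivers the message without touching any peg internals.

                              ```c
                              #include <stdio.h>

                              /* Each "object" carries its own behavior: the hammer only
                                 delivers the message and knows nothing about peg internals. */
                              typedef struct Peg {
                                  int depth;                     /* state the peg manages itself */
                                  void (*hit)(struct Peg *self); /* the peg's own response to "hit" */
                              } Peg;

                              static void square_peg_hit(Peg *self) {
                                  self->depth += 1;              /* how this particular peg absorbs a blow */
                              }

                              /* The hammer never inspects the peg; it just sends the message. */
                              static void hammer_strike(Peg *p) {
                                  p->hit(p);
                              }

                              int main(void) {
                                  Peg peg = { 0, square_peg_hit };
                                  hammer_strike(&peg);
                                  hammer_strike(&peg);
                                  printf("depth = %d\n", peg.depth); /* prints "depth = 2" */
                                  return 0;
                              }
                              ```

                              Adding a new hittable thing means adding another hit function; the hammer never changes.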

                            2. 2

                              Your use of a real-world simile is hopefully intentionally funny. :)

                              1. 2

                                That sounds great, as an AbstractSingletonProxyFactoryBean is not a real-world thing, though if I can come up with a powerful and useful metaphor, like the “button” metaphor in UIs, then it may still be valuable to model the code-only abstraction on its metaphorical partner.

                                We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                1. 2

                                  Factory

                                  A factory is a real world thing. The rest of that nonsense is just abstraction disease which is either used to work around language expressiveness problems or people adding an abstraction for the sake of making patterns.

                                  We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                  I think OOP has its place in the world, but it is not for every problem (perhaps not even the majority).

                                  1. 3

                                    A factory in this context is a metaphor, not a real world thing. I haven’t actually represented a real factory in my code.

                                    1. 2

                                      I know of one computer in a museum that if you boot it up, it complains about “Critical Error: Factory missing”.

                                      (It’s a control computer for a factory, it’s still working, and I found it the most charming thing that someone modeled that case and shows an appropriate error)

                                      1. 2

                                        But they didn’t handle the “I’m in a museum” case. Amateurs.

                                2. 1

                                  You need to write, say, a new air traffic control system, or a complex hotel reservation system, using just the concepts of data structures and algorithms? Are you serious?

                                1. 3

                                  As someone retiring after 40 years in software development, I find this article quite poignant. My conclusion on leaving is that each new generation learns essentially nothing of the lessons of the past. No progress in software development technique and methodology is really made.

                                  As evidence of this, let me state that today it is no more likely that a complex, team-based software project will be successful [1] than it was 30 years ago. I’m particularly referring to the largest software sector:- applications in business, commerce and industry, where domain knowledge and requirements are outside of the software team.

                                  [1] On time, on budget, and to specification. I reject the new “agile” benchmark that “the software be ‘satisfactory’” as an acceptable measure of success.

                                  1. 1

                                    Quite fascinating to see into the world of game development, in contrast to the one I work in:- domain applications in business and industry.

                                    I envy in some ways that the development problem in the presenter’s world appears to be stated by what I see as the following equation, where the right-hand side is deterministic …

                                    Software == Efficient data transformations (on constrained hardware)

                                    On the other hand, my reality (untruth) is that …

                                    Software == Requirements (as determined by external domain experts)

                                    It strikes me that the root issue in my world, is primarily that the right-hand side is non-deterministic and is subject to constant change.

                                    (edit: changed “finite” hardware to “constrained”)

                                    1. 1

                                      I find the diagram comparing a single monolith to multiple services misleading. It should instead be drawn with each labeled microservice block within the monolith outline. As shown, the incorrect implication is that clear modular boundaries are not possible within a monolith. I find it to be propaganda to convince the viewer that monoliths are just one big mess, in contrast to the lovely defined boundaries of microservices.

                                      1. 8

                                        I can’t understand why tech interviews involve implementing some sort of tree, regular expression engine, sorting algorithm or parser

                                        I don’t have the history to back it up, but I strongly suspect tree and linked-list questions are a holdover from a time where almost everything was done in C, and you had to know how to hand-roll data structures to get anything done. If that’s the case, then it would have been a “do you know how to use the language” question, not a “can you come up with new algorithms on the spot” question.

                                        1. 10

                                          Specifically, programming in C requires understanding pointers, and the difference between an object and a pointer to an object, and a pointer to a pointer. These distinctions are essential to solving any problem in C. Basic data structures are a simple, self-contained problem that involves pointers. The linked list question is not about linked lists. It’s about understanding why your append function takes a node star star.
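
                                          A minimal sketch of what I mean (standard C, names my own): without that extra level of indirection, append could never replace the caller’s NULL head.

                                          ```c
                                          #include <stdio.h>
                                          #include <stdlib.h>

                                          typedef struct Node {
                                              int value;
                                              struct Node *next;
                                          } Node;

                                          /* Takes Node ** so it can write through the caller's head
                                             pointer when the list is empty - the reason for "star star". */
                                          void append(Node **head, int value) {
                                              while (*head != NULL)
                                                  head = &(*head)->next;  /* walk to the trailing NULL slot */
                                              Node *n = malloc(sizeof *n);
                                              n->value = value;
                                              n->next = NULL;
                                              *head = n;                  /* works for empty and non-empty lists alike */
                                          }

                                          int main(void) {
                                              Node *list = NULL;          /* no dummy head node needed */
                                              append(&list, 1);
                                              append(&list, 2);
                                              append(&list, 3);
                                              for (Node *p = list; p != NULL; p = p->next)
                                                  printf("%d ", p->value);
                                              printf("\n");               /* prints "1 2 3" */
                                              return 0;
                                          }
                                          ```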

                                          1. 2

                                            Basic data structures are a simple self contained problem

                                            (I agree with your whole assessment, but find this part particularly compelling.)

                                            If you interview for a position that involves programming, you are ultimately going to be forced to solve problems—sometimes brand new problems that have not been solved before. So how does one assess a person’s ability to do that? You can’t give someone a problem that’s never been solved before…

                                            I don’t know. The thing that I find most awful about data structures questions is how candidates’ knowledge is distributed: the question is either impossible, derived from scratch on the spot, or memorized and almost rehearsed because the candidate just knows it.

                                            The best questions I’ve had have been generally open ended. There might be a data structure that solves it well that the interviewer has in mind, but there are also 10 other ways to solve the problem, with different levels of sophistication, that would probably be good, in practice, at least until hockey stick growth hits. The best interviewers are open minded, and good enough on their feet to adapt their understanding of the problem to a successful understanding of a potential solution that the candidate is crafting.

                                            Maybe the fact that algorithms and data structures have but one answer is the actual drawback… hmmm.

                                            1. 3

                                              WRT the normal distribution, I think modern interviewing has forgotten that such questions aren’t supposed to be textbook tests and has drifted away from reasonable questions. Even if you’ve never heard of a binary tree, I can explain the concept in 30 seconds such that you should be able to implement search and insert. (Rebalancing may be harder.) I can’t argue it won’t be easier if you’ve done it before, but it should never be impossible.
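
                                              For a sense of scale, the whole 30-second version is only a couple of functions (a sketch in C; unbalanced, no rebalancing): smaller keys go left, larger go right, and search just follows the same rule.

                                              ```c
                                              #include <stdio.h>
                                              #include <stdlib.h>

                                              typedef struct TreeNode {
                                                  int key;
                                                  struct TreeNode *left, *right;
                                              } TreeNode;

                                              /* Insert: smaller keys go left, larger go right; duplicates ignored. */
                                              TreeNode *insert(TreeNode *root, int key) {
                                                  if (root == NULL) {
                                                      TreeNode *n = malloc(sizeof *n);
                                                      n->key = key;
                                                      n->left = n->right = NULL;
                                                      return n;
                                                  }
                                                  if (key < root->key)
                                                      root->left = insert(root->left, key);
                                                  else if (key > root->key)
                                                      root->right = insert(root->right, key);
                                                  return root;
                                              }

                                              /* Search: follow the same ordering rule until found or NULL. */
                                              int contains(const TreeNode *root, int key) {
                                                  while (root != NULL) {
                                                      if (key == root->key)
                                                          return 1;
                                                      root = (key < root->key) ? root->left : root->right;
                                                  }
                                                  return 0;
                                              }

                                              int main(void) {
                                                  TreeNode *t = NULL;
                                                  int keys[] = { 5, 2, 8, 1, 9 };
                                                  for (int i = 0; i < 5; i++)
                                                      t = insert(t, keys[i]);
                                                  printf("%d %d\n", contains(t, 8), contains(t, 7)); /* prints "1 0" */
                                                  return 0;
                                              }
                                              ```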

                                              1. 4

                                                That’s probably true. But the concept isn’t useful without intuition about how to apply it, and I think that’s part of the problem, too. Often, criticism of these questions is that “libraries already have a binary tree, and I’ll just use that.” I think it’s likely that these types of interview questions poorly proxy the question of: “does this person know when to use a binary tree? They must if they know how to implement one!”

                                            2. 1

                                              They are asked such questions to differentiate among a flood of new programmers, each without experience. Such aptitude-type questions can be automated as a way to cull the majority of applicants.

                                              Sad that they are being applied across the board regardless of experience, even though nearly all experienced programmers have never had to actually implement a CS-textbook algorithm, preferring instead to use the tried, tested and optimized implementation in a language’s base class library, or those readily available elsewhere.

                                              1. 1

                                                But trivia about algorithms and data structures says practically nothing about experience.

                                                EDIT: Nevermind, I just read your post again.

                                                Still, I feel like focusing on more realistic problems could be a much better predictor of aptitude.

                                          1. 6

                                            “I came across the work of Alan Kay, the inventor of Object Oriented Programming.”

                                            Alan Kay didn’t invent object-orientation; Nygaard and Dahl did, as evidenced by their Turing Award citations. This Alan Kay/Smalltalk-as-inventor mistake prevents the author from seeing beyond “object-orientation as message passing”. As recently as September 2017, James Gosling stated that Java’s object model is entirely based on Simula, which explains the OP’s confusion about which primary paradigm Java and C++ support - object-orientation.

                                            It is the concepts introduced in Simula:- objects, classes, inheritance and virtual functions, that are of importance today. The contribution of Smalltalk’s focus on “messaging between objects, all the way down” is not.

                                            1. 1

                                              In the case of professional programming, the common exclusion of older programmers shows that experience in our industry just doesn’t count. What does that say about programming being an “engineering” discipline?

                                              1. 1

                                                Consider that the effectiveness of an architecture for large-team, complex projects can only be determined in retrospect in the long term. This is particularly the case for external domains - where the problem space is outside of the domains known by the IT implementers. Problems where knowledge is held by external domain experts.

                                                One might say that architecture is a reflection of non-functional requirements. Both sides of that equation sadly have no standard practices or processes that can be readily categorized and compared. Do you know any?

                                                We (as an industry) used to try to measure project success in terms of delivery on-time, on-budget and to specification. The agile movement has led to success just being some vague measure of “satisfaction”. The “satisfaction” that comes from meeting (likely not formally stated) non-functional requirements is simply subjective and vague until the application is used long term.

                                                1. 1

                                                  I’d say generate getters and setters everywhere if one really only uses objects as data or modular containers. On the other hand, if you model your problem domain based on encapsulated objects that expose domain behavior - then don’t. The former maps to modeling the concepts in your domain within your data model, which is so prevalent today.

                                                  1. 2

                                                    That we are “software engineers”, despite not knowing or having a repeatable process or methodology that results in the successful delivery of a large-team, complex project, on-time, on-budget and to some specification.

                                                    1. 2

                                                      There are attempts at doing the “Engineering” part of Software Engineer. But when talking and/or working with people (Lobsters included), the common conception is that these are only dull documents used by managers to slow down or drive the smart developers insane.

                                                      Ref: https://www.computer.org/web/swebok and many ISOs.

                                                      1. 1

                                                        That’s usually true with how programming is done. There are those doing engineering of software. I linked to three here:

                                                        https://news.ycombinator.com/item?id=15886317

                                                        Example I just found for industrial application of formal simulation and verification of a plant’s operation:

                                                        http://vigir.missouri.edu/~gdesouza/Research/Conference_CDs/IFAC_ICINCO_2007/ICINCO%202007/Area%203%20-%20Signal%20Processing,%20Systems%20Modeling%20and%20Control/Short%20Papers/C3_629_Seabra.pdf

                                                        EDIT: The only thing in engineering I can’t tell them with any confidence is time and budget. Software is too non-linear for that if the team is doing arbitrary work. Might be more accurate at estimating stuff similar to past work.

                                                      1. 2

                                                        A very interesting article, as descriptions of most real world systems are. I do think that a better title might have been “Undoing the harm of decomposing a complex system based on technical boundaries”.

                                                          The new division based on functionality is very broad, almost based on application areas. It would be interesting to see inside these to evaluate whether they each remain ‘big balls of mud’, whether logic and constraints sit in a big application/service layer acting against a data model, or whether an object model is used instead.

                                                        1. 4

                                                            Decomposition into subsystems is the heart of any sane architecture. The recent trend toward decomposition by layers is merely an attempt to ‘architect’ by categorizing things, as if the main problem in understanding code is figuring out what a piece of code is, rather than what it does. The main advantage of layering is that it requires fewer design skills, yet still feels enough like design to avoid making hard choices on subsystem boundaries.

                                                        1. 4

                                                          I agree it should be changed to “cryptography”. I’d rather have unambiguous tags. Nevertheless, I am annoyed to see that “crypto” is increasingly hijacked to mean “cryptocurrencies”.

                                                          1. 2

                                                            Reading Uncle Bob’s Clean Architecture and alternating between saying “duh”, “hmm that’s good” and rolling my eyes.

                                                            1. 2

                                                              alternating between saying “duh”, “hmm that’s good” and rolling my eyes

                                                              That’s refreshing. Been meeting too many programmers (of the SOLIDite sect) recently who seem to view it as some sort of holy tome to be revered and held aloft, as unquestionably the one true way to programmer enlightenment. Hallelujah.