1. 0

    The WWW might seem hopeless, but there’s still Gopher.

    There’s still some life in it, with gopher blogs and such. Plaintext is standard there. Much more pleasant in contrast with web sites.

    Gopherus is a nice, low-footprint, well-maintained gopher client.

    1. 5

      Gopher is an interesting protocol, but what’s stopping people from just writing bare HTML (eg Dan Luu’s site) instead? As an (IMHO) added benefit, you get reflowing text + inline images and text formatting.

      1. 3

        I love the video on that site. They show an 80286, compared to which our current machines are basically supercomputers. The machine boots fast (DOS FTW) and browsing gopherspace is nearly instant. Yet, here we are with our supercomputers, burning cycles loading and running megabytes of Javascript just to view some pages. Many tabs with basic ‘web apps’ take hundreds of megabytes of memory.

        It’s such a waste.

        1. 1

          I just compiled Gopherus on my Raspberry Pi 4. It’s much faster than a 286, cheaper when new, and requires less power to operate.

          I don’t see why one cannot enjoy the fruits of Moore’s Law, instead of being nostalgic for a (crappy) past.

          (My first computer was a 386 without a math coprocessor. That sucked)

          1. 2

            I don’t see why one cannot enjoy the fruits of Moore’s Law, instead of being nostalgic for a (crappy) past.

            As someone who is currently working with old software on (emulated) old systems, rest assured that what we have now is better in many ways.

          2. 1

            I have used it on a 386/25, which is faster but not that much. Gopherus supposedly works on 8088, as long as there’s ~400KB of free RAM for it to use, which most remaining PCs and XTs likely do have.

            I believe the only reason it isn’t actually instant is that the information is fetched from the Internet.

        1. 2

          The following is not safe:

          var=$(dirname "$f")

          I…I had no idea! I was pretty sure I was safe because I rely so heavily on shellcheck.
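
          Playing with it, the root cause seems to be that command substitution strips all trailing newlines, so a directory name that ends in one gets silently mangled. A minimal sketch (hypothetical path):

          f=$'dir\n/file.txt'              # directory literally named "dir<newline>"
          var=$(dirname "$f")              # $(...) eats the trailing newlines here
          printf '%s' "$var" | od -c       # shows just "d i r" -- the newline is gone

          # A common workaround: emit a sentinel, then strip it back off.
          var=$(dirname "$f"; printf x)
          var=${var%?x}                    # drop dirname's own newline plus the sentinel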

          1. 1

            Has anyone ever actually seen a file with a newline in the name?

            I mean, this breaks with spaces at the end too, but that also seems like pretty oddball naming.

            1. 4

              Has anyone ever actually seen a file with a newline in the name?

              Yes. It was in the home directory of an entirely non-technical user – I don’t think save dialogs and other such file-naming GUI bits would generally allow that to happen, so I’m not really sure how it could have arisen (perhaps an email attachment or web download with an externally-supplied filename?), but one day there it was, breaking things that assumed it wouldn’t be there…
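
              If you want to check your own machine for strays, something like this should turn them up (bash quoting; works with GNU and BSD find):

              find "$HOME" -name $'*\n*' -print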

              1. 3

                I don’t write a ton of code, but when I must, I make a deliberate choice not to try to catch every conceivable corner case. Instead, I put my focus on explicitly catching and dealing with the most likely ways that things might go wrong, and sanitize data only where it absolutely must be sanitized. (And there is something to be said as well for maintaining a sane data set, but that’s a can of worms for another day.)

                Sometimes this irritates my co-workers. For example, when writing to a file in Python, there are a bunch of things that can go wrong. But I don’t need to check for all of them, because Python has exceptions built in and it will tell you when something unexpected happened by throwing an exception. With an informative traceback and everything. It’s all built right into the language! All the programmer really has to do is make sure that data is not destroyed when Something Bad happens. And maybe post a “whoops, something went wrong” message if the program is being executed by a non-technical user, at the very worst.

                1. 2

                  What you are noting is that there is a cost in readability, maintenance, and likelihood of mistakes in dotting every single i.

                  Sometimes, that cost might be worth paying. Sometimes, the cure is worse than the disease.

                2. 3

                  Has anyone ever actually seen a file with a newline in the name?

                  On purpose? No. By accident? Yes.

                  1. 1

                    Probably, but this only breaks with a newline at the end, which seems way less likely to me.

                    1. 1

                      Well, technically, var=$(dirname "$f") will break with whitespace anywhere, because the call isn’t quoted. But var="$(dirname "$f")" would only break with it at the end, yes.

                      1. 2

                        It won’t, actually; word splitting and globbing don’t apply to variable assignments. They don’t apply to the word in a case word in statement, either.
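
                        Easy to convince yourself at a prompt; a quick sketch:

                        f='some dir/with spaces/file'
                        var=$(dirname "$f")              # assignment context: no word splitting
                        echo "$var"                      # -> some dir/with spaces
                        printf '%s\n' $(dirname "$f")    # argument context: split into three words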

                        1. 1

                          Huh, learn something new every day.

                1. 1
                  1. Surprising math:

                  int x = 0xfffe+0x0001;

                  looks like 2 hex constants, but in fact it is not.

                  But what is it? For me that’s a syntax error and gcc agrees:

                  error: invalid suffix "+0x0001" on integer constant
                      2 | int x = 0xfffe+0x0001;
                        |         ^~~~~~~~~~~~~
                  

                  Am I missing something?

                  1. 2

                    What is it? It’s a syntax error, as you determined.

                    If you’re asking why it’s a syntax error: the letter e directly followed by a plus or minus sign denotes the exponent of a floating-point constant, and the C tokenizer greedily folds any such sequence into a single “preprocessing number”. So 0xfffe+0x0001 lexes as one token, which then fails to parse as any valid constant. (It’s not legal to put an e exponent on a hexadecimal constant - hex floats use the p exponent, as in 0xfffp+1 - but the e+ is swallowed at the tokenizing stage regardless.)

                    Adding a space before the plus sign makes the syntax error go away.
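
                    You can watch the tokenizer do it from a shell, assuming gcc is on your path:

                    echo 'int x = 0xfffe+0x0001;' | gcc -fsyntax-only -xc -    # invalid suffix error
                    echo 'int x = 0xfffe +0x0001;' | gcc -fsyntax-only -xc -   # no complaints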

                    1. 2

                      If you’re asking why it’s a syntax error: the letter e directly followed by a plus or minus sign denotes the exponent of a floating-point constant, and the C tokenizer greedily folds any such sequence into a single “preprocessing number”. So 0xfffe+0x0001 lexes as one token, which then fails to parse as any valid constant.

                      That’s also my understanding of the code presented.

                      Adding a space before the plus sign makes the syntax error go away.

                      Yes. And then it’s also “2 hex constants” and nothing fishy is going on. I still don’t see the point of the author :D

                    2. 1

                      I tried this with gcc -std=c89 and it still won’t parse. I tried with DeSmet C from 1985 and it doesn’t complain, but it also doesn’t do anything surprising.

                      C:\WOZ>type abuse.c
                      #include <stdio.h>
                      
                      main()
                      {
                        int x = 0xfffe+0x0001;
                        printf ("%x\n", x);
                        return 0;
                      }
                      C:\WOZ>c88 abuse.c -Ic:\desmetc\
                      C88 Compiler    V2.51    (c) Mark DeSmet, 1982,83,84,85
                      end of C88        001F code   0004 data       1% utilization
                      C:\WOZ>bind abuse.o
                      Binder for C88 and ASM88     V1.92    (c) Mark DeSmet, 1982,83,84,85
                      end of BIND        9% utilization
                      C:\WOZ>abuse
                      FFFF
                      

                      Since DOS uses 16-bit integers, printing that with %d gives -1, but again, this is expected.

                    1. 1

                      I used to use PowerShell a lot, on both Windows and Linux. I eventually came to the conclusion that bash (or just plain old POSIX sh) was better, for a few reasons.

                      1. scripts aren’t run in a subshell (which means if a script happens to run cd, it changes the directory of the parent shell too!)
                      2. ridiculous Verb-Noun verbosity, e.g. Get-Help instead of man, Write-Host instead of printf, etc. (this is partly solved with aliases.)
                      3. PowerShell was (at least when I used it in late 2018) incredibly slow compared to bash (and bash’s own manpage describes itself as “too big and too slow”.)
                      4. This reason is purely subjective, but I didn’t really like the “everything is an object” thing about PowerShell. I prefer Unix’s usual plain text.
                      1. 10

                        ridiculous Verb-Noun verbosity, e.g. Get-Help instead of man, Write-Host instead of printf, etc.

                        It’s not great for interaction but it’s great in scripts. I much prefer verbosity there. It makes for better reading when the script hasn’t been touched for a while. Shell has an intense terseness to it that turns into an opaque wall of symbols and sigils once a script gets past about 100 lines.

                        I didn’t really like the “everything is an object” thing about PowerShell. I prefer Unix’s usual plain text.

                        Personally, I think this is the number one thing holding terminals back from being great system interaction tools. Stuff like the output of ps and ls is just begging to have decent structure imposed on it, as just a simple example.
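
                        Even the trivial cases show it. Pulling PIDs out of ps means scraping whitespace-separated columns and hoping the layout holds; a sketch, for a hypothetical sshd process:

                        ps -eo pid=,comm= | awk '$2 == "sshd" { print $1 }'

                        With structured output, the field boundaries would be explicit instead of an accident of formatting.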

                      1. 5

                        It’s less the language itself that’s the hard part - it’s the whole jungle behind it, including (almost certainly) z/OS, CICS, etc.

                        1. 1

                          I was really surprised at both how difficult IBM makes it to get hobbyist or learner developer environments and how little (relatively speaking) mainframe administrators make (the salaries are great, but given the importance of mainframes and the relative rarity of the skill set, I was expecting something crazy like $250k/yr.)

                          Steve The Better, More Canadian Wozniak had a blog post about the first point (the difficulty of setting up developer environments for new learners) and mentioned that was a major reason why he quit working on the Swift port to mainframes.

                          1. 2

                            Steve Wozniak had a blog post about the first point (the difficulty of setting up developer environments for new learners) and mentioned that was a major reason why he quit working on the Swift port to mainframes.

                            That was me. :)

                            https://wozniak.ca/blog/2017/05/16/A-lament-for-the-mainframe/index.html

                            1. 1

                              My bad. I’ll correct my comment above.

                              1. 1

                                No worries! It’s not the first time it’s happened.

                            2. 1

                              I work at IBM, and I spent some time working on this: https://github.com/jjasghar/COBOL-on-k8s, it’s a science experiment right now, but it at least shows k8s can do the work.

                              IBM also has the LinuxOne Community Cloud: https://github.com/linuxone-community-cloud/technical-resources/blob/master/deploy-virtual-server.md where you can get a free Linux on Z based VM.

                              1. 3

                                Thanks for that. I guess I should’ve been clearer: I know Linux on z is relatively easy to get going for a learner (either via IBM Cloud or Hercules) but I’m unaware of any good solution for z/OS, CICS, CMS, etc. for the new learner. I know some resources exist but they’re expensive and/or extremely restricted. You can run old versions of that software on Hercules but not the newer stuff (at least not legally).

                                I absolutely could be wrong though and I’d love to know of any good resources.

                                (I also have no idea how someone would get started with IBM i if they didn’t join a company already using it and learn on the job.)

                                1. 3

                                  (I also have no idea how someone would get started with IBM i if they didn’t join a company already using it and learn on the job.)

                                  I often have to remind people of this in the IBM i sphere. To tl;dr it: I think there’s a lot of expectation mismatch between what used to occur (in-house training, lifers, etc.) and what occurs now (universities as training, the gig economy).

                          1. 26

                            Although I posted something else in this thread already, I wanted to respond to this, which is the crux of the post.

                            How do we build computer science programs that prepare students to build their own futures?

                            That’s a huge question and I don’t think the answer provided had a lot of thought put into the matter. For one thing, it cites no relevant research. And there is a wealth of it. You could start with the ACM Special Interest Group on Computer Science Education, for example. Or read the ACM’s Computer Science curricula guidelines and learn about the wide range of approaches taken by various education institutions. Chapter 5 is of particular interest, where it states:

                            Whether or not programming is the primary focus of their first course, it is important that students do not perceive computer science as only learning the specifics of particular programming languages.

                            Even for computer science, programming from the outset is not a given.

                            The post also cites the ACM curriculum incorrectly. The post states,

                            We have to recognize what the ACM has been telling us since 2002 - that there are two or three tracks to “computer science”: what we might call “theoretical computer science”; what we might call “computer engineering”; and what we might call “software engineering”.

                            The ACM Curricula Recommendations (from 2005) actually define five sub-disciplines:

                            1. Computer Engineering
                            2. Computer Science
                            3. Information Systems
                            4. Information Technology
                            5. Software Engineering

                            In short, this proposal ignores the plethora of existing research into teaching computer science, computational thinking, and all the related sub-fields with the unfounded assertion that teaching programming and teaching the details of how computers work, preferably with a command line interface, is the path forward. There is little reason to believe any of this. This seems like an op-ed from what is likely a bright mind falsely assuming that most others should work the same way they do. I encourage the author to read some existing research and talk with those who work in this field. It is remarkably complex and nuanced.

                            1. 9

                              Thank you very much for linking those relevant resources! Also, big oopsie on my part for not looking for the updated curricula recommendations; I was using the version that was printed out in my department’s lab when I wrote this (prior to COVID, actually!)

                              1. 1

                                If you would like, I will ask my friends who research in this area to provide you with some more papers or links. I know they have mentioned active learning at U of Toronto and some studies with POGIL, but I have no specifics. PM me if you’re interested.

                                Edit: I do know that Mark Guzdial has been active in this area for some time (I read his blog in the mid 2000s when it was around). If you’re interested, that’s a good place to start.

                              2. 3

                                A better way of explaining the controversy here is by focusing on exactly where we transition to pure op-ed:

                                We move forward by giving students a taste of all three when they enter the field, and letting them specialize as they wish.

                                I am very sympathetic to the idea that getting students programming quickly is important. So do that.

                                Like Geoff said, not everyone thinks it’s about better programmers. Another software engineer might say “I am very sympathetic to the idea that getting students comfortable and flexible with novel computing tools is important”.

                                Another might replace that with “practice, practice, practice in recognizing and framing problems is important”.

                                Perhaps another would say “proving correctness or testing more often than programming is important”,

                                or “an acute and situational awareness of the impacts and ethics of computing is important”,

                                or “visualizing and building an intuition of data and analysis is important”,

                                or “communicating and collaborating often with fellow engineers is important”…

                                In short, this particular educational goal of creating “better professors, better engineers, and better web developers” relies on a very opinionated definition of what’s “better” for them. This debate around “better” is at the heart of the nuances in CS education.

                              1. 1

                                That’s why I think the focus on high level languages is bad.

                                I’d teach baremetal raspberry pi assembler programming, or on x86 FreeDOS and Flat Assembler.

                                The number of CS grads I’ve encountered who can’t even write a simple program in assembler or C sickens me.

                                1. 14

                                  The number of CS grads I’ve encountered who can’t even write a simple program in assembler or C sickens me.

                                  If you take the stance that CS is training for programming jobs, then there are a multitude of programming jobs where this doesn’t matter. Especially writing in assembler. (We should let C die a slow death anyway. And I say this as someone who works on a C compiler for a living.)

                                  If you take the stance that CS is learning to think computationally and understand computation, what evidence is there that knowing C and assembler even matter in this regard? I’ve never heard of anything about it. Everything I hear about it is dogma and completely unsubstantiated.

                                  1. 3

                                    If you take the stance that CS is training for programming jobs

                                    I have to say: I absolutely do not take this stance. University has traditionally been about learning and advancing a subject or series thereof. A place of knowledge and research.

                                    It is a really bad trend that, for many universities and degrees, it is turning into a “prepare for job” course like the alternatives to university.

                                    to think computationally and understand computation, what evidence is there that knowing C and assembler even matter in this regard?

                                    That how computers work doesn’t matter to understanding computation is quite the claim. I believe the burden of supporting it is yours.

                                    1. 3

                                      The comment by @matklad on this thread is proof of how very far a beginner can get without getting around to the so-called basics.

                                      1. 1

                                        It’s kind of amazing, but ultimately I have to feel sorry for the person who had to do it with their hands tied behind their back, and I can’t help but wonder how much faster and farther they could have gotten otherwise.

                                        1. 4

                                          For that matter, alongside the four hours of Pascal we also got exposure to CS-flavored Boolean algebra (Karnaugh maps for minimization, and Boolean circuit modeling in some GUI program). This bit of knowledge was fun, but irrelevant for the polynomial things. We didn’t talk about assembly and compilers, and the lack of that knowledge didn’t hurt at the time.

                                    2. 2

                                      Came to this thread for exactly this idea. When I’ve taught uni students Java (which is only one of a few languages I’ve taught to students, not all of whom were in CS) we’ve used BlueJ, an environment that removes everything between “open environment” and “type in Java”. That’s good when you want students to explore an algorithm, coincidentally in Java notation, and bad when you want them to learn how professional programmers write software, which is often done in Java despite the protestations of the RESF.

                                      Agreeing with or even understanding critiques of programming tooling in a CS class requires acceptance of what a CS class is for, and there isn’t wide agreement even among educators. Where I teach, plenty of the undergraduate classes don’t even use a computer because the goal is to understand computation, not to make a computer implement a customer’s requirements.

                                    3. 8

                                      I don’t think optimizing Computer Science education for the current local maximum of C-on-Unix is future proof.

                                      Anyway, most recent CS grads I’ve gotten to know online are very unhappy that their first developer jobs don’t allow them to use Haskell for everything.

                                      1. 1

                                        May I ask why? I agree that learning a systems programming language is probably a boon, but I imagine that C++ would be the language of choice at many institutions.

                                        1. 2

                                          I guess @ethoh is saying that universities shouldn’t be vocational schools - factory lines churning out programmers who know only one or a few things.

                                          I think students need to learn both the high-level stuff and the low-level, from web servers to the basics of digital electronics. Assembler is not a very complicated language. It is useful for a student to understand how to add two numbers using just two registers and an instruction. I wouldn’t have students write a game in assembler. But writing a number guessing game in a high-level language first, then looking at the assembler the compiler generates and being able to study what it does, is a valuable skill.

                                          I would teach the full spectrum with a parallel curriculum of two courses: a programming 101 teaching the basics of a high-level programming language, and a computer engineering course teaching the low-level bits. The high-level course starts with hello world in Java/Python/X and moves on to more complicated applications focusing on more abstract problems, while the low-level course starts from boolean logic and moves on to logic gates, flip-flops, registers, von Neumann architectures, and so on.

                                          Eventually, about a third of the way into the curriculum, these courses would meet, so to speak; we would now have a sufficient understanding of the high-level language and the compiler that we could inspect the raw assembler it generates, which we would be able to understand given the low-level knowledge we’ve accumulated. Then they would diverge again: the high-level course builds something more interesting and tangible, like a compiler, a web server, or a game, while the low-level part goes on to study different microarchitectures, computer architectures, microcontrollers, embedded operating systems, and so on.

                                          By the end, the students would understand how the abstract machine C describes (essentially a PDP-11) differs from the computers we have today, and how compilers, operating systems, even programs, make all sorts of interesting compromises to be able to perform efficiently. They would understand why your JavaScript web app was also vulnerable to Spectre.

                                          1. 2

                                            This is almost exactly how things are done at Georgia Tech, where I’m a student. There’s some variability depending on your specializations, but almost everyone goes through the sequence you’ve described. I’ve found that it’s given me a really solid foundation for learning the more practical parts of software engineering (SCM, web dev, building stuff with the cloud, best practices, etc), most of which I’ve learned on my own.

                                          2. 1

                                            C++ can be used to teach systems programming, but it is too much of an invitation to abstract away the machine. C is less problematic in that sense, but it is no replacement for assembly.

                                        1. 12

                                          Having learned from a curriculum that didn’t use IDEs and forced you to use the computer lab (which is basically a unified environment), I can tell you it wasn’t any better than using an IDE. I taught and was a teaching assistant on some courses at the university level, and one thing that always happened was students customizing or doing seemingly strange things that made teaching to a common environment challenging. Yeah, you could say “it must work on such-and-such a machine”, but that didn’t stop them. It’s been a while, though, so I don’t know what the current thinking is on this.

                                          I do know that in my graduate and professional career, the knowledge needed to use a computer is not correlated with the ability to program it. Not even close.

                                          1. 7

                                            I was really hoping for an analysis of the content. That would be far more interesting than the technical aspects of the sites themselves.

                                            1. 5

                                              I’m setting up some personal computer environments that are representative of what it was like to do development in the 1980s and early 1990s. That includes using the appropriate versions of MS-DOS, various C compilers, assemblers, debuggers, and yes, even the editors.

                                              People like the nostalgia of personal computers from that era, but I’m going to see what it’s like to actually work on them, down to reasonable cycle accuracy (even on an 8088 or 286). (If you do this, you’ll quickly notice that file I/O is pretty slow.)

                                              If anyone can point me to some QEMU examples of getting this sort of thing running, I’d appreciate it. I’m using PCem now and it crashes on Linux so I’m forced to use my Windows machine for it, and it’s very CPU intensive. I’m looking for alternatives. (VARCem was worse.)
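
                                              The closest I’ve gotten with QEMU is booting a raw DOS image on the ISA-only machine type, but there’s no speed throttling, so it’s nowhere near cycle-accurate (dos.img being whatever image you have on hand):

                                              qemu-system-i386 -M isapc -cpu 486 -m 16 -drive file=dos.img,format=raw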

                                              1. 1

                                                I tried this in the past, although I cheated a bit and used DOSBox. You can get pretty far with Borland’s compilers (I stuck with Turbo C++, but perhaps Turbo Pascal was even more popular back then?). I think another option is using Bochs, where you can set the cycles.

                                                1. 1

                                                  Borland Turbo C is truly great software, as was Turbo Pascal. I’m mostly focusing on C environments. I’ve got five set up at the moment. The manuals can be a bit of a trick to find.

                                                  1. 2

                                                    Watcom? I still have my circa 1997 box Watcom 11 (the actual cardboard box) I used with OS/2.

                                                    How about Mix C? Remember that? I still have those floppies around.

                                                    1. 1

                                                      Great suggestions. Thanks!

                                                      Although I am old enough to have used all this software when it came out, I didn’t because, well, I didn’t have a computer at the time. I’ve never heard of Mix C, so thanks for that.

                                                      1. 2

                                                        See here and original company site.

                                                        They’re free, at least on CP/M. Packrat that I am, I have v2.2.0 for DOS on 720k 3.5” floppies. The DOS versions look to be still for sale. There’s also PCC/DeSmet C and, for 386+, djgpp.

                                                        I’ve reminded my teenage son that the phone in his pocket has more CPU power, RAM, and storage than the first six or seven personal computers I owned combined (and I walked uphill to school, both ways, in the snow ;) ).

                                                2. 1

                                                  It’s always better to get real hardware for this sort of stuff. Cycle-accurate emulation is a thing but even then, in most emulators, things like hard-drive or floppy-drive latency and seek times aren’t too accurate. (With variable consequences, depending on hardware; probably not that big a deal on a C64, but a Windows 95 machine thrashing the swap file will be nothing like in the good ol’ days…). Plus there are things that were significant but you can’t replicate in a virtual-only setting, like the inherent physical latency introduced by swapping floppy disks (and dealing with the occasional bad one). Due to the recent retrogaming craze, lots of 1980s systems are selling at pretty outrageous prices these days, but 386-era laptops are still cheap and plentiful, and don’t take up that much space.

                                                  Now for some useful advice:

                                                  For x86, Dosbox may actually be reasonably hassle-free, and you can slow it down until it’s annoying :). Bochs may also be an option but it might be pretty CPU-intensive, too – last time I used it, it didn’t have a JIT engine, it was a good ol’ interpreter, which tends to be slow and instruction-heavy.
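
                                                  For the slowdown, the knob is the cycles setting in dosbox.conf’s [cpu] section; fixed cycle counts are the crude but effective way to do it (the numbers here are just a starting guess):

                                                  # ~300 cycles is roughly XT territory; a few thousand feels more like a 286
                                                  printf '[cpu]\ncore=normal\ncycles=fixed 300\n' > slow.conf
                                                  dosbox -conf slow.conf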

                                                  For other 1980s-era systems, some emulators do have settings that give you “native” speeds. For the C64 & friends, the VICE emulator has, or used to have, pretty good support for that. For the Amiga, fs-uae can do a reasonably convincing rendition of the A500’s pace.

                                                  These come off the top of my head, but if it’s some other particular system that you’re curious about, let me know - there’s a chance I’ve played with an emulator for it at some point, and I might at least be able to point you in the right direction :).

                                                  1. 1

                                                    Real hardware is a no-go. Not only do I not have the space, I don’t have the patience for it. I did all kinds of that in the 90s. Messing with the BIOSes of the various machines in PCem is enough. :)

                                                    Emulation of cycle accuracy is enough for what I have planned. I’m not embarking on any serious software projects.

                                                    Thanks for the advice. I may send you a private message with questions later.

                                                    1. 2

                                                      Not only do I not have the space, I don’t have the patience for it. I did all kinds of that in the 90s. Messing with the BIOSes of the various machines in PCem is enough. :)

                                                      Oh, yeah, I know what you mean :). Don’t get me wrong, I love my old machines, but unless I have a specific reason and specifically need something that’s only available on the real stuff, or a lot of free time to kill, I go by the emulator route, too.

                                                1. 3

                                                  In this thread: programmers fail to realize static site generators are tools only a programmer could love. (In their current form, at least. iWeb works for normie people.)

                                                  1. 5

                                                    I’ve been in the programming game for over 20 years and one thing programmers love to do is torture themselves with bad tools. There needs to be a serious study done on this by anthropologists because clearly the development community can’t break the cycle.

                                                    1. 2

                                                      We lionize terrible tools and practice lots of ancestor worship; that’s grounds for anthropologists if I ever saw any…

                                                      (On topic: I might be a programmer, but I don’t like the kind of programmer tools programmers love. Between SSGs requiring so much ceremony or fragile automation that I never bothered, and WP being more effort to administer than it’s worth, I just write plain HTML…)

                                                  1. 64

                                                     I wrote that tweet that is making the rounds. Not many things fit in 280 characters, and with Twitter being immutable, there’s no going back to clarify. So let me give some more details on this forum.

                                                    1. I speak for my experience, not for all of Uber. Heck, we have hundreds of teams, 95% of whom I don’t know. And teams are autonomous and decide how and what they do (including following guidelines or ignoring them partially or fully) - even if I wanted to, I couldn’t make sweeping statements.
                                                    2. Uber has - and still has - thousands of microservices. Last I checked it was around 4,000. And, to be very clear: this number is (and will keep) growing.
                                                     3. I’ve been working here for almost 4 years and see some trends in my org / area (payments). Back in the day, we’d spin up a microservice that did one small thing, just like that. We had a bunch of small services built and maintained by one person. This was great for autonomy, iteration speed, learning, and making devops a no-brainer. You could spin up a service anytime: but you’d be oncall for it.
                                                     4. Now, as my area is maturing and it’s easier to look ahead, as we create new platforms, we’re doing far more thoughtful planning of new services. These services don’t just do one thing: they serve one business function. They are built and maintained by a team (5-10 engineers). They are more resilient and get far more investment, development- and maintenance-wise, than some of those early microservices. Cindy called these macroservices and I said we’re doing something similar. The only difference in what we do is that a service is owned by one team, not multiple teams.
                                                    5. While many microservices are evolving like this, the majority, frankly, stays as is. Thousands of microservices bring a lot of problems that need to be solved. Monitoring. Testing. CI/CD, SLAs. Library versions across all of them (security, timezone issues). And so on. There are good initiatives we keep doing - and sharing what works and open sourcing some of the tools we build to deal with the problems, as they pop up. Like testing microservices with a multi-tenancy approach. Distributed tracing across the services. All of this is a lot of investment. Only do microservices at scale if you’re ready to make this investment.

                                                     So no, Uber is not going no-microservices like I’m seeing many people interpret it. It’s not even moving to fewer microservices. And when I said “we’re moving”, that was not exact phrasing. New microservices are more thoughtfully created in my team and in my org. These are “larger” services than some of the early, small, focused microservices.

                                                     Microservices worked well at Uber in many ways and keep helping in other areas. There are problems, of course, and you deal with the problems as you go. This would be the same with e.g. a monolith with thousands of developers, SOA with thousands of developers, or {you name whatever you do} with thousands of developers. The number of services is still growing, as a whole, as the business grows - though in some orgs, like mine, it is level, or even going down a bit (though this is not the goal itself). But not all microservices are equal any more. The critical ones look less like your classic microservice - or at least what I called microservices years back.

                                                    On another note: everyone interprets the name “microservice” differently. I’ll write a post summarizing my experiences with the ups and downs of microservices at scale. For now, hopefully this gives some more color.

                                                    Any other questions, just ask.

                                                    1. 4

                                                       Thank you for posting a clarification. It is of much more value than the Twitter thread, and I would love to see a blog post about it.

                                                      This is a good example demonstrating why I think linking Twitter threads is poor form and I discourage people from doing it.

                                                      1. 4

                                                        It’s basically the whole microkernel thing all over again, innit?

                                                        Monolithic kernels: “Yikes, one bug can bring the whole OS down, let’s split that out”

                                                        Microkernels: “Yikes, having loads of little services effectively work together is actually much harder than we thought!”

                                                        Current “hybrid” kernels: “Let’s take the best of both approaches and combine them where it gives us the best bang for our buck”

                                                      1. 3

                                                        Great article! I’d also mention that magit works transparently with tramp so you can take advantage of magit’s awesomeness on remote machines.

                                                        1. 5

                                                          You make it sound like it’s a feature that was intentionally built in, but what makes TRAMP great is that it’s transparently inserted between buffer and file access, meaning that all halfway good Elisp code (e.g. also eshell, compile, …) can employ TRAMP without having to worry about it.

                                                          1. 1

                                                            Sadly, gdb under GUD does not work quite right. I can run M-x gdb just fine, but the file references do not work properly (and there are some complaints about the terminal from GDB). I suspect I’ll have to fiddle with the source locations to get it to work, if it will work at all.

                                                          2. 2

                                                            Also Dired, and therefore also Sunrise Commander, an orthodox two-pane file manager. Now you can have both panes show any combination of remote and local directories, and manipulate files between them as if everything were local.

                                                            It’s rather slow for big transfers and/or many files, but for smaller operations it is insanely convenient.

                                                          1. 2

                                                            If you are going to use TRAMP, I highly recommend something akin to this near the start of your shell rc file:

                                                            [[ "${TERM}" == dumb ]] && PS1='$ ' && return
                                                            

                                                            This way TRAMP has fewer chances to run into any odd interactive settings that generally don’t play well with it and it will easily recognize the prompt (it will just hang if these things aren’t set correctly). In my case, with zsh, I added unsetopt zle in there as well. TRAMP is sensitive to all this because it basically logs in to access files. Using some sort of remote file access API would be nice, but *nix seems to have nothing useful for that, so relying on tenuous shell access settings is the name of the game.
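
                                                              Concretely, the zsh version of the guard ends up as roughly this, near the top of ~/.zshrc:

                                                              [[ "$TERM" == dumb ]] && unsetopt zle && PS1='$ ' && return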

                                                            1. 2

                                                              Using some sort of remote file access API would be nice, but *nix seems to have nothing useful for that.

                                                              Not sure I follow. scp avoids all the issues with prompts. Looks like you can have emacs use it in preference to ssh. https://www.gnu.org/software/emacs/manual/html_node/tramp/External-methods.html

                                                              1. 2

                                                                  The issue with that (when I used it more frequently a couple of years ago) is that scp will log in anew every time you save a file. That makes the saving process take quite some time. When you’re used to hitting C-x C-s every once in a while, this can easily become annoying.

                                                                1. 1

                                                                  All I can tell you is that neither scp nor ssh worked until I did what I stated for the shell. When I typed C-x C-f /scp:host:/ Emacs just sat there, unresponsive. After I changed the remote shell rc file, all was fine.

                                                              1. 2

                                                                Myth: Common Lisp is a procedural, object-oriented language.

                                                                Is that what people are saying about it now? I’ve never heard CL be accused of being either of those things. It was always accused of being “functional”.

                                                                1. 6

                                                                  New directive from the Common Lisp Central Comittee: “We Have Always Been Functional”.

                                                                  <a weary sigh arises from the Emacs buffer *Ministry of Truth*>

                                                                1. 12

                                                                  Requirements change. Every software engineering project will face this hard problem at some point. […] With this in mind, all software development processes can be seen as different responses to this essential truth. The original (and naive) waterfall process simply assumed that you could start with a firm statement of the requirements to be met.

                                                                  This isn’t something unique to software. I’ve talked with at least two civil engineers who had to move bridges because the requirements changed after they were built.

                                                                  1. 6

                                                                    Yes, the whole article is rehashing the “Agile” argument without anything of substance.

                                                                  My father has worked in construction for nearly 50 years, and I did as well for a few. Changing requirements and environmental conditions always happen. What good construction and engineering firms do is seriously account for that. The guys who lowball a bid inevitably run over cost and time by significant amounts. Sadly, clients fall for the lowball bids too often.

                                                                    1. 3

                                                                      And they get what they pay for. Probably.

                                                                    2. 1

                                                                      Wasn’t waterfall the “do not do this” example in the paper that coined the term?

                                                                      1. 1

                                                                        Are these stories published somewhere?

                                                                        Lots of developers are under the impression that civil engineers don’t have to deal with crazy requirements changes. Examples on orange site:

                                                                        And no civil engineer will ever have to deal with the update to Gravity 2.0 now even better at 10m/s2

                                                                        Also a civil engineer usually understands exactly what the bridge is supposed to do and how it is going to be used.

                                                                        there’s a mutual agreement that such change comes with significant cost, and it’s this part that is missing in the software world.

                                                                        Everything about a bridge is planned in extreme detail before work on the ground ever starts. This level of planning is absent in software.

                                                                        client needs are easy to transport into the mind of [a civil engineer]

                                                                        1. 3

                                                                    I’m working on it! This is part of a broader project where I interviewed a bunch of people who worked as both “trad” engineers and software engineers to figure out how they’re actually different. Most software engineers don’t actually know what trad engineers have to deal with and are working from stereotypes.

                                                                          And no civil engineer will ever have to deal with the update to Gravity 2.0 now even better at 10m/s2

                                                                          Here’s a page on how individual screws can wildly vary in terms of structural strength: https://www.fastenal.com/en/76/metric-system-and-specifications.

                                                                          And here’s one on how mixing aluminum and steel bolts can corrode the bridge! https://www.fastenal.com/en/70/corrosion

                                                                          Also a civil engineer usually understands exactly what the bridge is supposed to do and how it is going to be used.

                                                                          What about electrical engineering? Chemical engineering? Off-the-shelf integrated circuits? Mines?

                                                                          there’s a mutual agreement that such change comes with significant cost, and it’s this part that is missing in the software world.

                                                                          Not really, stuff changes all the time.

                                                                          Everything about a bridge is planned in extreme detail before work on the ground ever starts. This level of planning is absent in software.

                                                                          As one oil rig engineer told me: “we file three blueprints: what we originally planned, what we ended up building, and how we kludged it after we built it.”

                                                                          client needs are easy to transport into the mind of [a civil engineer]

                                                                          See above.

                                                                      1. 5

                                                                          Could @geoffwozniak give some context here? The last time I checked, reposurgeon was not the top dog in the gcc git conversion race. Did that change, or is esr an unreliable narrator?

                                                                        1. 9

                                                                          This was talked about for a long time on the mailing list, but once the GNU Cauldron happened in September 2019, it kind of lit a fire under the whole “let’s get this converted to Git” movement. There was already a mirror that many (including myself) were using and because the whole reposurgeon thing seemed stuck, some argued to just get on with it and use the existing mirror.

                                                                            Some of the long-time contributors were a little more concerned about the tags and historical aspects of the repo, going back to the CVS days. As a result, enthusiasm for the mirror and an existing conversion script waned. Personally, I was fine with the mirror.

                                                                          At any rate, the wiki page lays out the pros and cons of each. It was sometime in December 2019 that the reposurgeon route was chosen. It’s somewhere in this thread (I’m too lazy to find the exact message).

                                                                          1. 2

                                                                            In addition, a previous story about this with some more context on why reposurgeon was chosen: https://lobste.rs/s/ykr0ct/gcc_has_really_high_quality_git

                                                                        1. 1

                                                                          I’m for fewer tags, not more. I do not find specificity in tags to be a good thing. For one thing, it encourages more tags to be added, both to the set of tags available and to the number of tags attached to each submission. “Tag soup” on a submission is already an annoying problem.

                                                                            Adding more tags just means we have to manage more tags. We have too many tags already, many of which seemed to follow a flurry of submissions at one point (systemd and illumos come to mind, as do all the programming language ones, frankly) which then inevitably tapered off, leaving the tag to essentially become an orphan.

                                                                          Personally, I prefer broad topic tags (themes, really). I have not found the addition of more tags to be all that helpful in filtering or discovering content. All it’s done is add more tags to submissions. The propensity to add them has resulted in lots of tags being added that are tangential at best in the hopes of getting noticed or trying to appear relevant. It feels very spammy.

                                                                          1. 1

                                                                            For one thing, it encourages more tags to be added, both to the set of tags available and to the number of tags attached to each submission. “Tag soup” on a submission is already an annoying problem.

                                                                              It’s only a problem because people don’t understand that it only hurts their post. The more tags they add, the higher the chance that one of them is filtered out, meaning a smaller audience for the post. Accuracy is preferable, and it’s impossible to be accurate with the generic tags we have now.

                                                                            Adding more tags just means we have to manage more tags.

                                                                            Why do you have to manage them? They are just there to be used on your post and people can vote to add tags if you don’t tag correctly, just like today.

                                                                            Personally, I prefer broad topic tags (themes, really). I have not found the addition of more tags to be all that helpful in filtering or discovering content.

                                                                              They aren’t there to discover content. They are there to filter content. Be real: if we removed all programming language tags overnight and required the programming tag instead, then 50% of all submissions, if not more, would be put in that tag, and it would be impossible to filter out content you aren’t interested in.

                                                                            Specificity is always preferable in a tagging system, with tags that imply other tags for generality.

                                                                              The propensity to add them has resulted in lots of tags being added that are tangential at best in the hopes of getting noticed or trying to appear relevant. It feels very spammy.

                                                                              Again, that only hurts the submission itself. You are more likely to hit a tag that people filter out, and your post is effectively hidden for a majority of the site. People using many tags (as long as they are relevant) are actually using the tagging system how it should be used. Real spammers would want to use as few tags as possible to reach a wider audience.

                                                                            1. 1

                                                                                The simple answer to your retort, then, is that PowerShell just isn’t that important.

                                                                          1. 6

                                                                              I’ve omitted explanations of why I believe these things, mostly so that I could get this post out the door at all - each one of these could easily be its own blog post. Think about them for a bit, and possibly you’ll find them compelling :)

                                                                            I don’t find any of them compelling precisely because there are no explanations. Most of the statements are provocative and so vague that you can read whatever you want into them. That’s not productive discussion, that’s clickbait.

                                                                            I’d much rather see a deeper exploration of these topics as opposed to just getting a “post out the door”.

                                                                            1. 6

                                                                              Connectivity will be the great equalizer in the future.

                                                                              Equalizer of what, exactly? This statement and the ensuing paragraph betray the misguided belief that technology is the prescription for social problems. Starlink feels like nothing more than a giant ego trip. If this was truly an egalitarian effort, the people behind it would have consulted the rest of the world.

                                                                              1. 4

                                                                                  The author also seems to assume that somehow Starlink will provide a cheap and high-quality service, while complaining about the “greedy last-mile monopolists” in another paragraph. Would it not make more sense to assume that this company will be just as greedy once it has established its own monopoly, to the detriment of us all?

                                                                                1. 3

                                                                                  once it has established its own monopoly

                                                                                  You can’t create a monopoly by adding another competitor.

                                                                                  Existing telcos are a natural monopoly because trenches, poles and wires are astonishingly expensive and it doesn’t make economic sense to build a duplicate set of them in the same location.

                                                                                  While Starlink is also astonishingly expensive (more expensive per unit bandwidth for all but the lowest-density regions), it’s not locked to a single physical location. Being able to rearrange the fleet to serve different regions at different densities is a huge deal because it means every monopoly ISP on the planet now has plausible competition.

                                                                                    Monopoly ISPs (e.g. Comcast in many US cities) will be forced to adapt and offer a reasonable level of service to fight off this competition (as they did when Google Fiber came out).

                                                                                  I don’t think Starlink will offer particularly great value for money, but a capitalist market cannot operate well without competition, and Starlink will provide that.