1. 8

    This leaves out a pretty important part of work: you work on a team. Increasingly it’s acceptable for people to work hours that suit them, and for many people that means coming in at 10 or 11. That means they are staying later and they are probably most productive around 3 or 4 or 5. That means they’ll be dropping the most PRs on you then or asking the most questions.

    That isn’t to say that this suggestion won’t work, but you probably can’t just institute it and call it a day. The post doesn’t even mention colleagues or teams.

    1. 13

      This leaves out a pretty important part of work: you work on a team.

      I don’t think it matters whether you work 9-5 or 11-7. If other people on the team are working within a certain time period (such as 11-7), then by all means try to accommodate them by adjusting your hours to overlap with theirs to the extent that doing so doesn’t impact your productivity or get in the way of the rest of your life.

      The fundamental principle is to do a solid day’s work in eight hours or less because unpaid overtime is for suckers. Not only are you not getting paid for the extra hours when you draw a salary, but working more than 40 hours a week reduces the amount of money you earn per hour.

      1. 8

        unpaid overtime is for suckers

It’s not only stupid, but unethical too. If somebody works overtime without pay, it creates pressure on other workers to do the same. And if you do it regularly, your output gets worse, which means your employer gains nothing either. It’s just lose/lose.

        1. 3

          This was my take too. 9 and 5 are arbitrary fence posts. The key here is working an 8ish hour day and not a 10ish or 12ish hour day.

          1. 5

            4-6 hours would be better, IMO, but I find myself turning into some kind of dirty long-haired pinko as I approach middle age.

            1. 3

              I would agree if the workday were actually one solid block of nothing but writing code or thinking about writing code. However in the real world (or at least MY real world) the workday consists of that plus a whole host of scheduled and unscheduled interruptions like meetings, chats with manager and coworkers, etc.

              When you add in those things, a 4-6 hour workday starts to look kinda sketchy :)

        2. 4

For teams, I think it’s fundamental to establish common ground from the get-go. I feel that team members should (ideally) agree on a schedule (as flexible as possible) that accommodates everyone’s needs, instead of each individually deciding which work hours suit them. Personally, I think that when other team members depend on some measure of your availability, showing up “whenever you feel like it” is a sign of a lack of respect for your peers (and I won’t allow it on my team).


My team is doing mostly 10-8 (so working more than 8h/day). I usually do 8-4/5 (depending on work pressure, my commitments, whether I took additional personal time at lunch, …). If a team member throws a PR at me when I have to leave, I have absolutely no scruples about leaving it for tomorrow. Once or twice someone asked for a review as I was leaving. To that you just answer that you’re leaving because you’ve called it a day, and that unless it’s critical to have it reviewed today, it can probably wait until tomorrow.

To me, teams are not an issue as long as you communicate.

          1. 15

            As a junior developer doing my best to learn as much as I can, both technically and in terms of engineering maturity, I’d love to hear what some of the veterans here have found useful in their own careers for getting the most out of their jobs, projects, and time.

            Anything from specific techniques as in this post to general mindset and approach would be most welcome.

            1. 32

Several essentials have had a disproportionate benefit on my career. In no particular order:

              • find a job with lots of flexibility and challenging work
• find a job where your coworkers continuously improve themselves as much as (or more than) you
              • start writing a monthly blog of things you learn and have strong opinions on
              • learn to be political (it’ll help you stay with good challenging work). Being political isn’t slimy, it is wise. Be confident in this.
              • read programming books/blogs and develop a strong philosophy
              • start a habit of programming to learn for 15 minutes a day, every day
• come to terms with the fact that you will see a diminishing return on new programming skills, and an increasing return on “doing the correct/fastest thing” skills (e.g. knowing what to work on, knowing which corners to cut, knowing how to communicate with business people so you solve their actual problems and don’t just chase their imagined solutions, etc.). Lean into this, and practice this skill as often as you can.

              These have had an immense effect on my abilities. They’ve helped me navigate away from burnout and cultivated a strong intrinsic motivation that has lasted over ten years.

              1. 5

                Thank you for these suggestions!

                Would you mind expanding on the ‘be political’ point? Do you mean to be involved in the ‘organizational politics’ where you work? Or in terms of advocating for your own advancement, ensuring that you properly get credit for what you work on, etc?

                1. 13

Being political is all about everything that happens outside the editor. Working with people, “managing up”, figuring out the “real requirements”: those are all political.

                  Being political is always ensuring you do one-on-ones, because employees who do them are more likely to get higher raises. It’s understanding that marketing is often reality, and you are your only marketing department.

                  This doesn’t mean put anyone else down, but be your best you, and make sure decision makers know it.

                  1. 12

                    Basically, politics means having visibility in the company and making sure you’re managing your reputation and image.

                    A few more random bits:

              2. 14

                One thing that I’ve applied in my career is that saying, “never be the smartest person in the room.” When things get too easy/routine, I try to switch roles. I’ve been lucky enough to work at a small company that grew very big, so I had the opportunity to work on a variety of things; backend services, desktop clients, mobile clients, embedded libraries. I was very scared every time I asked, because I felt like I was in over my head. I guess change is always a bit scary. But every time, it put some fun back into my job, and I learned a lot from working with people with entirely different skill sets and expertise.

                1. 11

I don’t have much experience either, but the best choice I made in the last year was to stop worrying about how good a programmer I was and to focus on how to enjoy life.

We have one life; don’t let anxieties come into play, even if you intellectually think working more should help you.

                  1. 8

This isn’t exactly what you’re asking for, but it’s something to consider. Someone who knows how to code reasonably well and can do something else is more valuable than someone who just codes. You become less interchangeable, and therefore less replaceable. There’s tons of work that people who purely code don’t want to do, but find very valuable. For me, that’s documentation. I got my current job because people love having docs, but hate writing docs. I’ve never found myself without multiple options any time I’ve looked for work. I know someone else who did this, but for him it was “be fluent in Japanese.” Japanese companies love people who are bilingual with English. It made his resume stand out.

                    1. 1

I got my current job because people love having docs, but hate writing docs.

                      Your greatest skill in my eyes is how you interact with people online as a community lead. You have a great style for it. Docs are certainly important, too. I’d have guessed they hired you for the first set of skills rather than docs, though. So, that’s a surprise for me. Did you use one to pivot into the other or what?

                      1. 7

                        Thanks. It’s been a long road; I used to be a pretty major asshole to be honest.

                        My job description is 100% docs. The community stuff is just a thing I do. It’s not a part of my deliverables at all. I’ve just been commenting on the internet for a very long time; I had a five digit slashdot ID, etc etc. Writing comments on tech-oriented forums is just a part of who I am at this point.

                        1. 2

                          Wow. Double unexpected. Thanks for the details. :)

                    2. 7

                      Four things:

                      1. People will remember you for your big projects (whether successful or not) as well as tiny projects that scratch an itch. Make room for the tiny fixes that are bothering everyone; the resulting lift in mood will energize the whole team. I once had a very senior engineer tell me my entire business trip to Paris was worth it because I made a one-line git fix to a CI system that was bothering the team out there. A cron job I wrote in an afternoon at an internship ended up dwarfing my ‘real’ project in terms of usefulness to the company and won me extra contract work after the internship ended.

                      2. Pay attention to the people who are effective at ‘leaving their work at work.’ The people best able to handle the persistent, creeping stress of knowledge work are the ones who transform as soon as the workday is done. It’s helpful to see this in person, especially seeing a deeply frustrated person stand up and cheerfully go “okay! That’ll have to wait for tomorrow.” Trust that your subconscious will take care of any lingering hard problems, and learn to be okay leaving a work in progress to enjoy yourself.

                      3. Having a variety of backgrounds is extremely useful for an engineering team. I studied electrical engineering in college and the resulting knowledge of probability and signal processing helped me in environments where the rest of the team had a more traditional CS background. This applies to backgrounds in fields outside engineering as well: art, history, literature, etc will give you different perspectives and abilities that you can use to your advantage. I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.

4. Learn about the concept of the ‘asshole filter’ (safe for work). In a nutshell, if you give people who violate your boundaries special treatment (e.g. a coworker who texts you on your vacation to fix a noncritical problem gets their problem fixed), then you are training people to violate your boundaries. You need to make sure that people who do things ‘the right way’ (in this case, waiting for when you get back, or finding someone else to fix it) get priority, so that over time you train people to respect you and your boundaries.

                      1. 3

                        I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.

                        The methodology from that talk is here: http://codecrit.com/methodology.html

                        I would change “If the code doesn’t work, we shouldn’t be reviewing it”. There is a place for code review of not-done work, of the form “this is the direction I’m starting to go in…what do you think”. This can save a lot of wasted effort.

                      2. 3

                        The biggest mistake I see junior (and senior) developers make is key mashing. Slow down, understand a problem, untangle the dependent systems, and don’t just guess at what the problem is. Read the code, understand it. Read the code of the underlying systems that you’re interacting with, and understand it. Only then, make an attempt at fixing the bug.

                        Stabs in the dark are easy. They may even work around problems. But clean, correct, and easy to understand fixes require understanding.

                        1. 3

Another thing that helps is the willingness to dig into something you’re obsessed with, even if everyone around you deems it not super important. E.g. if you find a library / language / project fun and seem to get obsessed with it, that’s great: keep going at it, and don’t let the existential “should I be here” or “is everyone around me doing this too / recommending this” questions slow you down. You’ll probably get into some interesting adventures.

                          1. 3

                            Never pass up a chance to be social with your team/other coworkers. Those relationships you build can benefit you as much as your work output.

                            (This doesn’t mean you compromise your values in any way, of course. But the social element is vitally important!)

                          1. 31

                            at this point most browsers are OS’s that run (and build) on other OS’s:

                            • language runtime - multiple checks
                            • graphic subsystem - check
                            • networking - check
                            • interaction with peripherals (sound, location, etc) - check
                            • permissions - for users, pages, sites, and more.

                            And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                            1. 10

Browsers rarely link out to the system. FF/Chromium have their own PNG decoders, JPEG decoders, AV codecs, memory allocators or allocation abstraction layers, etc. etc.

It bothers me that everything is now shipping as an Electron app. Do we really need every single app to have the footprint of a modern browser? Can we at least limit them to the footprint of Firefox 2?

                              1. 9

But if you limit it to the footprint of Firefox 2, then computers might be fast enough. (A problem.)

                                1. 2

New computers are no longer faster than old computers at the same cost, though: Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

                                  (Maybe secondary storage speed will have a big bump, if you’re moving from hard disk to SSD, but that only happens once.)

                                  1. 3

Moore’s law ended in 2005 and consumer stuff has caught up with the lag. So, the only speed-up from replacement is from clearing out bloat, not from actual hardware improvements in processing speed.

                                    Are you claiming there have been no speedups due to better pipelining, out-of-order/speculative execution, larger caches, multicore, hyperthreading, and ASIC acceleration of common primitives? And the benchmarks magazines post showing newer stuff outperforming older stuff were all fabricated? I’d find those claims unbelievable.

                                    Also, every newer system I had was faster past 2005. I recently had to use an older backup. Much slower. Finally, performance isn’t the only thing to consider: the newer, process nodes use less energy and have smaller chips.

                                    1. 2

                                      I’m slightly overstating the claim. Performance increases have dropped to incremental from exponential, and are associated with piecemeal attempts to chase performance increase goals that once were a straightforward result of increased circuit density through optimization tricks that can only really be done once.

                                      Once we’ve picked all the low-hanging fruit (simple optimization tricks with major & general impact) we’ll need to start seriously milking performance out of multicore and other features that actually require the involvement of application developers. (Multicore doesn’t affect performance at all for single-threaded applications or fully-synchronous applications that happen to have multiple threads – in other words, everything an unschooled developer is prepared to write, unless they happen to be mostly into unix shell scripting or something.)

                                      Moore’s law isn’t all that matters, no. But, it matters a lot with regard to whether or not we can reasonably expect to defend practices like electron apps on the grounds that we can maintain current responsiveness while making everything take more cycles. The era where the same slow code can be guaranteed to run faster on next year’s machine without any effort on the part of developers is over.

                                      As a specific example: I doubt that even in ten years, a low-end desktop PC will be able to run today’s version of slack with reasonable performance. There is no discernible difference in its performance between my two primary machines (both low-end desktop PCs, one from 2011 and one from 2017). There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.

                                      1. 3

                                        Performance increases have dropped to incremental from exponential, and are associated with piecemeal attempts to chase performance increase goals that once were a straightforward result of increased circuit density through optimization tricks that can only really be done once.

                                        I agree with that totally.

“Multicore doesn’t affect performance at all for single-threaded applications”

Although largely true, people often forget one way multicore can boost single-threaded performance: simply letting the single-threaded app have more time on a CPU core, since other stuff is running on another. Some OSes, especially RTOSes, let you control which cores apps run on specifically to take advantage of that. I’m not sure how good desktop OSes’ support for this is right now, though; I haven’t tried it in a while.
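The core-pinning trick described above can be sketched in a few lines (an assumption: this uses Python’s `os.sched_setaffinity`, which is Linux-only; Windows and macOS expose different APIs):

```python
import os

# Pin the current process (pid 0 means "this process") to core 0, so the
# scheduler keeps other work off that core. Guarded because the API is
# Linux-only.
if hasattr(os, "sched_setaffinity"):
    os.sched_setaffinity(0, {0})
    print(sorted(os.sched_getaffinity(0)))  # the cores this process may run on
else:
    print("no sched_setaffinity on this OS")
```

On Linux the same effect is available from the shell via `taskset`, without touching the program itself.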

                                        “There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.”

                                        Yeah, all the ideas I have for it are incremental. The best illustration of where rest of gains might come from is Cavium’s Octeon line. They have offloading engines for TCP/IP, compression, crypto, string ops, and so on. On rendering side, Firefox is switching to GPU’s which will take time to fully utilize. On Javascript side, maybe JIT’s could have a small, dedicated core. So, there’s still room for speeding Web up in hardware. Just not Moore’s law without developer effort like you were saying.

                              2. 9

Although you partly covered it, I’d say “execution of programs” is good wording for JavaScript, since it matches browser and OS usage. There are definitely advantages to them being smaller. A guy I knew even deleted a bunch of code out of his OS and Firefox to achieve that, on top of a tiny backup image. Dude had a WinXP system full of working apps that fit on one CD-R.

As far as secure browsers go, I’d start with designs from high-assurance security, bringing in mainstream components carefully. Some are already doing that; an older one inspired Chrome’s architecture. I have a list in this comment. I’ll also note that there were few of these, because high-assurance security defaulted to just putting a browser in a dedicated partition that isolated it from other apps on top of security-focused kernels: one browser per domain of trust. Also common were partitioned network stacks and filesystems that limited the effect one partition using them could have on others. QubesOS and GenodeOS are open-source projects that support this, with QubesOS having great usability/polish and GenodeOS being architecturally closer to high-security designs.

                                1. 6

                                  Are there simpler browsers optimised for displaying plain ol’ hyperlinked HTML documents, and also support modern standards? I don’t really need 4 tiers of JIT and whatnot for web apps to go fast, since I don’t use them.

                                  1. 12

I’ve always thought one could improve on a Dillo-like browser for that. I also thought compile-time programming might make various components in browsers optional, so you could actually tune it to the amount of code or attack surface you need. That would require lots of work for mainstream stuff, but a project like Dillo might pull it off.

                                    1. 10
                                      1. 3

Oh yeah, I have that on a Raspberry Pi running RISC OS. It’s quite nice! I didn’t realise it ran on so many other platforms. Unfortunately it crashes only on my main machine; I will investigate. Thanks for reminding me that it exists.

                                        1. 1

                                          Fascinating; how had I never heard of this before?

                                          Or maybe I had and just assumed it was a variant of suckless surf? https://surf.suckless.org/

                                          Looks promising. I wonder how it fares on keyboard control in particular.

                                          1. 1

                                            Aw hell; they don’t even have TLS set up correctly on https://netsurf-browser.org

                                            Does not exactly inspire confidence. Plus there appears to be no keyboard shortcut for switching tabs?

                                            Neat idea; hope they get it into a usable state in the future.

                                          2. 1

AFAIK, it doesn’t support “modern” non-standards.

But it doesn’t support JavaScript either, so it’s way more secure than the mainstream ones.

                                          3. 6

                                            No. Modern web standards are too complicated to implement in a simple manner.

                                            1. 3

                                              Either KHTML or Links is what you’d like. KHTML would probably be the smallest browser you could find with a working, modern CSS, javascript and HTML5 engine. Links only does HTML <=4.0 (including everything implied by its <img> tag, but not CSS).

                                              1. 2

                                                I’m pretty sure KHTML was taken to a farm upstate years ago, and replaced with WebKit or Blink.

                                                1. 6

                                                  It wasn’t “replaced”, Konqueror supports all KHTML-based backends including WebKit, WebEngine (chromium) and KHTML. KHTML still works relatively well to show modern web pages according to HTML5 standards and fits OP’s description perfectly. Konqueror allows you to choose your browser engine per tab, and even switch on the fly which I think is really nice, although this means loading all engines that you’re currently using in memory.

                                                  I wouldn’t say development is still very active, but it’s still supported in the KDE frameworks, they still make sure that it builds at least, along with the occasional bug fix. Saying that it was replaced is an overstatement. Although most KDE distributions do ship other browsers by default, if any, and I’m pretty sure Falkon is set to become KDE’s browser these days, which is basically an interface for WebEngine.

                                              2. 2

                                                A growing part of my browsing is now text-mode browsing. Maybe you could treat full graphical browsing as an exception and go to the minimum footprint most of the time…

                                            2. 4

                                              And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

User choice. Rampant complexity has restricted your options to 3 rendering engines if you want to function in the modern world.

                                              1. 3

When reimplementing malloc and testing it out on several applications, I found out that Firefox (at the time; I don’t know if this is still true) had its own internal malloc. It was allocating a big chunk of memory at startup and then managing it itself.

At the time I thought this was a crazy idea for a browser, but in fact it follows exactly the idea of your comment!
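The “grab one big block up front and manage it yourself” idea can be illustrated with a toy bump allocator (a simplified sketch only; a real allocator like jemalloc adds size classes, free lists, thread caches, and much more):

```python
# Toy "bump" allocator: reserve one large buffer once, then hand out
# slices of it with plain pointer arithmetic instead of per-allocation
# system calls.
class Arena:
    def __init__(self, size):
        self.buf = bytearray(size)  # the one big upfront allocation
        self.offset = 0             # bump pointer

    def alloc(self, n):
        if self.offset + n > len(self.buf):
            raise MemoryError("arena exhausted")
        start = self.offset
        self.offset += n
        return memoryview(self.buf)[start:start + n]

arena = Arena(1024)
block = arena.alloc(64)  # no system call involved, just an offset bump
```

The win is that allocation becomes a couple of arithmetic operations, at the cost of having to manage fragmentation and freeing yourself.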

                                                1. 3

                                                  Firefox uses a fork of jemalloc by default.

                                                  1. 2

                                                    IIRC this was done somewhere between Firefox 3 and Firefox 4 and was a huge speed boost. I can’t find a source for that claim though.

                                                    Anyway, there are good reasons Firefox uses its own malloc.

                                                    Edit: apparently I’m bored and/or like archeology, so I traced back the introduction of jemalloc to this hg changeset. This changeset is present in the tree for Mozilla 1.9.1 but not Mozilla 1.8.0. That would seem to indicate that jemalloc landed in the 3.6 cycle, although I’m not totally sure because the changeset description indicates that the real history is in CVS.

                                                2. 3

In my daily job, this week I’m working on patching a modern Javascript application to run on older browsers (IE10, IE9, and IE8 + GCF 12).

The hardest problems are due to the differing implementations of the same-origin policy.
The funniest problem has been that one of the frameworks used had “native” as a variable name: when people speak about the good parts of Javascript, I know they don’t know what they are talking about.

BTW, if browser complexity addresses a real problem (instead of being a DARPA weapon to get control of foreign computers), that problem is the distribution of computation across long distances.

That problem was not addressed well enough by operating systems, despite some mild attempts such as Microsoft’s CIFS.

This is partially a protocol issue, as NFS, SMB, and 9P were all designed with local networks in mind.

However, IMHO browser OSes are not the proper solution to the issue: they were designed for different goals, and they cannot abandon those goals without losing market share (unless they retain that share through weird marketing practices, as Microsoft did years ago with IE on Windows and Google is currently doing with Chrome on Android).

                                                  We need better protocols and better distributed operating systems.

Unfortunately it’s not easy to create them.
(Disclaimer: browsers as OS platforms and Javascript’s ubiquity are among the strongest reasons that make me spend countless nights hacking on an OS.)

                                                1. 4

People should stop ranting about agile when they are in fact complaining about scrum. I’ve used both scrum and kanban, and there’s a big difference in workflow. I feel less stressed by the latter, and I feel it’s more realistic.

                                                  1. 9

                                                    The article is significant in that it comes from one of the original signatories of the Manifesto for Agile Software Development. I had a huge knee-jerk reaction when I read the title of the article, “Who the hell are you to tell me that I should abandon Agile?” and yeah it turns out the guy is actually pretty important in the story of “Agile”. Much more than I am, anyway.

It’s also not just a rant. I forced myself to read the article before commenting, just to be sure I didn’t throw random anger at the internet. The author provides tentative solutions for getting out of the bad situation of being forced to do “Certified-Agile-as-a-product”, as sold by, well, businesses. I kind of begrudgingly agree with everything in the article, minus everything preachy about XP, of which I have no real-world experience.

                                                  1. 1

Learn C well, then very well. Then learn what OOP is (theoretically), try to implement an OOP language yourself (the easiest way is to implement a superset of C), and then learn C++.

When people learn C++ straight away, they usually (there are exceptions, obviously) become very bad developers.

                                                    1. 5

What I don’t really understand is how Andrew has a comfortable standard of living in NYC on $600 per month.


                                                      I’m guessing that there must be another source of Zig donations aside from Patreon?

                                                      1. 7


                                                        1. 2

                                                          Oh woops, I misread the first paragraph, I thought it stated that Zig was supporting him entirely, when it’s actually about his programming supporting him.

                                                          1. 3

                                                            Note that this isn’t his first attempt at doing this. But the project he was working on before Genesis didn’t find the same traction as Zig has. BUT, if I recall correctly, he also didn’t live in NYC the last time… Anyway, he’s got experience with living frugally, so I’m sure he knows what he’s doing here.

                                                            1. 2

He extrapolated the donation growth versus his savings.

                                                          2. 2

What I don’t understand is: if he’s not working in NYC anymore, and only working on his own and getting donations, why doesn’t he move anywhere but NYC to minimise his personal expenses?

                                                            I’m sure there are cities in the US with 80% the fun of NYC at lower than 80% of the cost.

                                                            1. 17

                                                              I work remote, and there are places I could move that are < 20% of the cost.

                                                              My friends aren’t going to move with me, and I have enough money to live where I am. Why be wealthy and lonely?

                                                              1. -10

                                                                Didn’t know your city is the only source of friends in the world. That must be good for the economy.

                                                                1. 32

                                                                  I know that this is very hard for some people to believe (seems to be harder the more western the society is), but some people don’t consider their friends a replaceable commodity. Not that I don’t want to make new friends, but these are my friends right now and I am more loyal to them than I am to a meaningless job or to money.

                                                                  1. 4

Maybe because your partner has a job he/she really enjoys in this city? I mean, we’re lucky in our field to have a lot of different possibilities, remote or not, mostly well paid. Let’s not forget that it’s a stroke of luck and not something everybody has.

                                                                2. 2

                                                                  The usual reason is the significant other.

                                                                  1. 1

There’s a shit-ton of them. Even Memphis, TN, which is close to me, has, for all its problems, a low cost of living with all kinds of fun stuff to do. Just don’t live in or shop around the hood. That solves most of the problems if you don’t have kids going to school or college.

                                                                    There’s plenty of cities in the US that similarly have low cost of living with plenty going on. One can also live in areas 30-40 min from cities to substantially reduce their rent. The fun stuff still isn’t that far away. The slight inconvenience just knocks quite a bit off the price.

                                                                    1. 4

                                                                      I don’t remember the details, and I can’t find the link, but a few years ago someone did some research here in Berlin where they compared the cost of rent in more-or-less the city proper, and the cost of rent + public transportation tickets when you lived in the outskirts. It ended up being not much of a difference.

                                                                      1. 2

Well, if you don’t work in the city and don’t need to commute, then you spend even less. Though OTOH, you get a tax deduction for commuting in Germany, so the commute is probably not that expensive to begin with.

                                                                        1. 2

Berlin is currently the city with the highest rent increase world-wide, and until a few years ago rents there were unusually low.

Also, Berlin is hard to compare in many respects, possibly because of its very unique city history.

                                                                  1. 3

                                                                    Nice, I can see myself using this. Search seems to be missing.

                                                                    1. 4

                                                                      Not there yet:

                                                                      • Search (that’s a tough one)
                                                                      • Dark theme.
                                                                      • 329 bugs and improvements on my Todo list
                                                                    1. 5

                                                                      Missed an opportunity to pick up http(s)://crate.rs, surely

                                                                      1. 3

                                                                        Expiration date: 30.07.2018 06:24:45

The author should probably set a reminder for that eventuality!

                                                                      1. 8

                                                                        Contrary to most opinions I think that acquisition might be a good thing:

• Microsoft is a different company now; their recent open source strategy is quite good.
• GitHub has seemed to be in decline for some time: the inability to lock in a permanent CEO, the low number of new features.
• MS has all the resources to take GitHub to the next level, if they don’t screw it up.

What could be bad is if MS buys GitLab; then they would control that space. GitLab is also heavily VC-backed, so investors will seek an exit at some point.

                                                                        1. 10

Honestly, my hope is that the fear factor of this convinces more people to consider alternatives.

I’ve set up GitLab (a few years ago) for a client, and it was a fucking pig. I’ve looked at your solution too, and wasn’t completely sold on some aspects of it (sorry, don’t remember what right now) - but these things obviously work for some people, and getting out of this mindset that “GitHub is just what all developers use” is crucial to me.

                                                                          Monoculture should scare people a lot more than the boogey man from Redmond.

                                                                          This line from the Bloomberg article sums up the issue:

                                                                          San Francisco-based GitHub is an essential tool for coders.

                                                                          This is honestly like claiming “Windows is essential for technology companies”.

                                                                          1. 13

Microsoft is a different company now; their recent open source strategy is quite good.

                                                                            I’m getting awfully tired of people saying there’s nothing to worry about because they’ve been nicer for the past handful of years. They have been antagonistic to free software for decades.

                                                                            Microsoft changed their tune because it became profitable to do so. If it becomes profitable to be dicks again, they will be dicks again.

                                                                            I’m glad we have a kinder, gentler Microsoft. Don’t kid yourself about their motivations.

                                                                            1. 10

Also good to remember: they still routinely attack Linux and related free software by threatening them with their patents, and they extract patent royalties from many terrified companies that use Linux.

                                                                              1. 4

                                                                                They’ve collected over a billion dollars on Android alone.

                                                                              2. 1

I never said not to worry about it. I’m writing this based on a feeling that MSFT will do good with GitHub. Time will tell, and their motivation is quite simple: buy more power and make more money.

                                                                              3. 1

It would be very cool if, as @nickpsecurity mentioned, Red Hat took a shot at GitLab.

                                                                                1. 3

Given the popularity GitLab has for on-premise installations, it would be a great addition for Red Hat, TBH.

                                                                                  (I see GitLab, even paid, everywhere at clients and I have yet to see a GH Enterprise installation in the wild)

                                                                                  1. 3

Riot Games had GH Enterprise a few years ago, just for their web team. The rest of the company was using Perforce.

                                                                                    1. 2

I’ve got the opposite experience. I’m seeing big installations/companies use GHE all the time, and none of them use GitLab.

                                                                                1. 16

                                                                                  Well, there goes the neighbourhood.

Bitbucket is good. - Free private repos make them super compelling.

GitLab is good too. - The Explorer interface makes it more of a public destination, like GitHub.

                                                                                  We do not forget the Halloween documents, nor the phrase embrace, extend, extinguish.

                                                                                  1. 7

Bitbucket being closed source and super expensive to self-host, I’d prefer GitLab, tbh…

                                                                                    1. 2

                                                                                      There are a lot of options actually, if you’re after paid, private repos.

                                                                                      I’m sure I’ve missed a lot too - those are the ones I remember working with or reviewing for clients in the last couple of years.

                                                                                      1. 1

                                                                                        I used Assembla for a number of years until 2017. The UI was somewhat clunky but I had no other issues so it’s a workable option. It also offered free private Git repos, although I’m not sure it still does.

                                                                                    1. 42

                                                                                      GitLab is really worth a look as an alternative. One big advantage of GitLab is that the core technology is open source. This means that anybody can run their own instance. If the company ends up moving in a direction that the community isn’t comfortable with, then it’s always possible to fork it.

                                                                                      There’s also a proposal to support federation between GitLab instances. With this approach there wouldn’t even be a need for a single central hub. One of the main advantages of Git is that it’s a decentralized system, and it’s somewhat ironic that GitHub constitutes a single point of failure.

                                                                                      1. 17

                                                                                        Federated GitLabs sound interesting. The thing I’ve always wanted though is a standardised way to send pull requests/equivalent to any provider, so that I can self-host with Gitea or whatever but easily contribute back and receive contributions.

                                                                                        1. 7

git has built-in pull requests. They go to the project mailing list, and people code review via normal inline replies. Glorious.
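For anyone unfamiliar with that flow, it is driven by `git request-pull`. Here is a self-contained sketch; the repo name, file, messages, and the `v1.0` tag are made-up examples, and the local path `.` stands in for a publicly fetchable clone URL:

```shell
# Build a demo repo with one tagged release and one follow-up commit.
git init -q demo && cd demo
git config user.email dev@example.com
git config user.name "Dev"
echo one > file.txt && git add file.txt && git commit -qm "initial"
git tag v1.0
echo two >> file.txt && git commit -qam "fix: second line"

# Summarise everything since v1.0 into a pull-request message.
# In real use the URL argument would point at a public clone.
git request-pull v1.0 . HEAD
```

The printed message (“The following changes since commit … are available in the Git repository at …”) is what gets mailed to the list; the patches themselves can also go out via `git format-patch v1.0` followed by `git send-email`.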

                                                                                          1. 27

                                                                                            It’s really not glorious. It’s a severely inaccessible UX, with basically no affordances for tracking that review comments are resolved, for viewing different slices of commits from a patchset, or integrating with things like CI.

                                                                                            1. 7

                                                                                              I couldn’t tell if singpolyma was serious or not, but I agree, and I think GitHub and the like have made it clear what the majority of devs prefer. Even if it was good UX, if I self-host, setting up a mail server and getting people to participate that way isn’t exactly low-friction. Maybe it’s against the UNIX philosophy, but I’d like every part of the patchset/contribution lifecycle to be first-class concepts in git. If not in git core, then in a “blessed” extension, à la hub.

                                                                                              1. 2

                                                                                                You can sort of get a tracking UI via Patchwork. It’s… not great.

                                                                                                1. 1

The only one of those GitHub is better at is integration with CI. They also have an inaccessible UX (it doesn’t even work on my mobile devices; I can’t imagine if I had accessibility needs…), they don’t track when review comments are resolved, and there’s no UX facility for viewing different slices; you have to know git stuff to know the links.

                                                                                                2. 3

I’ve wondered about a server-side process (listening on HTTP, polling a mailbox, etc.) that could parse the format generated by git request-pull, and create a new ‘merge request’ that can then be reviewed by collaborators.
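A minimal sketch of that parsing step. The helper name is made up, and the message fragment mimics the layout `git request-pull` prints (header line, blank line, then an indented `<url> <ref>` line):

```shell
# Hypothetical sketch: pull the "<url> <ref>" line out of a
# git-request-pull message (e.g. read from a polled mailbox), so a
# server could open a merge request for review automatically.
extract_pull_target() {
  awk '/are available in the Git repository at:/ {
         getline; getline          # skip the blank line after the header
         gsub(/^[ \t]+/, "")      # strip the indentation
         print; exit
       }'
}

# Feed it a fragment shaped like real request-pull output:
printf 'are available in the Git repository at:\n\n  https://example.com/repo.git my-branch\n' \
  | extract_pull_target
# → https://example.com/repo.git my-branch
```

A real implementation would also want the base and tip commit hashes from the message, plus some authentication of the sender, but the extraction itself is this simple.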

                                                                                                  1. 2

I always find it funny that the same people arguing that email is a technology with many inherent flaws that cannot be fixed are usually the same people advocating the use of git’s built-in feature that works over email…

                                                                                                3. 6

                                                                                                  Just re: running your own instance, gogs is pretty good too. I haven’t used it with a big team so I don’t know how it stacks up there, but I set it up on a VPS to replace a paid Github account for private repos, where it seems fast, lightweight and does everything I need just fine.

                                                                                                  1. 20

                                                                                                    Gitea is a better maintained Gogs fork. I run both Gogs on an internal server and Gitea on the Internet.

                                                                                                    1. 9

                                                                                                      Yeah, stuff like gogs works well for private instances. I do find the idea of having public federated GitLab instances pretty exciting as an alternative to GitHub for open source projects though. In theory this could work similarly to the way Mastodon works currently. Individuals and organizations could setup GitLab servers that would federate between each other. This could allow searching for repos across the federation, tagging issues across projects on different instances, and potentially fail over if instances mirror content. With this approach you wouldn’t be relying on a single provider to host everybody’s projects in one place.

                                                                                                    2. 1

                                                                                                      Has GitLab’s LFS support improved? I’ve been a huge fan of theirs for a long time, and I don’t really have an intense workflow so I wouldn’t notice edge cases, but I’ve heard there are some corners that are lacking in terms of performance.

                                                                                                      1. 4

GitLab has first-class support for git-annex, which I’ve used to great success.

                                                                                                    1. 6

I absolutely understand the economic motivations behind this move. I’m concerned because it seems that if other smaller players do the same (migrate over to established players like Reddit) then it’ll drive more users to incumbent companies. Someone in the thread included a link to instructions on how to deal with GDPR trolling, which I’m bookmarking right now just in case.

                                                                                                      1. 6

If you get a data request and it’s reasonable to do so, simply answer it. “Hi, my online identifier is ‘geocar’, what information do you have on me?” - I only have the information you already know about. If that’s true, it’s easy. If you’re building a profile of me, likes/preferences, and whether you sell them individually or in aggregate, then you have to tell me that, but if it’s just my own comments and my own email address (which I entered) then I should already know about that.

If it’s worded in legalese, or you find it otherwise difficult, you can ask for administrative costs to be posted with the request to you. That way you’ll know they’re at least serious. You’re not required to figure out what information might be connected to the person, so if you buy some audience data from somebody like Lotame to show targeted ads to your users, even though you have an online identifier (username) you aren’t required to link them if you don’t ordinarily do this, and only very big websites will do this.

Finally, if it’s onerous, you can ask for further “reasonable fees”. Trolls will get bored, but if you need to pull logs out of your S3 Glacier and it’s going to take a week (or more) without paying the expedited fees, there’s no reason you have to be on the hook for this.

                                                                                                        Right now, all this seems scary because it’s “new”, but eventually it will become normal, and we’ll realise the GDPR isn’t the boogeyman out to get us.

                                                                                                        1. 2

                                                                                                          “Hi, my online identifier is ‘geocar’ what information do you have on me?”

                                                                                                          You also need to prove that you are indeed geocar, otherwise anyone could have requested to view/delete the personal data. So some kind of vetting needs to be done.

                                                                                                          1. 1

Indeed, but in this case, I can think of at least one way to do that :)

                                                                                                          2. 2

                                                                                                            This is reasonable, but when you’re a tiny little startup (I’m pretty sure drone.io is a one man operation) any of this could still be onerous.

                                                                                                            1. 0

As already stated in another comment, you can simply ask people to use predefined request forms on your website once they’re logged in, and have pre-defined answers to them.

                                                                                                          3. 2

                                                                                                            Seems like there’s a business opportunity here - “we will host and run your forum / comments / community in a fully GDPR-compliant fashion” or “we give you all the tools to easily comply with GDPR requests”

                                                                                                            1. 3

                                                                                                              The link I posted above also suggests another solution which might be a better fit for smaller companies and projects: provide a self-service interface for users where they’ll be able to access all GDPR-related stuff. I’d love to see this approach gain traction so that we’d avoid centralization.

                                                                                                              1. 2

                                                                                                                In other words: “pay us money or the government will shut you down”.

                                                                                                                All to “protect the consumers” of course. The very same consumers who willingly put all their information up on facebook.

                                                                                                                1. 0

That’s what I’m thinking. It lets them pool resources on the legal and maybe operational side. Even an existing seller of forum software might make it an extra service or differentiator. Alternatively, this stuff might get outsourced to specialized firms.

                                                                                                              1. 2

Since version 3.4 (released around August of 2016), diff supports the --color flag to print colorised output in the terminal.

                                                                                                                Sadly, Debian 8 doesn’t have it yet, but hey there is colordiff too :)
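A quick illustration of both options; the file names and contents are throwaway examples:

```shell
# Two files that differ on the second line.
printf 'alpha\nbeta\n'  > old.txt
printf 'alpha\ngamma\n' > new.txt

# GNU diff >= 3.4 colorises the output itself (exit status 1 just
# means the files differ):
diff --color=always old.txt new.txt

# On older systems (e.g. Debian 8), pipe plain diff through colordiff:
# diff old.txt new.txt | colordiff
```

`--color=auto` is the more common choice in practice, since it only emits escape codes when stdout is a terminal.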

                                                                                                                1. 2

                                                                                                                  Eh? Debian 8 is oldstable. It will never have it?

Debian 9 has it! And I too was relying on colordiff, and I knew the patch for --color was sitting in the diff repo for a while. It’s nice that it finally landed! GNU moves slowly.

                                                                                                                1. 12

                                                                                                                  Wow, that’s a lot of bloat, and a great demonstration of why I don’t use Gnome (or KDE).

                                                                                                                  I’m much happier with StumpWM, which just does its job and doesn’t try to integrate with everything.

                                                                                                                  1. 12

                                                                                                                    Unfortunately, if you want Wayland — and I do, as it really has made all my vsync/stuttering/tearing issues go away; Fedora is now as smooth as my mac/windows — your choices are limited. Sway is starting to look good but otherwise there’s not much at the minimal end of the spectrum.

If I have to choose between GNOME and KDE, I pick GNOME for the same reasons the author of this piece does. I was hoping the tips would come down to more than “uninstall tracker, evolution daemons et al. and hope for the best”. I’ve done that before on Fedora and ended up wrangling package dependencies in yum. I really wish GNOME/Fedora would take this sort of article to heart and offer a “minimal GNOME” option which is effectively just gnome-shell.

                                                                                                                    1. 3

Why is Wayland so poorly implemented? Is it because few distributions have it as default, or is it because it’s harder? I see many tiling WMs written in 50 different languages, and it seems that Sway is slowly making its way to being a usable WM, but it seems like slow adoption from my point of view.

                                                                                                                      1. 4

                                                                                                                        It is a slow adoption, I’m not particularly sure why. Most (all?) of the tiling wms for X leverage Xlib or XCB, right? Perhaps it’s just needed some time for a similarly mature compositor lib to appear for Wayland (indeed, Sway is replacing their initial use of wlc with wlroots which may end up being that).

                                                                                                                        As for why Wayland in general isn’t more prevalent, I’d guess compatibility. X is just so well established that replacing it is inherently a lot of work in the “last mile”. Fedora/GNOME/Wayland works great for me with my in-kernel open AMD driver. Maybe it’s not as good for Intel iGPUs? Maybe it’s not so good on Nvidia systems? Maybe it doesn’t work at all on arm SoC things? I have no idea, but I can easily understand distros holding off on making it default.

                                                                                                                        1. 3

                                                                                                                          Maybe it’s not so good on Nvidia systems?

                                                                                                                          Exactly, the proprietary driver does not support GBM, they’ve been pushing their own thing (EGLStreams) that compositors don’t want.

                                                                                                                          Maybe it’s not as good for Intel iGPUs? Maybe it doesn’t work at all on arm SoC things?

                                                                                                                          Everything works great with any open drivers, including VC4 for the RPi.

                                                                                                                          1. 2

                                                                                                                            Maybe it’s not as good for Intel iGPUs?

                                                                                                                            Just a data point: I’ve got a new thinkpad recently, installed linux on it, together with gnome3. Only yesterday I’ve discovered it was running on wayland the whole time, with no apparent problems what-so-ever. And that includes working with a dock with two further displays attached, and steam games. Even the touch panel on the screen works without any further config.

                                                                                                                        2. 1

                                                                                                                          Unfortunately, if you want Wayland — and I do, as it really has made all my vsync/stuttering/tearing issues go away; Fedora is now as smooth as my mac/windows

                                                                                                                          And effortless support for multiple displays with different DPIs, plus better isolation of applications. I completely agree: when I switched to Wayland on Fedora 25 or 26, it was the first time in a long while that I felt the Linux desktop was on par with macOS and Windows again (minus some gnome-shell bugs that seem to have been mostly fixed now).

                                                                                                                          At some point, I might switch to Sway. But with Sway 0.15, X.org applications are still scaled up and blurry on a HiDPI screen (whereas they work fine in GNOME). I’ll give it another go once Sway 1.0 is out.

                                                                                                                          1. 1

                                                                                                                            not much at the minimal end of the spectrum

                                                                                                                            Weston! :)

                                                                                                                            My fork even has fractional scaling (Mac/GNOME style downscaling) and FreeBSD support.

                                                                                                                            1. 1

                                                                                                                              There’s a Wayland for FreeBSD? I thought Wayland had a lot of Linux-specific stuff in it?

                                                                                                                              1. 3

                                                                                                                                Sure, there is some, but who said you can’t reimplement that stuff?

                                                                                                                                • libwayland, the reference implementation of client and server libraries, uses epoll. We have an epoll implementation on top of kqueue.
                                                                                                                                • Most compositors use libinput to read from input devices, and libinput:
                                                                                                                                  • reads from evdev devices (via libevdev but that’s a really thin lib). We have evdev support in many drivers, including Synaptics (with TrackPoint support).
                                                                                                                                  • uses libudev for device lookup and hotplug. We have a partial libudev implementation on top of devd.
                                                                                                                                • For GPU acceleration, compositors need a modern DRM/KMS/GBM stack with PRIME and whatnot. We have that.
                                                                                                                                • Compositors also need some way of managing a virtual terminal (vt), this is the fun part (not).
                                                                                                                                  • direct vt manipulation / a setuid wrapper (weston-launch) is pretty trivial to modify to support FreeBSD; that’s how Weston and Sway work right now
                                                                                                                                  • I’m building a generic weston-launch clone: loginw
                                                                                                                                  • ConsoleKit2 should work?? I think we might get KDE Plasma’s kwin_wayland to work on this??
                                                                                                                                  • there were some projects aimed at reimplementing logind for BSD, but they didn’t go anywhere…
                                                                                                                                1. 1

                                                                                                                                  For GPU acceleration, compositors need a modern DRM/KMS/GBM stack with PRIME and whatnot. We have that.

                                                                                                                                  Do NVidia’s drivers use the same stack, or are they incompatible with the Wayland port? I’d give Wayland a try, but it seems hard to find a starting point… I’m running CURRENT with custom Poudriere-built packages, so patches or non-standard options aren’t a problem; I just can’t find any info on how to start.

                                                                                                                                  1. 2

                                                                                                                                    No, proprietary nvidia drivers are not compatible. Nvidia still does not want to support GBM, so even on Linux, support is limited (you can only use compositors that implemented EGLStreams, like… sway 0.x I think?) Plus, I’m not sure about the mode setting situation (nvidia started using actual proper KMS on Linux recently I think?? But did they do it on FreeBSD?)

                                                                                                                                    It should be easy to import Nouveau to drm-next though, someone just has to do it :)

                                                                                                                                    Also, you can get it to work without hardware acceleration (there is a scfb patch for Weston), but I think software rendering is unacceptable.

                                                                                                                            2. 1

                                                                                                                              I gave Wayland a try twice, on both my media PC and a new laptop. It’s still really not there yet. I use i3 on X11, and Sway is really buggy, lacks a lot of backwards-compatibility stubs (notification tray icons are a big one), and just doesn’t quite match i3 yet. Weston, the reference compositor, had a lot of similar problems when I used it on my media PC.

                                                                                                                              I want to move on to Wayland, and I might give that other i3 drop-in for Wayland a try in the future, but right now it’s still not there yet.

                                                                                                                          1. 2

                                                                                                                            Open positions currently lists a Linux job, should we add a linux tag?

                                                                                                                            1. 2

                                                                                                                              The title has Linux, but the actual job description lists OpenBSD. I guess Linux-only jobs won’t be added to this job board.

                                                                                                                              1. 4

                                                                                                                                True. Thank you for the comment.

                                                                                                                                I should clarify that for job posters on the site: BSD should be in the description and/or the title of a job post.

                                                                                                                            1. 1

                                                                                                                              We use Jenkins, but all it does for us is accept webhooks from our central VCS repo on each commit, run make (or another build system) against the new revisions, and properly yell when things go wrong.

                                                                                                                              Pushing all this stuff into your CI doesn’t seem to make a lot of sense: you get locked in, for little to no gain as far as I can see.

                                                                                                                              We treat Jenkins as a distributed task queue with nice VCS features. Really, if Nomad’s task queue or Celery or what have you added support for webhooks and a nice way to yell and scream when something went wrong, it would 100% replace Jenkins for us in no time.
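                                                                                                                              To illustrate, the “yelling” part of that role is tiny. Here’s a minimal sketch (the function name and messages are made up; our real setup notifies chat, not stdout):

                                                                                                                              ```shell
                                                                                                                              # run_build REV CMD...: run the build command for a pushed revision,
                                                                                                                              # echoing success and yelling (on stderr, nonzero exit) on failure.
                                                                                                                              run_build() {
                                                                                                                                  rev="$1"; shift
                                                                                                                                  if "$@"; then
                                                                                                                                      echo "build ok: $rev"
                                                                                                                                  else
                                                                                                                                      echo "build FAILED: $rev" >&2   # this is where Jenkins "yells"
                                                                                                                                      return 1
                                                                                                                                  fi
                                                                                                                              }

                                                                                                                              # A webhook handler would call something like: run_build "$commit" make test
                                                                                                                              run_build demo true
                                                                                                                              # → build ok: demo
                                                                                                                              ```

                                                                                                                              Everything Jenkins adds on top of this is the queue, the workers, and the VCS integration.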

                                                                                                                              1. 1

                                                                                                                                At $PREV_JOB we used Jenkinsfiles extensively, until some people complained that they couldn’t replicate the builds on their machines. We were building more and more complex test scenarios, and people wanted to run parts of them on their laptops before pushing. Some complained that they had to read Groovy code to wrap their heads around how tests were launched, but the worst was for people working remotely while offline… So some people started writing bash scripts to launch stuff (make was another candidate, which people found too weird).

                                                                                                                                1. 1

                                                                                                                                  Make isn’t that weird, but it’s not well understood, sadly. I do agree with your $PREV_JOB, though, that all of your tests should be runnable pretty much anywhere. That’s certainly our philosophy.

                                                                                                                                  I’ve been playing with tmuxp on top of tmux, running dependencies for testing that way, so that it can all be interactive very easily. I’m not sure it works out for very complex testing scenarios, but it seems to work for low-to-medium complexity so far.

                                                                                                                              1. 22

                                                                                                                                Even a blog post needs tags, categories, and images. When it comes to stock images, there are two sites that have a wide selection of free to use, no credit required works:

                                                                                                                                Strong disagree here. I’m a firm believer in the “clear and cold” writing style: your writing should be clear and concise. The reader isn’t there for your memes or hero headers or zany GIFs; they’re there for your words. If an image doesn’t make the words clearer, then it doesn’t belong.

                                                                                                                                Case in point: at 97 KB, your typewriter image is the heaviest thing on the site. All it does is make me have to scroll in order to read your actual content.

                                                                                                                                1. 11

                                                                                                                                  The useless practice of hero images has become so prevalent that I’ve acquired the habit of scrolling past them without even looking.

                                                                                                                                  The memes also have negative value because they take up space but carry no information.

                                                                                                                                  Related to this, in newspaper articles I often see random images that have nothing to do with the article like, say, a man waiting for a bus in an article about mass transit. To add insult to injury, it’s also captioned with “A man waiting for a bus”.

                                                                                                                                  1. 3

                                                                                                                                    I do agree with you about images, but not about tags and categories. Sometimes I stumble upon a great article on a subject and would like to find more articles by the same author on that subject. When there are tags, they are often useful; when there are not… you have to go to the archives (which sometimes don’t even exist…) and Ctrl+F several keywords across several pages to find what you’re looking for. Sometimes a bit of Google-fu helps, but sometimes not.

                                                                                                                                    To me it’s like blogs that don’t serve RSS because the author doesn’t use it himself. This drives me mad.

                                                                                                                                    1. 2

                                                                                                                                      It seems many recommend the use of these header images to increase engagement. Of course, this advice often comes from SEO websites that may use such images to compensate for a lack of content, and there is no source for the claim. I couldn’t find any proper research on the topic. I also don’t like having to scroll past an unrelated image to see content, but maybe many people do find it engaging.

                                                                                                                                      1. 3

                                                                                                                                        To be precise: to increase engagement on social media, for the post to have a thumbnail.

                                                                                                                                    1. 12

                                                                                                                                      Output should be simple to parse and compose

                                                                                                                                      No JSON, please.

                                                                                                                                      Yes, every tool should have a custom format that needs a badly cobbled-together parser (in awk or whatever) that will break once the format is changed slightly or the output accidentally contains a space. No, jq doesn’t exist, can’t be fitted into Unix pipelines, and we will be stuck with sed and awk until the end of times, occasionally trying to solve the worst failures with find -print0 and xargs -0.
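                                                                                                                                      Sarcasm aside, jq slots straight into an ordinary pipeline. A minimal sketch (assuming jq is installed; the interface names and fields are made up):

                                                                                                                                      ```shell
                                                                                                                                      # Hypothetical JSON input; jq turns it into the tab-separated
                                                                                                                                      # columns the rest of a classic pipeline expects.
                                                                                                                                      echo '[{"name":"eth0","mtu":1500},{"name":"lo","mtu":65536}]' \
                                                                                                                                        | jq -r '.[] | "\(.name)\t\(.mtu)"' \
                                                                                                                                        | sort -k2 -n
                                                                                                                                      ```

                                                                                                                                      The -r flag emits raw strings instead of JSON, which is what makes jq compose with sort, cut, and friends.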

                                                                                                                                      1. 11

                                                                                                                                        JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).

                                                                                                                                        In a JSON shell tool world you will have to spend time parsing and re-arranging JSON data between tools; as well as constructing it manually as inputs. I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).

                                                                                                                                        Sidestory: several months back I had a co-worker who wanted me to make some code that parsed his data stream and did something with it (I think it was plotting related IIRC).

                                                                                                                                        Me: “Could I have these numbers in one-record-per-row plaintext format please?”

                                                                                                                                        Co: “Can I send them to you in JSON instead?”

                                                                                                                                        Me: “Sure. What will be the format inside the JSON?”

                                                                                                                                        Co: “…. it’ll just be JSON.”

                                                                                                                                        Me: “But in what form? Will there be a list? What will the elements inside it be named?”

                                                                                                                                        Co: “…”

                                                                                                                                        Me: “Can you write me an example JSON message and send it to me, that might be easier.”

                                                                                                                                        Co: “Why do you need that, it’ll be in JSON?”

                                                                                                                                        Grrr :P

                                                                                                                                        Anyway, JSON is a format, but you still need a format inside this format: element names, overall structure. Using JSON does not make every tool use the same format; that’s strictly impossible. One tool’s stage1.input-file is different from another tool’s output-file.[5].filename, especially if those tools are for different tasks.
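                                                                                                                                        A small sketch of that point (hypothetical shapes, assuming jq): even with JSON on both ends, you still need an explicit translation step between two tools’ dialects:

                                                                                                                                        ```shell
                                                                                                                                        # Tool A emits {"files":[{"path":...},...]}; tool B wants {"inputs":[...]}.
                                                                                                                                        # Same serialization format, different schemas, so we re-arrange with jq.
                                                                                                                                        echo '{"files":[{"path":"a.txt"},{"path":"b.txt"}]}' \
                                                                                                                                          | jq -c '{inputs: [.files[].path]}'
                                                                                                                                        # → {"inputs":["a.txt","b.txt"]}
                                                                                                                                        ```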

                                                                                                                                        1. 3

                                                                                                                                          I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).

                                                                                                                                          Except that standardized, popular formats like JSON bring with them an ecosystem of tooling that solves most of the problems they introduce. Auto-generators, transformers, and so on come along whenever there’s a standard data format. We usually don’t get this when random people create formats for their own use; we have to fully customize the part handling the format rather than adapt an existing one.

                                                                                                                                          1. 2

                                                                                                                                            Still, even XML, which has the best tooling I have used so far for a general-purpose format (XSLT and XSD first and foremost), was unable to handle partial results.

                                                                                                                                            The issue is probably due to its history as a representation of a complete document / data structure.

                                                                                                                                            Even s-expressions (the simplest format of the family) have the same issue.

                                                                                                                                            Note also that pipelines can be created on the fly, even from binary data manipulations. So a single dictated format would probably impose too many restrictions, if you want the system to actually enforce and validate it.

                                                                                                                                            1. 2

                                                                                                                                              “Still, even XML”

                                                                                                                                              XML and its ecosystem were extremely complex. I used s-expressions with partial results in the past; you just have to structure the data to make it easy to get a piece at a time. I can’t recall the details right now. Another format I used, trying to balance efficiency, flexibility, and complexity, was XDR. Too bad it didn’t get more attention.

                                                                                                                                              “So a single dictated format would probably pose too restrictions, if you want the system to actually enforce and validate it.”

                                                                                                                                              The L4 family usually handles that by standardizing on an interface, description language with all of it auto-generated. Works well enough for them. Camkes is an example.

                                                                                                                                              1. 3

                                                                                                                                                XML and its ecosystem were extremely complex.

                                                                                                                                                It is coherent, powerful and flexible.

                                                                                                                                                One might argue that it’s too flexible or too powerful, so that you can solve any of the problems it solves with simpler custom languages. And I would agree to a large extent.

                                                                                                                                                But, for example, XHTML was a perfect use case. Indeed, to do what I did back then with XSLT, people now use JavaScript, which is less coherent and way more powerful, and in no way simpler.

                                                                                                                                                The L4 family usually handles that by standardizing on an interface, description language with all of it auto-generated.

                                                                                                                                                Yes but they generate OS modules that are composed at build time.

                                                                                                                                                Pipelines are integrated on the fly.

                                                                                                                                                I really like strongly typed and standard formats but the tradeoff here is about composability.

                                                                                                                                                UNIX turned every communication into byte streams.

                                                                                                                                                Bytes byte at times, but they are standard, after all! Their interpretation is not, but that’s what provides the flexibility.

                                                                                                                                                1. 4

                                                                                                                                                  Indeed, to do what I did back then with XSLT, people now use JavaScript, which is less coherent and way more powerful, and in no way simpler.

                                                                                                                                                  While I am definitely not a proponent of JavaScript, computations in XSLT are incredibly verbose and convoluted, mainly because XSLT for some reason needs to be XML and XML is just a poor syntax for actual programming.

                                                                                                                                                  That, and the fact that my transformations worked fine with xsltproc but simply did nothing in browsers, without any decent way to debug the problem, made me put XSLT away as an esolang: a lot of fun for an afternoon, not what I would use to actually get things done.

                                                                                                                                                  That said, I’d take XML output from Unix tools and some kind of jq-like processor any day over manually parsing text out of byte streams.

                                                                                                                                                  1. 2

                                                                                                                                                    I loved it when I did HTML wanting something more flexible that machines could handle. XHTML was my use case as well. Once I was a better programmer, I realized it was probably an overkill standard that could’ve been something simpler with a series of tools each doing their little job. Maybe even different formats for different kinds of things. W3C ended up creating a bunch of those anyway.

                                                                                                                                                    “Pipelines are integrated on the fly.”

                                                                                                                                                    Maybe put it in the OS, like a JIT. As far as byte streams go, that’s mostly what XDR did: minimally structured byte streams. Just tie the data types, layouts, and so on to whatever language the OS or platform uses the most.

                                                                                                                                            2. 3

                                                                                                                                              JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).

                                                                                                                                              This is true, but it does not mean that having some kind of common interchange format does not improve things. So yes, JSON does not tell you what the data will contain (but “custom text format, possibly tab-separated” is, again, not better). I know the problem, since I often work with JSON that contains or misses things. But the answer is not to avoid JSON; it is to have specifications. JSON has a number of possible schema formats, which puts it at a big advantage over most custom formats.

                                                                                                                                              The other alternative is of course something like ProtoBuf, because it forces the use of proto files, which is at least some kind of specification. That throws away the human readability, which I didn’t want to suggest to a Unix crowd.

                                                                                                                                              Thinking about it, an established binary interchange format with schemas and a transport is in some ways reminiscent of COM & CORBA in the nineties.

                                                                                                                                            3. 7

                                                                                                                                              will break once the format is changed slighly

                                                                                                                                              Doesn’t this happen with JSON too?
                                                                                                                                              A slight change in the key names, or turning a string into a list of strings, and the recipient won’t be able to handle the input anyway.

                                                                                                                                              the output accidentally contains a space.

                                                                                                                                              Or the output accidentally contains a comma: depending on the parser, the behaviour will change.

                                                                                                                                              No, jq doesn’t exis…

                                                                                                                                              jq is great, but I would not say JSON should be the default output when you want composable programs.

                                                                                                                                              For example, the root of a JSON document is always a single complete value, which doesn’t work for streams that are produced slowly.
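
                                                                                                                                              A minimal illustration of that limitation (the python3 one-liner stands in for any strict JSON parser): line-oriented output cut off mid-stream still yields usable records, while a truncated JSON document parses to nothing.

```shell
# Line-oriented output cut off mid-record: the complete lines are still usable.
printf 'foo 1\nbar 2\nba' | awk 'NF == 2 { print $1 }'
# prints "foo" and "bar"; the partial trailing record "ba" is simply skipped

# A truncated JSON document is unusable until it is closed.
printf '{"items": [{"name": "foo"}' \
  | python3 -c 'import sys, json; json.load(sys.stdin)' 2>/dev/null \
  || echo 'parse failed: incomplete document'
```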

                                                                                                                                              1. 5

                                                                                                                                                will break once the format is changed slightly

                                                                                                                                                Doesn’t this happen with JSON too?

                                                                                                                                                Using a whitespace-separated table such as suggested in the article is somewhat vulnerable to continuing to appear to work after the format has changed while actually misinterpreting the data (e.g. if a new column were inserted at the beginning, your pipeline could happily continue, since all it needs is at least two columns with numbers in them). JSON is more likely to either continue working correctly, ignoring the new column, or fail with an error. Arguably it is the key-value aspect that’s helpful here, not specifically JSON. As you point out, there are other issues with using JSON in a pipeline.
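
                                                                                                                                                A small sketch of that failure mode (the two-column data here is made up): the positional pipeline keeps running after a column is inserted, silently producing garbage, while key-based extraction still finds the field by name.

```shell
# Positional pipeline summing column 2 of a hypothetical "device blocks" table.
printf 'sda 100\nsdb 200\n' | awk '{ sum += $2 } END { print sum }'
# -> 300

# Insert a new first column and the same pipeline still "works", but is wrong:
printf 'ssd sda 100\nhdd sdb 200\n' | awk '{ sum += $2 } END { print sum }'
# -> 0 (it now sums the device names, which awk coerces to zero, silently)

# Key-based records survive the change: extraction by name is unaffected.
printf '{"dev":"sda","blocks":100}\n{"dev":"sdb","blocks":200}\n' \
  | python3 -c 'import sys, json
print(sum(json.loads(line)["blocks"] for line in sys.stdin))'
# -> 300
```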

                                                                                                                                              2. 3

                                                                                                                                                On the other hand, most Unix tools use tabular format or key value format. I do agree though that the lack of guidelines makes it annoying to compose.

                                                                                                                                                1. 2

                                                                                                                                                  Hands up everybody that has to write parsers for zpool status and its load-bearing whitespaces to do ZFS health monitoring.
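
                                                                                                                                                  For anyone who hasn’t had the pleasure, a sketch of what that scraping tends to look like. The sample output below is abbreviated and hypothetical, not captured from a real pool; real zpool status output has more fields, and the indentation under config: really is load-bearing.

```shell
# Abbreviated, hypothetical zpool-status-like output.
status='  pool: tank
 state: DEGRADED
config:

	NAME        STATE     READ WRITE CKSUM
	tank        DEGRADED     0     0     0'

# Typical monitoring hack: scrape the "state:" line and hope the format holds.
printf '%s\n' "$status" | awk '$1 == "state:" { print $2 }'
# -> DEGRADED
```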

                                                                                                                                                  1. 2

                                                                                                                                                    In my day-to-day work, there are times when I wish some tools would produce JSON and other times when I wish a JSON output was just textual (as recommended in the article). Ideally, tools should be able to produce different kinds of outputs, and I find libxo (mentioned by @apy) very interesting.

                                                                                                                                                    1. 2

                                                                                                                                                      I spent very little time thinking about this after reading your comment and wonder what, for example, the coreutils would look like if they accepted/returned JSON as well as plain text.

                                                                                                                                                      A priori we have this awful problem of making everyone understand everyone else’s input and output schemas, but that might not be necessary. For any tool that expects a file as input, we make it accept any JSON object that contains the key-value pair "file": "something". For tools that expect multiple files, have them take an array of such objects. Tools that return files, like ls for example, can then return whatever they want in their JSON objects, as long as those objects contain "file": "something". Then we should get to keep chaining pipes of stuff together without having to write ungodly amounts of jq between them.
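
                                                                                                                                                      A quick sketch of that convention, with all names made up for illustration (a hypothetical jls emitting one JSON object per line, which is an extra assumption beyond plain JSON): a consumer relies only on the "file" key and ignores the rest of each object.

```shell
# Hypothetical "jls": emits one JSON object per file; the only contract is
# that each object carries a "file" key. Everything else is tool-specific.
jls() {
  for f in "$@"; do
    printf '{"file": "%s", "note": "anything else the tool wants"}\n' "$f"
  done
}

# A downstream consumer that needs files reads just the "file" key:
jls a.txt b.txt | python3 -c 'import sys, json
for line in sys.stdin:
    print(json.loads(line)["file"])'
# -> a.txt
#    b.txt
```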

                                                                                                                                                      I have no idea how much people have tried doing this or anything similar. Is there prior art?

                                                                                                                                                      1. 9

                                                                                                                                                        In FreeBSD we have libxo which a lot of the CLI programs are getting support for. This lets the program print its output and it can be translated to JSON, HTML, or other output forms automatically. So that would allow people to experiment with various formats (although it doesn’t handle reading in the output).

                                                                                                                                                        But as @Shamar points out, one problem with JSON is that you need to parse the whole thing before you can do much with it. One can hack around it, but then you are kind of abusing JSON.

                                                                                                                                                        1. 2

                                                                                                                                                          That looks like a fantastic tool, thanks for writing about it. Is there a concerted effort in FreeBSD (or other communities) to use libxo more?

                                                                                                                                                          1. 1

                                                                                                                                                            FreeBSD definitely has a concerted effort to use it, I’m not sure about elsewhere. For a simple example, you can check out wc:

                                                                                                                                                            apy@bsdell ~> wc -l --libxo=dtrt dmesg.log
                                                                                                                                                                 238 dmesg.log
                                                                                                                                                            apy@bsdell ~> wc -l --libxo=json dmesg.log
                                                                                                                                                            {"wc": {"file": [{"lines":238,"filename":"dmesg.log"}]}}
                                                                                                                                                      2. 1

                                                                                                                                                        powershell uses objects for its pipelines, i think it even runs on linux nowadays.

                                                                                                                                                        i like json, but for shell pipelining it’s not ideal:

                                                                                                                                                        • the unstructured nature of the classic output is a core feature. you can easily mangle it in ways the program’s author never assumed, and that makes it powerful.

                                                                                                                                                        • with line-based records you can parse incomplete data (as in, the process is not finished yet) more easily. you just have to split after a newline. with json, technically you can’t begin using the data until a (sub)object is completely parsed. using half-parsed objects seems unwise.

                                                                                                                                                        • if you output json, you probably have to keep the structure of the object tree you’re generating in memory, like “currently i’m in a list in an object in a list”. that’s not ideal sometimes (one doesn’t have to use real serialization all the time, but it’s nicer than just printing the correct tokens at the right places).

                                                                                                                                                        • json is “javascript object notation”. not everything is ideally represented as an object. that’s why relational databases are still in use.

                                                                                                                                                        edit: be nicer ;)
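
                                                                                                                                                        The bookkeeping in the third point can be sketched like this (a toy emitter, not a real serializer): producing valid json forces the producer to track separator state, where a line-per-record producer would just print and forget.

```shell
# Toy JSON emitter: it must remember whether it is past the first element so
# it can place commas correctly; a line-oriented producer needs no such state.
emit_json() {
  printf '{"records": ['
  sep=''
  for r in "$@"; do
    printf '%s{"name": "%s"}' "$sep" "$r"
    sep=', '
  done
  printf ']}\n'
}

emit_json foo bar
# -> {"records": [{"name": "foo"}, {"name": "bar"}]}
```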

                                                                                                                                                      1. 2

                                                                                                                                                        Jenkins Pipelines are becoming the standard way to programmatically specify your CI flow.

                                                                                                                                                        Please, please don’t be true.

                                                                                                                                                        The nicest thing I can say about Jenkins is that it has a fantastic deployment experience. I wish I could say that about literally any other competing hosted CI system, Concourse especially.

                                                                                                                                                        1. 1

                                                                                                                                                          That’s how you feel when you use only Jenkins… It’s like this huge echo chamber.

                                                                                                                                                          GitLab (also self-hosted) is doing a terrific job of keeping its workflow up to date with other solutions like Travis or Circle. Jenkins is a good solution, but it carries a huge amount of legacy and an inertia that will keep it from catching up at the same pace.

                                                                                                                                                        1. 4

                                                                                                                                                          This awesome hall of shame apparently preferred to shut down rather than comply.