1. 15

    As a junior developer doing my best to learn as much as I can, both technically and in terms of engineering maturity, I’d love to hear what some of the veterans here have found useful in their own careers for getting the most out of their jobs, projects, and time.

    Anything from specific techniques, like those in this post, to general mindset and approach would be most welcome.

    1. 33

      Several essentials have had a disproportionate benefit on my career. In no particular order:

      • find a job with lots of flexibility and challenging work
      • find a job where your coworkers continuously improve themselves as much (or more) than you
      • start writing a monthly blog of things you learn and have strong opinions on
      • learn to be political (it’ll help you stay with good challenging work). Being political isn’t slimy, it is wise. Be confident in this.
      • read programming books/blogs and develop a strong philosophy
      • start a habit of programming to learn for 15 minutes a day, every day
      • come to terms with the fact that you will see a diminishing return on new programming skills, and an increasing return on “doing the correct/fastest thing” skills (e.g. knowing what to work on, knowing what corners to cut, knowing how to communicate with business people so you solve their actual problems instead of chasing their imagined solutions, etc.). Lean into this, and practice this skill as often as you can.

      These have had an immense effect on my abilities. They’ve helped me navigate away from burnout and cultivated a strong intrinsic motivation that has lasted over ten years.

      1. 5

        Thank you for these suggestions!

        Would you mind expanding on the ‘be political’ point? Do you mean to be involved in the ‘organizational politics’ where you work? Or in terms of advocating for your own advancement, ensuring that you properly get credit for what you work on, etc?

        1. 13

          Being political is all about everything that happens outside the editor. Working with people, “managing up”, figuring out the “real requirements”: those are all political.

          Being political means always ensuring you do one-on-ones, because employees who do them are more likely to get higher raises. It’s understanding that marketing is often reality, and that you are your only marketing department.

          This doesn’t mean put anyone else down, but be your best you, and make sure decision makers know it.

          1. 12

            Basically, politics means having visibility in the company and making sure you’re managing your reputation and image.

            A few more random bits:

        2. 1

          start a habit of programming to learn for 15 minutes a day, every day

          Can you give an example? So many days I sit down after work or before in front of my computer. I want to do something, but my mind is like, “What should I program right now?”

          As you can probably guess nothing gets programmed. Sigh. I’m hopeless.

          1. 1

            Having a plan before you sit down is crucial. If you just sit and putter, you won’t actually improve; you’ll do what’s easy.

            I love courses and books. I also love picking a topic to research and writing about it.

            Some of my favorite courses:

        3. 14

          One thing that I’ve applied in my career is that saying, “never be the smartest person in the room.” When things get too easy/routine, I try to switch roles. I’ve been lucky enough to work at a small company that grew very big, so I had the opportunity to work on a variety of things; backend services, desktop clients, mobile clients, embedded libraries. I was very scared every time I asked, because I felt like I was in over my head. I guess change is always a bit scary. But every time, it put some fun back into my job, and I learned a lot from working with people with entirely different skill sets and expertise.

          1. 11

            I don’t have much experience either, but the best choice I made in the last year was to stop worrying about how good a programmer I was and to focus on enjoying life.

            We only have one life; don’t let anxieties take over, even if you intellectually think working more should help you.

            1. 8

              This isn’t exactly what you’re asking for, but it’s something to consider. Someone who can code reasonably well and do something else is more valuable than someone who just codes. You become less interchangeable, and therefore less replaceable. There’s tons of work that people who purely code don’t want to do, but find very valuable. For me, that’s documentation. I got my current job because people love having docs, but hate writing docs. I’ve never found myself without multiple options any time I’ve looked for work. I know someone else who did this, but his extra skill was “be fluent in Japanese.” Japanese companies love people who are bilingual with English. It made his resume stand out.

              1. 1

                I got my current job because people love having docs, but hate writing docs.

                Your greatest skill in my eyes is how you interact with people online as a community lead. You have a great style for it. Docs are certainly important, too. I’d have guessed they hired you for the first set of skills rather than docs, though. So, that’s a surprise for me. Did you use one to pivot into the other or what?

                1. 7

                  Thanks. It’s been a long road; I used to be a pretty major asshole to be honest.

                  My job description is 100% docs. The community stuff is just a thing I do. It’s not a part of my deliverables at all. I’ve just been commenting on the internet for a very long time; I had a five digit slashdot ID, etc etc. Writing comments on tech-oriented forums is just a part of who I am at this point.

                  1. 2

                    Wow. Double unexpected. Thanks for the details. :)

              2. 7

                Four things:

                1. People will remember you for your big projects (whether successful or not) as well as tiny projects that scratch an itch. Make room for the tiny fixes that are bothering everyone; the resulting lift in mood will energize the whole team. I once had a very senior engineer tell me my entire business trip to Paris was worth it because I made a one-line git fix to a CI system that was bothering the team out there. A cron job I wrote in an afternoon at an internship ended up dwarfing my ‘real’ project in terms of usefulness to the company and won me extra contract work after the internship ended.

                2. Pay attention to the people who are effective at ‘leaving their work at work.’ The people best able to handle the persistent, creeping stress of knowledge work are the ones who transform as soon as the workday is done. It’s helpful to see this in person, especially seeing a deeply frustrated person stand up and cheerfully go “okay! That’ll have to wait for tomorrow.” Trust that your subconscious will take care of any lingering hard problems, and learn to be okay with leaving work in progress so you can enjoy yourself.

                3. Having a variety of backgrounds is extremely useful for an engineering team. I studied electrical engineering in college, and the resulting knowledge of probability and signal processing helped me in environments where the rest of the team had a more traditional CS background. This applies to backgrounds in fields outside engineering as well: art, history, literature, etc. will give you different perspectives and abilities that you can use to your advantage. I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt, the better.

                4. Learn about the concept of the ‘asshole filter’ (safe for work). In a nutshell, if you give people who violate your boundaries special treatment (e.g. a coworker who texts you on your vacation to fix a noncritical problem gets their problem fixed), then you are training people to violate your boundaries. You need to make sure that people who do things ‘the right way’ (in this case, waiting until you get back, or finding someone else to fix it) get priority, so that over time you train people to respect you and your boundaries.

                1. 3

                  I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.

                  The methodology from that talk is here: http://codecrit.com/methodology.html

                  I would change “If the code doesn’t work, we shouldn’t be reviewing it”. There is a place for code review of not-done work, of the form “this is the direction I’m starting to go in…what do you think”. This can save a lot of wasted effort.

                2. 3

                  The biggest mistake I see junior (and senior) developers make is key-mashing. Slow down, understand the problem, untangle the dependent systems, and don’t just guess at what the problem is. Read the code and understand it. Read the code of the underlying systems that you’re interacting with, and understand it. Only then make an attempt at fixing the bug.

                  Stabs in the dark are easy. They may even work around problems. But clean, correct, and easy to understand fixes require understanding.

                  1. 3

                    Another thing that helps is the willingness to dig into something you’re obsessed with, even if everyone around you deems it not super important. E.g. if you find a library/language/project fun and get obsessed with it, that’s great: keep going at it, and don’t let the existential “should I be here” or “is everyone around me doing this too / recommending this” questions slow you down. You’ll probably end up on some interesting adventures.

                    1. 3

                      Never pass up a chance to be social with your team/other coworkers. Those relationships you build can benefit you as much as your work output.

                      (This doesn’t mean you compromise your values in any way, of course. But the social element is vitally important!)

                    1. 15

                      The biggest issue I have with the defaults and the borrow checker is that in places where FP would normally pass by copy (pass by value), Rust instead assumes you want to pass by reference. Therefore, you need to clone things by hand and pass the cloned versions instead. Although it has a mechanism to do this automatically, it’s far from ergonomic.

                      The argument for pass by reference, or borrowing, is that it’s more performant than cloning by default. In general, computers are getting faster, but systems are getting more complex.

                      It’s actually not the case that computers are getting faster in general anymore - Moore’s law has been slowing as we get closer to fundamental physical limits on how small we can build transistors, and effective clock speeds haven’t increased significantly for about a decade now. Consequently, programmers should be warier than they currently are of using non-performant but easy-to-write languages and constructs - even ignoring the fact that Moore’s law gains can no longer be counted on, it’s easy for people writing in the middle or towards the top of a large software stack to write non-performant code that stacks on top of other people’s non-performant code, leading to user-visible slowdown and latency even on modern, fast hardware. This is one of the huge issues with applications built on the modern web (in fact, my browser is chugging a little as I write this in the text box, which really shouldn’t be happening on a 2018 computer, and I think it’s the result of a shitty webapp in another tab).

                      In any case, one of Rust’s explicit design goals is to be a useful modern language in contexts where minimal use of computing resources like CPU time and memory is important, which is exactly why Rust generally avoids copies unless you explicitly tell it to with .clone() or something similar. Personally, I’ve written a fair amount of Rust code where I do make inefficient copies to avoid complexity (especially while developing an algorithm that I plan to make more efficient later), and I don’t find it particularly onerous to stick a few .clone()s here and there to make the compiler happy.
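
                      A minimal sketch of that trade-off (the Config type and functions here are invented for illustration, not from any real codebase):

                      ```rust
                      #[derive(Clone, Debug)]
                      struct Config {
                          name: String,
                          retries: u32,
                      }

                      // Takes ownership of its argument, so the caller gives its value up...
                      fn spawn_worker(config: Config) {
                          println!("worker for {} ({} retries)", config.name, config.retries);
                      }

                      fn main() {
                          let config = Config { name: "db".to_string(), retries: 3 };

                          // ...unless we clone explicitly. The copy is visible in the source,
                          // which is exactly the "no implicit copies" design goal.
                          spawn_worker(config.clone());

                          // Borrowing (&Config) would avoid the copy entirely, but then
                          // spawn_worker couldn't keep the value beyond the caller's frame.
                          println!("still usable here: {:?}", config);
                      }
                      ```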

                      1. 6

                        I agree with you, and would go further and say that resource usage always matters. In my opinion, performance is an accessibility issue; programs that care about performance can be used on cheaper/older hardware. Not everyone can afford the latest, greatest hardware.

                      1. 31

                        At this point most browsers are OSes that run (and build) on other OSes:

                        • language runtime - multiple checks
                        • graphic subsystem - check
                        • networking - check
                        • interaction with peripherals (sound, location, etc) - check
                        • permissions - for users, pages, sites, and more.

                        And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                        1. 11

                          Browsers rarely link out to the system. FF/Chromium ship their own PNG decoders, JPEG decoders, AV codecs, memory allocators or allocation abstraction layers, etc. etc.

                          It bothers me that everything now ships as an Electron app. Do we really need every single app to have the footprint of a modern browser? Can we at least limit them to the footprint of Firefox 2?

                          1. 10

                            But if you limit them to the footprint of Firefox 2, then computers might be fast enough. (A problem.)

                            1. 2

                              New computers are no longer faster than old computers at the same cost, though – Moore’s law ended in 2005, and consumer hardware has caught up with the lag. So, the only speed-up from replacement comes from clearing out bloat, not from actual improvements in processing speed.

                              (Maybe secondary storage speed will have a big bump, if you’re moving from hard disk to SSD, but that only happens once.)

                              1. 3

                                Moore’s law ended in 2005, and consumer hardware has caught up with the lag. So, the only speed-up from replacement comes from clearing out bloat, not from actual improvements in processing speed.

                                Are you claiming there have been no speedups due to better pipelining, out-of-order/speculative execution, larger caches, multicore, hyperthreading, and ASIC acceleration of common primitives? And that the benchmarks magazines post showing newer stuff outperforming older stuff were all fabricated? I’d find those claims unbelievable.

                                Also, every newer system I’ve had since 2005 was faster than the one before. I recently had to use an older backup machine. Much slower. Finally, performance isn’t the only consideration: newer process nodes use less energy and yield smaller chips.

                                1. 2

                                  I’m slightly overstating the claim. Performance increases have dropped from exponential to incremental, and come from piecemeal optimization tricks that can only really be done once, chasing gains that used to be a straightforward result of increased circuit density.

                                  Once we’ve picked all the low-hanging fruit (simple optimization tricks with major & general impact) we’ll need to start seriously milking performance out of multicore and other features that actually require the involvement of application developers. (Multicore doesn’t affect performance at all for single-threaded applications or fully-synchronous applications that happen to have multiple threads – in other words, everything an unschooled developer is prepared to write, unless they happen to be mostly into unix shell scripting or something.)

                                  Moore’s law isn’t all that matters, no. But, it matters a lot with regard to whether or not we can reasonably expect to defend practices like electron apps on the grounds that we can maintain current responsiveness while making everything take more cycles. The era where the same slow code can be guaranteed to run faster on next year’s machine without any effort on the part of developers is over.

                                  As a specific example: I doubt that even in ten years, a low-end desktop PC will be able to run today’s version of slack with reasonable performance. There is no discernible difference in its performance between my two primary machines (both low-end desktop PCs, one from 2011 and one from 2017). There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.
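
                                  To make the multicore point concrete, here is a minimal sketch (standard library only; thread::scope needs Rust 1.63+) of the explicit work-splitting that single-threaded code never gets for free:

                                  ```rust
                                  use std::thread;

                                  fn main() {
                                      let data: Vec<u64> = (1..=1_000_000).collect();

                                      // Single-threaded version: uses one core no matter how many exist.
                                      let serial: u64 = data.iter().sum();

                                      // Parallel version: only faster because the developer explicitly
                                      // split the work into chunks, one thread per chunk.
                                      let n_threads = 4;
                                      let chunk_len = data.len() / n_threads + 1;
                                      let parallel: u64 = thread::scope(|s| {
                                          let handles: Vec<_> = data
                                              .chunks(chunk_len)
                                              .map(|part| s.spawn(move || part.iter().sum::<u64>()))
                                              .collect();
                                          handles.into_iter().map(|h| h.join().unwrap()).sum()
                                      });

                                      assert_eq!(serial, parallel);
                                  }
                                  ```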

                                  1. 5

                                    Performance increases have dropped from exponential to incremental, and come from piecemeal optimization tricks that can only really be done once, chasing gains that used to be a straightforward result of increased circuit density.

                                    I agree with that totally.

                                    “Multicore doesn’t affect performance at all for single-threaded applications”

                                    Although largely true, people often forget one way multicore can boost single-threaded performance: simply giving the single-threaded app more time on a CPU core, since other work runs on the other cores. Some OSes, especially RTOSes, let you control which cores apps run on specifically to exploit that. I’m not sure if desktop OSes have good support for this right now, though. I haven’t tried it in a while.
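
                                    On Linux desktops you can experiment with this from user space. A sketch, assuming the third-party core_affinity crate (API quoted from memory, so treat it as approximate):

                                    ```rust
                                    fn main() {
                                        // Enumerate the cores the OS will let us pin to.
                                        let cores = core_affinity::get_core_ids().expect("could not query cores");

                                        // Pin this thread to the last core; the hope is that the rest of
                                        // the system's load lands on the other cores, as described above.
                                        let pinned = core_affinity::set_for_current(*cores.last().unwrap());
                                        assert!(pinned, "pinning failed");

                                        // ... run the latency-sensitive, single-threaded work here ...
                                    }
                                    ```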

                                    “There isn’t a perpetually rising tide that makes all code more performant anymore, and the kind of bookkeeping that most web apps spend their cycles in doesn’t have specialized hardware accelerators the way matrix arithmetic does.”

                                    Yeah, all the ideas I have for it are incremental. The best illustration of where the rest of the gains might come from is Cavium’s Octeon line. They have offloading engines for TCP/IP, compression, crypto, string ops, and so on. On the rendering side, Firefox is switching to GPUs, which will take time to fully utilize. On the JavaScript side, maybe JITs could get a small, dedicated core. So, there’s still room for speeding the Web up in hardware. Just not Moore’s law without developer effort, like you were saying.

                          2. 9

                            Although you partly covered it, I’d say “execution of programs” is good wording for JavaScript, since it matches browser and OS usage. There are definitely advantages to them being smaller. A guy I knew even deleted a bunch of code out of his OS and Firefox to achieve that, on top of a tiny backup image. Dude had a WinXP system full of working apps that fit on one CD-R.

                            As far as secure browsers go, I’d start with designs from high-assurance security, carefully bringing in mainstream components. Some are already doing that. An older one inspired Chrome’s architecture. I have a list in this comment. I’ll also note that there were few of these because high-assurance security defaulted to just putting a browser in a dedicated partition, isolated from other apps on top of a security-focused kernel. One browser per domain of trust. Also common were partitioned network stacks and filesystems that limited the effect one partition’s use of them had on others. QubesOS and GenodeOS are open-source projects that support this, with QubesOS having great usability/polish and GenodeOS being architecturally closer to high-security designs.

                            1. 6

                              Are there simpler browsers optimised for displaying plain ol’ hyperlinked HTML documents that also support modern standards? I don’t really need 4 tiers of JIT and whatnot to make web apps go fast, since I don’t use them.

                              1. 12

                                I’ve always thought one could improve on a Dillo-like browser for that. I’ve also thought compile-time programming could make various components of a browser optional, so you could actually tune it to the amount of code or attack surface you need. That would require lots of work for mainstream engines. A project like Dillo might pull it off, though.
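
                                Rust’s cargo features are one existing mechanism for that kind of tuning; a hypothetical sketch (feature and module names invented):

                                ```rust
                                // Cargo.toml would declare the optional pieces, e.g.:
                                // [features]
                                // default = ["css"]
                                // css = []
                                // javascript = []

                                // The JS engine is only compiled into builds that ask for it,
                                // shrinking both binary size and attack surface otherwise.
                                #[cfg(feature = "javascript")]
                                mod js_engine {
                                    pub fn run(source: &str) {
                                        println!("pretending to execute {} bytes of JS", source.len());
                                    }
                                }

                                fn render(page: &str) {
                                    println!("laying out {} bytes of HTML", page.len());

                                    #[cfg(feature = "javascript")]
                                    js_engine::run(page);

                                    #[cfg(not(feature = "javascript"))]
                                    println!("JS support compiled out of this build");
                                }

                                fn main() {
                                    render("<html><body>hello</body></html>");
                                }
                                ```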

                                1. 10
                                  1. 3

                                    Oh yeah, I have that on a Raspberry Pi running RISC OS. It’s quite nice! I didn’t realise it runs on so many other platforms. Unfortunately it only crashes on my main machine; I will investigate. Thanks for reminding me that it exists.

                                    1. 2

                                      Fascinating; how had I never heard of this before?

                                      Or maybe I had and just assumed it was a variant of suckless surf? https://surf.suckless.org/

                                      Looks promising. I wonder how it fares on keyboard control in particular.

                                      1. 1

                                        Aw hell; they don’t even have TLS set up correctly on https://netsurf-browser.org

                                        Does not exactly inspire confidence. Plus there appears to be no keyboard shortcut for switching tabs?

                                        Neat idea; hope they get it into a usable state in the future.

                                      2. 1

                                        AFAIK, it doesn’t support “modern” non-standards.

                                        But it doesn’t support JavaScript either, so it’s way more secure than mainstream ones.

                                      3. 8

                                        No. Modern web standards are too complicated to implement in a simple manner.

                                        1. 3

                                          Either KHTML or Links is what you’d like. KHTML would probably be the smallest browser you could find with a working, modern CSS, JavaScript, and HTML5 engine. Links only does HTML <= 4.0 (including everything implied by its <img> tag, but not CSS).

                                          1. 2

                                            I’m pretty sure KHTML was taken to a farm upstate years ago, and replaced with WebKit or Blink.

                                            1. 6

                                              It wasn’t “replaced”: Konqueror supports multiple backends, including WebKit, WebEngine (Chromium), and KHTML. KHTML still works relatively well for showing modern web pages according to HTML5 standards, and fits the OP’s description perfectly. Konqueror lets you choose your browser engine per tab, and even switch on the fly, which I think is really nice, although it means keeping every engine you’re currently using loaded in memory.

                                              I wouldn’t say development is still very active, but it’s still supported in KDE Frameworks; they still make sure it builds, at least, along with the occasional bug fix. Saying it was replaced is an overstatement. Most KDE distributions do ship other browsers by default, if any, and I’m pretty sure Falkon is set to become KDE’s browser these days, which is basically an interface to WebEngine.

                                          2. 2

                                            A growing part of my browsing is now text-mode browsing. Maybe you could treat full graphical browsing as an exception and go to the minimum footprint most of the time…

                                        2. 4

                                          And more importantly, is there any (important to the writers) advantage to them becoming smaller? Security maybe?

                                          User choice. Rampant complexity has restricted your options to 3 rendering engines if you want to function in the modern world.

                                          1. 3

                                            When reimplementing malloc and testing it on several applications, I found out that Firefox (at the time; I don’t know if this is still true) had its own internal malloc. It was allocating a big chunk of memory at startup and then managing it itself.

                                            At the time I thought this was a crazy idea for a browser, but in fact it follows exactly the idea of your comment!
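
                                            That “grab a big chunk up front and manage it yourself” pattern is easy to demo in Rust, which lets a program swap in its own global allocator much like Firefox does. A toy bump allocator, only a sketch (it never frees, and it assumes alignments of at most 4096):

                                            ```rust
                                            use std::alloc::{GlobalAlloc, Layout};
                                            use std::sync::atomic::{AtomicUsize, Ordering};

                                            // Reserve one big arena up front, like the Firefox behaviour described above.
                                            const ARENA_SIZE: usize = 1 << 20; // 1 MiB

                                            #[repr(C, align(4096))]
                                            struct Arena([u8; ARENA_SIZE]);
                                            static mut ARENA: Arena = Arena([0; ARENA_SIZE]);
                                            static NEXT: AtomicUsize = AtomicUsize::new(0);

                                            struct BumpAlloc;

                                            unsafe impl GlobalAlloc for BumpAlloc {
                                                unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
                                                    let mut offset = NEXT.load(Ordering::Relaxed);
                                                    loop {
                                                        // Round the cursor up to the required alignment (arena base is 4096-aligned).
                                                        let aligned = (offset + layout.align() - 1) & !(layout.align() - 1);
                                                        let end = aligned + layout.size();
                                                        if end > ARENA_SIZE {
                                                            return std::ptr::null_mut(); // arena exhausted
                                                        }
                                                        match NEXT.compare_exchange_weak(offset, end, Ordering::Relaxed, Ordering::Relaxed) {
                                                            Ok(_) => return std::ptr::addr_of_mut!(ARENA.0).cast::<u8>().add(aligned),
                                                            Err(current) => offset = current, // another thread raced us; retry
                                                        }
                                                    }
                                                }

                                                unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {
                                                    // A bump allocator never frees; a real one (like jemalloc) must.
                                                }
                                            }

                                            #[global_allocator]
                                            static ALLOCATOR: BumpAlloc = BumpAlloc;

                                            fn main() {
                                                // This Vec is served from our arena, not the system malloc.
                                                let v = vec![1, 2, 3];
                                                println!("{:?}", v);
                                            }
                                            ```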

                                            1. 3

                                              Firefox uses a fork of jemalloc by default.

                                              1. 2

                                                IIRC this was done somewhere between Firefox 3 and Firefox 4 and was a huge speed boost. I can’t find a source for that claim though.

                                                Anyway, there are good reasons Firefox uses its own malloc.

                                                Edit: apparently I’m bored and/or like archeology, so I traced back the introduction of jemalloc to this hg changeset. This changeset is present in the tree for Mozilla 1.9.1 but not Mozilla 1.8.0. That would seem to indicate that jemalloc landed in the 3.6 cycle, although I’m not totally sure because the changeset description indicates that the real history is in CVS.

                                            2. 3

                                              At my day job, this week I’m working on patching a modern JavaScript application to run on older browsers (IE10, IE9, and IE8 + GCF 12).

                                              The hardest problems are due to the differing implementation details of the same-origin policy.
                                              The funniest problem was a framework that used “native” as a variable name: when people talk about the good parts of JavaScript, I know they don’t know what they’re talking about.

                                              BTW, if browser complexity addresses a real problem (instead of being a DARPA weapon to take control of foreign computers), that problem is the distribution of computation across long distances.

                                              That problem was never addressed well enough by operating systems, despite some mild attempts, such as Microsoft’s CIFS.

                                              This is partially a protocol issue, as NFS, SMB, and 9P were all designed with local networks in mind.

                                              However, IMHO browser OSes are not the proper solution to the issue: they are designed for different goals, and they cannot abandon those goals without losing market share (unless they retain that share with weird marketing practices, as Microsoft did years ago with IE on Windows and as Google is currently doing with Chrome on Android).

                                              We need better protocols and better distributed operating systems.

                                              Unfortunately it’s not easy to create them.
                                              (Disclaimer: browsers as OS platforms and JavaScript’s ubiquity are among the strongest reasons I spend countless nights hacking on an OS)

                                            1. 2

                                              “If you were to go back in time to 1987, this is probably similar to what would have replaced the Amiga if Jack Tramiel had never left Commodore.”

                                                Cool project, but I don’t think this is true. The Amiga 500 had 512KB of RAM because RAM was bloody expensive. So did the majority of competitors. Nobody would put 1.5MB in a computer at that time because it would severely reduce the number of units you could shift, for little benefit. Pretty much all software written at that point needed far less than that (even on the multitasking Amiga).

                                                Also, I believe the 65C816 did not run at 14 MHz back then. Not many chips did, and both the Amiga and the Atari ran at 7-8 MHz.

                                              1. 3

                                                The A500 could be expanded up to 7 MB though, so I don’t think it’s completely out of line.

                                                I wonder if the CPU is actually the W65C816S, which is readily available at 14 MHz. I sent an email to Stefany and asked about it.

                                                Edit: it is indeed the W65C816S from Western Design Center.

                                              1. 2

                                                At my previous job, we only used a few seconds, each. Move tasks into doing/done/blocked, and maybe say something about it. Any discussion was deferred to after standup, and those interested could stick around. We were usually around 8 people, and it would take maybe 5 minutes.

                                                1. 3

                                                  I’ve been swimming in Rust for a while (even wrote a book). I’m currently looking at Swift because I think it paints a much more attractive picture ergonomics-wise than Rust. Mostly because of the different memory handling, but also because Swift opted for the (nowadays) more classical OO style. Also I like that Swift has a REPL.

                                                      For a possibly superficial and silly reason: Slava Pestov is working on Swift, and I have mad respect for him from Factor.

                                                  1. 4

                                                    In a similar vein, Graydon Hoare is working on Swift and I have mad respect for him from Rust…

                                                    1. 2

                                                      It will be interesting to see where the proposal to add a Rust-like, opt-in ownership system to Swift goes.

                                                    1. 3

                                                      Goes to show how individual these things are. I have pretty much the opposite experience; I feel much more productive in Swift than in Objective-C, and think it’s a fine language. The biggest issues I had, when I used it regularly, were with the tooling rather than the language.

                                                      1. 4

                                                        I’m in a new country, so I’m studying the official language. It’s very hard, but also very rewarding. The natives get quite enthusiastic when I try to communicate in their language, and immediately switch back to it if I have to use English for something. They enunciate more clearly, and some want to chat for a bit and exchange language tidbits. “So, now you have to give me some [my first language] in return!”

                                                        1. 3

                                                          That was a fun and informative read, thank you for sharing!

                                                          1. 3

                                                             I agree with a lot of the points raised in the article: microservices/SOA is ultimately more about scaling your engineering team than scaling your backend. But I’d add one more useful reason for SOA that’s applicable even when you’re small: simple, robust fault tolerance. If one service is failing, you can circuit-break and isolate the failure to that particular set of functionality; when everything’s deployed as a monolith, if one piece of code starts acting up it can be very difficult to prevent the damage from spreading. For example, if someone ships a bug that chews through all the IOPS available on your EC2 machines, or exhausts all available file descriptors, or writes logs faster than you can rotate them, or… etc; it’ll hose even the working code if it’s deployed as one big bundle that services any kind of request. With small, independent services running on separate machines, you can isolate these kinds of issues much more cleanly and keep most of the backend up even when something’s gone wrong with a single service. It’s obviously not perfect (what if what’s broken is something centralized, like your deployment tools?) but it minimizes a lot of otherwise-scary bugs in practice.
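
                                                             The circuit-breaker idea is simple enough to sketch; a minimal, illustrative version (thresholds and names invented, not from any real system):

                                                             ```rust
                                                             use std::time::{Duration, Instant};

                                                             /// Minimal circuit breaker: after `max_failures` consecutive failures,
                                                             /// reject calls outright until `cooldown` has elapsed.
                                                             struct CircuitBreaker {
                                                                 failures: u32,
                                                                 max_failures: u32,
                                                                 opened_at: Option<Instant>,
                                                                 cooldown: Duration,
                                                             }

                                                             impl CircuitBreaker {
                                                                 fn new(max_failures: u32, cooldown: Duration) -> Self {
                                                                     Self { failures: 0, max_failures, opened_at: None, cooldown }
                                                                 }

                                                                 fn call<T, E>(&mut self, f: impl FnOnce() -> Result<T, E>) -> Result<T, Option<E>> {
                                                                     // While open, fail fast instead of hammering a sick service.
                                                                     if let Some(opened) = self.opened_at {
                                                                         if opened.elapsed() < self.cooldown {
                                                                             return Err(None); // rejected without calling
                                                                         }
                                                                         self.opened_at = None; // half-open: let one call through
                                                                     }
                                                                     match f() {
                                                                         Ok(v) => {
                                                                             self.failures = 0;
                                                                             Ok(v)
                                                                         }
                                                                         Err(e) => {
                                                                             self.failures += 1;
                                                                             if self.failures >= self.max_failures {
                                                                                 self.opened_at = Some(Instant::now());
                                                                             }
                                                                             Err(Some(e))
                                                                         }
                                                                     }
                                                                 }
                                                             }

                                                             fn main() {
                                                                 let mut breaker = CircuitBreaker::new(3, Duration::from_secs(30));
                                                                 for _ in 0..5 {
                                                                     // A stand-in for an RPC to a failing service; the last two
                                                                     // calls are rejected fast with Err(None).
                                                                     let result = breaker.call(|| Err::<(), _>("connection refused"));
                                                                     println!("{:?}", result);
                                                                 }
                                                             }
                                                             ```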

                                                            1. 6

                                                              it’ll hose even the working code if it’s deployed as one big bundle that services any kind of request.

                                                               You can still have separate machines that only handle certain types of requests / are optimised for different workloads, with a monolithic codebase.

                                                              I quite like building things that way, and basically build each “microservice” as a separate library, to enforce modularity. Then, link them all together. These can also be tested separately. I admit I have not tried this at massive scale, however.
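
                                                               A sketch of that structure in Rust terms (module names hypothetical): each “service” is a library with a narrow public API, and a thin binary links them together, so the modularity is enforced by the compiler rather than the network:

                                                               ```rust
                                                               // In a Cargo workspace these would be separate library crates
                                                               // (accounts/, billing/) linked by the binary; modules stand in here.

                                                               mod accounts {
                                                                   pub struct Account { pub id: u64, pub email: String }

                                                                   pub fn lookup(id: u64) -> Option<Account> {
                                                                       // Real code would hit a datastore; hard-coded for the sketch.
                                                                       Some(Account { id, email: format!("user{}@example.com", id) })
                                                                   }
                                                               }

                                                               mod billing {
                                                                   // billing depends only on accounts' public API, so each library
                                                                   // can be unit-tested on its own, as the parent comment suggests.
                                                                   use crate::accounts;

                                                                   pub fn invoice(account_id: u64) -> Result<String, String> {
                                                                       let account = accounts::lookup(account_id).ok_or("no such account")?;
                                                                       Ok(format!("invoice emailed to {}", account.email))
                                                                   }
                                                               }

                                                               fn main() {
                                                                   // The "monolith" is just the sum of its libraries.
                                                                   match billing::invoice(42) {
                                                                       Ok(msg) => println!("{}", msg),
                                                                       Err(e) => eprintln!("billing failed: {}", e),
                                                                   }
                                                               }
                                                               ```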

                                                              1. 2

                                                                 On top of alva’s comment, some setups have features to both restrict resource use and detect craziness that indicates a bug. They’ll take action ranging from notifying an admin to halting the application. Instances not using the buggy functionality will be unaffected. From there, the admins might put in a temporary filter that makes packets calling that function fail fast before they even reach the instances’ buggy code. This is removed after the application is patched.

                                                              1. 14

                                                                When I was learning C++, it felt to me like very little of that knowledge was transferable. I wasn’t learning concepts or techniques that I could apply elsewhere, just the idiosyncrasies of C++. I guess that depends on what experience you bring into it; I picked it up relatively late in my computer life. It’s come in handy at some jobs, though; there’s always that legacy system that no-one really wants to touch, and I get to feel like a badass for volunteering. So for me, the value of learning C++ is simply that I can work on existing code, as well as read/learn from cool projects that happen to be written in it. I don’t think it’s rewarding in itself, but I agree on your point about confidence.

                                                                1. 6

                                                                  it felt to me like very little of that knowledge was transferable. I wasn’t learning concepts or techniques that I could apply elsewhere, just the idiosyncrasies of C++.

                                                                   That’s a good way to phrase it; I thought that as well, but failed to articulate it. Also, C++ is the ultimate technical-interview “stump the chump” quiz-show language. I was once in a Java interview, and I have C++ on my resume because I used to use it a lot. The interviewer started in on some C++ questions. I said, “Whoa, is this a C++ position?” No, it’s Java, was the reply. “Let’s stick to Java, shall we?” I did not get an offer; I wouldn’t have accepted anyway.

                                                                  1. 4

                                                                    it felt to me like very little of that knowledge was transferable. I wasn’t learning concepts or techniques that I could apply elsewhere, just the idiosyncrasies of C++

                                                                     Same here. Later, I learned that it was features of wildly different languages merged together in a C-compatible way. Then they kept extending it. The result is a mess of a language. PreScheme and Modula-3 had cleaner, more consistent designs with plenty of features. With good design, they also compile really fast. Slow compiles were a major factor in keeping me away from C++. For modern stuff, D’s design eliminated that, with many benefits. Rust was slow to compile last I looked, but brings major benefits to justify it. Nim has nice syntax, macros, and C compatibility. Idk about its compile times.

                                                                     So, looking at the competition, I find C++ unnecessarily hard to learn and use due to its design choices. Different design choices could’ve improved its syntax, safety, compile times, or even runtime efficiency. The good news is that there are alternatives now with decent ecosystems, one with a great ecosystem. Unless you’re doing legacy work, I’d say invest your effort into one or more of those instead. Further, remember that porting legacy components doesn’t necessarily require knowing the language: one might team up with someone who knows it and have them translate the code into pseudocode or something.

                                                                    1. 5

                                                                      Yeah, Rust is my weapon of choice for new projects. Compile times are pretty bad on my old T420, but is slowly improving. I am eagerly looking forward to possible alternative backends for debug builds.

                                                                      1. 2

                                                                         I keep using C++ for two reasons. One, Bjarne is the greatest language designer because he never gave up, and he created a language that only becomes more relevant over time. Other designers give up and end up making new languages, one after another. Two, it turns out the world is a messy place, and C++ has lots of symmetry-breaking features. “Clean, elegant” languages fail to make programming as easy for humans.

                                                                        1. 4

                                                                          What is the value in a symmetry breaking feature?

                                                                          1. 2

                                                                             I think the first point is mostly about the economic and social side of it, as with C/UNIX’s spread. It can be an advantage, but it also allows for designing better languages that similarly use ecosystem power, even with lots of C++ compatibility and fewer of the disadvantages. ZL was an attempt that gets no attention. On your second point, C++ seems to have unnecessary complexity and performance costs versus its competitors, which makes it unclean or inelegant for unjustified reasons. A better C++ could be created that reduced the difficulties or gave even more benefits to justify dealing with the language’s complexity. I already named some languages doing that.

                                                                             There’s definitely a lot of critical apps written in C++, though. Very unfortunate, too, since I wanted to apply strong-assurance technology to some of them. There was hardly anything to use compared to C. The learning curve was also huge compared to some others. I had to back off on that, but I’m still thinking conceptually about translators, including ones like ZL.

                                                                      1. 2

                                                                         I have pretty basic needs that have been served well by Hopper. I definitely want to try this, though; radare seemed cool but daunting the first time I came across it, and this doesn’t look too different from Hopper.

                                                                        1. 3

                                                                          I didn’t know about USSD, that’s neat! I wonder if that’s what they use here in [redacted]; when I opened a bank account, they simply asked for my phone number, and then pushed dialogs to my phone, asking me to accept terms, set a PIN, things like that. I did not have to install anything, and I’ve never seen that UI on my phone before.

                                                                          1. 2

                                                                             Simultaneously disappointed and relieved. Disappointed because I was hoping to see a successor to UIKit/Cocoa, as they are quite tedious, IMO (why is it so hard in Cocoa to make a table view with row heights that adapt to a text view?). I’ve also been curious what frameworks designed for Swift would be like, ever since Swift was announced. They’ve done a good job adding features/annotations to Objective-C to improve interoperability (remember when T* was Optional<T>?), but it’s still a mismatch, and (thus far) a wasted opportunity.

                                                                            And I’m relieved because I don’t have a mac anymore, and this doesn’t tempt me back. :-þ

                                                                            1. 1

                                                                               Laptop-grade sounds promising, I guess. Cortex cores have been really weak so far. Hopefully this is actually better.

                                                                              1. 6

                                                                                I think their direction is more exciting than the hardware itself, to be honest. The current crop of Snapdragon 835 PCs are appealing to me, because I’d rather have outstanding battery life than top-tier performance (they’re obviously nowhere close.) So, that-but-better is an attractive prospect to me.

                                                                              1. 6

                                                                                 I quite miss the immediacy of older home computers, even though I was a bit too young to understand much of it. I could just switch on the C64 and instantly have an interactive programming environment. Type in POKE 53281,4 and watch the background change to purple. I had no idea what I was doing, but there was no friction. I’d copy code from magazines and learn by playing around with it. I’ve started sketching out a little device dedicated to running TIC-80 that I hope to put together some day.

                                                                                1. 2

                                                                                  I have a suspicion that this is partly why web development is so popular – you can manipulate the DOM and get immediate feedback without really having any idea what you’re doing.

                                                                                  1. 2

                                                                                    That’s a good point, and I share your suspicion. “Have a web browser” is basically the only requirement nowadays, then you have a decent REPL, instant-feedback GUI manipulation (and you can fiddle with any web page you’re viewing,) and so on. I think most would agree that the barrier of entry is enticingly low.

                                                                                    1. 1

                                                                                      Although I’m usually anti-web-app, one of the great developments in that space is being able to play with code in the browser without installing anything. Especially in conjunction with tutorials on learning that language. It would’ve been nice to have when I was new to programming. I had QBasic, though, so next best thing. :)

                                                                                      1. 2

                                                                                        I agree. I don’t particularly enjoy web dev, or web apps, but I would be delighted if more projects would strive for this kind of accessibility. FWIW, I think the Rust folks have done an amazing job here, considering the space Rust operates in. Cargo is just delightful, super easy to get started, to build/run a project, to generate documentation; no wrangling virtual environments, packages that fail to install for incomprehensible reasons, or the other things Python/Ruby expect me to put up with (let’s not even talk about C++.) And, of course, the rustc diagnostics are just amazing. It’s not quite QBasic though, and requires more effort before you have something fun running. But it’s clear that they consider this a priority, and it shows.

                                                                                  2. 2

                                                                                    Have you been using TIC-80 a lot? I used to spend some time messing around in PICO-8, how does it compare?

                                                                                    1. 3

                                                                                      It’s a big plus to me that TIC-80 is open-source. I also like that it exposes functionality similar to raster interrupts, so I can try some of the old-school techniques I used to see in games.

                                                                                  1. 2

                                                                                    Arm is getting serious about the laptop market.

                                                                                    1. 22

                                                                                      I would like to add that, for some websites (such as mine,) being AMP-compliant requires adding more stuff (CSS, JS, some HTML,) resulting in worse load-times. Google already incentivises performance; ensuring every website I’ve helped out on scores 100/100 on PageSpeed Insights has resulted in a significant ranking boost. Why, then, do heavier AMP pages get preferential treatment? Maybe Google’s CDN is faster than the one I use, I don’t know, but does it matter when the page loads in < 200ms anyway? Out of non-scientific curiosity, I tossed a few AMP pages into PageSpeed and Pingdom speed test, and they were all substantially slower.

                                                                                      1. 4

                                                                                         The whole AMP thing seems pretty weird to me. The intent is fairly interesting, but the actual project and its results look to me like a proof of concept, really not fit for production.

                                                                                        1. 38

                                                                                          Because the AMP initiative is fundamentally about control, namely getting more user data. This is being pushed under the guise of “fixing” problems created by the organizations/developers themselves, as this article does a good job of laying out.

                                                                                          It’s funny how some will say it is “unreasonable” to de-bloat a website, while AMP is a “good idea.” The lack of critical thinking by the developer community at large is quite scary on this issue. :(

                                                                                          1. 2

                                                                                            Having worked at a high-traffic news website, my (anecdotal) experience is that many professional frontend developers understand the problems of bloat and want to fix them. Unfortunately as always it can be a hard sell to management to fix technical debt.

                                                                                            We were lucky enough to have some excellent product owners who really fought and made thr argument that speed improvements would bring business benefits, but even then there is limited time and budget available.

                                                                                            By contrast the management sell for AMP is that Google will give you better search results for comparatively little developer time.

                                                                                            For what it’s worth, much the same value proposition is driving publishers to Apple News and Facebook’s Instant Articles, which are AMP clones to some degree. It’s partly fear of missing out on an audience.

                                                                                            1. 3

                                                                                              which are AMP clones to some degree

                                                                                              Apple News scrapes existing websites, RSS/ATOM feeds, and an apple-defined JSON spec from news sites, and presents articles to the user. Which part of that is the same as “force news sites to deliver their content using a shitty JS renderer that routes all traffic via google’s CDN” ?

                                                                                              1. 1

                                                                                                That’s fair.

                                                                                                From the publisher’s perspective it feels similar in that content is taken from your site and presented in a format largely outside your control.

                                                                                                My point was that this is accepted because publishers don’t want to lose out on a potential audience, but it’s not necessarily a good deal otherwise. For example, advertising is managed by the provider and the publisher is cut in at some set rate. It’s hard to negotiate with a giant like Google, Apple or Facebook and so you take the rate you’re given.

                                                                                                I’m actually ok with anything that forces people to rely less on advertising as a source of revenue - my personal opinion is that it’s a hostile experience and not sustainable.

                                                                                                However, I think it’s fair to say that many publications will feel that a reader on AMP/Apple News/Facebook is worth less in terms of advertising revenue than a direct website reader.

                                                                                      1. 42

                                                                                        GitLab is really worth a look as an alternative. One big advantage of GitLab is that the core technology is open source. This means that anybody can run their own instance. If the company ends up moving in a direction that the community isn’t comfortable with, then it’s always possible to fork it.

                                                                                        There’s also a proposal to support federation between GitLab instances. With this approach there wouldn’t even be a need for a single central hub. One of the main advantages of Git is that it’s a decentralized system, and it’s somewhat ironic that GitHub constitutes a single point of failure.

                                                                                        1. 17

                                                                                          Federated GitLabs sound interesting. The thing I’ve always wanted though is a standardised way to send pull requests/equivalent to any provider, so that I can self-host with Gitea or whatever but easily contribute back and receive contributions.

                                                                                          1. 7

                                                                                             git has built-in pull requests. They go to the project mailing list, and people code review via normal inline replies. Glorious.

                                                                                            1. 27

                                                                                              It’s really not glorious. It’s a severely inaccessible UX, with basically no affordances for tracking that review comments are resolved, for viewing different slices of commits from a patchset, or integrating with things like CI.

                                                                                              1. 7

                                                                                                I couldn’t tell if singpolyma was serious or not, but I agree, and I think GitHub and the like have made it clear what the majority of devs prefer. Even if it was good UX, if I self-host, setting up a mail server and getting people to participate that way isn’t exactly low-friction. Maybe it’s against the UNIX philosophy, but I’d like every part of the patchset/contribution lifecycle to be first-class concepts in git. If not in git core, then in a “blessed” extension, à la hub.

                                                                                                1. 2

                                                                                                  You can sort of get a tracking UI via Patchwork. It’s… not great.

                                                                                                  1. 1

                                                                                                     The only one of those GitHub is better at is integration with CI. It also has an inaccessible UX (it doesn’t even work on my mobile devices; I can’t imagine if I had accessibility needs…), it doesn’t track when review comments are resolved, and there’s no UX facility for viewing different slices; you have to know git stuff to figure out the links.

                                                                                                  2. 3

                                                                                                     I’ve wondered about a server-side process (either listening on HTTP, polling a mailbox, etc.) that could parse the format generated by git request-pull, and create a new ‘merge request’ that can then be reviewed by collaborators.

                                                                                                    1. 2

                                                                                                       I always find it funny that the same people who argue that email is a technology with many inherent flaws that cannot be fixed are often the same people who advocate using git’s built-in features that work over email…

                                                                                                  3. 6

                                                                                                    Just re: running your own instance, gogs is pretty good too. I haven’t used it with a big team so I don’t know how it stacks up there, but I set it up on a VPS to replace a paid Github account for private repos, where it seems fast, lightweight and does everything I need just fine.

                                                                                                    1. 20

                                                                                                      Gitea is a better maintained Gogs fork. I run both Gogs on an internal server and Gitea on the Internet.

                                                                                                      1. 9

                                                                                                        Yeah, stuff like gogs works well for private instances. I do find the idea of having public federated GitLab instances pretty exciting as an alternative to GitHub for open source projects though. In theory this could work similarly to the way Mastodon works currently. Individuals and organizations could setup GitLab servers that would federate between each other. This could allow searching for repos across the federation, tagging issues across projects on different instances, and potentially fail over if instances mirror content. With this approach you wouldn’t be relying on a single provider to host everybody’s projects in one place.

                                                                                                      2. 1

                                                                                                        Has GitLab’s LFS support improved? I’ve been a huge fan of theirs for a long time, and I don’t really have an intense workflow so I wouldn’t notice edge cases, but I’ve heard there are some corners that are lacking in terms of performance.

                                                                                                        1. 4

                                                                                                           GitLab has first-class support for git-annex, which I’ve used to great success.

                                                                                                      1. 5

                                                                                                        I like the sound of the headline. Ironically, I couldn’t read the article, as this banner covered most of the page:

                                                                                                        This site uses cookies, tokens, and other third party scripts to recognize visitors of our sites and services, remember your settings and privacy choices, and - depending on your settings and privacy choices - enable us and some key partners to collect information about you so that we can improve our services and deliver relevant ads. By continuing to use our site or clicking Agree, you agree that CBS and our key partners may collect data and use cookies for personalized ads and other purposes, as described more fully in our privacy policy. You can change your settings at any time by clicking Manage Settings.

                                                                                                        It appears I have to visit every “key partner’s” website to opt out. I use a tracking blocker extension, but still found it humorous.

                                                                                                        1. 6

                                                                                                          That is, indeed, very funny. I didn’t notice, likely due to NoScript, but it’s unfortunate that so many tech news outlets pursue such scummy tactics while preaching about privacy.