1. 57

    Meaningful is…overrated, perhaps.

    A survey of my last four jobs (not counting contracting and consulting gigs, because I think the mindset there is very different):

    • Engineer at small CAD software startup, 50K/yr, working on AEC design and project management software comfortably 10 years ahead of whatever Autodesk and others were offering at the time. Was exciting and felt very important, turned out not to matter.
    • Cofounder at productivity startup, no income, felt tremendously important and exciting. We bootstrapped and ran out of cash, and even though the problems were exciting they weren’t super important. Felt meaningful because it was our baby, and because we’d used shitty tools before. We imploded after running out of runway, very bad time in life, stress and burnout.
    • Engineering lead at medical startup, 60K/yr, working on health tech comfortably 20 years ahead of the curve of Epic, Cerner, Allscripts, a bunch of other folks. Literally saving babies, saving lives. I found the work very interesting and meaningful, but the internal and external politics of the company and marketplace soured me and burned me out after two years.
    • Senior engineer at a packaging company, 120K/yr, working on better packaging. The importance of our product is not large, but hey, everybody needs it. Probably the best job I’ve ever had after DJing in high school. Great team, fun tech, straightforward problem space.

    The “meaningful” stuff that happened in the rest of life:

    • 3 relationships with wonderful partners, lots of other dating with great folks
    • rather broken family starting to knit together slowly, first of a new generation of socks has been brought into the world
    • exciting and fun contracting gigs with friends
    • two papers coauthored in robotics with some pals in academia on a whim
    • some successful hackathons
    • interesting reflections on online communities and myself
    • weddings of close friends
    • a lot of really rewarding personal technical growth through side projects
    • a decent amount of teaching, mentoring, and community involvement in technology and entrepreneurship
    • various other things

    I’m a bit counter-culture in this, but I think that trying to do things “meaningful for humanity” is the wrong mindset. Look after your tribe, whatever and whoever they are, the more local the better. Help your family, help your friends, help the community in which you live.

    Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included). The work, though, can free up resources for you to go and do things locally to help. Meaningful things, like:

    • Paying for a friend’s healthcare
    • Buying extra tech gear and donating the balance to friends’ siblings or local teaching organizations
    • Giving extra food or meals to local homeless
    • Patronizing local shops and artisans to help them stay in business
    • Supporting local artists by going to their shows or buying their art
    • Paying taxes

    Those are the things I find meaningful…my job is just a way of giving me fuckaround money while I pursue them.

    1. 14

      I’m a bit counter-culture in this, but I think that trying to do things “meaningful for humanity” is the wrong mindset. Look after your tribe, whatever and whoever they are, the more local the better. Help your family, help your friends, help the community in which you live.

      Same (in the sense that I have the same mindset as you, but I’m not sure there is anything right or wrong about it). I sometimes think it is counter-culture to say this out loud. But as far as I can tell, despite what anyone says, most people’s actions seem to be consistent with this mindset.

      There was an interesting House episode on this phenomenon. A patient seemingly believed and acted as if locality wasn’t significant. He valued his own child about the same as any other child (for example).

      1. 9

        I pretty much agree with this. Very few people have the privilege of making their living doing something “meaningful” because we live within a system where financial gains do not correspond to “meaningful” productivity. That’s not to say you shouldn’t seek out jobs that are more helpful to the world at large, but not having one of those rare jobs shouldn’t be too discouraging.

        1. 4

          Meaningful is…overrated, perhaps.

          I think specifically the reason I asked is because I find it so thoroughly dissatisfying to be doing truly meaningless work. It would be nice to be in a situation where I wake up and don’t wonder if the work I spend 1/3rd of my life on is contributing to people’s well-being in the world or actively harming them.

          Even ignoring “the world,” it would be nice to optimize for the kind of fulfillment I get out of automating the worst parts of my wife’s job, mentoring people in tech, or the foundational tech that @cflewis talks about here.

          Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included).

          I think about this a lot.

          1. 10

            In general, I find capitalism (and being trapped inside it) to be antithetical to meaningful work, in the sense that you’ll rarely win at capitalism if you want to do good for the world, no matter what portion of the world you’re interested in helping.

            A solution I found for this is to reach a point where, financially, I don’t have to work anymore to maintain my standard of living. It’s a project in the making, but essentially, once passive income surpasses recurring costs you’re pretty much good to go. To achieve that, you can increase the passive income, diminish the recurring costs, or both (which is probably what you want to be doing; it’s what I want to be doing, anyway).

            As your passive income increases, you (potentially) get to diminish your working hours until you don’t have to do it anymore (or you use all the extra money to make that happen faster). Freedom is far away. Between now and then, there won’t be a lot of “meaningful” work going on, at least, not software related.
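            To make the arithmetic concrete, here is a toy Ruby sketch (all numbers are hypothetical, and the 4% safe-withdrawal rate, i.e. a 25x multiplier, is a common rule of thumb rather than anything from this thread):

```ruby
# Toy model of "passive income must surpass recurring costs".
# All numbers are made up for illustration.

annual_costs = 30_000   # recurring costs per year
multiplier   = 25       # 1 / 4% assumed safe-withdrawal rate

# Capital needed so that passive income covers recurring costs:
target = annual_costs * multiplier
puts target             # 750000

# Diminishing recurring costs shrinks the target by 25x the cut,
# which is why cutting costs works on both sides of the equation:
puts((annual_costs - 5_000) * multiplier)   # 625000
```

The 25x factor is just the inverse of the assumed withdrawal rate; a more pessimistic rate makes the target proportionally larger.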

            [Edit: whoever marked me as incorrect, would you mind telling me where? I’m genuinely interested in this; I thought I was careful in exposing this in a very “this is an opinion” voice, but if my judgement is fundamentally flawed somehow, knowing how and why will help me correct it. Thanks.]

            1. 8

              Agree re. ‘get out of capitalism any way you can’, but I don’t agree with passive income. One aspect of capitalism is maximum extraction for minimum effort, and this is what passive income is. If you plan to consciously bleed the old system dry while you do something which is better and compensates, passive income would be reasonable; if you want to create social structures that are as healthy as possible for as many people as possible, passive income is a hypocrisy.

              I prefer acquiring as much resource as possible (social capital, an extremely low cost of living) as fast as possible, so you can exit capitalism as quickly as possible.

              1. 1

                Are you talking about the difference between, say, rental income (passive income) and owning equities (stockpile)? Or do you mean just having a lot of cash?

                1. 1

                  Yes. If you want to live outside capitalism, you need assets that are conceptually as far from capitalism as possible, with the fewest dependencies on it, whilst still supporting your wellbeing. Cash is good. Social capital, and access to land and resources to sustain yourself without needing cash, would be lovely, but that’s pretty hard right now while the nation state and capitalism are hard to separate.

                  1. 1

                    Do you ever worry about ’70s-style (or worse) inflation eroding the value of cash? In this day and age, you can’t even live off the land without money for property taxes.

          2. 3

            Work–at least in our field!–is almost certainly not going to help humanity. The majority of devs are helping run arbitrage on efficiencies of scale (myself included).

            This 100%. A for-profit company can’t make decisions that benefit humanity as their entire goal is to take more than they give (AKA profit).

            1. 2

              Sure they can. They just have to charge for a beneficial product at a rate higher than the cost. Food, utilities, housing, entertainment products, safety products… these come to mind.

              From there, a for-profit company selling a wasteful or damaging product might still invest profits into good products/services or just charity. So they can be beneficial as well, just more selectively.

            2. 2

              I think you’re hitting at a similar truth that I was poking at in my response, but from perhaps a different angle. I would bet my bottom dollar that you found meaning in the jobs you cited you most enjoyed, but perhaps not “for humanity” as the OP indicated.

              1. 1

                What is the exact meaning of “run arbitrage on efficiencies of scale”? I like the phrase and want to make sure I understand it correctly.

                1. 5

                  So, arbitrage is “taking advantage of the price difference in two or more markets”.

                  As technologists, we’re in the business of efficiency, and more importantly, efficiency of scale. Given how annoying it is to write software, and how software is duplicated effortlessly (mostly, sorta, if your ansible scripts are good or if you can pay the Dread Pirate Bezos for AWS), we find that our talents yield the best result when applied to large-scale problems.

                  That being the case, our work naturally tends towards creating things that are used to help create vast price differences by way of reducing the costs of operating at scale. The difference between, for example, having a loose federation of call centers and taxis versus having a phone app that contractors use. Or, the difference between having to place classified ads in multiple papers with a phone call and a mailed check versus having a site where people just put up ads in the appropriate section and email servers with autogenerated forwarding rules handle most of the rest.

                  The systems we build, almost by definition, are required to:

                  • remove as many humans from the equation as possible (along with their jobs)
                  • encode specialist knowledge into expert systems and self-tuning intelligences, none of which are humans
                  • reduce variety and special-cases in economic and creative transactions
                  • recast human labor, where it still exists, into a simple unskilled transactional model with interchangeable parties (every laborer is interchangeable, every task is as simple as possible, because the expertise is in the systems)
                  • pass on the savings at scale to the people who pay us (not even the shareholding public, as companies are staying private longer)

                  It is almost unthinkable that anything we do is going to benefit humanity as a whole on a long-enough timescale–at least, given the last requirement.

                2. 1

                  Care about your tribe, but also care about other tribes. Don’t get so into this small scope thinking that you can’t see outside of it. Otherwise your tribe will lack the social connections to survive.

                  Edit: it’s likely my mental frame is tainted by being angry at LibertarianLlama, so please take this comment as generously as possible :).

                  1. 1

                    Speaking of that, is there any democratic process that we could go through such that someone gets banned from the community? Also what are the limits of discussion in this community?

                1. 6

                    I’m in the process of replacing the old, huge, heavy, battery-less laptop that I carry around for my personal stuff on the move with a hand-me-down EeePC. My brother had no need for it anymore, so I asked him to send it my way. I appreciate the form factor and the lightness. It’s not powerful, but I can run a tiny linux on top of it, and it’s gonna work great for what I want to do with it anyway.

                  For work, I’m going to be running more load tests using apib. Probably fix some nodejs code along the way as well.

                  1. 1

                      Noob question that I couldn’t figure out about 9front when I installed it on a VM: how do you manage auth? I could never work out how not to default to a login that was essentially an admin for the VM, which is not ideal if you intend to use it as a main system (which I’d like to try if possible).

                    1. 1

                      you use factotum for that. in my network i have a separate cpu server and auth server, so i kinda have to make two accounts. one account for the fs on my cpu server, and an account on the auth server. that way, users authenticate themselves using the auth server, then access the filesystem using their fs account on the cpu server. you can also use secstore to make that data persistent

                    1. 4

                      I wish syncthing allowed for an easy directory-based filtering of the stuff you want to sync. I want to be able to tell all my other syncthing-enabled devices “Here’s all my things”, and pick and choose what I want to synchronize based on my current need (in some cases, music, in some cases, photos, in the case of my backup server, all the things).

                      In a sense, what I’m missing is very much like how in torrent clients you can pick a subdirectory or other, and/or cherry pick files.

                        I’d also like a solution that enables me to paste a link to one of those things to a friend, so that they can access a single file out of my giant data bag.

                      Essentially, I want to have the first promises of “upspin”:

                      When did you last…

                        • download a file just to upload to another device?
                        • download a file from one web service just to upload to another?
                        • make a file public just to share it with one person?
                        • accidentally make something visible to the wrong people?

                      Syncthing does almost all the things I need. I just want to not start synchronization automatically for my ginormous collection of digitized pictures, movies and songs.
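                        For what it’s worth, a partial workaround today is a per-device .stignore file in the folder root: ignore everything, but un-ignore the subdirectories that particular device cares about. Patterns are matched first-match-wins, so the ! lines must come before the catch-all (the paths here are hypothetical):

                        ```
                        // .stignore on the phone: only sync music and camera photos
                        !/Music
                        !/Photos/Camera
                        *
                        ```

                        It’s not the torrent-style per-file picker, and it has to be edited per device, but it does keep a ginormous collection from synchronizing automatically.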

                      1. 2

                          For the mobile on-demand portion, there’s a thing called Syncthing-lite for Android. I found it today; I’ll test it out later. I’m hopeful.

                        1. 1

                          I hadn’t seen that yet – thanks! The “official” Android app is OK – but setup is a bit of a pain.

                        2. 1

                          agreed on all counts. it’s a bit like the promise of https://en.wikipedia.org/wiki/Perkeep – but that’s a ways off

                        1. 1

                          I expect it would be a very small meetup, but I might be interested in making the trip from Mtl to TO.

                          1. 1

                            I’d go to an MTL meetup, TO is a bit too far!

                            1. 1

                              We can meet halfway in Ottawa.

                          1. 2

                              I’m back from vacation this week. I’ll be catching up on email, and probably figuring out what’s happening to me in the next few weeks. My future is a bit unclear team-wise; I don’t know where I’ll end up, so I’m kind of in limbo for now.

                              Until I know what’s up, I’ll just continue what I was doing, which is a bit of Ruby that grabs a swagger file to do a (very synthetic) performance test of the various endpoints it describes. The idea is to run a short test at different concurrency values to see how well each endpoint holds up at a given level of concurrency. Nothing fancy, and definitely very artificial.
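                              The swagger-walking half might look something like this sketch (the inline spec fragment and its field names are a made-up miniature of a real swagger file, and the concurrency loop and actual HTTP calls are left out):

```ruby
require 'json'

# A tiny swagger-style fragment inline, standing in for the real file.
spec = JSON.parse(<<~JSON)
  {
    "basePath": "/api",
    "paths": {
      "/users":      { "get": {}, "post": {} },
      "/users/{id}": { "get": {} }
    }
  }
JSON

# Enumerate METHOD + path pairs to feed into a load-testing tool.
endpoints = spec["paths"].flat_map do |path, ops|
  ops.keys.map { |verb| "#{verb.upcase} #{spec['basePath']}#{path}" }
end

puts endpoints
```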

                            1. 3

                                So correct me if I’m wrong, but other than the large investment of performing these audits, the only reason a business would be upset about this is if it is quietly selling its users’ data. It makes me believe the GDPR is actually a really good thing more countries need to implement.

                              1. 1

                                  If unable to comply, a business has a few options: fight it out in court, which will cost a pretty penny; refuse to do business with the EU, which in the case of international businesses probably means a loss in revenue; or fork over the money, though unless you figure out a way to fix your stuff, you can be slapped again.

                              1. 9

                                We spent two Christmases ago in the ICU at the local children’s hospital, where my six month old daughter had open heart surgery. As a result, we gained a new appreciation for the good work of the Ronald McDonald Houses, and so we give them a gift every year, as well as dropping off food donations at Sick Kids. I encourage people to do the same, if one is looking for a worthy cause.

                                ETA: She’s doing great, thanks for asking. Then and now.

                                1. 2

                                    My youngest had heart surgery this year, and we also used a Ronald McDonald House; they provide an invaluable service. Modern health care is a wonder. Glad to see she got better!

                                1. 2

                                  GDPR is covered by trashing encryption keys.

                                  1. 2

                                    I’d like trashable per-customer keys to be a good answer, but:

                                    • You have to back up the keys (or risk losing everything), and those backups need to be mutable (so you’re back to square one with backups)
                                    • Your marketing department still want a spreadsheet of unencrypted customer data
                                    • Your fraud department need to be able to efficiently identify similar customer records (hard when they’re all encrypted with different keys)
                                    • Your customer support department wants to use SAAS instead of a crufty in-house thing (and answer users who tweet/facebook at them)
                                    1. 3

                                      You have to back up the keys (or risk losing everything), and those backups need to be mutable (so you’re back to square one with backups)

                                      Generally backups are done daily and expire over time. GDPR requires that a user deleting itself is effective within 30 days, so this can be solved by expiring backups after 30 days.

                                      Your marketing department still want a spreadsheet of unencrypted customer data

                                      Depending on what marketing is doing, often aggregates are sufficient. I’m not sure how often marketing needs personally identifiable information.

                                      Your fraud department need to be able to efficiently identify similar customer records (hard when they’re all encrypted with different keys)

                                      Again, aggregates are usually sufficient here. But to do more one probably does need to build specialized data pipeline jobs that know how to decrypt the data for the job.

                                      Your customer support department wants to use SAAS instead of a crufty in-house thing (and answer users who tweet/facebook at them)

                                      I’m not quite sure what this means so I don’t have a response to it.

                                      1. 1

                                          You also have to make sure re-identification is not possible… This is quite challenging, and there are no guidelines as to what extent it should be achieved.

                                        1. 1

                                          Generally backups are done daily and expire over time. GDPR requires that a user deleting itself is effective within 30 days, so this can be solved by expiring backups after 30 days.

                                          Fair point - that’s really only a slight complication.

                                          Depending on what marketing is doing, often aggregates are sufficient. I’m not sure how often marketing needs personally identifiable information.

                                          Marketing don’t like being beholden to another team to produce their aggregates, but this is much more of an organizational problem than a technical one. Given the size of the fines I think the executive team will solve it.

                                          Again, aggregates are usually sufficient here. But to do more one probably does need to build specialized data pipeline jobs that know how to decrypt the data for the job.

                                          Fraud prevention is similar in difficulty to infosec, and it can hit margins pretty hard.

                                          There are generally two phases: detecting likely targets, and gathering sufficient evidence.

                                            For instance, I worked on a site where you could run a contest with a cash prize. Someone was laundering money through it by running lots of contests and awarding the prizes to their own sockpuppets (which was bad for our community, since the sockpuppets kept trying to enter the contests).

                                          The first sign something was wrong came from complaints that obviously-bad entries were winning contests. We found similarities between the contest holder accounts and sockpuppet accounts by comparing their PII.

                                            Then we queried everyone’s PII to find out how often they were doing this, and shut them down. I’m not clear how we could have done this without decrypting every record at once (I suppose we could have done it in an ephemeral DB and then shut it down after querying).

                                          Customer support

                                          For instance, lots of companies use (eg) ZenDesk to help keep track of their dealings with customers. This can end up holding information from emails, phone systems, twitter messages, facebook posts, letters, etc.

                                          This stuff isn’t going to be encrypted per-user unless each of your third-party providers happen to also use the technique.

                                          Summary: It’s not a complete technique, but you’ve gotten past my biggest objections and I could see it making the problem tractable.

                                      2. 1

                                        Lobsters is open source. Anybody want to make a patch to make it use per-user keys? I’m curious to see what’s involved.

                                        1. 1

                                          Good question though: what happens if a citizen of the EU uses his right to be forgotten? Does the user have a shiny “permanently forget me” button? The account deletion feature seems to fall a bit short of that?

                                          1. 1

                                            I suspect it’s “the site admin writes a query”.

                                        2. 1

                                            Actually, you are wrong: you have to make sure that users’ data is portable, meaning that it can be exported and transferred to someone else, and you cannot keep data you do not need. You also have to be able to show users what data you have about them, so if you cannot decrypt what you have in order to show it, you are not compliant.

                                          1. 1

                                            Those are two separate requirements of GDPR, and being able to export a user’s data in a reusable format is only required if they haven’t asked for their data to be deleted.

                                            I think you’re missing a key part. If a user asks for their account to be deleted, you don’t need to be able to make their data portable anymore, you just need to get rid of it. If you delete the encryption key for your user’s data, you can no longer decrypt any data you have on a user - which means legally you don’t have that data. There is nothing to show the user, or make portable.
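                                              A toy Ruby sketch of that key-deletion idea (the XOR keystream below is a deliberately fake cipher standing in for real authenticated encryption such as AES-GCM, and all names and structure are made up):

```ruby
require 'digest'
require 'securerandom'

# Toy sketch of crypto-shredding: encrypt each user's data under a
# per-user key, so deleting the key makes the stored ciphertext junk.
# The XOR keystream below is a fake cipher standing in for real
# authenticated encryption (e.g. AES-GCM); never use it for real data.

def keystream_xor(key, data)
  stream = Digest::SHA256.digest(key)
  stream += Digest::SHA256.digest(stream) while stream.bytesize < data.bytesize
  data.bytes.zip(stream.bytes).map { |a, b| a ^ b }.pack('C*')
end

KEYS    = {}  # user_id => key (would live in a key-management store)
RECORDS = {}  # user_id => encrypted PII

def store_pii(user_id, pii)
  KEYS[user_id]    = SecureRandom.random_bytes(32)
  RECORDS[user_id] = keystream_xor(KEYS[user_id], pii)
end

def read_pii(user_id)
  keystream_xor(KEYS[user_id], RECORDS[user_id])
end

def forget(user_id)
  KEYS.delete(user_id)  # RECORDS[user_id] (and any backup of it) is now unreadable
end

store_pii('alice', 'alice@example.com')
raise unless read_pii('alice') == 'alice@example.com'
forget('alice')
# RECORDS['alice'] still exists, but without the key it is unrecoverable.
```

The point is that backups and replicas can keep the ciphertext forever; only the small key store has to honor the deletion.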

                                            1. 2

                                              I see your point and that indeed works only for deletion requests.

                                        1. 5

                                            I am trying to minimize manually checking websites for updates, so I just download everything and look at the list in a text editor. I also try to generally increase the fraction of things online that I read by downloading them, converting them to text using document.documentElement.innerText (with some minor extra scripts to put hyperlink targets inside the text), and opening the result in an editor. Of course I don’t bother to delete either the HTML or the text afterwards (and I — well, my scripts — do record source URLs).

                                          1. 2

                                            Why did you decide to browse this way?

                                            1. 5

                                              Well, there are multiple things.

                                                I consider most web design actively harmful, as in: a text dump has minor inconveniences, but the site as designed is usually even less readable. Comment threads are sometimes an exception, and in the case of comment threads Elinks is usually better than innerText (but it has other drawbacks; maybe I should find a way to combine the best of both worlds).

                                                I want to have tools that gradually reduce the attack surface of web browsing. The grab-then-read workflow (and once a Firefox instance exits, all processes of the corresponding user are killed) will hopefully let me gradually increase the sandboxing.

                                              This workflow means that if I save something, I actually see what I have saved.

                                              Most of the sites see almost-normal Firefox visits; and I do have an option to apply the grabbing script to something opened in an interactive Firefox instance (which is still UID-isolated, network-namespaced etc.), which might be in a state that is hard to obtain automatically (for example, some subset of threads is loaded to a greater depth).

                                              1. 2

                                                  I’m working towards something similar, except that, as much as possible, I want to send the resulting data to my printer (or maybe an e-reader, if I get one for Christmas?) as a batch job every morning. I was planning on using a dockerized Chrome that I found somewhere. How are you automating Firefox to do this? Selenium? Print-to-PDF seems to be missing from the Selenium API, so I might have to use another tool to get my PDFs.

                                                1. 4

                                                  No, I cut out the middle man. I just use Marionette and the official Marionette Python client from Mozilla. Which is used to execute Javascript code sometimes generated by bash scripts, but oh well. I also use networking namespaces to allow each instance to have port 2828 for Marionette.

                                                    Marionette allows execution of Javascript code in the context of the Firefox UI. For example (the code is lifted from Firefox’s tests, which are the main use of Marionette), Components.classes["@mozilla.org/gfx/printsettings-service;1"].getService(Components.interfaces.nsIPrintSettingsService) seems to evaluate to an instance of nsIPrintSettingsService. Hopefully some browsing through the XUL reference could give you a solution for printing in the current Firefox release; no guarantees when something will change…

                                                  Another option is to run Firefox in its own (virtual) X session, run window.print() then find the print dialog and send it the needed input events.

                                                  1. 1

                                                    Are your scripts available somewhere? Does a write-up of your method exist? I’d be a huge fan of using that.

                                                    1. 2

                                                        A separate problem is that I need to create/delete a ton of users, which requires root access; my current permission-check code for that is part of a Lisp project where I use sinit as PID 1, and most of the system-management stuff is performed inside an SBCL process.

                                                      I hope to clean up and write up that Lisp part at some point…

                                                      Are you interested enough to participate in cleanup of the part relevant to your interests (and probably in implementation of an alternative UID-spawning backend, as you are interested only in the Firefox part)?

                                                      1. 1

                                                        I’m interested, sure, but I can’t say in all honesty that I’d have enough time to inject significant effort in the project. Is the code already on a public repository somewhere, or is that in the future too? I’d rather not promise anything, but I really would like an opportunity to touch some Lisp code.

                                                        1. 1

                                                          Well, there are too many assumptions to just put it in the public repository and hope anyone could reproduce it (some parts assume my Lisp daemon, some parts assume Nix — the package manager — is available, various parts use my scripts initially written for other reasons, etc.) Have I mentioned I feel significantly less comfortable when more than one assumption is broken on something I use as a laptop/desktop?

                                                            I could set up a repository for that, put it up layer by layer, and ask you to check whether simple tests work in your environment (for each layer). Then at some point the simple test will be «please check if it correctly downloads 20 entries from Schneier’s blog starting with {URL}». I am not asking you to write much code for that, but I need some feedback (and some positive reinforcement) to undertake it.

                                                            If you are willing to skim and run as root a trimmed-down version of my Common Lisp system-management daemon (you don’t run as root code provided by conspicuously pseudonymous strangers on the Web without skimming the code first, right?), even then I would just need to separate the relevant parts of my setup, without writing much new code.

                                                          In any case I plan to eventually publish some rewritten version of all that; hopefully in February 2018, as a write-up to submit to European Lisp Symposium (this would hopefully be about Lisp-for-system-policy, and controlling Firefox would be one of the features).

                                                          1. 1

                                                            I think you underestimate the value of reading code even without the ability to run it.

                                                            But deciding to publish it at any date is generous of you, so don’t read this as me pressuring you to move your schedule up :)

                                                            1. 2

                                                              OK, I tried to look at what can be pushed as-is. But even cleaning up the first step (a network namespace wrapper script, with passthru done via socat) turns out to be not completely trivial… It still leaves socat processes behind from time to time (which doesn’t matter as much for my specific case, where the lack of persistency incentivizes me to run a reaper anyway, but it obviously should be cleaned up, and I failed to do so cheaply).

                                                              https://bitbucket.org/me9bpg75mony/marionette-grab

                                                  2. 1

                                                    I understand where you are coming from. Thanks for sharing your approach, which I guess is most likely unpopular.

                                              1. 1

                                                I’ve been thinking about writing a Go program to parse out a swagger file and output commands for use in a script so that I could automate some form of endpoint testing.

                                                It turns out I can do that in one clever line of Ruby. The rest of my week is going to be about making the script better and more clever so that I can automate my testing on future projects.
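                                                In practice it probably takes a few more lines once defaults are handled, but the idea can be sketched in Ruby roughly like this. This is my own guess at the approach, not the actual one-liner: the method name swagger_to_curl, the swagger.json filename, and the https default are assumptions, and it expects a Swagger/OpenAPI 2.0 spec with host, basePath, and paths keys.

                                                ```ruby
                                                require 'json'

                                                # Sketch (not the original one-liner): emit one curl command
                                                # per path+method pair of a parsed Swagger/OpenAPI 2.0 spec.
                                                def swagger_to_curl(spec)
                                                  host = spec['host'] || 'localhost'
                                                  base = spec['basePath'] || ''
                                                  spec.fetch('paths', {}).flat_map do |path, ops|
                                                    ops.keys.map { |verb| "curl -X #{verb.upcase} https://#{host}#{base}#{path}" }
                                                  end
                                                end

                                                # Usage, assuming a local swagger.json:
                                                #   puts swagger_to_curl(JSON.parse(File.read('swagger.json')))
                                                ```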

                                                1. 2

                                                  I’m reading Fearless Change. So far it’s pretty interesting, though I don’t think I’ll necessarily gain a lot of wisdom from reading it. I’m hoping that it will fix a few bugs in the way I try to champion change in the workplace, and maybe teach this old dog a few new tricks.

                                                  1. 1

                                                    I hope you’ll come back and report whether it worked!

                                                  1. 1

                                                    Trying to figure out clever ways to test performance on a given project.

                                                    Lies, damned lies and benchmarks.
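                                                    On the “lies” point, Ruby’s built-in Benchmark module is one way to keep a common lie in check (Ruby chosen here purely for illustration; the labels and workloads are made up). Benchmark.bmbm runs a rehearsal pass before the measured pass, precisely because naive one-pass timings misreport warm-up and GC costs:

                                                    ```ruby
                                                    require 'benchmark'

                                                    # Rehearsal pass first, then the measured pass, to reduce warm-up noise.
                                                    Benchmark.bmbm do |bm|
                                                      bm.report('concat') { s = +''; 100_000.times { s << 'x' } }
                                                      bm.report('join')   { Array.new(100_000, 'x').join }
                                                    end
                                                    ```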

                                                    1. 4

                                                      I host my own instance of Pleroma on Google’s free tier machine; I’m @otremblay@thezombie.net

                                                      1. 1

                                                        It looks like I need to create an account to try to follow you at https://thezombie.net/@otremblay?

                                                        1. 1

                                                          You’re not supposed to need that; I might be behind on my updates.

                                                          1. 1

                                                            Try https://thezombie.net/users/otremblay, or search for @otremblay@thezombie.net from your instance.

                                                            1. 1

                                                              Ah, searching worked!

                                                      1. 4

                                                        The Plan 9 C compilers are fast. Really fast. I remember compiling kernels served from remote filesystems in ~6 seconds… Does marvels for quick turnaround time…

                                                        1. 5

                                                          Yes. The whole Plan 9 toolchain is a joy to use, and it’s amazing to see how the Plan9front people have kept it up to date and usable, with working SSH clients, wifi drivers, USB 3, hardware virtualization that can run Linux and OpenBSD, and the cleanest NVMe driver I’ve ever seen.

                                                          I actually use the system regularly for hacking on things; while it’s definitely not the most practical choice, I really enjoy it.

                                                          1. 3

                                                            Wait up, wifi drivers? I need to set that up on one of my several gazillion laptops posthaste.

                                                            1. 1

                                                              9front uses OpenBSD’s wireless firmware, so if your card works on OpenBSD, it’ll probably work on 9front.

                                                              1. 3

                                                                It has far fewer drivers, though. You’ll probably have good luck with an older Intel card, but you should check the manual. As with all niche OSes, don’t expect it to work on random hardware out of the box. And as with many niche OSes, older thinkpads are usually a good bet for working machines.

                                                                1. 2

                                                                  Yes, good point. Now that I think about it, even then the support for wireless is quite bare. When I ran 9front on a ThinkPad a couple of years ago, I think I recall the Centrino Wireless-N line of cards working well. For anyone interested, here are the docs.

                                                          2. 1

                                                            What made it (or makes it) so fast? Did it have to leave out some other feature to achieve that? Or was it just the Plan 9 source? I remember hearing that it had no ifdefs or other preprocessor instructions, which let it compile so quickly.

                                                            1. 3

                                                              The Plan 9 source was definitely a part of it (include files did not include other files), but the compiler itself also eschewed optimizations in favour of fast code generation. The linker wasn’t that fast, though.

                                                              Here’s a quote from the original papers that came out in the early nineties:

                                                              “The new compilers compile quickly, load slowly, and produce medium quality object code.”

                                                              All in all the kernel was a few megabytes in size, compiled from several hundred thousand lines of code. That is considerably less than the core of the Linux kernel at the time, and doesn’t count the myriad drivers Linux had that Plan 9 didn’t. More here: https://9p.io/sys/doc/compiler.html

                                                          1. 3

                                                            This is pretty horrible. I really hope it doesn’t make its way up north to Canada, but the Internet being what it is, I can’t see how the impact won’t be felt.

                                                            1. 12

                                                              It kind of varies, ranging between Excellent and Exceptional. It’s pretty flexible, to be honest. I work at a large corporation and it’s pretty laid back. Part of it is also that I refuse to work overtime; if a job requires frequent overtime, it’s just not the gig for me. I have 3 kids and a wife; I can’t be bothered spending all of my time at work.

                                                              1. 3

                                                                This is also my situation, except now I’ve got 4 kids.

                                                              1. 5

                                                                When I was but a wee child, a colleague told me that UML would fundamentally change the way code was written, that it was the next revolution. I didn’t really believe him back then, mostly because I hated his guts, but it’s almost ten years later, I have decent experience now, and I like to think I’m also slightly wiser.

                                                                “I assure you this one tool will fix all our problems”: isn’t that exactly what we talk about when we talk about silver bullets? Don’t get me wrong, I’m convinced AI has a very important place in our collective future as programmers (heck, even the present for a lucky few). But experience has shown me that it’s unlikely to be a magical problem-solving thing, very much like UML was not. I’m sure history is littered with such broken promises about things that are nowadays tools we use daily.

                                                                1. 2

                                                                  Exactly. And it’s actually easy to spot the problem right in the text, where the author says that “we specify some constraints on the behavior of a desirable program” and the “program” is then deduced by a neural network. Shifting the actual hard part — decision making — to some other stage and not considering it a part of coding still doesn’t make it go away.

                                                                1. 2

                                                                  I’m not really following this; I’m just aware that it’s a privacy nightmare and that it’s running on pretty much all the hardware I own. So I have a question: does this mean we’re anywhere near finding a way to reliably turn it off?

                                                                  1. 7

                                                                    On hardware from mid-2008 and earlier, the ME can be completely erased.

                                                                    After 2008, the ME is required for booting (specifically the bup “bring-up” module). Additionally, if the ME is missing then the CPU will reboot every thirty minutes. The me_cleaner project is able to neuter the ME by deleting most of the modules. Certain modules (especially bup) need to remain intact. It works with most chips, but check the wiki for compatibility.

                                                                    For Skylake and onward, there is also a HAP (High Assurance Platform) flag which, if set, disables the ME after boot. The ME is still required for powering on, though.

                                                                    Purism has some really great blog posts documenting their struggles with the ME: https://puri.sm/posts/deep-dive-into-intel-me-disablement/

                                                                  1. 5

                                                                    What a goofy article. It’s essentially just a bunch of straw man arguments attempting to defend crappy coding. Most people aren’t writing code for an obfuscated coding contest or working in an industry without established best practices.

                                                                    Just because NASA’s “best practices” aren’t the same as a small time web developer’s “best practices”, doesn’t mean that no “best practices” exist. It means a competent developer (or team) needs to understand their particular situation and decide for themselves the relevant “best practices” to use.

                                                                    “If the value your code provided to your organization is sufficiently higher than the cost of maintenance, it can’t really be said to be ‘bad’ code.”

                                                                    That’s the only semi-reasonable argument in the whole article, and it doesn’t hold up because more readable, “better” code would decrease maintenance costs, making it an even better value for the company.

                                                                    1. 2

                                                                      Well, that last part is just one dimension of bad. The code could also be aesthetically bad, unidiomatic, inefficient, ill-organized, or a plethora of other flavors of bad. Now, you might want to optimize for value and not care about all the other ways code can be bad, and that is perfectly okay. But if you decide to give all variables people’s names instead of something meaningful, I’m gonna call the code pure crap no matter how valuable it is. “Bad” is a complex thing, and oftentimes a meaningless one unless qualified.