1. 12

    I was also at Deconstruct! It was a fantastic conference with speakers from a really diverse range of perspectives and experiences. A few that stood out to me:

    Sandi Metz gave what I thought was a really solid analysis of what a lot of teams get wrong in OOP. It definitely had some immediately actionable takeaways for me and the code base that I work on in my day job. I really like the idea that conditionals -> complexity, and that you should try as best as possible to remove them from the procedural-style code that orchestrates objects and move that decision into choosing which types of objects to create in the first place. It mostly consisted of her most recent two blog posts: Breaking up the Behemoth and What does OO Afford?
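
    A minimal Python sketch of that idea (hypothetical names, not the talk's actual example):

```python
# Hypothetical sketch of the idea (not the talk's actual example).
# Before: procedural code branches on kind at every call site.
def render_branchy(item):
    if item["kind"] == "header":
        return item["text"].upper()
    else:
        return item["text"]

# After: the conditional moves into a single factory; the orchestration
# code just sends the same message to whatever object it gets.
class Header:
    def __init__(self, text):
        self.text = text

    def render(self):
        return self.text.upper()

class Body:
    def __init__(self, text):
        self.text = text

    def render(self):
        return self.text

def make_item(kind, text):
    # The one remaining conditional: deciding which type to create.
    return {"header": Header, "body": Body}[kind](text)
```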

    Nabil Hassein’s talk was really powerful. I don’t really know what to write about it here but it was a pretty sobering look at how we in the business of creating technology affect the world around us, and how we in the “Global North” consume resources far in excess of those in the “Global South”. The talk seemed to really resonate with the crowd and he received a standing ovation afterwards. When the video comes out I’ll definitely be sharing it with people.

    Allison Parrish’s talk seemed to be a big hit with a lot of people. As far as I can tell it expanded on work she has previously presented on applying algorithms to the poetic manipulation of English text. She’s previously talked about it in terms of applying JPEG-like compression to text to see what happens. She talked about her recent efforts to apply machine learning techniques to this end via semantic analysis, as well as some playful text transformations via searches/random walks in semantic space and pronunciation space. The resulting poetry was very impressive!

    Tom 7’s talk was pure fun, technical weirdness. Halfway through, he gave us the punch line, which was that his slides were being run through the video output of an NES. He stuck a Raspberry Pi inside of an NES cartridge, emulated an SNES, and played it back through the NES! There was some meta-analysis of his own humor and of the presentation itself (as it was happening!) in there too, which I thought was both funny and interesting.

    Pablo Meier talked about distributed systems and what we might desire in a programming language that treats distribution of work as a first-class operation (spoiler alert: he ended up talking about golang). He expanded on how a lot of these ideas come from Erlang, and on the interesting, perhaps underappreciated/underutilized innovations also present in Erlang, relating back to its supervisor process model.

    Elle Vargas’ talk covered the history and investigation of the 2016 election interference, focusing especially on DEFCON 25’s “Voting Village” and the frankly disturbing findings (voting machines compromised within minutes, and even some real voter data found on one of the test machines, which were procured from previous state elections). She had a great call to action at the end in terms of how technically-proficient people can be civically engaged.

    Anjana Vakil, who has a background in linguistics, talked about the relationships between human languages and programming languages, especially with respect to how a language can shape your thinking. Her thesis was that programming languages are fundamentally human languages intended for human consumption, and that we should examine how those languages inform our reasoning and how they can affect or be affected by our cultural contexts.

    1. 3

      Sandi Metz has posted her slides for “Polly Want a Message”. It’s an interesting refactoring, and I hope a video of the talk is posted online. As I read, I was mentally doing a functional refactoring, mostly centered on a Line type with a line number and string. From there, most of the strategy objects she extracts instead become functions to be composed. Her final version of Listing on slide 297 reminded me strongly of a blog post I wrote a few years ago on composition.
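
      A rough sketch of what I mean, in Python rather than Ruby (hypothetical, not from the slides):

```python
# Hypothetical functional version: each "strategy" is a Line -> Line
# function, and the final renderer is just their composition.
from dataclasses import dataclass
from functools import reduce

@dataclass
class Line:
    number: int
    text: str

def upcase(line):
    return Line(line.number, line.text.upper())

def number_prefix(line):
    return Line(line.number, f"{line.number}: {line.text}")

def compose(*fns):
    # Apply fns left to right.
    return lambda x: reduce(lambda acc, f: f(acc), fns, x)

render = compose(upcase, number_prefix)
```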

      1.  

        I hope that the talk ends up in video form at some point, because I think there’s something missing from the slides.

        I didn’t quite put together a functional refactoring in my head, but I was wondering about one the whole time. There was something that felt a little… uncanny about the preponderance of strategy objects in play.

        I wasn’t sure how I felt about the “now we’re down to one execution path” slides. This is a refactoring, so the same number of decisions are being made, and the same combinations of execution paths exist in the code base; they’re just being selected differently. The increased isolation is good for testability, which means that if the code is properly isolated you can get away with a linear increase in testing as opposed to a quadratic one. But I’m split as to whether the numbers at the end are a fair assessment.
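
        The arithmetic behind the linear-vs-quadratic claim (my gloss, not from the slides):

```python
# m variants of one decision composed with n variants of another:
# covering every combined path is quadratic-ish, isolation is linear.
m, n = 3, 4
combined_paths = m * n   # 12 end-to-end tests to hit every combination
isolated_tests = m + n   # 7 unit tests if each piece is tested alone
```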

        The code does seem somewhat improved, but it also hides complexity more now.

        Thought provoking slide deck, and I’m very curious as to what didn’t make it to the slides.

    1. 15

      Tangential comment: I assume Google funded this in some way (either by paying a person to do it or offering money to the open source project). But, AFAIK, Git is not a Google project, so I don’t really like that this introduction is coming from their blog as if it’s theirs. Maybe the official release is somewhere else? Maybe I’m missing something and all this is kosher.

      1. 14

        Most of the Git core team is employed by Google to work on Git. The maintainer Junio Hamano is a Google employee whose job is to work on Git.

        That is why this announcement is happening on the Google blog.

        1. 8

          Git 2.18 is not yet released, so this is more of a call for testing. The Google-specific part here is that you can test against googlesource.com, because Google has deployed a v2-enabled server.

          1. 11

            It reads like an official announcement on behalf of the git project, though, while not being on a git-related domain, which is what’s somewhat surprising. Well, the first sentence does. The rest of the post wouldn’t have raised my eyebrow, but this part also confused me on first read regarding on whose behalf “we” is speaking here (Google? git? both?):

            Today we announce Git protocol version 2, a major update of Git’s wire protocol…

            1. 3

              Google employs many git and mercurial developers. Very few organizations do source control on the scale of Google so it makes sense for them to fund developers of the tools they use.

              1. 10

                Neither I, nor I think @mjn, would disagree that Google does a lot with source control and probably spends a lot of money on supporting git. My concern/issue is that git is not a Google project, so it doesn’t quite feel right that what feels like an official announcement should be on their website.

                1. 3

                  A Google employee wanted to share some open source work they’d been doing, so they used a company blog. That doesn’t seem weird to me.

                  edit: I guess it’s worth adding that it wasn’t really announced in this blog post. You could have seen the discussions about this if you followed the git mailing lists.

                  1. 6

                    Google could have done a better job in this post explaining the relationship between Google, the author and the git project. One phrase would have made a ton of difference. For example, “I am John Foo, a Google employee and a member of the git core team” (with a link to some sort of proof on the git website)

                    1. 2

                      Well, there is such a phrase, but at the end:

                      By Brandon Williams, Git-core Team

          2. 1

            Did you mean Git is not a Google project?

            1. 1

              Fixed, thanks.

            1. 1

              tl;dw?

              1. 5

                According to the guy behind handmade hero:

                The fact that we currently have hardware vendors shipping both hardware and drivers (with USB and GPUs being two major examples), rather than just shipping hardware with a defined/documented interface, a la x64 or the computers of the 80s, is a very large contributor to the fact that we have basically 3 consumer-usable OSes, and each one is well over 15 million lines of code. These large codebases are a big part of the reason that using software today can be rather unpleasant.

                He proposes that if hardware vendors switched from shipping hardware+drivers to shipping hardware that was well-documented in how it is controlled, so that most programmers could program it by feeding memory to/from it (which he considers an ISA of sorts), we’d be able to eliminate the need for drivers as such, and be able to go back to the idea of a much simpler OS.

                I haven’t watched the whole thing yet, but those are the highlights.

                1. 7

                  Oh I would so, so, so, love that to happen…..

                  …but as a guy whose day job is at that very interface I will point this out.

                  The very reason for the existence of microcomputers is to soak up all the stuff that is “too hard to do in hardware”.

                  Seriously, go back to the original motivations for the first intel micros.

                  And as CPUs have become faster, more and more things get “winmodemed”.

                  Remember ye olde modems? Nice well defined rs-232 interface and standardized AT command set?

                  All gone.

                  What happen?

                  Well, it’s partly that instead of having a separate, fairly grunty/costly CPU inside the modem and a serial port… you could just have enough hardware to spit the I/Qs at the PC and let the PC do the work, and shift the AT command set interpreter into the driver. The result: cheaper, better modems, and a huge pain in the ass for open source.

                  All the h/w manufacturers regard their software drivers as an encryption layer on top of their “secret sauce”, their competitive advantage.

                  At least that’s what the bean counters believe.

                  Their engineers know that the software drivers are a layer of kludge to make the catastrophe that is their hardware design limp along enough to be saleable.

                  But to bring their h/w up to a standard interface level would require doing some hard (and very costly) work at the h/w level.

                  Good luck convincing the bean counters about that one.

                  Of course, WinTel regards the current mess as a competitive advantage. It massively raises the barriers to entry to the market place. So don’t hold your breath hoping WinTel will clean it up. They created this mess for Good (or Bad, depending on your view) reasons of their own.

                  1. 1

                    All the h/w manufacturers regard their software drivers as an encryption layer on top of their “secret sauce”, their competitive advantage.

                    I thought the NDAs and obfuscations were about preventing patent suits as much as competitive advantage. The hardware expert who taught me the basics of the cat-and-mouse games in that field said there are patents on about everything you can think of in implementation techniques. The more modern and cutting edge, the denser the patent minefield. Keeping the internals secret means they have to get a company like ChipWorks (now TechInsights) to tear it down before filing those patent suits. Their homepage prominently advertises the I.P.-related benefits of their service.

                    1. 2

                      That too definitely! Sadly, all this comes at a huge cost to the end user. :-(

                  2. 1

                    The obvious pragmatic problem with this model is that hardware vendors sell the most hardware (and sell it faster) when people can immediately use their hardware, not when they must wait for interested parties to write device drivers for it. If the hardware vendor has to write and ship their own device drivers anyway, writing and shipping documentation is an extra cost.

                    (There are also interesting questions about who gets to pay the cost of writing device drivers, since there is a cost involved here. This is frequently going to be ‘whoever derives the most benefit from having the device driver exist’, which is often going to be the hardware maker, since the extra benefit to major OSes is often small.)

                1. 4

                  I currently mostly host some websites and file backups on my current VPS. I’ve run a couple instances of an IRC bot in the past, but don’t host it currently.

                  Websites
                  Utilities
                  • Linx as a file/pastebin server
                  • Syncthing for my keepass file
                  • Weechat for IRC purposes
                  Code repositories (using Fossil)

                  All of these various sites and services (other than weechat) are currently behind a single nginx instance.

                  1. 12

                    For when you want to take your python/rust/haskell and make it look like APL

                    1. 6

                      APL.swift is a Swift package that adds the APL operators.

                    1. 8

                      In which the Microsoft OneNote team write their own database, and seemingly don’t regret the decision.

                      1. 4

                        OneNote just corrupted my notes locally, so who knows, maybe they should have followed the general advice?..

                        1. 2

                          Speaking of OneNote’s local notebook format, it seems to be documented (also this one), but it’s quite complex - I’ve never seen an implementation of it other than Microsoft’s own.

                          It might be interesting to poke around with…

                        1. 3

                          I think the biggest thing that would be nice for a language designed for working remotely is encouraging designs that make it easy to isolate parts of your system during development. This leans in a functional/pure-data sort of way, but generally, reducing the need for VPN/WAN network IO to a database would be the biggest gain I could see for my current day-to-day work in a remote-specific context. Being able to quickly, safely, and correctly break pieces out (Erlang seems better than average at this, but I’m not very experienced with it) would be the sort of thing I’d aim for.

                          1. 2

                            So, if you want to pick a language (or languages) that enables remote work, the main thing I’d suggest is picking one that has enough jobs to allow for a large selection. By far the most important thing in a remote situation is going to be being part of a team that has a good culture of communication and documentation; this will trump whatever language you choose.

                            I’ve worked in C# since 2013, and have worked remotely doing so twice, once because I was a high schooler that was cheaply available to do so after an on-prem internship and had done good work during said internship, and a second (and current) time because I had the recommendation of a friend into the job.

                            I’m a bit hesitant to add any more advice on the matter, because the rest of how to get a remote job really varies from person to person, depending on if you’re skilled at negotiating, or can get the recommendation of a friend, or can build a very in demand skillset (mobile comes to mind based on other comments) that makes it so that you don’t have to negotiate as hard.

                            All of my friends (2 doing C# web dev, one working in the iOS stack) that are currently working remotely ended up in a rather long process of getting to that point, either in job searching or waiting for the right opportunity, but it can and does happen outside of the Ruby/php/nodejs sphere; it might just take longer to find a job that is a good fit. Like any job search, networking is helpful.

                            1. 20

                              The author doesn’t mention the popular GUI library that’s the best fit for his use case – TK. (I can’t blame him – TK has poor PR, since it’s marginally less consistent than larger and more unwieldy toolkits like GTK and QT, while having many of the drawbacks of a plain X implementation.)

                              That said, the fact that TK is the easiest way to go from zero to a simple GUI is frankly pretty embarrassing. There’s no technical reason GUI toolkits can’t be structured better – only social reasons (like “nobody who knows how to do it cares enough”).

                              1. 13

                                The problem is that TK still has terrible looking widgets. Just because UI fashion has moved away from consistent native look and feel doesn’t mean TK is passable.

                                1. 12

                                  TTK mostly takes care of this, by creating a Look and Feel that matches up with the platform in question.

                                  1. 3

                                    TK ships with TTK, which provides native widget styles for every major platform. It has shipped that way for nine years.

                                    1. 1

                                      I was not aware of TTK, thank you! I tried out TK a few times and seeing how awful it looked made me leave it really quickly for other technologies.

                                      1. 4

                                        TTK has been around for a long time, and built into TK for a long time too. It’s a mystery to me why they don’t enable it by default. I discovered it six years after it got bundled!

                                        1. 1

                                          I tried to look into it a little bit today, but it looks like there is pretty much only one getting-started guide for it, written in Python. Do you know any guides for it in other languages?

                                          1. 2

                                            Not really. It provides native-styled clones of existing widgets, so if it’s wrapped by your target language, all you should need to do is import it and either overwrite the definitions of your base widget-set or reference the ttk version instead (ex., by running ‘s/tk./ttk./g’ on your codebase).
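
                                            A minimal Python/tkinter sketch of that substitution (hypothetical example; it needs a display to actually run):

```python
# Hypothetical example: referencing the ttk clones instead of the
# classic tk widgets is usually the whole migration.
import tkinter as tk
from tkinter import ttk

def build_ui(root):
    # ttk.Label / ttk.Button pick up the platform's native theme,
    # unlike the flat-looking classic tk.Label / tk.Button.
    ttk.Label(root, text="Hello from ttk").pack(padx=10, pady=5)
    ttk.Button(root, text="Quit", command=root.destroy).pack(pady=5)

# To run it (needs a display):
# root = tk.Tk(); build_ui(root); root.mainloop()
```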

                                  2. 5

                                    When he put out the JSON protocol, Tcl/Tk came right to mind. This is exactly how people do UI with Python and tkinter.

                                    1. 3

                                      Interesting — I have almost no experience with TK. I will look into it, thanks!

                                      1. 3

                                        TK is used by Mozart/Oz for the GUI, with a higher level library QTk on top of it. It works well and is easy to program with.

                                    1. 9

                                        APL-oids are probably the biggest example (J, Jelly, k/q and so on), followed by stack-based and/or concatenative languages (Forth, PostScript and 8th being examples of the former, Cat, Joy and Factor being examples of the latter). After that would likely be functional programming languages (especially with things like Haskell’s point-free notation), and then function chains in OO-esque languages.

                                        Having played around with a stack-based language: naming things is hard in the documentation sense, but not naming things is also hard, in the getting-it-right sense, because then you have to track a lot of implicit state and/or have the compiler show you what the stack looks like at a given point.
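
                                        A toy concatenative evaluator in Python (hypothetical, just to illustrate the implicit-stack bookkeeping):

```python
# Toy concatenative evaluator: nothing is ever named, so the "state"
# at any program point is whatever happens to be on the stack.
def run(program):
    stack = []
    for word in program.split():
        if word == "dup":
            stack.append(stack[-1])
        elif word == "+":
            stack.append(stack.pop() + stack.pop())
        elif word == "*":
            stack.append(stack.pop() * stack.pop())
        else:
            stack.append(int(word))
    return stack

# "3 dup * 4 +" squares 3 and adds 4; following it means mentally
# tracking the stack: [3] -> [3, 3] -> [9] -> [9, 4] -> [13]
```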

                                      I don’t recall the name of it at the moment, but I ran across a programming language that allowed you to just re-use the name of the type if you didn’t have more than one instance of that type in a given scope, which was certainly a novel approach.

                                      1. 2

                                          I just set up a sit/stand desk converter so that I’ll be able to switch between the two at my desk, and will be around for the release of the sprint work that’s been going on for the past two weeks at work.

                                        1. 16

                                          Unless I am misunderstanding “Brutalism”, shouldn’t a “brutalist web site” be something like a no- or minimal-css web page? The examples the article gives, while pretty in their own right, don’t appear (to me) to go counter to much of everyday web-design one tends to see.

                                          1. 12

                                            That’s my understanding, too. My touchstones for web Brutalism are this motherfucking website and this other motherfucker.

                                            1. 6

                                                There’s another school that considers the Instagram iOS app an example of brutalism. Other examples:

                                              Brutalist websites
                                              Brutalism in UX

                                              I consider the two sites that you linked to more an example of design minimalism (only using as much as you need), as opposed to the minimalism of information that seems to typify “modern” design today.

                                                I’d also be curious if y’all would consider something like my wiki as an example of brutalism.

                                              1. 3

                                                I’d say it’s pretty brutalist. It probably wasn’t designed or implemented with accessibility in mind. :)

                                              2. 2

                                                Those are just trivial documents. It would be like calling a lost pet poster an example of brutalist graphic design!

                                              3. 9

                                                  I would imagine that to draw a useful analogy to architecture, we have to imagine what it is we’re saving or optimizing for under a digital brutalism (in the same way that architectural brutalism is cheaper, easier, faster, less specialized, in addition to its aesthetic impact). As a programmer, I would imagine therefore that digital brutalism would have to at least partially be motivated by a desire for simplicity in construction: avoiding a reliance on external resources that might not be available, avoiding a reliance on technologies or techniques that require specialization, avoiding techniques that require complexity in order to be correct (in favor of technologies that, while maybe less rich, can be correct more simply), and optimizing resource usage for browser speed and compatibility.

                                                  I think it’s perfectly fair to associate the above motivations with a particular aesthetic if they happen to be accompanied by one (after all, when I think about architectural brutalism I don’t think about the equipment, specialized or un-, that was used to construct it). But to say anything useful or interesting with the term, it can’t just be about the way they look.

                                                1. 4

                                                    But to say anything useful or interesting with the term, it can’t just be about the way they look.

                                                  Isn’t look (legibility) in print/web design fundamental? If we’re talking about a brutalist web design, it’s certainly not brutalist because the author used tables for layout, though that might contribute to a look that has hard edges (defining sections/compartments, etc)–an element often associated with brutalism.

                                                  1. 1

                                                    Legibility and aesthetic are not the same, though. As I said above, it’s not that aesthetics are irrelevant; but using modern whizbang web design and tech, and just replacing your full-screen white-people-typing-together background video with graphics and video of a different aesthetic, is just another flavor of the status quo.

                                                    1. 2

                                                      Legibility and aesthetic are not the same, though.

                                                      I agree with this.

                                                      and just replacing your full-screen white-people-typing-together background video with graphics and video of a different aesthetic

                                                      What I think you’re saying is that Brutalism is a philosophy that can’t simply be replicated by copying an aesthetic. Is that right?

                                                      1. 1

                                                        Sure, that’s fair.

                                                2. 8

                                                  Wikipedia has a nice passage which I think can be applied in spirit to websites:

                                                  Brutalist buildings are usually formed with repeated modular elements forming masses representing specific functional zones, distinctly articulated and grouped together into a unified whole. Concrete is used for its raw and unpretentious honesty, contrasting dramatically with the highly refined and ornamented buildings constructed in the elite Beaux-Arts style. Surfaces of cast concrete are made to reveal the basic nature of its construction, revealing the texture of the wooden planks used for the in-situ casting forms. Brutalist building materials also include brick, glass, steel, rough-hewn stone, and gabions. Conversely, not all buildings exhibiting an exposed concrete exterior can be considered Brutalist, and may belong to one of a range of architectural styles including Constructivism, International Style, Expressionism, Postmodernism, and Deconstructivism.

                                                  Another common theme in Brutalist designs is the exposure of the building’s functions—ranging from their structure and services to their human use—in the exterior of the building.

                                                  So don’t do elaborate styling, expose how the site was built, and organize the site into functional zones in ways visible to the user. A thoughtful version of non-CSS, non-Javascript, image-light design might be the best Web “version” of Brutalism, with visual grouping being the only organization. Getting people to give up CSS and Javascript might be a bit much, but in terms of basic construction, HTML is the equivalent of concrete (the “raw structural members” of the Web site) and Brutalism is very much about not hiding or ornamenting that.

                                                  1. 3

                                                    Getting people to give up CSS and Javascript might be a bit much, but in terms of basic construction, HTML is the equivalent of concrete (the “raw structural members” of the Web site) and Brutalism is very much about not hiding or ornamenting that.

                                                    I have no trouble giving up JavaScript. I’d prefer to use mostly semantic HTML5, with just enough CSS to make the text more readable (because browser defaults are trash).

                                                    1. 2

                                                      Great excerpt and follow up. I think you can keep CSS so long as what it’s doing is (a) visible in source behind the scenes, maybe even removable and (b) keeps the fundamental structure of the site or page. It might even give it the structure.

                                                    2. 4

                                                      Maybe GeoCities was brutalist? http://oneterabyteofkilobyteage.tumblr.com/

                                                      1. 9

                                                        The Classic Geocities, with all of its animated GIFs and background images and using images as dividers, is too ornamented to be Brutalist. It’s best described as Vernacular, which Wikipedia describes as:

                                                        Vernacular architecture is an architectural style that is designed based on local needs, availability of construction materials and reflecting local traditions. At least originally, vernacular architecture did not use formally-schooled architects, but relied on the design skills and tradition of local builders. However, since the late 19th century many professional architects have worked in this style.

                                                        The Geocities Vernacular was definitely the “architecture from people who weren’t architects” Vernacular.

                                                        In fact, the Terabyte Of The Kilobyte Age describes Geocities as Vernacular:

                                                        http://blog.geocities.institute/archives/5983

                                                        More to the point, Vernacular design is bottom-up unplanned design, with no large-scale goals in mind, whereas Brutalism is top-down planned design, and capable of designing in the large.

                                                        1. 1

                                                          I always thought of it as rococo (in the sense that it’s maximalist in the distribution of small decorative features), but I don’t really have a strong background in the history of architecture.

                                                      2. 1

                                                        That would be in line with architectural brutalism, but the term came out of critiques of architectural brutalism (which basically came down to “it’s ugly because it breaks convention in non-decorative ways”). “Web brut” has been used as an insult for longer than its current (3-4 year) rehabilitation.

                                                        I think both senses are useful for different reasons. Web brutalism in the sense of avoiding bloated web standards that necessitate bloated browsers is important for usability and for minimizing waste, while web brutalism in the sense of rejecting faux-minimalist aesthetics in favor of direct & straightforward mapping of form to function is important as a UX concern. (I’ve argued for the latter in https://lobste.rs/s/cyopoi/against_ui_standardization and the former in https://hackernoon.com/on-the-web-size-matters-e52ac0f5fdbe and https://hackernoon.com/an-alternate-web-design-style-guide-1aae8d0b5df5)

                                                        1. 1

                                                          Regarding your hackernoon article where you say

                                                          Use only the following tags: a, b, body, br, center, h1, head, i, li, ol, p, table, th, title, td, tr, ul. All other tags are unnecessary distractions. If, for some reason, you must include images, the img and align tags are also suitable.

                                                          what would your thoughts be on directly hosting markdown, probably without literal html, instead of the “more powerful” full-HTML standards and deviations? Maybe protocols like Gopher could serve as a base for this?

                                                          1. 1

                                                            I consider hosting markdown marginally more reasonable than hosting html, but to be honest I don’t think we, as writers, should be controlling how the text is formatted except in the rare cases when the formatting is truly necessary and part of the point (like, if we’re writing concrete poetry or something).

                                                            In other words, something like gophermaps-as-document-format seems ideal: we get jump links, but literally nothing else.

                                                            The alternate web design style guide, despite apparently looking pretty radical to a lot of web devs, was very much a compromise – in the vein of “oh, if we MUST have web standards at all, at least ditch everything other than HTML 1.0!”

                                                      1. 10

                                                        I don’t see the point (other than coolness for the sake of coolness) of trying to recreate a vector-based, inherently imperative language on a virtual machine that does not lend itself to imperative programming in general. BEAM primitives do not include tools to deal with mutability, at all, and array support is non-existent in BEAM [1]. Compiling a vector-based language into BEAM is just combining the worst of both worlds.

                                                        A more viable approach would be to extend BEAM itself to include efficient mutable vectorized operations, but it’s still unclear who would need a hybrid of Octave and Erlang and why.

                                                        [1] “Arrays are implemented as a structure of nested tuples.” – https://stackoverflow.com/questions/28676383/erlang-array-vs-list

                                                        1. 4

                                                          Maybe just for intellectual challenge like with the demoscene.

                                                          Alternatively, as wild speculation: you have a legacy application in Erlang but no library or language for easily expressing or checking certain types of solutions. You build something to handle that which integrates with the legacy app. I’ve seen that kind of thing done with C, Java, etc.

                                                          1. 2

                                                            It would be interesting to see some kind of cross-section between APL and a language like translucid. So instead of having mutable arrays you have multidimensional streams.

                                                            1. 1

                                                              Does it need to be truly mutable? I mean, if we are talking about a BEAM language as APL-ish as Erlang is PROLOG-ish, then we’re not really talking about underlying semantics at all.

                                                              1. 2

                                                                On HN, the author of the article mentions that he’s far more interested in a good way of expressing set operations and business rules in those terms than in writing a high performance array processing language for Erlang.

                                                                1. 1

                                                                  Makes sense.

                                                                  Does BEAM have good FFI support? Generally speaking, if you want to make matrix operations efficient, you first make them possible with a friendly front-end and then you replace the backend with calls to existing hyper-optimized FORTRAN libraries, like both numpy & julia do. It means breaking out of the VM though, & potentially breaking some of Erlang’s guarantees.

                                                                  1. 3

                                                                    Erlang has ways of calling into C, but those are generally considered something to be careful of, because they can block the scheduler, though NIFs marked as dirty seem to be able to trade off a little bit of performance to get those scheduling properties back.

                                                                    For the use-case that the poster was thinking of, however, I think it might make more sense to start by backing the arrays with Erlang binaries, but I could be wrong. I’d certainly be interested to see what it’d look like.
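
                                                                    To make the binary-backed idea concrete, here’s a toy sketch in Python (purely illustrative; the names and the fixed float64 layout are my own invention, not anything from Erlang/OTP): the array lives in one flat immutable buffer, and an “update” copies the buffer rather than mutating it, which is roughly the constraint BEAM’s immutable binaries impose.

```python
import struct

# One float64 per element; an Erlang binary would similarly be a flat,
# immutable sequence of bytes.
ITEM = struct.Struct("d")

def make_array(values):
    # Pack the values into a single immutable bytes buffer.
    return b"".join(ITEM.pack(v) for v in values)

def get(buf, i):
    # Reading is cheap: just an offset calculation into the buffer.
    return ITEM.unpack_from(buf, i * ITEM.size)[0]

def put(buf, i, value):
    # "Update" builds a new buffer; the original stays untouched,
    # mirroring the immutability of BEAM terms.
    off = i * ITEM.size
    return buf[:off] + ITEM.pack(value) + buf[off + ITEM.size:]

a = make_array([1.0, 2.0, 3.0])
b = put(a, 1, 9.5)  # a is unchanged; b differs only at index 1
```

                                                                    The obvious cost is the O(n) copy on every write; a real implementation would presumably chunk the binary into some tree structure to amortize that, which is where it’d get interesting.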

                                                            1. 18

                                                              In which @itamarst reads a pop-sci book and dishes out advice based on it with little evidence to support it.

                                                              But a few more serious thoughts:

                                                              1. The opening creates a pleasant fairy tale, but one should be distrustful of it. Your manager probably isn’t doing so great if your project is in that situation. But maybe you aren’t so hot either. It’s hard to know the difference. Just declaring that in any arbitrary situation the problem is that the manager stinks and the programmer isn’t working smart is ridiculous.
                                                              2. Some projects just require a lot of work no matter how smart you want to work. There are plenty of causes for a situation where the problem simply requires a lot of hours: maybe hiring failed, maybe the scope is large. But you can’t just “work smarter” your way out of every situation, and just because something requires a lot of time doesn’t mean you have a fixed mindset.

                                                              I believe this is another blog post taking a complicated situation, lossily simplifying it, and dishing out B-grade advice based on the simplification.

                                                              1. 7

                                                                You’re right on all counts, but I still think developers need to put their collective foot down and refuse to work unpaid overtime. Tired programmers are as prone to mistakes as tired workers in any other trade or industry; the main difference is that a tired factory worker might get himself or others hurt or killed, whereas a tired developer might commit a “clever hack” that ends up leaking PII for millions of unsuspecting users after a year or two in production.

                                                                1. 6

                                                                  Clarification: are you against overtime, or unpaid overtime? Because even if I’m getting paid for overtime, I can still make the mistake you stated.

                                                                  1. 6

                                                                    If companies have to pay for overtime, they will be (in theory anyway) less likely to make their workers work overtime, and will instead have some actual planning and project management in place so that working 12 hours straight is not needed.

                                                                    1. 4

                                                                      I’m against unpaid overtime. Making all waged and salaried work by non C-suite workers eligible for overtime pay might deter some managers from allowing/persuading/gaslighting/bullying/coercing/etc workers to do OT, though the current time-and-a-half rate might not be sufficient to serve as a deterrent. Triple pay for OT might be better.

                                                                      If not, and my boss is ignorant enough to take that kind of risk and incur that sort of cost, then I’m not going to say no to more money.

                                                                      1. 6

                                                                        I’m against overtime, it causes problems that need to be cleaned up. It’s as dumb as throwing bodies at the problem.

                                                                    2. 3

                                                                      No matter how much work something requires, it doesn’t need to force people to work long hours; they can just do it slower.

                                                                      1. 3

                                                                        If you work in a production based environment with deadlines, the amount of work you have is directly correlated with how many orders the sales team racks up, and you can very easily end up having to work long hours from necessity to meet the demand and deadlines. Many industries also have off seasons and busier times, especially those that deal with schools or taxes.

                                                                        1. 3

                                                                          Can you explain why software development timelines need to follow academic or fiscal schedules?

                                                                          1. 3

                                                                            Software development in a business setting is all about keeping other people’s promises.

                                                                            1. 1

                                                                              Yeah, I get that. If sales people are promising new functionality by a certain date then there’ll be pressure to build it by then. And I can now see why the school calendar could matter. What I’m not getting is why tax season makes a difference.

                                                                              1. 3

                                                                                Some software changes are necessitated by tax law changes or other external deadlines. I’m working on one right now.

                                                                                1. 2

                                                                                    Because tax laws change over time. If you’re writing software that deals with something your government changed with a given deadline, you’re going to have to deal with it by that deadline, or face whatever consequences come from not being in compliance with the law.

                                                                                  A big one in the past few years was all the changes related to the Affordable Care Act in the US. It isn’t that every year brings a change like this (to my current knowledge), but they are a thing with real external deadline pressure.

                                                                      1. 3

                                                                          At work I’ll be going on my third two-week sprint, working on UI cleanup and other such things for the product I’m working on.

                                                                        Away from work, rather enjoying the April Fool’s joke we have going on right now, and enjoying some Tradewinds Legends.

                                                                        1. 2

                                                                          The other relatively easy thing to do is that if you find yourself jumping to a given location more than once an hour, set up a shell alias to take you there.

                                                                          1. 3

                                                                              I think the lead-in is a little misleading.

                                                                            “In a world where we can do everything declaratively, the rest of the article would’ve wrote itself, after finishing the first sentence.”

                                                                            “As you might have guessed by now, or known already, declarative programming has something to do with declaring intentions and getting back the expected results.”

                                                                              The quote from Wikipedia is much more sober, accurate, and informative, and even more brief and punchy:

                                                                            Wikipedia describes declarative programming as “a style of building structure and elements of computer programs, that expresses the logic of a computation without describing its control flow.”

                                                                            and makes it more apparent that this is another way of abstracting away from the bare metal, like high-level languages abstract away from machine code.

                                                                            That said, the actual article did not make it easy for me to understand how Python could be used, actually, declaratively.

                                                                            1. 2

                                                                              First of all, thank you for taking the time to share your opinion on the article.

                                                                                The quote from Wikipedia is much more sober, accurate, and informative, and even more brief and punchy

                                                                                I find the quote from Wikipedia quite beautifully written as well. However, I think it doesn’t convey to the reader other forms of declarative programming, like that of make rules.

                                                                                “This article is about declarative programming in python and intended for consumption by humans”. Assuming I have this sentence saved in dec_article.txt, I can express this as the following make rule:

                                                                              dec_article.md: dec_article.txt
                                                                                      declarative_writer dec_article.txt
                                                                              

                                                                              and makes it more apparent that this is another way of abstracting away from the bare metal, like high-level languages abstract away from machine code.

                                                                                The goal of declarative programming isn’t to abstract away from the bare metal, though. Even C, one of the most imperative languages, which maps very nicely to the bare metal, abstracts away from it. Declarative programming abstracts away from the control flow, from imperatively stating, in an instruction-by-instruction manner, how a result is to be reached.
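
                                                                                As a toy contrast in plain Python (all names here are made up for illustration, not from the article): the imperative version spells out the control flow itself, while the declarative version only registers handlers and leaves the control flow to a generic dispatcher.

```python
# Imperative: the caller spells out the control flow step by step.
def handle_imperative(event):
    if event == "connect":
        return "connected"
    elif event == "disconnect":
        return "disconnected"
    else:
        return "ignored"

# Declarative: handlers declare what they respond to; a generic
# dispatcher owns the control flow.
HANDLERS = {}

def on(event):
    def register(fn):
        HANDLERS[event] = fn
        return fn
    return register

@on("connect")
def _(event):
    return "connected"

@on("disconnect")
def _(event):
    return "disconnected"

def handle_declarative(event):
    # Look up the declared handler, falling back to a default.
    return HANDLERS.get(event, lambda e: "ignored")(event)
```

                                                                                The second half is roughly the shape of decorator-based declarative APIs in general: you state what should respond to what, and when and how it runs is someone else’s problem.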

                                                                              That said, the actual article did not make it easy for me to understand how Python could be used, actually, declaratively.

                                                                              That’s a big failure on my side, I’ll update the article to include a comparison between the example API in the article written as it is vs the API written in a non-declarative manner, and explain the differences between them further.

                                                                              1. 1

                                                                                I’m no expert on Makefiles, but what’s declarative about them? It’s do this, then this, then this. You can compose them, but that’s not the same thing?

                                                                                1. 2

                                                                                    It’s more that Makefiles will work out what actions to perform, and in what order, based on the state of the filesystem and the rules you provide. You can usually write rules in any order, and Make will do a topological sort to figure out what actually needs to happen based on the dependencies between steps.
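
                                                                                    Here’s a toy sketch of that core idea in Python (hypothetical rules, and no timestamp checking, which real Make also does): you only declare target -> dependencies, and a depth-first walk derives the execution order.

```python
# Declarative part: targets and their dependencies, in no particular order.
rules = {
    "app": ["main.o", "util.o"],
    "main.o": ["main.c"],
    "util.o": ["util.c"],
}

def build_order(target, rules, done=None, order=None):
    # Depth-first topological sort: visit dependencies before the target.
    done = set() if done is None else done
    order = [] if order is None else order
    if target in done:
        return order
    for dep in rules.get(target, []):
        build_order(dep, rules, done, order)
    done.add(target)
    order.append(target)
    return order

order = build_order("app", rules)  # dependencies always precede their targets
```

                                                                                    Real Make adds the filesystem check on top of this: a step only actually runs if its target is older than its dependencies.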

                                                                                    SQL is also known for being a largely declarative language, and has a similar process where you describe what needs to come from where, and the query parser and planner for the DB in question works out one of a number of plans for how to actually fetch the data. Splitting up what you want from how it’s implemented allows for after-the-fact optimizations in SQL databases, such as adding indexes, without necessarily having to change the queries themselves.

                                                                                    I don’t think Make is as declarative as SQL (and so has little room to optimize things after the fact, though you can add in rules for how to treat certain types of files), but I’d consider it decently declarative as-is.

                                                                              2. 1

                                                                                I’ve added a comparison[1] between the declarative implementation from the article and an imperative one. Let me know if that makes it any clearer:)

                                                                                [1]https://nullp0tr.com/pages/declarative_apis_1.html

                                                                                1. 1

                                                                                  Hi @nullp0tr,

                                                                                      Thank you for your response! I read your follow-up post and tried to follow it. I suspect my lack of enthusiasm for this line of explanation is that, for a person unfamiliar with declarative programming (like me), the example really doesn’t tell me anything. You’ve just punted on all the logic and are using whatever the library supplies you. Also, your declarative implementation carries no state and no __init__, so I don’t see how it implements the interface either :( .

                                                                                      To me, make and makefiles are the most easily accessible example of a declarative style: you declare how to make the products, and make figures out in what order to run the individual steps to make that happen.

                                                                                      There is a lot of hype around the word “declarative”, and I thank my very rudimentary poking around in Haskell for allowing me to grasp the core concepts behind it: no side effects (though that’s a fun argument for make :) ), immutability, and lazy evaluation.

                                                                                  It’s a fun paradigm, and I like the idea, and I should try it out in a mid-size project once to see how I feel about it.

                                                                                  1. 1

                                                                                    Hey @kghose,

                                                                                        It’s a shame I couldn’t get my points across to you; I’ll keep your experience with the article in mind for next time :)

                                                                                    You’ve just punted on all the logic and are using whatever the library supplies you. Also, your declarative implementation carries no state and no init so I don’t see how it implements the interface either :( .

                                                                                        I’m very fond of puns and riddles, but you’re absolutely right to point out that they’re not necessarily the best learning aid. I’ve hidden the low-level Bluetooth logic because it’s completely irrelevant to the point of the article. The available Bluetooth functionality was to be taken as an established baseline, just like any functionality offered by the standard library, and the state wasn’t actually hidden; it’s just that there isn’t much state. The article shows the __init__() function for RapidAPI, which has just one attribute :)

                                                                                    To me make and makefiles are the the most easily accessible example of a declarative style: you declare how to make the products and make figures out in what order to run the individual steps in order to make that happen.

                                                                                        I’m thinking maybe I should’ve explained a bit more about Bluetooth Low Energy? I think the reason make is much more accessible for you is your knowledge of the compilation process.

                                                                                    There is a lot of hype around the word “declarative” and I thank my very rudimentary poking around of Haskell for allowing me to grasp the core concepts that are behind it: No side-effects (though that’s a fun argument for make :) ), immutability and lazy evaluation.

                                                                                        Haskell, and what you’re describing here, is pure functional programming, which is only a subset of declarative programming, one that adds more constraints like referential transparency (no side effects) and immutability. Lazy evaluation is an implementation detail that can be added even to imperative languages. So make having side effects is okay, because make isn’t a functional language.

                                                                                    1. 1

                                                                                      Hi,

                                                                                          The article shows the __init__() function for RapidAPI which just has one attribute :)

                                                                                      Ah I see. Thanks.

                                                                                      I’m thinking maybe I should’ve explained a bit more about Bluetooth Low Energy? I think the reason why make is much more accessible for you, is due to your knowledge of the compilation process.

                                                                                      I don’t think that’s it.

                                                                                          I don’t need to know the details of the compilation process, just that this problem is well defined as a dependency graph, and that a makefile (mostly) does not care about the order in which you declare the products. make is the tool responsible for creating the dependency graph and then walking it to figure out what needs to be done and in what order, which is the core of declarative programming: the runtime/compiler figures out the dependency graph from your variable and function declarations.

                                                                                      Explaining more about Bluetooth Low Energy would probably just be tangential to all this.

                                                                              1. 21

                                                                                I would go one step further–I only grudgingly sign NDAs and assignments of invention too and would prefer if they weren’t there.

                                                                                This single issue is the thing that most makes me think we need collective bargaining and unions.

                                                                                Given the MO of modern companies, our ideas and skills are all that we have.

                                                                                1. 8

                                                                                  Yeah, I don’t think I’d work anywhere that did assignments of invention. I just don’t think I could be paid enough to make me give that up. I did once sign a noncompete, but it wasn’t this restrictive; it only applied to businesses that were making the exact same kind of product (Laboratory Information Management Systems).

                                                                                  1. 7

                                                                                    When I joined my last company they had an assignment of invention section in their paperwork, but provided a place to list exemptions. I listed so many things on that form, from github side projects to theoretical ideas I’d been kicking around. When I handed the packet in to HR they didn’t know how to handle the fact that I had actually filled that stuff out. They ended up removing the assignment of invention section completely.

                                                                                    I see a distinction between companies that prey on their employees and those that build in language and terms like this because legal told them to, or because it’s “boilerplate”. Neither is acceptable, and in many regions that take workers’ rights seriously they are explicitly illegal. I don’t see that happening in the US anytime soon, though.

                                                                                    If it’s important to you, don’t sign. If it’s important to you and your company is a bunch of idiots, change the contract before you sign it and watch them blindly put a signature on it. Who knows, maybe you’ll end up owning all their IP instead.

                                                                                    1. 0

                                                                                      I can definitely understand why a company would want you to sign an assignment of invention, and I don’t think they’re inherently good or bad. They’re just a trade-off like anything else. If you really want to start your own company one day, or side projects are really important to you, then that’s something to consider strongly before signing an assignment of invention. Just like flexibility would be something to consider before taking a job if you really wanted to be able to take off work, with no advance notice, to surf if the waves happen to be good and then make up those hours later.

                                                                                      1. 11

                                                                                        Safety bars on looms and e-stops on lathes are a trade off like anything else…

                                                                                        This is a local minimum of error that companies are stuck in due to investors and lawyers (and greedy founders) trying to cover their own asses.

                                                                                        It’s basically become industry standard, but seeing as how we’re all getting screwed on compensation (given the growth we enable) compared to older days, the bargain no longer makes sense. Further, the troubling trend of “well, it’s probably no big deal to work on , just let us know and/or we don’t care anyway” is basically living with a gun to your head.

                                                                                        If it is such a non-issue that most companies will overlook it, fucking leave it off the conditions of employment. If it is such an issue, compensate the engineers properly.

                                                                                        1. 3

                                                                                          I think we need to create a list of businesses that do this, so that I can avoid ever applying to them, and also of ones that don’t, so that I can weigh applying to them.

                                                                                          1. 1

                                                                                            Safety bars on looms and e-stops on lathes are a trade off like anything else…

                                                                                            Apples and oranges. Those safety features don’t really affect the employer, but they have a huge effect on how safe the job is for all of the employees that use looms and lathes. Assignments of invention do have an effect on the employer, and if you happen to be an employee without any aspirations of starting your own business then they don’t really affect you. Even if you do have that aspiration, a good company will be more than happy to stamp prior-discovery paperwork to approve side projects that don’t have anything to do with the company’s area of business, so an assignment of invention will only affect you if you want to compete with your employer.

                                                                                            Edit:

                                                                                            If it is such an issue, compensate the engineers properly.

                                                                                            If you compare software engineering salaries with those of other fields, it appears that we are compensated for signing non-competes and assignments of invention. Nurses, for comparison, are also highly educated salaried workers, but they make on average $20,000 less per year than software engineers [Source] [Source]. It is entirely possible that the gap in pay is a result of a high demand for and low supply of software engineers. But there is a high demand for and low supply of nurses as well.

                                                                                            1. 8

                                                                                              a good company

                                                                                              Where, where are these good companies? “Not all companies”, indeed!

                                                                                              There is no upside for the employer to do this once they have the paperwork in hand, and relying on the charity/largesse of a company is foolish, especially once belts start tightening. Even companies that aren’t terrible can often punt forever on this sort of thing because of limited time to devote to non-business issues, because legal’s job is to provide maximal cover and push back on anything that might create risk, etc.

                                                                                              I suggest that the overall tone of how employee engineers are viewed, for the good of all engineers, needs to change. Hell, most of the innovation people claim to care about so much would be strangled in the crib under the agreements that are common today!

                                                                                              1. 3

                                                                                                Assignments of invention do have an effect on the employer and if you happen to be an employee without any aspirations of starting you own business then they don’t really affect you.

                                                                                                And without any intention of ever contributing to open source, and without any intention of ever writing an article or a story or a book, and without any intention of ever painting a painting, and without any intention of ever singing a song, etc., etc. (Every assignment of invention I’ve ever seen has covered any and all copyrightable works, not just code. Most have tried to claim assignment of works created before employment began.)

                                                                                                1. 1

                                                                                                  Every assignment of invention I’ve ever seen has covered any and all copyrightable works, not just code. Most have tried to claim assignment of works created before employment began.

                                                                                                  That is an entirely different story. The assignments of invention that I’ve seen strictly pertain to IP related to the company’s products and services, during your period of employment with the company. Although they have all asked for a list of prior work as a practical means of proving that any such IP of yours was created before your time of employment. That said, my comments above were made with that understanding of what an assignment of invention is.

                                                                                                  1. 6

                                                                                                    “Related” is way too open-ended for my comfort. If I contribute to an open-source project at night that is written in the same language I use at work, is that related? What about if they’re both web applications? What if they both use the same framework? If I write healthcare software during the day and I want to write a novel where somebody goes to the doctor, is that related?

                                                                                                    In the contracts I’ve been presented with, it’s been explicit that any work done prior to employment with the company that is not on your list of prior inventions becomes the property of the company. I’ve been programming since I was 12; there is no conceivable way I can list every piece of code I’ve written in 20+ years (much less other forms of copyrightable expression).

                                                                                                    I have hired lawyers on two occasions to review assignment-of-invention contracts with provisions like these and on both occasions the advice I got was that “related” is pretty much a blank check for the employer.

                                                                                                    1. 3

                                                                                                      The ones I’ve seen (and signed) have been restricted to inventions created at work or on company equipment, which amounts to roughly “we own the things we’re paying (or providing infrastructure for) you to create”. Within the context of capitalist employment, I think that’s essentially reasonable.

                                                                                                      1. 2

                                                                                                        The fuzzy bit is, when you’re a salaried worker who is remote, what exactly is “company equipment”? What is “at work”?

                                                                                                        How many of us have, in an evening say, made a commit to wrap up a thought after dinner from our laptops or desktops?

                                                                                                        1. 3

                                                                                                          If you’re a salaried remote worker, the company should be providing your work machine, which is either a laptop you can take with you, or a desktop that you remote into. If you’re providing all the equipment out of pocket, why are you on salary, rather than working as a contractor?

                                                                                                          The only exception I could think of would be a very early-stage startup, but in that case you’re probably coming from a place of having a better negotiating position anyway.

                                                                                                          I’ve worked remotely for 3 jobs, and have always been provided a development machine, and have done my best to avoid doing anything that is strictly a side project on it for that reason.

                                                                                                          1. 1

                                                                                                            One of the selling points vendors of separation kernels pushed was separation of Personal and Work on one device (“BYOD”). They mainly pushed it under the illusion that it would provide security at reduced cost on consumer-grade devices. They also pushed it for GPL isolation to reduce IP risks to them. Your comment makes me think that can be flipped: use of a dedicated, virtual work environment for (typical benefits here) with the additional benefit of isolating I.P. considerations to what’s in the VM. If you want something generic, do it on your own time in your own VM, just importing an instance of it into the work VM and/or its codebase. Anything created in the work VM, they or you can assume will belong to them.

                                                                                                            I’m ignoring how time is tracked for now. Far as clarity on intent of I.P. ownership, what do you think of that as a basic approach? Spotting any big risks?

                                                                                                        2. 1

                                                                                                          I’ve never consulted a lawyer so I’ll concede to you on this. Thank you for posting about your experience!

                                                                                                    2. 2

                                                                                                      If safety equipment did not affect the employer, then why did it take so long for employers to adopt them? Why did they fight so hard against them?

                                                                                                      And if it isn’t a big deal to a good company to make exceptions, why bother with the clause?

                                                                                                      If developers are being fairly compensated for these burdens, why do we still hear about a shortage of devs?

                                                                                                      1. 0

                                                                                                        If safety equipment did not affect the employer, then why did it take so long for employers to adopt them? Why did they fight so hard against them?

                                                                                                        The same reason anyone makes a fuss when you force them to do anything. People don’t like to be told what to do. Add to that the slow-moving nature of large organizations and there is going to be a huge fight to get them to do absolutely anything.

                                                                                                        And if it isn’t a big deal to a good company to make exceptions, why bother with the clause?

                                                                                                        Because trusting every employee to be honest about signing over IP to anything they’re working on that is related to the company is not practical, and it opens the company up to a huge amount of liability. If you don’t bother with the clause, what happens if you inadvertently use your IP in your day-to-day work, fail to notice, and fail to sign it over?

                                                                                                        If developers are being fairly compensated for these burdens, why do we still hear about a shortage of devs?

                                                                                                        Because there is a shortage. Paying more isn’t going to magically create more senior devs. It’ll increase the number of people that get into the field (and it has), but there is still going to be a large lag time before they have the experience that employers are looking for. That said, if you compare the salaries of software developers to the salaries of other professions with shortages, you’ll see that software developers make more. So we might not be compensated as much as you would like, but we are being compensated.

                                                                                                        1. 5

                                                                                                          It took so long to do it because it costs money to replace your lathes with ones with E-Stops. It has nothing to do with being told what to do or being slow. Corporations can actually do things quite quickly when there’s a financial incentive to do so. They struggle to do things which they have a financial disincentive to do. This is precisely why unions are necessary for a healthy relationship between corporations and employees.

                                                                                                          1. 2

                                                                                                            It has nothing to do with being told what to do or being slow.

                                                                                                            It’s both. Companies regularly waste money on stuff that doesn’t benefit the company, or refuse to switch to things with known benefits that are substantially different. These are both big problems in companies that aren’t small businesses. They’re also problems in small businesses, but often in different ways. Egos and/or ineptitude of people in charge are usually the source. On the programming side, it’s why it took so much work to get most companies to adopt memory-safe languages even when performance or portability wasn’t a big deal in their use cases. It’s also why many stayed on waterfall, or stayed on it too long, despite little evidence that development worked well that way. It did work for managers’ egos feeling a sense of control, though.

                                                                                                            Can’t forget these effects when assessing why things do or don’t happen. They’re pervasive.

                                                                                                          2. 4

                                                                                                            I don’t think a ‘company’ has any feelings at all. I think companies have incentives and that is it, full stop. The people within a company may have feelings, but I think it is amazing the extent to which a person will suppress or distort their feelings for money or the chance at a promotion.

                                                                                                            I would be surprised if liability was what companies principally had in mind with IP assignment. I suspect the main drivers are profitability and the threat of competition.

                                                                                                            In terms of compensation, I don’t think anyone is saying programmers are poorly compensated. The question is whether non-competes and sweeping IP assignments are worth it. Literally everyone who works is compensated; of course it is reasonable to dicker over the level of compensation and the tradeoffs involved in getting it. …

                                                                                                            I think there is a tendency to feel that the existence of an explanation for a company’s behavior is sufficient justification for its actions. That there is an explanation, or an incentive, for a company to do a thing has little to no bearing on whether it is good or right for the company to do it. It has even less bearing on whether a thing is good from the perspective of a worker for the company.

                                                                                                            If there is a shortage of software developers, and they are worth a lot of dollars, it is in the interest of software developers to collectively negotiate for the best possible treatment they can get from a company without killing the company. That could include pay, it could be defined benefits, it could be offices with doors on them, or all of the above and more.

                                                                                                            There is a strong strain of ‘the temporarily embarrassed millionaire’ in programmer circles, though. It seems like many empathize with the owner class on the assumption that they are likely to enter the owner ranks, but I don’t see the numbers bearing that assumption out.

                                                                                                            1. 6

                                                                                                              If there is a shortage of software developers, and they are worth a lot of dollars, it is in the interest of software developers to collectively negotiate for the best possible treatment they can get from a company without killing the company.

                                                                                                              And as you know, employers colluded to secretly and collectively depress labor wages and mobility among programmers in Silicon Valley (Google, Apple, Lucasfilm, Pixar, Intel, Intuit, eBay), on top of the intrinsic power and resource advantage employers have over employees, further underscoring the need for an IT union.

                                                                                                              https://www.hollywoodreporter.com/news/pixar-lucasfilm-apple-google-face-suit-285282 (2012)

                                                                                                              https://www.theverge.com/2013/7/13/4520356/pixar-and-lucasfilm-settle-lawsuit-over-silicon-valley-hiring

                                                                                                              https://www.theguardian.com/technology/2014/apr/24/apple-google-settle-antitrust-lawsuit-hiring-collusion

                                                                                                              1. 4

                                                                                                                A very good reason for a union.

                                                                                                                Given a union, I wouldn’t necessarily even start with salary, so much as offices with doors and agreements around compensation for work outside of core hours, parental leave and other non-cash quality of life issues.

                                                                                                              2. 2

                                                                                                                In terms of compensation, I don’t think anyone is saying programmers are poorly compensated. The question is whether non-competes and sweeping IP assignments are worth it. Literally everyone who works is compensated; of course it is reasonable to dicker over the level of compensation and the tradeoffs involved in getting it. …

                                                                                                                Whether or not it is worth it is an individual decision. But at the end of the day we are compensated significantly more than our peers in other fields with shortages (accounting staff, nurses, teachers, etc). If you don’t believe that we’re being compensated enough, then what we really need to be doing is advocating for our peers in those other fields. Because if we’re not getting paid enough, they sure as hell aren’t getting paid anywhere close to enough. And if we improve the culture around valuing employees in general, that will translate into improvements for us as well. A rising tide raises all boats. But as it is, I don’t know anyone but programmers who think programmers are underpaid.

                                                                                                                1. 1

                                                                                                                  I’m all for paying people more, but I’m unclear why you are focusing on these other fields; I was under the impression we were talking about programmers and the IT field.

                                                                                                                  I also disagree that those fields constitute peers. Accountants may be the closest, as white-collar professionals, but they are in a field where everyone applies the same rules to the same data, which is an important difference. I’m all for labor solidarity, but I think it’s up to people in a given field to advocate for themselves. People elsewhere should lend support, sure.

                                                                                                                  1. 2

                                                                                                                    I also disagree that those fields constitute peers.

                                                                                                                    They’re peers in that they’re fields with similar, if not more rigorous, educational requirements and they’re also experiencing labor shortages.

                                                                                                                    Accountants may be the closest, as white-collar professionals, but they are in a field where everyone applies the same rules to the same data, which is an important difference.

                                                                                                                    That doesn’t mean they provide any less value than programmers, though. If you run a big business you absolutely need an accountant, and a good accountant will more than pay for themselves. That said, given the pay gap, it’s unclear to me that programmers aren’t already being compensated for signing non-competes and assignments of invention. Especially when you consider how much lower the average compensation is for programmers in markets where non-competes and assignments of invention are not the norm [Source].

                                                                                                  2. 2

                                                                                                    I’d never sign an assignment of invention; I find the concept absurd, especially in an industry like software engineering.

                                                                                                    I sign NDAs without complaint when they’re not over-reaching. Many are sensible enough to abide by. But I once had an employer who attempted to make their workforce sign an NDA that imposed restrictions on the use of USB sticks retroactively, with huge penalties (up to $10 million) in a company where USB sticks were routinely used to transfer documents and debug builds between on-site third-party suppliers and employees of the company. Basically everybody would have been liable.

                                                                                                  1. 2

                                                                                                    I use both Sublime Text and Visual Studio Code, but I’ve often noticed VSCode having noticeable input lag (at least when using the Vim mode), whereas Sublime, once it has started, rarely does to the same extent.

                                                                                                    That being said, there are a lot of things that VSCode does that can be quite useful, and somehow, it manages to have a terminal that is less laggy than ConEmu.

                                                                                                    1. 1

                                                                                                      VSCode’s vim emulation is really laggy.

                                                                                                      I was playing with throttling my CPU to 400MHz to see how slow certain things were; an unexpected consequence was that it took substantial time (felt like about half a second, but I didn’t measure) for a keystroke in vscode to actually register. Turning off the vim extension fixed this entirely.
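
                                                                                                      For anyone who wants to reproduce that kind of throttling test on Linux, here is a minimal sketch using the kernel’s `cpupower` tool (packaged as linux-tools or kernel-tools on most distros; requires root). The 400MHz and 3500MHz values are illustrative, use the limits your own hardware reports:

```shell
# Query the frequency range the hardware actually supports (no root needed)
cpupower frequency-info --hwlimits

# Cap every core at 400 MHz for the duration of the test (requires root)
sudo cpupower frequency-set --max 400MHz

# ...reproduce the editor keystroke latency here...

# Restore the ceiling afterwards; substitute the upper limit
# reported by --hwlimits for your CPU
sudo cpupower frequency-set --max 3500MHz
```

                                                                                                      Writing to /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq directly amounts to the same thing if cpupower isn’t installed.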

                                                                                                      1. 2

                                                                                                        So, minor update on this: I don’t think it’s just the Vim Emulation that’s laggy. I suspect there are other extensions that are similarly laggy (I didn’t test to figure out which ones, because I have other things to do at the moment). I actually switched from VS Code back to Sublime Text for Angular/Typescript development due to just getting fed up with the general lack of responsiveness I was getting. Given that Sublime Text has autocomplete for TypeScript, I’m not losing a great deal.

                                                                                                        1. 1

                                                                                                          That’s a shame. I like having my Vim keys. Maybe I’ll have to look into other Vim extensions for VS Code.