I recently spent a year in a codebase that only used raw SQL composed via the psycopg2 driver, having mostly accessed databases via various ORMs previously. I found it educational: it definitely made me a better SQL developer, gave me a much better understanding of the database in question (Postgres), and got me much better at reading SQL. But I also found it added a great deal of overhead for basic CRUD queries that could have been accomplished quickly with a tool.
I also found it made it hard for me to understand the business logic in the system, because the business logic was often embedded in complex query-composition code, and I had to juggle multiple layers of abstraction in my head to accomplish pretty much anything. Translating between intermixed declarative and imperative styles led to a lot of mental overhead, and then trying to translate to prose when attempting to write up a design doc added yet another abstraction to juggle. I often found myself with a SQL editor side-by-side with multiple columns of an IDE, a debugger running, and a set of notes, trying, and often struggling, to figure out what the code backing an endpoint was actually doing. The verbosity of SQL also meant I never felt I could fit enough lines of code on my screen.
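To give a flavor of the kind of query-composition code I mean, here is a small, hypothetical sketch (not from that codebase) using psycopg2's sql module; the table, columns, and flags are all invented for illustration:

```python
# Hypothetical illustration of business logic embedded in query composition.
# Nothing here comes from the real codebase; table and column names are made up.
from psycopg2 import sql

def build_order_query(include_cancelled, customer_id=None):
    clauses = [sql.SQL("o.customer_id = c.id")]
    params = []
    if not include_cancelled:
        # A business rule, expressed as an imperative branch that emits SQL.
        clauses.append(sql.SQL("o.status <> 'cancelled'"))
    if customer_id is not None:
        clauses.append(sql.SQL("c.id = %s"))
        params.append(customer_id)
    query = sql.SQL(
        "SELECT o.id, o.total, c.name FROM orders o, customers c WHERE {conds}"
    ).format(conds=sql.SQL(" AND ").join(clauses))
    return query, params
```

Reading code like this means holding both the Python control flow and the SQL it will eventually emit in your head at once, which is exactly the layering problem described above.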
Yeah, my feeling is that even with an ORM, business logic should not be doing any kind of query composition. In an object-oriented language, it should be calling methods on a Repository class, and the repository should be figuring out how to compose and execute the query. Not only does that make the business logic more readable, but you can also change the implementation without changing the repository's interface, so if you need to drop down to SQL to optimize a query, you can change it in one place.
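As a minimal sketch of that separation, assuming psycopg2 as in the parent comment (the table, columns, and method names are invented for illustration):

```python
# Minimal, hypothetical sketch of the repository approach described above.
# Table, column, and method names are invented; this is not anyone's real schema.
import psycopg2

class OrderRepository:
    def __init__(self, conn):
        self._conn = conn

    def open_orders_for_customer(self, customer_id):
        # All SQL lives behind the repository; callers never see it.
        with self._conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, total
                FROM orders
                WHERE customer_id = %s AND status <> 'cancelled'
                """,
                (customer_id,),
            )
            return cur.fetchall()

# Business logic now reads as intent rather than query plumbing:
def total_open_exposure(repo, customer_id):
    return sum(total for _, total in repo.open_orders_for_customer(customer_id))

# repo = OrderRepository(psycopg2.connect("dbname=example"))  # hypothetical connection
```

If that query later needs hand-tuned SQL or a different composition strategy, only the repository method changes; callers like total_open_exposure stay untouched.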
Building any machine that lasts 50 years takes some work. The durable tech artifacts like typewriters and guns and mechanical watches and so on that tend to last a long time with minimal maintenance are also precision machines that are massively over-engineered, basically because they have to be very tough to keep that precision. How many, say, chairs, bicycles and pocket calculators are still around from the 1970’s? A lot more than zero, sure, but not that many.
Also, unlike bicycles and can openers, computers have a lot of components inside such as northbridge chips and drive controllers that are basically manufacturer-specific. If you want to replace a part in them 45 years from now then you either need a donor system or you need a manufacturer still producing those parts.
I’d personally be really happy with a computer explicitly designed to last 10 years; there’s plenty out there that are this old, but mostly by accident.
Many 1970s Schwinns are still in service, because they were sold with a lifetime warranty and thus designed to not need much servicing in spite of being mostly sold to young people who were not expected to treat them gently.
As a tradeoff to meet their durability constraints at their price point, they were generally very heavy machines built with technology that was seen as outdated even when new— 50-70% heavier than many competing bicycles, with clunky 1950s derailleur and shifter designs that didn’t have a wide range of gears (which meant lower-precision and lower-maintenance components worked just fine). Their frame was also mass-produced via a unique technology, but it required huge capital expenditures and would have been very expensive to adapt to changing tastes: https://www.sheldonbrown.com/varsity.html
They are still pleasant machines to ride in the right circumstances— flat terrain without a lot of starting & stopping, which is why I am content keeping a $75 1970s Schwinn as the bike I ride when I visit my parents in Wisconsin— but Schwinn nevertheless went bankrupt (and is now a Walmart brand) because customers wanted lighter, more capable machines and were willing to accept a bicycle less likely to last 50 years in exchange.
Thanks for the story! This is a great example of the tradeoffs involved. (Now that I look up pictures, I think my mom had one of those bikes in the early 1990’s.)
I’m curious, do you know what the market for parts is like? Have any companies cropped up making reasonable replacement parts, or are people just steadily cannibalizing old bikes? I’d guess the former, since it’s relatively easy to make moderate amounts of simple shapes out of steel, but…
Bike parts are reasonably standard regardless of make & model, especially on steel frames from US, English, and Japanese manufacturers. Velo Orange is one company that has really marketed itself as a maker of parts for old bikes, but it sits at the nicer end of a market that also includes free parts bins at a community bike kitchen.
I do still ride a bike built on a frame from around 1980 with parts mostly from the 2010s as a winter training bike (after many years of service as a 4-season commuter). There are only a few major mechanical interfaces (bottom bracket, headset, seatpost, brake mounts) on a bike frame, and, as mentioned, they mostly became globally standardized by the ISO around 1980 for ‘normal bikes’, although there has been an explosion of proprietary parts over the past decade on a lot of high-end bikes for the sake of weight/aerodynamics/stiffness/etc.
For a bike like a 1970s Schwinn, mostly built to older American standards, one often needs to dig through the parts bin at a community bike kitchen or hunt things down on eBay. But everything is very durable & rebuildable so replacement parts other than brake pads & chains are rarely necessary.
I think, as the intro implies, this can be extended to machines and tools, and maybe even further.
I think in the context of computers in particular there’s a bit of a political problem where we force people to use them, sometimes by law, sometimes through society. They have to use computers, smartphones, and even certain apps.
At the same time we see a rise in scams, and we are surprised when the victims turn out to be people who might not even need or want these devices and only have them because they are forced to fill out some form online.
Some decades ago it was relatively easy to get by without almost any particular tool one can think of. You might be seen as odd for it, but it still allowed you to make use of your rights, etc.
Today you need apps to log in to your bank, websites to do your taxes, sometimes even the web to apply for elderly homes. And smartphones are pretty complex, and force you to, for example, have or create an email address, require passwords, etc. You need to know how to use software, understand what the internet is, have some concept of pop-ups, online ads, spam, and updates, and understand that there is no other person sitting on the other end right now, and so on.
I think a lot of the ruthlessness comes from this. Even if you know about all of the above, you end up as in Kafka’s The Trial: even when you know what things mean, the processes behind the scenes will, for the vast majority of use cases, remain completely opaque to you.
In a non-automated, non-digitalized world it is easy to ask quick questions, and the people you ask can get other people to handle exceptions. In the digital world you have to hope the developer thought of your case and handled it accordingly. If you are lucky there’s a support hotline, but these seem to be going away, especially at the bigger, and therefore often more important, companies.
I see tools more on the morally neutral side, but I don’t think that’s really the issue. I don’t think computers are oppressive in themselves, but there’s an unintentional direction we are moving in, where things are forced upon people, often in the belief that it’s a good thing when that is at least debatable.
As a side note, there are certainly cases where things were done in the name of digitalization, progress, and efficiency, and the result was just harder, slower, less cost-effective, less secure, and required more real people to be involved.
Of course those are the bad examples, but the adjective here is “oppressive”. Even in (working, stable) oppressive societies, things work for most people most of the time. They start to shift when they stop working for too many people, or when there’s a war. Only the ones who don’t fit in tend to have problems, and while I would have titled the piece differently, I think that is true for how computers are used today, for all sorts of computers.
In the land of unicorns and rainbows? ;)
From my experience, people in positions of “HTML form actions” absolutely aren’t inclined to answer any questions or handle exceptions unless they have some real retribution to fear. Worse yet, it’s rational behavior for them: they will almost certainly be reprimanded if they break the intended logic, so it’s much safer for them to follow its letter.
Just last month I had to file a certain application for a somewhat uncommon case. The humans responsible for handling such applications rejected it as invalid because my scenario wasn’t in their “cache” of common cases; they used the default “contact our parent organization” response instead of trying to handle it, and not even in a polite manner. I contacted the parent organization and, luckily, people there were willing to handle it and told me that my application was valid all along and should have been accepted, and that I should file it again.
I suppose the application form handlers received quite a “motivational speech” from the higher-ups, because they were much more polite and accepted it without questions, but it still wasted a lot of my time traveling to a different city to file it and standing in lines.
It may be one of the more egregious examples in my experience, but it’s far from unique. I very much prefer interacting with machines, because at least I can communicate with them remotely. ;)
Your anecdote just demonstrates the author’s point: you had to escalate to a more responsible human, but you successfully did so, and they were able to accommodate the uncommon circumstances, even though those circumstances were not anticipated by the people who designed the process. When was the last time you pulled that off with an HTML form?
They were anticipated by the people who designed the process. It’s just that their subordinates did a sloppy job executing the logic written for them by the higher-ups. If the higher-ups programmed a machine to do that, it wouldn’t fail.
And I got very lucky with the sensible higher-ups. It could have been much worse: in that particular case it was obvious who the higher-ups were and they had publicly-accessible contact information. In many other cases you may never even find out who they are and how to reach them.
Every time the form allows freedom (which forms are admittedly rarely used for, but could be), e.g. https://mro.name/2021/ocaml-stickers
I love that, and I wish more of the web worked that way, but it’s worth pointing out that the only reason it can work is because ultimately the input I put into that form gets interpreted by a human at the post office. It would not be possible to create a form for inputting an email address which would be as resilient to errors or omissions.
Yes, and a lot of the information filled into the form doesn’t make sense to me – I just copy it onto the envelope. It makes sense in layers, peeled off as it is routed along: first country, then ZIP, then street, then name. That’s flexibility! Subsidiarity at work.
Some decades ago, here in the US, we were deep in the midst of making a large proportion of physical social institutions at best undignified and at worst somewhere between unsafe and impossible to access independently without owning and operating a dangerous, expensive motor vehicle: something unavailable to a significant proportion of the population, and something that ruthlessly grinds tens of thousands of people a year into meat just here in the US.
Looks like the employee is based in the UK. As you might expect, most of the responses to his announcement are Bad Legal Advice. This comment is also going to be Bad Legal Advice (IANAL!) but I have some experience and a little background knowledge so I hope I can comment more wisely…
The way FOSS (and indeed all private-time) software development works here for employees is that according to your contract your employer will own everything you create, even in your private time. Opinions I’ve heard from solicitors and employment law experts suggest that this practice might constitute an over-broad, “unfair”, contract term under UK law. That means you might be able to get it overturned if you really tried, but you’d have to litigate to resolve it. At any rate the de facto status is: they own it by default.
What employees typically do is seek an IP waiver from their employer, where the employer disclaims ownership of the side-project. The employer can refuse. If you’ve already started, they could take ownership, as apparently is happening in this case. What you should probably not do in that scenario is try to pre-emptively fork under the idea that your project is FOSS and that you have that right. The employer will likely take the view that, because you aren’t the legal holder of the IP, you aren’t entitled to release either the original or the fork as FOSS - so you’d be improperly releasing corporate source code. Pushing that subject is a speedy route to dismissal for “gross misconduct” - which is a sufficient reason for summary dismissal, with no process except appeal to a tribunal after the fact.
My personal experience seeking IP waivers, before I turned contractor (after which none of the above applies), was mixed. One startup refused and even reprimanded me for asking - the management took the view that any side project was a “distraction from the main goal”. Conversely, ThoughtWorks granted IP waivers pretty much as a blanket policy - you entered your project name and description in a shared spreadsheet and they sent you a notice when the solicitor saw the new entry. They took professional pride in never refusing unless the project conflicted with the client you were currently working with.
My guess is that legal rules and practices on this are similar in most common law countries (UK, Australia, Canada, America, NZ).
This seems absurd. If I’m a chef, do things I cook in my kitchen at home belong to my employer? If I’m a writer do my kids’ book reports that I help with become privileged? If I’m a mechanic can I no longer change my in-laws’ oil?
Why is software singled out like this and, moreover, why do people think it’s okay?
There have been cases of employees claiming to have written, in their spare time, some essential piece of software their employer relied on. Sometimes that was even plausible, but it still essentially amounts to taking your employer hostage. There have been cases of people starting competitors to their employer in their spare time; what is or is not competition is often subject to differences of opinion and is often a matter of degree. These are gray areas that feel threatening to business owners, and they want to prevent them wholesale with such contractual stipulations.
Software isn’t singled out. It’s exactly the same in all kinds of research, design and other creative activities.
Sounds fine to me, what’s the problem? Should it be illegal for an employer to look for a way to lay off employees or otherwise reduce its workforce?
I think it’s a pretty large problem if someone can become a colleague, quickly hoover up all the hard-won knowledge we’ve accumulated together over the past decade, and then start a direct competitor to my employer, possibly putting me out of work.
You’re thinking of large faceless companies that you have no allegiance to. I’m thinking of the two founders of the company that employs me and my two dozen colleagues, whom I feel loyal towards.
This kind of thing protects smaller companies more than larger ones.
Go work for the competitor! Also, people can already do pretty much what you describe in much of the US where non-competes are unenforceable. To be clear, I think this kind of hyper competitiveness is gross, and I would much rather collaborate with people to solve problems than stab them in the back (I’m a terrible capitalist). But I’m absolutely opposed to giving companies this kind of legal control over (and “protection” from) their employees.
Who says they want me? Also I care for my colleagues: who says they want them as well?
Overly broad non-competes are unenforceable when they’re used against something that isn’t clearly competition. They are perfectly enforceable if you start working for, or start, a direct competitor, profiting from very specific relevant knowledge.
As I see it we don’t give “the company” legal control: we effectively give humans, me and my colleagues, legal control over what new colleagues are allowed to do, in the short run, with the knowledge and experience they gain from working with us. We’re not protecting some nameless company: we’re protecting our livelihood.
And please note that my employer does waive rights to unrelated side projects if you ask them, waives rights to contributions to OSS, etc. Also note that non-compete restrictions are only for a year anyway.
Well then get a different job, get over it, someone produced a better product than your company, that’s the whole point of capitalism!
Not in California, at least; it’s trivially easy to Google this.
Are you a legal party to the contract? If not, then no, it’s a contract with your employer and if it suits your employer to use it to screw you over, they probably will.
I truly hope that you work for amazing people, but you need to recognize that almost no one else does.
Even small startups routinely screw over their employees, so unless I’ve got a crazy amount of vested equity, I have literally zero loyalty, and that’s exactly how capitalism is supposed to work: the company doesn’t have to care about me, and I don’t have to care about the company, we help each other out only as long as it benefits us.
Go work for the competitor?
Why would the competitor want/need the person they formerly worked with/for?
Why did the original company need the person who started the competitor? Companies need workers and if the competitor puts the original company out of business (I was responding to the “putting me out of work” bit) then presumably it has taken on the original company’s customers and will need more workers, and who better than people already familiar with the industry!
Laying off and reducing the workforce can be regulated (and is in my non-US country). The issue with having employees starting competitor products is that they benefit from an unfair advantage and create a huge conflict of interest.
Modern Silicon Valley began with employees starting competitor products: https://en.wikipedia.org/wiki/Traitorous_eight
If California enforced non-compete agreements, Silicon Valley might well not have ended up existing. Non-enforcement of noncompetes is believed to be one of the major factors that resulted in Silicon Valley overtaking Boston’s Route 128 corridor, formerly a competitive center of technology development: https://hbr.org/2016/11/the-reason-silicon-valley-beat-out-boston-for-vc-dominance
I don’t think we are talking about the same thing. While I agree that any restriction on post-employment activity should be banned, I don’t think it is unfair for an organization to ask its employees not to work on competing products while on its payroll. These are two very different situations.
If the employee uses company IP in their product then sure, sue them, that’s totally fair. But if the employee wants to use their deep knowledge of an industry to build a better product in their free time, then it sucks for their employer, but that’s capitalism. Maybe the employer should have made a better product so it would be harder for the employee to build something to compete with it. In fact, it seems like encouraging employees to compete with their employers would actually be good for consumers and the economy / society at large.
An employee working on competing products in their free time creates an unfair advantage, because the employee has access to the organization’s IP to build their new product while the organization does not have access to the competing product’s IP. So what’s the difference between industrial espionage and employees working on competing products in their free time?
That was literally in the comment you responded to.
These kinds of epic dunks are funny but not productive in the real world, where yes, the employer risks suffering damages if an employee tries to relicense or revoke the implicit (tort) usage agreement between himself and said employer.
Employees also risk suffering damages (getting fired) if they find a way to do their work more efficiently, yet this is encouraged.
I’m not saying employees don’t risk damages!
If the balance of power was different, I might be concerned. But I’m not, because what we actually see in the real world is rampant exploitation of employees and, in relative terms, essentially zero exploitation of employers by employees. So I actually think that these “dunks” are productive because they help reveal how absolutely absurd it is to worry about employers being “victimized” by their employees.
Joel Spolsky wrote a piece that frames it well, I think. I don’t personally find it especially persuasive, but I think it does answer the question of why software falls into a different bucket than cooking at home or working on a car under your shade tree, and why many people think it’s OK.
Does this article suggest the employers view contracts as paying for an employee’s time, rather than just paying for their work?
Could a contract just be “in exchange for this salary, we’d like $some_metric of work”, with working hours just being something to help with management? It seems irrelevant when you came up with something, as long as you ultimately give your employer the amount of work they paid you for.
Why should an employer care about extra work being released as FOSS if they’ve already received the amount they paid an employee for?
EDIT: I realise now that $some_metric is probably very hard to define in terms of anything except number of hours worked, which ends up being the same problem
I didn’t read it that way. It’s short, though. I’d suggest reading it and forming your own impression.
I’d certainly think that one of many possible reasonable work arrangements. I didn’t link the article intending to advocate for any particular one, and I don’t think its author intended to with this piece, either.
I only linked it as an answer to the question that I read in /u/lorddimwit’s comment as “why is this even a thing?” because I think it’s a plausible and cogent explanation of how these agreements might come to be as widespread as they are.
As a general matter, I don’t believe they should. One reason I’ve heard given for why they might is that they’re afraid it will help their competition. I, once again, do not find that persuasive personally. But it is one perceived interest in the matter that might lead an employer to negotiate an agreement that precludes releasing side work without concurrence from management.
I think so too, and hope I didn’t come across as assuming you (or the article) were advocating anything that needs to be argued!
I’d definitely gotten confused because I completely ignored that the author is saying that the thinking can become “I don’t just want to buy your 9:00-5:00 inventions. I want them all, and I’m going to pay you a nice salary to get them all”. Sorry!
There is a huge difference: we’re talking about creativity and invention. The company isn’t hiring you to change some oil or swap some server hardware. They’re hiring you to solve their problems, to be creative and think of solutions. (Which is also why I don’t think it’s relevant how many hours you actually coded; the result, and the time you spent thinking about it, are what matter.) Your company doesn’t exist because it changes oil; the value is in the code (hopefully) and thus in their IP.
So yes, that’s why this stuff is actually different. Obviously you want to have exemptions from this kind of stuff when you do FOSS things.
I think the chef and mechanic examples are a bit different since they’re not creating intellectual property, and a book report is probably not interesting to an employer.
Maybe a closer example would be a chef employed to write recipes for a book/site. Their employer might have a problem with them creating and publishing their own recipes for free in their own time. Similarly, maybe a writer could get in trouble for independently publishing things written in their own time while employed to write for a company. I can see it happening for other IP that isn’t software, although I don’t know if it happens in reality.
I think the “not interesting” bit is a key point here. I have no idea what Bumble is or the scope of the company, and I speak out of frustration with these overarching “legal” restrictions, but it sounds like they are an immature organization trying to hold on to anything interesting their employees do, core to the current business or not, in case they need to pivot or find a new revenue stream.
Frankly, if a company is so fearful that a couple of technologies will make or break it, its business model sucks. Technology != product.
I know of at least one online magazine’s contracts which forbid exactly this. If you write for them, you publicly only write for them.
This is pretty much my (non-lawyer) understanding and a good summary, thanks.
If you find yourself in this situation, talk to a lawyer. However I suspect that unless you have deep pockets and a willingness to litigate “is this clause enforceable” through several courts, your best chance is likely to be reaching some agreement with the company that gives them what they want whilst letting you retain control of the project or at least a fork.
I think the legal term for this is “bunch of arsehats”. I’m curious to know whether you worked for them after they started out like this?
https://www.youtube.com/watch?v=Oz8RjPAD2Jk
I left shortly after for other reasons
Is it really that widespread? It’s a question that we get asked by candidates but our contract is pretty clear that personal-time open source comes under the moonlighting clause (i.e. don’t directly compete with your employer). If it is, we should make a bigger deal about it in recruiting.
I would think the solution is to quit, then start a new project without reusing any line of code from the old project - but I guess the lawyers thought of this too and added clauses giving them ownership of the new project as well…