LinkedIn’s study was approved by the Massachusetts Institute of Technology’s Institutional Review Board, as noted in its acknowledgments.
This seems like an interesting point, not because it justifies the behavior, but because the safeguards that review these types of experiments are faulty or don't meet the expectations of the researchers criticizing this study. If review boards are supposed to help prevent unethical behavior, the failure here may need to be examined so that it's not repeated.
FD: I worked at LinkedIn years ago (not on PYMK or anything customer-facing tho)
There is a great podcast about this: YANSS 182 – Why we find A/B testing icky when it comes to policies, practices, medicine, and social media. Basically, if you are randomly assigned to either group A or group B, people feel uncomfortable with it, but if everyone is randomly in either group A or group B it's fine — and if you don't do A/B testing, that's basically what's happening. Of course the episode goes much more in depth with the researchers.