I wonder what the liability risks are in developing and openly releasing such a system/platform. Given how litigious medical practice is, could someone sue if a bug in this synthetic pancreas caused serious harm or death? I would expect to be able to sue if I were using a commercial product, and I would expect a free and open source replica to be covered by the same FDA guidelines for medical device safety.
In other words, does offering a free and open source medical device exempt you from medical liability?
They aren’t selling a medical device. People are choosing to connect their insulin pump to a glucometer with a Pi and using this woman’s software to facilitate how those two things communicate. To do this you have to have an understanding of the technology involved. The only person liable is yourself, since you chose to use a DIY solution. At least until laws are made that address where DIY medical devices stand.
To be honest though, the risk of uncontrolled diabetes far outweighs whatever risk there is in using these devices, and no matter what, diabetics have to have a backup way to check their blood sugar and keep insulin and glucose on hand for emergencies. I have a diabetic alert dog that helps me manage my diabetes, but it’s still incredibly hard to keep blood sugar in a very narrow range.
DIY medical devices help make things more affordable and accessible to the disabled, who often don’t have thousands of dollars to hand over to a company. To top everything off, you can’t see what the company’s medical device is made from or what the software looks like. I would much prefer to see the software in a pacemaker or insulin pump than use something with hidden source code that I can’t change to match my own circumstances, or that might have security issues which will never be updated and fixed. Like seriously, how often do you think the software in a pacemaker is updated?
While I agree with the general sentiment of the post, I don’t see how a plain-text statistics book is going to be the future, especially for a field that relies on mathematical notation. There is more to a textbook than just information (e.g., typesetting, graphics). Maybe a better idea would be to have open-source textbooks (perhaps in LaTeX or some other typesetting language) that can be edited, peer-reviewed, and updated by the community and compiled to PDF, HTML, or whatever.
I wonder if this is more an issue with the didactical contract than a lack of mathematical skills on the part of the students. In Brousseau’s view, teachers and students are bound together in the classroom by reciprocal responsibilities. Among them, the teacher has the role of asking questions and the students have the role of coming up with the answers.
The teacher breaks his/her contractual obligation by posing a question like the shepherd problem. Some students will call him/her out on it recognizing his/her contractual breach, while others will think that the question is some sort of trick question (thus working within the agreed-upon contractual roles of teacher and student) and will try to provide an answer to the problem.
As a math teacher, you are between a rock and a hard place. On one hand, you have to provide problems to your students to practice what they are learning in a way that doesn’t confuse them. On the other, you want them to develop the intellectual autonomy and problem solving skills that Mubeen talks about.
 Brousseau, G. (1997). Theory of didactical situations in mathematics: Didactique des mathématiques 1970–1990 (N. Balacheff, M. Cooper, R. Sutherland, & V. Warfield, Eds. and Trans.). Dordrecht: Kluwer.
The answer, then, would be to make the nature of mathematics explicit. K-12 education in the US does students a disservice by presenting mathematics as purely axiomatic, hiding away the discovery, inconsistency, and creativity of professional mathematics. When you learn math in school, you’re simply told “these are the facts” (the axioms and theorems of the field), with no indication of the reasons behind those facts or the ways in which they were derived.
Compare that to the system illustrated by Paul Lockhart in his book “Measurement,” wherein you would give first the description of some mathematical objects (say, triangles), and then begin to ask and answer questions about what rules these objects obey based on the definition of their structure.
It may be that Lockhart’s free-form method is ill-suited on its own to testing-based education (we’ll leave the question of whether testing-based education is a good idea for later), but it seems entirely possible to meld this method with the more axiomatic structure of current math education. Start with an interrogation of the core objects in the area being studied, then talk about what facts mathematicians have already discovered about these objects (perhaps giving students opportunities to derive these facts independently), and then use those pre-developed facts to further encourage exploration of the subject.
This is a great article. Thanks for sharing.
It is a long read, but it gives a very good overview of the circumstances that led to the creation of the internet and its effects on scientific knowledge.
There is a reason why listening to professors talk about doing research in the library with index cards seems so far removed from the current computer-based research workflow.
I think there are probably a lot of professors who have not taken the great leap forward…
On a more serious note the open publication of scientific work is really important as it lets everyone test out the knowledge for themselves, but also to link disparate strands of knowledge.
I started MobaXTerm and looked at the X Server settings.
By the way, using a regular X11 server compiled for Windows also works, instead of launching MobaXTerm+“rooted” X Server.
As a mathematician turned educator, I think that a better analogy for the author would be a 100-meter sprinter turned coach.
Even if the sprinter was able to run 100 meters in less than 10 seconds, his performance is contingent on years of training. Once he stops training, no one would expect him to continue to run as fast as he did at the peak of his career. Now that he is a coach, he goes around saying that running is not about being fast, but rather that running is an excellent proxy for problem-solving, that running embeds character in runners, and running is fun.
The problem with mathematics today is that we focus way more on the product (going back to the runner analogy, how fast you run) rather than teaching students that there is way more to math than just that (just like there is more than speed in running).
I like your analogy.
running is an excellent proxy for problem-solving
Should I start running around at work? :-)
Yeah, stepping away from the computer to ponder a problem AFK is a worthy development technique.
Generalizing to N dimensions is often seen as a pointless mathematical exercise because of …
… nothing interesting happens in the Nth dimension. Topologically, there is little difference between any spaces with more than 6 dimensions (because of h-cobordism, which shows that spaces with 6 or more dimensions can be “mapped” into each other). Fun fact: this has something to do with the [Abel-Ruffini](https://en.wikipedia.org/wiki/Abel-Ruffini) theorem and the impossibility of finding a closed-form solution for an nth-degree polynomial with n of 5 or greater. For spaces with 5 dimensions or fewer, there is still some debate on how to work in those dimensions (see [low-dimensional topology](https://en.wikipedia.org/wiki/Low-dimensional_topology)).
As far as adding a component to a vector, there are a lot of different things happening that make the exercise far from obvious. For example, going from a 3-dimensional space to a (3+1)-dimensional space, you will build a [projective space](https://en.wikipedia.org/wiki/Projective_space), which has its own set of rules, most notably the fact that there are no more parallel lines.
I do not think that reducing the problem to the tools (linear algebra) used to study the geometrical/topological properties of the space fully explains the mathematical misconceptions about this.
I’m okay with not fully correcting the misconceptions; I was only trying to give a taste of the complexity behind the mathematics I’ve seen. I was writing this for my high school self.
Blindly looking at h-cobordism, it says there is a sufficient condition which might not be met in this particular domain. I’ve seen domain experts use N dimensions and there’s even the problem of dimensionality reduction present; I’m inclined to believe that N dimensions are needed for the use cases I’ve seen.
This link leads to a 404. :(
It is here now: http://www.thoughtworks.com/insights/blog/mockists-are-dead-long-live-classicists
Unfortunately the author overgeneralized, used a bad example, and, most of all, chose an awful title.
Ouch, the article moved…
What’s the new link?
It’s still in Google’s cache.
It looks like it was removed. Odd, it was an interesting article..
An interesting article - even though the title is rather misleading.
Why isn’t there more double-blind educational research? It might help find the answer to poor educational outcomes. Education research currently seems to be more of a gut-feel approach, although it does seem eminently more sensible than some of our current educational practice.
conflict of interest: I would love to do a Ph.D on educational research :~)
Why is there not more double blind educational research
I think it is hard to find people who would knowingly have their own kids experimented on. If someone were proposing a new/cool technique for educating children, would you want your kids to be in the “control” group that didn’t get the latest technique?
would you want your kids to be in the “control” group
What if the new/cool technique does not work and your kids just lost one/two/three years of instruction?
Working with children is also difficult because it is challenging to separate the effects of your instruction from factors outside the control of the teacher/researcher (even if you have a control group). On top of that, correlation does not imply causation, which opens up a completely different problem to address once you have your results.
Bit late on the reply, but in medicine they manage it, and getting it wrong can kill people. I agree that getting this right is not easy, but the current lack of evidence doesn’t help anyone. As in medicine, there will be times when the placebo effect is better than the therapy or teaching method.
This is an interesting write up. I never thought of exploiting machine precision in that way.
For the method that fails, shouldn’t the author keep all the checks on x instead of f(x)? By the definition of a limit, we can control the tolerance in x, but we cannot control the tolerance of f(x), which in some cases can be several orders of magnitude bigger than the original tolerance, depending on the function used.
If we want to keep a tolerance check, why not use something like:
`while x2 - x1 > tol`
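For the sake of illustration, here is a minimal sketch of how that x-based stopping test might look in a bisection-style root finder. The function, bracket, and tolerance here are made up to show the point: near the root of a steep function, |f(x)| can be a million times larger than |x - root|, so a tolerance on f(x) would be far looser than the same tolerance on x.

```python
import math

def bisect(f, x1, x2, tol=1e-12):
    """Bisection with the stopping test on x, not on f(x).

    Assumes f(x1) and f(x2) have opposite signs (a bracketed root).
    """
    if f(x1) * f(x2) > 0:
        raise ValueError("f(x1) and f(x2) must bracket a root")
    while x2 - x1 > tol:          # tolerance check on x, as suggested
        mid = 0.5 * (x1 + x2)
        if f(x1) * f(mid) <= 0:
            x2 = mid              # root lies in [x1, mid]
        else:
            x1 = mid              # root lies in [mid, x2]
    return 0.5 * (x1 + x2)

# A steep example: |f(x)| is about 1e6 * |x - pi| near the root,
# so a check on f(x) with the same tol would stop far too late.
root = bisect(lambda x: 1e6 * (x - math.pi), 3.0, 4.0)
```

Here the loop halves the bracket until it is narrower than `tol`, so the returned midpoint is within `tol` of the true root regardless of how steep f is.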