I think you should also host something like Fermat's Library and re-upload every PDF link that gets posted there. Then the community can annotate the PDF itself instead of just putting comments below it.
To expand a little bit on this, I would prefer a way to collectively annotate a paper rather than providing comments on it.
The former feels closer to the way that I read and annotate papers (that is, making comments and highlights on specific parts of the paper). The latter feels like reviewing a paper with the goal of providing overarching commentary about it.
The commenting features should depend on the kind of community you want to build (one around conversations about specific parts of the paper or one around public peer review of papers).
I wonder if “thematic” comments (either by tagging them or by breaking them down into sections) would allow you to implement the discussion part within the Lobsters codebase.
ResearchGate and Academia.edu have similar aims and are closed source, I think. Both have had trouble keeping the lights on.
Academia.edu and ResearchGate are more like Facebook for academics, where you post your papers instead of cute pictures of your cat. I don’t think there is much discussion about the papers there.
Source: I have an account on both websites.
I would agree with this. I’ve been using these services for a long time and I don’t remember any case on RG where I had a discussion about a paper. In my field, it’s always experimental questions, not the papers.
A nitpicky comment and two questions:
I have quite a large library of papers, mostly from journals. A few ideas to consider:
Thank you for putting this tool out there. This is one of those things that I didn’t know I needed until I saw it. Good job!
Those are all great ideas, some of which (like subfolders and/or tags) are on my to-do list. My approach is to think really carefully about avoiding micromanagement and having a good UI. I’ll try to find a way of having the advanced features be more ‘opt-in’. Ideally advanced options like subfolders or different fields would not get in the way (or clutter the UI) if you decide to keep it basic.
Note that you can already do [Author] [Year] [Title] in a pretty speedy way; you just won’t receive the renaming suggestions in that format. It might actually be a good idea to make one of the suggestions configurable like that, though the limitation is often the lack of PDF metadata.
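To make the idea concrete, here’s a toy sketch of what a configurable suggestion format could look like. The format string and the metadata field names (`author`, `year`, `title`) are hypothetical illustrations, not the tool’s actual API, and it bails out on missing metadata just as the comment describes:

```python
def suggest_name(meta, fmt="{author} {year} {title}"):
    """Build a filename suggestion from whatever PDF metadata is available."""
    # If any field the format needs is missing, offer no suggestion at all:
    # this mirrors the "lack of PDF metadata" limitation mentioned above.
    missing = [k for k in ("author", "year", "title") if not meta.get(k)]
    if missing:
        return None
    return fmt.format(**meta) + ".pdf"

# Example: complete metadata yields a suggestion, partial metadata yields None.
print(suggest_name({"author": "Knuth", "year": "1974",
                    "title": "Computer Programming as an Art"}))
print(suggest_name({"author": "Knuth"}))
```

The point of the sketch is only that the format string could be user-configurable while the fallback behavior (no suggestion without metadata) stays the same.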
I do the first two. My filenames are Title Year Author since that’s order of importance for me. I also have subfolders just to cut down on the chaos. Some are organized, some are just a month/year combo. I don’t do citation management. It would be useful for academics, though. On that note, I just discovered Zotero yesterday on Hacker News with a ton of people saying they loved it. Worth checking out at least.
I think that the question to ask is why are you taking the exam.
Most university exams (unfortunately) are designed to sort students for future employers. For that goal, preparing for exams by memorizing (often useless) facts and not cheating is an important skill. Think about the cliché of whether you would trust your doctor if you knew he/she had cheated on their board exams.
On the other hand, if you believe that exams should measure how well you can solve real-world problems, then the exams should be structured to reflect the kind of work environment most common in the field of study.
In this case the answer to “why are you taking the exam” is “the exam is specifically about thinking like an adversary in cybersecurity.”
The trouble is, of course, that it’s difficult to do “mass production” of students if we use the latter form of exams. A university (or even just a department/program) dedicated to that might be able to crank out a few dozen per year. But they need to do hundreds… and across the nation it needs to be many thousands.
So higher education has drifted away from that sort of thing (to whatever extent they were invested in it earlier), and we get the “sorting for future employers” thing nowadays. It lets everyone meet quotas.
There’s simply very little value in individual expertise at the inhuman scale of our society. No single individual fixes anything worth fixing; a company does that, after a dozen or three dozen worker ants all look at it, scratch their heads, and then pass it on to another who might fix it.
I agree 100%, with the proviso that I don’t think there’s any more nefarious reason for the status quo than the fact that modern higher education is essentially an assembly line.
Intensive, one-on-one tutoring in the mold of the classic OxCam university simply isn’t cost-effective when the political goal is to give more than a single-digit percentage of the population a university degree.
I wonder what the liability risks are in developing and making openly available such a system/platform. Given how litigious medical practice is, could someone sue if a bug in this synthetic pancreas causes serious harm or death? I would expect to be able to sue if I were using a commercial product, and I would expect a free and open source replica to be covered under the same FDA guidelines for medical device safety.
Said in other words, does offering a free and open source medical device exclude you from medical liability?
They aren’t selling a medical device. People are choosing to connect their insulin pump to a glucometer with a Pi and using this woman’s software to facilitate how those two things communicate. In order to do this you have to have an understanding of the technology involved. The only person liable is yourself, who chose to use a DIY solution. At least until laws are made to address the place DIY medical devices have.
To be honest, though, the risk of uncontrolled diabetes far outweighs whatever risk there is in using these devices, and no matter what, diabetics have to have a backup way to check their blood sugar and keep insulin and glucose on hand for emergencies. I have a diabetic alert dog that helps me manage my diabetes, but it’s still incredibly hard to keep blood sugar in a very narrow range.
DIY medical devices help make things more affordable and accessible to the disabled, who often don’t have thousands of dollars to hand over to a company. Then, to top everything off, you can’t see what the company’s medical device is made from or what the software looks like. I would much prefer to see the software in a pacemaker or insulin pump than use something with hidden source code that I can’t change to match my own circumstances, or that might have security issues that will never be fixed. Like, seriously, how often do you think the software in a pacemaker is updated?
While I agree with the general sentiment of the post, I don’t see how a plain text statistics book is going to be the future, especially for a field that relies on mathematical notation. There is more to a textbook than just information (e.g., typesetting, graphics). Maybe a better idea would be to have open-source textbooks (maybe in LaTeX or some other typesetting language) that can be edited/peer-reviewed/updated by the community and compiled to pdf, html, or whatever.
I wonder if this is more an issue with the didactical contract rather than a lack of mathematical skills on the part of the students. In Brousseau’s view, teachers and students are bound together in the classroom by reciprocal responsibilities. Among them, the teacher has the role of asking questions and the students have the role of coming up with the answers.
The teacher breaks his/her contractual obligation by posing a question like the shepherd problem. Some students will call him/her out on it, recognizing the contractual breach, while others will think that the question is some sort of trick question (thus working within the agreed-upon contractual roles of teacher and student) and will try to provide an answer to the problem.
As a math teacher, you are between a rock and a hard place. On one hand, you have to provide problems to your students to practice what they are learning in a way that doesn’t confuse them. On the other, you want them to develop the intellectual autonomy and problem solving skills that Mubeen talks about.
Brousseau, G. (1997). Theory of didactical situations in mathematics: Didactique des mathématiques 1970–1990 (N. Balacheff, M. Cooper, R. Sutherland, & V. Warfield, Eds. & Trans.). Dordrecht: Kluwer.
The answer then would be to make explicit the nature of mathematics. K-12 education in the US does students a disservice by providing a view of mathematics as purely axiomatic, hiding away the discovery, inconsistency, and creativity of professional mathematics. When you learn math in school, you’re simply told “these are the facts” (the axioms and theorems of the field), with no indication of the reason for those facts or the ways in which they were derived.
Compare that to the system illustrated by Paul Lockhart in his book “Measurement,” wherein you would give first the description of some mathematical objects (say, triangles), and then begin to ask and answer questions about what rules these objects obey based on the definition of their structure.
It may be that Lockhart’s free-form method is ill-suited on its own to testing-based education (we’ll leave the question of whether testing-based education is a good idea for later), but it seems entirely possible to meld this method with the more axiomatic structure of current math education. Start with an interrogation of the core objects in the area being studied, then talk about what facts mathematicians have already discovered about these objects (perhaps giving students opportunities to derive these facts independently), and then use those pre-developed facts to further encourage exploration of the subject.
This is a great article. Thanks for sharing.
It is a long read, but it gives a very good overview of the circumstances that led to the creation of the internet and its effects on scientific knowledge.
There is a reason why listening to professors talk about doing research in the library using index cards seems so far removed from the current computer-based research workflow.
I think there are probably a lot of professors who have not taken the great leap forward…
On a more serious note, the open publication of scientific work is really important: it lets everyone test out the knowledge for themselves, and also link disparate strands of knowledge.
I started MobaXTerm and looked at the X Server settings.
By the way, using a regular X11 server compiled for Windows also works, instead of launching MobaXTerm+“rooted” X Server.
As a mathematician turned educator, I think that a better analogy for the author would be a 100-meter sprinter turned coach.
Even if the sprinter was able to run 100 meters in under 10 seconds, his performance is contingent on years of training. Once he stops training, no one would expect him to continue running as fast as he did at the peak of his career. Now that he is a coach, he goes around saying that running is not about being fast, but rather that running is an excellent proxy for problem-solving, that running instills character in runners, and that running is fun.
The problem with mathematics today is that we focus way more on the product (going back to the runner analogy, how fast you run) rather than teaching students that there is way more to math than just that (just like there is more than speed in running).
I like your analogy.
running is an excellent proxy for problem-solving
Should I start running around at work? :-)
Yeah, stepping away from the computer to ponder a problem AFK is a worthy development technique.
Generalizing to N dimensions is often seen as a pointless mathematical exercise because of …
… nothing interesting happens in the Nth dimension. Topologically, there is little difference between any spaces with six or more dimensions (because of h-cobordism, which shows that such spaces can be “mapped” into each other). Fun fact: this has something to do with the [Abel–Ruffini theorem](https://en.wikipedia.org/wiki/Abel-Ruffini) and the impossibility of finding a closed formula for the roots of a polynomial of degree five or higher. For spaces with five dimensions or fewer, there is still some debate on how to work in those dimensions (see [low-dimensional topology](https://en.wikipedia.org/wiki/Low-dimensional_topology)).
As far as adding a component to a vector goes, there are a lot of different things happening that make the exercise far from obvious. For example, going from a 3-dimensional space to a (3+1)-dimensional space, you will build a [projective space](https://en.wikipedia.org/wiki/Projective_space), which has its own set of rules, most notably the fact that there are no more parallel lines.
I do not think that reducing the problem to the tools (linear algebra) used to study the geometrical/topological properties of the space fully explains the mathematical misconceptions about this.
I’m okay with not fully correcting the misconceptions; I was only trying to give a taste of the complexity behind the mathematics I’ve seen. I was writing this for my high school self.
Blindly looking at h-cobordism, it states a sufficient condition that might not be met in this particular domain. I’ve seen domain experts use N dimensions, and there’s even the problem of dimensionality reduction present; I’m inclined to believe that N dimensions are needed for the use cases I’ve seen.
This link leads to a 404. :(
It is here now: http://www.thoughtworks.com/insights/blog/mockists-are-dead-long-live-classicists
Unfortunately the author made a generalization, using a bad example and most of all, an awful title.
Ouch, the article moved…
What’s the new link?
It still is on Google Cache
It looks like it was removed. Odd, it was an interesting article.
An interesting article - even though the title is rather misleading.
Why is there not more double-blind educational research? It might help find the answer to poor educational outcomes. The current sort of research seems to be more of a gut-feel approach, although it does seem eminently more sensible than some of our current educational practice.
Conflict of interest: I would love to do a Ph.D. on educational research :~)
Why is there not more double blind educational research
I think it is hard to find people who would knowingly have their own kids experimented on. If someone were proposing a new/cool technique for educating children, would you want your kids to be in the “control” group that didn’t get the latest technique?
would you want your kids to be in the “control” group
What if the new/cool technique does not work and your kids just lost one/two/three years of instruction?
Working with children is also difficult because it is challenging to separate the effects of your instruction from factors outside the control of the teacher/researcher (even if you have a control group). On top of that, correlation does not imply causation, which opens a completely different problem to address once you have your results.
A bit late on the reply, but in medicine they manage it, and getting it wrong can kill people. I agree that getting this right is not easy, but the current lack of evidence doesn’t help anyone. As in medicine, there will be times when the placebo effect is better than the therapy or teaching method.
This is an interesting write up. I never thought of exploiting machine precision in that way.
For the method that fails, shouldn’t the author keep all the checks on x instead of f(x)? By the definition of limit, we can control the tolerance in x, but we cannot control the tolerance of f(x), which in some cases can be several orders of magnitude bigger than the original tolerance, depending on the function used.
If we want to keep a tolerance check, why not use something like:
while abs(x2 - x1) > tol
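To illustrate the suggestion, here is a minimal bisection sketch of my own (not the article’s code) that stops on the width of the bracketing interval in x rather than on the size of f(x):

```python
def bisect(f, x1, x2, tol=1e-12):
    """Find a root of f in [x1, x2] by bisection.

    Stops when the bracket [x1, x2] is narrower than tol, i.e. the
    tolerance is enforced on x, not on f(x).
    """
    if f(x1) * f(x2) > 0:
        raise ValueError("f(x1) and f(x2) must have opposite signs")
    while abs(x2 - x1) > tol:   # tolerance check on x
        mid = (x1 + x2) / 2
        if f(x1) * f(mid) <= 0:  # root lies in the left half
            x2 = mid
        else:                    # root lies in the right half
            x1 = mid
    return (x1 + x2) / 2

# Example: the positive root of x^2 - 2 is sqrt(2) ≈ 1.41421356...
root = bisect(lambda x: x * x - 2, 0.0, 2.0)
print(root)
```

Because the interval halves on every iteration, the loop terminates after roughly log2((x2 − x1)/tol) steps, and the returned x is within tol of a root regardless of how steep or flat f is near it.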