TL;DR: I know nothing about technology and here’s how I got my domain name back, with few real details.
I agree that this was better than most write-ups by laypeople on such situations. Further, it details the experience one is likely to go through. It also makes it clear to other laypeople you can’t trust the hosting sites to help you protect your domains. I bet many would’ve assumed otherwise.
Don’t forget that she ignored more than one sign that something was off, especially the notification from Google about a new login.
I think the writeup is pretty good, and I also wonder if any writeup that is a better postmortem from our point of view would be harder to relate to (for the people with non-technical skill sets). I don’t know whether the original post as it is will make any people pay more attention to suspicious situations, though.
I don’t understand why you felt this comment was warranted.
Don’t forget the “security advice” at the end.
I think it illustrates why it’s a bad idea to share an account between multiple people, even if they’re your significant other.
Not just «even if they’re your significant other» — even if you trust them not to do anything wrong and even if they do not betray that trust.
So not too different from most Medium posts on technology.
Admittedly they’re comparatively rare, but I have seen some pretty in depth technical write-ups on Medium.
An alternative to Disqus is Isso, which is self-hosted.
I’ll probably add this to my blog so you all can be mean to me for a change. Good find @hga.
I use Isso on my blog and I absolutely love it.
Shameless plug: I wrote an OpenShift cartridge(?) which makes installing Isso a one-click affair - link
I wonder if it has any spam protection features; I can see administrative features but no auto-spam rule features.
One of the great things about Disqus is that you can use it on a “static” blog. My blog (the one hosting this article) is just GitHub Pages with posts written in Markdown. This has the advantage of being simple and free (and easy to cache/distribute on a CDN, etc.) but has the drawback of not being able to have custom code like that.
When my blog was hosted on AppEngine I had self-hosted comments; but Disqus seemed like a much better option. Not so sure about that now though!
I’ve gone back and forth on that, but the solution on my current blog is a note at the bottom of each post saying:
Comments welcome: email@example.com
This outsources the infrastructure to email, which already works, with the obvious drawback that the barrier for many people to emailing someone is higher than that for posting a comment. Although that might not be purely a drawback. :-) Another difference of course is that email is private, while some comments might be interesting to other readers, too. I partly remedy that by occasionally posting (attributed) updates at the bottom of a post if someone sends in something I think might be interesting for other readers, as in this example.
Besides not wanting to mess with running either a first- or third-party commenting system, the other motivation is that on a personal blog I feel some desire to keep it as a place for my own writing, not as a general third-party discussion forum attached to every page. So if someone sends in relevant comment I’m happy to post it (or a paraphrase), but I don’t necessarily want comments from random people arguing about tangents to be posted underneath my essays.
I’ve thought about not having comments directly (esp. when HN/Reddit/here usually get more comments than directly on the blog), but I do still think they add value. Not only do I get “Thanks!” now and then which lets me know people are finding my posts useful, but there’s often good discussion between people there.
I don’t get a lot of bad comments, so the only reason to remove them would be to get rid of the scripts but I think (hope) Disqus cares enough about its reputation that they’ll fix this and be more careful in future.
The discussion between commenters on one thread can lead to discovery of new ideas for those people or blog author. That’s essentially what happens here, on HN, etc. Doesn’t happen with email since the readers don’t know of each others' presence much less interesting comments.
Oh, based on the above I figured this was self-install and wouldn’t work for static sites. If it can be used directly from their site though, there’s nothing to stop them making the same mistake in the future? =D
I’m doing the same thing. I have static Jekyll blog, although now on Netlify rather than GitHub Pages because then I can use https with my custom domain.
I built and hosted my own IndieWeb Disqus alternative though. And it’s open for others to use: https://webmention.herokuapp.com/
And there’s other similar services that one can easily self-host. There’s even people who do automatic commits to their static page of any received comments, both from WebMention and through comments form. Been thinking of eventually experimenting with that as well and make my WebMention endpoint talk to my Micropub endpoint (another standard that’s now going through W3C) to submit any received mentions: https://github.com/voxpelli/webpage-micropub-to-github Some are already doing that with their respective endpoints.
I have static Jekyll blog, although now on Netlify rather than GitHub Pages because then I can use https with my custom domain.
My blog (hosting this article) is actually custom domain over SSL on GitHub pages (using CloudFlare to add the SSL). It’s not ideal, but was easy to add to the existing GitHub Pages site rather than migrating!
It uses WebMention (which btw now is a W3C Proposed Recommendation), which removes the need for embedding any authentication mechanisms like Facebook. Instead everyone writes the comments on their own blogs instead and pings my service which then retrieves the comment.
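For anyone curious how the receiving side works: per the spec, the endpoint gets a POST with `source` and `target` URLs, fetches the source, and only accepts the mention if the source document actually links back to the target. A minimal sketch of that verification step in Python (the HTTP fetch is omitted, and the example page is made up):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href values from anchor tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def verify_webmention(source_html, target_url):
    """Core WebMention check: the source document must actually
    link to the target URL, otherwise the mention is rejected."""
    parser = LinkCollector()
    parser.feed(source_html)
    return target_url in parser.links

# A real endpoint would first fetch `source` over HTTP, then run this check.
page = '<p>Great post! <a href="https://example.com/my-post">original</a></p>'
print(verify_webmention(page, "https://example.com/my-post"))  # True
print(verify_webmention(page, "https://example.com/other"))    # False
```

The nice property is that spam costs the spammer a publicly hosted page that really links to you, which is a much higher bar than a comment form.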
I’d never heard of this, this sounds really interesting - I shall have to read up! Thanks! :-)
In C++, the standard library is a performance minefield. Sometimes it’s fast, sometimes it’s terrible. For example, last I checked, std::regex is actually slower than Python’s regex engine. Of course you can use re2, but it’d be nice if you didn’t have to.
Many languages have this problem, but for a performance-oriented language like C++ it’s a pain in the ass.
Haskell: the barrier to entry is getting higher and higher as GHC becomes more and more complicated. A lot of modern Haskell code requires familiarity with 10 different LANGUAGE extension pragmas. Real World Haskell is now 8 years old, so there isn’t a good authority on what packages we should use for what. Often, the best solution is to ask on #haskell. A lot of the material for learning advanced Haskell is buried in Stack Overflow answers and blog posts, though this is getting better. The tooling isn’t great either, and we don’t really have any usable IDEs (doesn’t bother me, but many would see this as important). Another thing I see as problematic is how Haskellers love to be as terse and general as possible - if you can’t resolve the types yourself, you’re screwed even if you just want to use the library and don’t care about how it works. This is made even more difficult with a huge monad transformer stack, and the compiler output is basically unreadable.
I think the problem with this article is that it does not state clearly enough that all major tech companies use slave workers to produce their devices. The opposite is the rare exception, and it’s not just an Apple problem.
I’m interested as to whether anyone has any information to suggest that Apple is particularly bad at this, or maybe the opposite?
I always had the (perhaps misguided) view that Apple was somewhat better than other firms. That said, they do tout their environmental improvements with each new hardware release, ignoring the fact that their products are less repairable than ever! No mention is ever made of improvements in worker conditions in their supply chain.
Fair trade is something that has worked well in other industries, but only because of competition for exactly the same product. I can choose to pay a bit more for fair trade coffee, because I have a choice. If I want an Apple phone, I don’t have any choice (well, until Apple introduces the pricier “iPhone F[air trade]” at least). Not sure where I’m going with this, but it’s not just environmental issues that tech manufacturers need to focus on, but worker conditions. And let me not even get started on the working conditions of IT workers in so-called third world countries…
That said, they do tout their environmental improvements with each new hardware release, ignoring the fact that their products are less repairable than ever!
Repairability, environmental friendliness, and recyclability are not necessarily related. All these products are recycled (as some of their materials are really quite expensive!), and your local repair store doesn’t necessarily have high standards either. Also, some iPhone models were far more repairable than previous editions, so this is not necessarily a strict trend.
No mention is ever made of improvements in worker conditions in their supply chain.
Except in their yearly progress reports, which I found with one google query. http://www.apple.com/supplier-responsibility/progress-report/
While I agree that these kinds of progress reports often amount to putting lipstick on a pig, Apple does apply some pressure, on the human side as well. It would be nicer if they put that in their keynotes more often, but hey, what do you expect from an advertising event?
Microsoft, which is a large supplier of hardware through Xboxes, often produced at FOXCONN, has their reports here: https://www.microsoft.com/about/csr/transparencyhub/citizenship-reporting/
Sony is also a major FOXCONN customer, but I couldn’t find a specific report: https://www.sony.net/SonyInfo/csr_report/sourcing/supplychain/
Fair trade is something that has worked well in other industries, but only because of competition for exactly the same product. I can choose to pay a bit more for fair trade coffee, because I have a choice.
This is only true if you consider all other products simple and only phones complex.
And let me not even get started on the working conditions of IT workers in so-called third world countries…
I recommend reading the Fairphone reports, which - even if they fail their goals on many levels - give a very open assessment of the situation they are in and how hard it is to introduce checks while also keeping the product at a price level that customers would buy.
Yes, it’s all horrible, but at the same time, using Apple as the poster boy of “nice up front, but terrible behind” gives a lot of cover to a lot of companies that use much the same practices.
In the end, it comes down to this: large numbers of people are not interested in the working conditions in China when buying their phone in a store or getting excited about the new flagship smartphone that costs north of $500. If they were, these reports would be more widely read or reported upon.
https://www.fairphone.com/blog/ just in case anyone is unsure what “fairphone reports” (most likely) refers to.
Repairability, environmental friendliness, and recyclability are not necessarily related.
True. My original comment was a poor attempt at stating that “a phone that can be easily repaired doesn’t necessarily have to be recycled when it develops a fault”.
… using Apple as a poster boy of “nice upfront, but terrible behind”
So true. I’m not sure why, in the tech industry, Apple is such a target - perhaps because of their perceived success? Their practices are no different from, and in many ways are better than, any other firm that manufactures electronics on a massive scale. The popular press tends to home in on Apple, probably to the relief of many other firms!
I’m not sure why, in the tech industry, Apple is such a target
I suspect it’s because of the huge amount of cash Apple has to work with. I think the general premise is something along the lines of “Apple has vast amounts of cash reserves, and they say they care about worker conditions - surely they can leverage that money to do a better job than they are currently.”
I think the mistake is just as you said: people thinking that extremely exploitative companies are the exception and not the rule. How is this not the natural end-condition of a capitalist free market? Many economic analyses rely on rational agents, and what evidence is there that non-rational agents are non-natural, or just some statistical deviation, or even that the rational agent won’t make morally undesirable decisions? I highly recommend Anwar Shaikh’s “Capitalism: Competition, Conflict, Crises” on these subjects.
Seems to be a variant of this one, https://medium.com/@ValdikSS/deanonymizing-windows-users-and-capturing-microsoft-and-vpn-accounts-f7e53fe73834 requiring physical access.
That link seems to 404 for me.
The comma following the URL was seen as part of the URL, fixed.
What forum is appropriate for having discussions? lobste.rs is probably fine for promoting these things, but not great for Q&A/discussion of the readings. The standard choice would be a mailing list, but maybe there’s a newer, hipper technology that’d be better?
I’m kind of promoting putting together a discourse site or even going on reddit (maybe!). The idea of a maybe weekly “book club” thread plus side threads for exercises/specific topics/questions seems nice.
I also like Craig’s notion of landing on a “study guide” as a result. Each “book club” thread could have as a goal adding to that study guide (in Hackpad?).
What’s wrong with using irc, where it’s logged and published on a website? Like bash.org
Could also embed a web irc client.
I think direct communication channels (like IRC) can really get messy when you have more people participating in complicated discussions.
I like the idea of a mailing list, google groups is a pretty decent solution for that.
IRC could be nice to go alongside the main work, but I think something more organized than an IRC log would be really desirable.
So mailing lists, IRC, and Google Groups are all going to make LaTeX / MathJax a challenge. I’m thinking that’s a must-have feature?
Eh, I say just throw Discourse up on a small instance. Can still have IRC for live chat, on freenode for example.
IRC is ephemeral, active, and poorly archived. I like it, but more permanent and passive ways of conversing may be preferred for this beyond the rough stages.
This post reads as if it’s intended as flamebait, but I am going to do my best to respond as if it’s serious.
The overall problem with this post is as follows. Any vulnerability P in a set of several vulnerabilities present at a given time Σ(t), such as t₀ now, is sufficient to cause some undesired outcome Q, such as NSA agents passing your dick pics around the office and laughing at them. This post argues that fixing one particular P from Σ(t₀) by time t₁ is useless because Σ(t₁) is still nonempty. This is not a very good argument; the goal is not for Σ(t₁) to be empty, but rather for Σ(t₂) to be empty for some time t₂ not too far in the future, preventing Q thenceforth. If tedu’s argument were taken seriously, it would prevent any progress toward that goal.
Fixing any of the vulnerabilities represents progress toward that state of affairs.
The particular set of vulnerabilities @tedu is claiming will allow the CIA (or, more accurately, the NSA) to backdoor Debian systems are as follows:
P: Breaking into any Debian Developer’s machine and corrupting a package they are going to upload to the archive allows the TLA to insert a backdoor into that package, until that DD or another one uploads a new version of the package.
Q: Inserting a self-reproducing backdoor into a compiler allows the TLA to corrupt arbitrary future packages built with that compiler.
R: A backdoor inserted in the process of building a binary package from an irreproducible build process is virtually guaranteed never to be found.
S: Compromising a download server allows the TLA to insert a backdoor into any version of any binary package on it until a new version of the package is uploaded, and to serve up the backdoor only to people targeted by IP address until the compromise is fixed.
T: Anonymously-contributed innocent-appearing source code patches can exploit current compiler bugs to introduce backdoors into any package until the compiler bug is fixed.
U: All our existing software is full of accidental holes, so backdoors are unnecessary.
Of these items, reproducible builds fix P, Q (which Ted incorrectly dismisses as impractical; it was deployed in the field for a number of years), and R; S is actually not true, because apt will not install packages that are not signed by an authorized key, and the download servers do not have those keys; and T and U are still true and need to be solved by means other than reproducible builds. For example, I am typing this in a version of Iceweasel with known vulnerabilities.
There is a somewhat weaker real attack S', which is as follows: modify the download server to serve up an old version of the Packages file and the old versions of packages that have been replaced by bug-fixed versions, so that users relying on that download server will remain vulnerable to known vulnerabilities even if they apt-get update && apt-get upgrade regularly. This is a replay attack, and could be fixed.
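One existing mitigation along these lines: apt’s signed Release files can carry a `Valid-Until` date, and a client that enforces it will refuse metadata the mirror has kept past its shelf life. A rough sketch of that freshness check in Python (the Release excerpt is invented, and real apt of course does much more than this):

```python
from datetime import datetime, timezone

def release_is_fresh(release_text, now=None):
    """Reject a repository Release file whose Valid-Until date has
    passed -- the defense against the replay attack described above."""
    now = now or datetime.now(timezone.utc)
    for line in release_text.splitlines():
        if line.startswith("Valid-Until:"):
            stamp = line.split(":", 1)[1].strip()
            # apt uses RFC 1123 dates, e.g. "Sat, 25 Jul 2015 10:27:19 UTC"
            expiry = datetime.strptime(stamp, "%a, %d %b %Y %H:%M:%S %Z")
            return now.replace(tzinfo=None) < expiry
    return False  # no Valid-Until field: treat as unverifiable

release = """Origin: Debian
Suite: stable
Valid-Until: Sat, 25 Jul 2015 10:27:19 UTC
"""
print(release_is_fresh(release))  # False: the mirror is serving stale metadata
```

The crucial detail is that the timestamp lives inside the *signed* file, so a compromised mirror can replay old metadata only until it expires, not indefinitely.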
There’s another thing I want to point out, which is that T and U are much weaker attacks than P and Q. You are vulnerable to T and U if you are currently running a vulnerable package in a configuration where the vulnerability is exploitable. By contrast, you are vulnerable to P as long as any of the thousands of Debian Developers are currently running or have recently been running a vulnerable package in such a configuration, unless builds are reproducible.
So @tedu is completely mistaken about the importance of reproducible builds. Or he’s joking, which seems likely. But even if it’s a joke, people might take it seriously. Reproducible builds are one of the most important steps toward eventually having a secure general-purpose computing system, if that is possible.
His post also expresses some confusion about whether Debian reproducible builds are only a theoretical defense or an actually deployed defense. 83.8% of Debian packages in the testing release are currently built reproducibly. The original Vice article actually explains this, which reinforces my suspicion that his post is just a joke.
I think we have a difference of opinion about whether or not ‘U’ strictly dominates.
which Ted incorrectly dismisses as impractical; it was deployed in the field for a number of years
I would love a good citation for this. As far as I know Ken just demonstrated a simple proof of concept. I know it can be done. I believe it to be fragile, as in will fail to propagate or otherwise reveal itself by failing to compile some other code.
For the record, though, this is really more a reaction to the motherboard headline than the idea of reproducible builds. I’ll likely clarify a few things tomorrow.
Thank you, btw; the alphabet of exploits is exactly what more discussions need. That’s what I was trying to provoke.
What do you mean by “strictly dominates”? Obviously we have to solve U or we don’t get security. But solving U without solving P wouldn’t get us security any more than solving P without solving U. Solving P but not U at least gives you the possibility of having security on a few machines (the ones controlling your power steering, say) when you’re willing to pay a heavy price in convenience.
AFAIK ken has never admitted in writing to the scale and success of his deployed “Trusting Trust” attack; all we have is the oral tradition. Yes, it’s fragile, and fragile is exactly what you need in many cases; it’s a backdoor written in disappearing ink. If there was a backdoored GCC in Slackware96, who could prove it today?
I am glad my commentary is useful to you.
There is some probability that my build chain is backdoored (P), but I think it’s less than 100%. I think it’s considerably less than 100% that the NSA, FSB, PLA all have backdoors in my build chain. I would say, however, that the likelihood of there being some dumb bug (U) like pdf.js file system access is 100%, and that bug is available to all of the above parties to exploit.
(build chain = compiler, upstream build machine, download server, etc.)
First, no probability is 100% in the Bayesian sense we’re discussing. That’s on par with the people who said they wanted to come visit me in Buenos Aires because they’ve “always wanted to visit the Amazon rain forest”. I’ve made similarly dumb remarks myself many times. This seems like a good time to thank the good people like Sean B. Palmer, Udhay Shankar, and Jeff Ubois who were kind enough to correct me, usually while laughing at me.
Second, you’re unreasonably comparing apples and oranges in order to get the answer you want. It would be fair to compare the probability of at least one Debian Developer currently using a backdoored machine with the probability that your machine is currently backdoored; the first is clearly more likely. It would also be fair to compare the probability of at least one Debian Developer running at least one source-code-vulnerable piece of software that could conceivably be exploited to backdoor them, like VLC until a month ago, with you personally running at least one source-code-vulnerable piece of software that could conceivably be exploited to backdoor you. While both of these probabilities are very high, the first one is clearly even higher.
However, you’re cherry-picking one case from the first category and one case from the second category. This seems unreasonable to me, but it’s part of a larger unreasonableness, which is the following.
Third, you’re treating this as a probabilistic question about facts, as if we were talking about a disease or natural disaster: what is the probability that you currently have pancreatic cancer? What is the probability that there will be a major earthquake in San Francisco by 2020? But we are talking about intelligent adversaries here, not random events devoid of intentionality. The relevant question to your safety is not whether you currently have a keylogger installed; it’s how costly it would be for an adversary to install a keylogger on you.
That’s still a statistical question, because it depends on unknowns like what undisclosed vulnerabilities exist in Chrome, how expensive it will be to find them, whether they are exploitable to gain access to your entire account, and whether those exploits have already been written and are already in the hands of your adversary. But it’s a statistical question about costs to your adversary. It’s very clear that an adversary will find it cheaper to successfully attack whoever is the most vulnerable of the thousands of Debian Developers than to successfully attack your machine directly. (It’s very unlikely — at least hundreds to one, if not thousands to one — that you personally are less cautious and competent than all of them. In fact, I imagine that if we ranked the DDs by difficulty of compromising their machines, you’d be most similar to the DDs in the most difficult quartile.)
(Edited to be less flamey.)
Oh, I like where this is going. I disagree, of course, because I am an Alabama tick, but I like the argument.
I’ll grant that somebody upstream is owned. Adversary wants to own me. Agreed going through upstream is one way to do that, but it has its uncertainties. Maybe I don’t use that package, maybe I don’t update frequently or the package is stable. It lacks a certain immediacy. On the other hand, despite my cautious nature, I just can’t stop myself from following every link posted on lobsters. (Actually in practice, using Ubuntu, apt-get insisted I need like 75 updates a week on the “stable” branch. As if I had time to even read the list each time. So, yeah, I’m boned.) I feel like, with end point security being what it is (zero), it’s simpler and safer to always just attack end points. The math on this changes as software becomes more reliable. I would totally agree with you if I didn’t feel so exposed myself.
I wanted to return to an earlier point, wrt repro builds being a vital step towards secure software. And P without U vs U without P.
Let’s say we want to “close the loop”. We solve our Ps and Qs. Buttttt… U. Certifiable crap is still crap. :) or let’s assume we finally make software reliable. Everything is written in bug free rust. I can’t be owned directly. But… Neither can upstream actually. Ok, some rando package builder could choose to fuck with me, but they won’t be a party to inadvertent fuckery. P becomes incrementally less of a threat as the state of security advances.
So, I’m still wrong, but see what I’m thinking? :)
I’ve been following this with interest but have little to add to it. But a small point:
At present, no distribution is secure out of the box, especially not if one installs every package it offers. This is point U. But, also at present, everybody has a reasonable opportunity to know that (if they don’t, they’re being willfully obstinate), and hardening a system includes reducing its attack surface as much as possible. This always includes removing unnecessary services, and in anything complicated should include consideration of dividing the things it’s doing across several machines, and creating security boundaries between them. The distribution’s job is to help as much as it can, but it can’t do so alone.
Also, by the way, how would one change point U? I see two consistent long-term strategies, but I suppose there could be more:
1) Promote the view that there is no such thing as information security and never will be. Defend yourself in court as needed when your data is stolen. Rely on the general public to not blame you, so that you can still continue business as usual. Implement mitigations as dictated by short-term cost/benefit, with full awareness that this is an arms race with no inherent endpoint.
2) Promote research on how to make formal verification practical for use on large codebases, especially server software. While waiting for that, conduct and encourage independent security audits of your proprietary software and in everything open-source that you use. Make any incremental improvements to security that help immediately, and, since we’re seeking a global maximum and not a local one, also make improvements that you know to be part of an ultimate solution, even if they aren’t useful by themselves. Think in terms of how we can get to a position where attackers have no further paths to continue an arms race.
Every middle ground I can think of is in denial about one or more things that aren’t going to change. :)
To tie this back to the conversation: A fix for U would not make anyone safer without also fixing P, Q, R, S, and T. Strategy 2 demands that we fix all six points; strategy 1 would prefer not to spend money on any of them. If you believe in strategy 1, stop reading now; I believe in strategy 2 and have nothing to say that’s relevant to the other. :)
Some of these points may be useful to fix without the others, and some may not, but we need them all in the end. Which order to spend effort and money on them in is a question that needs to be answered from more-or-less a business perspective: How difficult are they, how soon will they pay off, how much do they raise the cost of an attack or reduce the value of a compromise? But those concerns can’t control what we consider to need fixing eventually, or we’ve given up.
I agree with both of you.
My favorite form of denial is denying that small, simple codebases — the ones that could plausibly be secure with our current level of knowledge — are unusable and cannot compete for mindshare with large, complicated ones. So I run my daemons with runit, for years I received my mail (and the FSF’s) with qmail, for years I left JS disabled and still occasionally browse with elinks, I maintain my main address book with pencil and paper, and I keep tinkering with projects to bootstrap from zero, including impractical toys like general-purpose mechanical computation and more practical things like solar thermal energy systems.
Maybe one day I’ll find a way to make my denial come true, or more dismayingly perhaps we will suffer some kind of catastrophe, but that’s not where the smart money is.
In reference to Q there is a lot of information about countering “Trusting Trust” via Diverse Double-Compiling by David A. Wheeler on his site: http://www.dwheeler.com/trusting-trust/
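A toy model of the DDC idea, with “compiler binaries” as Python functions: build the compiler’s source with the suspect binary and with an independent trusted one, rebuild with each result, and compare. Real DDC compares the final binaries bit-for-bit; this sketch (all names invented) just tracks a backdoor flag as a stand-in:

```python
def make_compiler(backdoored=False):
    """A toy 'compiler binary': compiling compiler source yields a new
    compiler binary; a trusting-trust backdoor re-inserts itself."""
    def compile_src(source):
        # A clean binary produces what the source says; a backdoored one
        # smuggles the backdoor into any compiler it builds.
        return make_compiler(backdoored=backdoored or source.get("backdoor", False))
    compile_src.backdoored = backdoored
    return compile_src

def ddc_check(suspect, trusted, source):
    """Diverse Double-Compiling: build the source with the suspect
    binary and with an independent trusted one, then rebuild with each
    result. Functionally equivalent compilers should converge."""
    stage2_suspect = suspect(source)(source)
    stage2_trusted = trusted(source)(source)
    return stage2_suspect.backdoored == stage2_trusted.backdoored

clean_source = {"backdoor": False}
honest = make_compiler(backdoored=False)
evil = make_compiler(backdoored=True)
trusted = make_compiler()
print(ddc_check(honest, trusted, clean_source))  # True: results agree
print(ddc_check(evil, trusted, clean_source))    # False: divergence exposes the backdoor
```

Note the attack this defeats is exactly the fragile one discussed above: the backdoor survives recompilation by the victim’s own binary, but not a rebuild rooted in an independent trusted compiler.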
I think I’ve used boldface in one comment on Lobsters in the time I’ve been here. You are at eight upvotes, which is eight times as many as I got. Well done. :)
Thank you :) I find that selective boldfacing is often a useful form of rubrication which increases the skimmability of text.
You state that S is impossible because backdoored packages wouldn’t install because they aren’t signed, but if P is assumed true, surely these packages could just be signed (unknowingly to the developer) by the attackers anyway?
Yes, if a developer’s GPG machine is compromised, then the attacker can use that access to sign packages, which they can then upload to download servers they have compromised. But compromising a download server is neither necessary nor sufficient to execute that attack. It just makes it slightly stealthier. That’s why I distinguished between P and S, as @tedu did in his original post.
The blog post has since been deleted. /u/gaggra on Reddit helpfully copied it in case you wanted to read it.
https://web.archive.org/web/20150811090106/https://blogs.oracle.com/maryanndavidson/entry/no_you_really_can_t may be easier to read.
Nicely, Lobsters saves a cached version of pages.
It’s a shame my comment about how unprofessional the post was didn’t make it into the archived version.
Actually, I posted it to the moderation log about the time it got taken down…
Still working on Haskell Programming with my coauthor Julie. Recently finished an initial scaffold of the functor/applicative/monad material, moving on to foldable and traversable now. We’re releasing algebraic datatypes through testing (right before monoids) this 15th. Next month on September 15th the monoid/functor/applicative/monad sequence will be released.
Kicking around ideas for a final project, pretty hard to find something that fits all of our specifications. Not easy to find something that’s:
Tractable for a beginner without much exposure to Haskell libraries. We’re not assuming they know databases and the like, for example. The book is written with the limitations of inexperienced programmers in mind.
Sufficiently interesting to someone that hasn’t been neck deep in code or distributed systems for awhile. I’d love nothing more than to have them make a Kafka or SaltStack clone in Haskell but it’s going to bore them to tears.
Trying to avoid heavy duty web apps that would require a lot of frontend work or learning a whole framework as well. If anyone has any suggestions, please reply here or please tweet or email them to me.
We’re also looking for reviewers (of any experience level) so if you have the time and it interests you, please reach out!
For a final project, perhaps a (text based) command-driven discussion / social network? Maybe start out with an in-memory implementation then swap it out for a file-based implementation (reinforcing the abstraction tools you’ve taught)? It could be a lead-in to databases: “manipulating all that text data was kind of a pain and won’t be very fast with hundreds of users. go check out databases!”. You could also generate/serve some simple HTML and say “that’s the basic idea behind webapps”
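The in-memory-then-file-based swap above is the kind of thing the book would presumably express with a typeclass in Haskell; as a language-neutral illustration, here is the same idea sketched in Python (all names invented). The application code depends only on the store’s interface, so the backend can be replaced without touching it:

```python
import json
import os

class MemoryStore:
    """In-memory post store -- the first implementation a beginner writes."""
    def __init__(self):
        self.posts = []

    def add(self, author, text):
        self.posts.append({"author": author, "text": text})

    def all(self):
        return list(self.posts)

class FileStore:
    """Drop-in replacement persisting to disk; same interface,
    so the rest of the program doesn't change."""
    def __init__(self, path):
        self.path = path
        if not os.path.exists(path):
            with open(path, "w") as f:
                json.dump([], f)

    def add(self, author, text):
        posts = self.all()
        posts.append({"author": author, "text": text})
        with open(self.path, "w") as f:
            json.dump(posts, f)

    def all(self):
        with open(self.path) as f:
            return json.load(f)

def feed(store):
    """Application code programs against the add/all interface only."""
    return [f"{p['author']}: {p['text']}" for p in store.all()]

store = MemoryStore()
store.add("julie", "hello")
print(feed(store))  # ['julie: hello']
```

Swapping `MemoryStore()` for `FileStore("posts.json")` changes nothing in `feed`, which is the teaching moment.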
I really like this idea! I’d have to think about how to avoid the problems with NAT and networking for home-users, but I’m going to kick it around with Julie and see if there’s something workable here.
Thank you :)
Maybe it could be a LAN-only thing for e.g. conferences or something along those lines.
Pop-up social network?
That’d be a good descriptor: you could include your <social-network-du-jour> handle in your profile for longer-term connections, maybe. For a talk, maybe a way to submit questions during the talk and have them queued up at the end.
Maybe https://ngrok.com/ will help? (also has a handy request inspector!). I’m not sure if it will be appropriate in complexity/difficulty for your audience, though.
I’d like to review. What do you need?
Is the email address in your lobsters profile a good way to reach you?
Yeah, sure, contact me at firstname.lastname@example.org
For review, do you need anything more formal than what I’ve been sending you and Julie on Twitter?
It’s more work and structure than that, but what you’ve been doing isn’t wildly different.
I wouldn’t mind helping out, since I want good learning materials out there.
Use the email address I used when I mailed you last week if you want to send me more info.
Cool, I’m curious what you think of the Maybe Haskell book that thoughtbot put out recently? I rather liked its approach of introducing functor/applicative/monad as concepts. I wish it had been out a year ago when I was starting to relearn Haskell in earnest.
I’ll have a look at the new updates after you update the epub, been following along with interest on this book.
And for a final project, can always punt and do a todo application. >.<
It’s hard for me to think about a 94-page book when we’re working on something much longer with very different objectives. You could probably stack it up against how I review books with more similar objectives in this post and see whether it hits the high notes I care about.
It’s very hard to think about a book with wildly different scope like that. We release >100 pages every month and we’re trying to make a book that gets people from zero to, “I can begin to solve my problems with Haskell”.
More fundamentally, I’m skeptical people can just be taken on an isolated tour of functor/applicative/monad, there are foundations that have to be in place for it to be anything but cargo culting.
Fair enough, just thought I’d ask is all. Looking forward to the updates regardless! I’ll dig into the link.
More specifically about Maybe Haskell: I think it’s worth giving people a teaser of what’s possible. I do this sort of thing too, but usually for in-person demos/tutorials. However, I don’t regard them as being a component of any kind of pedagogical scheme.
For sure, I’ll admit Maybe Haskell is at best a great way to whet someone’s appetite for Haskell. So in that regard I’d place it alongside LYAH, although it’s even less comprehensive in that it doesn’t cover a whole lot of Haskell at all. Then again, it doesn’t really bill itself as such either.
I just really liked the approach to the functor/applicative/monad explanation, is all. It helped convince a friend to learn Haskell, but they’re hitting the same wall everyone seems to with Haskell learning materials. I’m helping, but I’m not the best teacher, so I’m not sure if I’m hindering more than helping. Haskell isn’t hard; it’s just difficult to elaborate why doing things a different way is beneficial. In hindsight it’s 20/20, but from the other direction it just looks like a mirror.
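The kind of Maybe-driven progression being discussed might look something like this sketch (my own illustration of the idea, not an excerpt from either book): the same failure-prone task at each level of abstraction.

```haskell
import Text.Read (readMaybe)

-- A step that can fail: only even numbers can be halved cleanly.
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

-- Functor: apply a pure function inside the Maybe context.
ex1 :: Maybe Int
ex1 = fmap (+ 1) (readMaybe "4")                  -- Just 5

-- Applicative: combine two independent Maybe values.
ex2 :: Maybe Int
ex2 = (+) <$> readMaybe "4" <*> readMaybe "6"     -- Just 10

-- Monad: sequence steps where each one may fail.
ex3 :: Maybe Int
ex3 = readMaybe "4" >>= half >>= half             -- Just 1
```

The appeal is that each stage reuses the previous one's vocabulary, which is presumably why it works well as a teaser even if it isn't a full pedagogical foundation.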
Either way, keep it up; the lambda calculus links in the first chapter were fun to review and read. I was pleasantly amused that you started out with that.
I grabbed a copy of the book last week, so far so good.
Sure, I’d be glad to help review. The email address in my profile is valid.
Oops, thought I had :) Fixed now.
Looking over the ToC, I noticed that the four final chapters are TBA. I’d love to see at least one on something along the lines of “using Haskell in anger”: tips for real-world Haskell use, such as common packages to be aware of, debugging, problem solving with Haskell, and deploying programs. Are there plans to include something like that?
We’re always open to ideas/suggestions, but those chapters are literally TBA. We have topics planned for them.
The “almost guaranteed to happen” chapters are IO, “when things go wrong”, and the final project. The more open slot is DSLs & APIs. Data structures covers a lot of the common packages to be aware of, monad transformers covers some of those as well. Debugging is sort-of part of “when things go wrong” but it’s probably not quite what you’d think.
The way we name our chapters hides practical examples. The upcoming release (15th) has more practical projects and examples than have been seen so far.
Your suggestions are very much appreciated, as I wasn’t happy with the “DSLs & APIs” chapter, and I’m seeing a common theme (which you’ve increased my confidence in) in the practical bits people want.
I’d be happy to review; I’m going to university in a couple of months and will be doing a very theoretical CS course, which begins with Haskell. I’ve been told to buy the Bird book on Haskell, but a lot of people have told me it’s terrible. I already know the basics of Haskell, though, as I’ve used it for a couple of years.
What email should I contact you at?
Ah, I sent you a PM. I’m not entirely sure how the privacy settings work on this website, but I was under the impression my email was viewable in my profile.
Only what you explicitly put in the “About” box is visible (plus your Gravatar). You can see here what’s visible: https://lobste.rs/u/NickHu
I just thought there ought to be a “make my email public” checkbox or something like that, and reading https://lobste.rs/privacy implied that might already be the case. Duly noted, though.
I’d love to help review! I have experience in other languages, and have made a few false starts with Haskell (through LYAH and the cis194 class).
A cis194 dropout is a perfect candidate for us! What email should we contact you at?
There’s a link in my profile now. Thanks!
There is a reverse-engineered implementation of Hangouts: https://github.com/tdryer/hangups
It’s pretty usable, I did a write up about it if anyone would care to read it: https://nickhu.co.uk/posts/2015-02-13-hanging-up-on-hangouts.html
I’m writing this message from a Lobste.rs client that I wrote for iOS. I’m currently in the final stages before I open source it. Are there any licensing issues with creating an app for Lobste.rs?
I’ve heard that Apple explicitly do not allow GPL applications on their App Store, but maybe someone who knows more about app development can confirm.