I think he mentions this in connection with the difficulties with Scala, but there is more to learn about the JVM and how to tune it than about the Go runtime (which is fairly simple aside from GOMAXPROCS)
Since 1.5 GOMAXPROCS is no longer an issue, as it defaults to the number of cores.
As of 1.6 beta it is still a variable you can configure. Not all production installs are on 1.5+ yet. It is worth mentioning as being a parameter you can tune.
“More tuneable” should never be a downside. If the JVM required more tuning to match the performance of the go runtime that would be a fair criticism, but I would expect the JVM in default configuration to outperform go, and then you have the option of further tuning the JVM if the cost-benefit on doing so is positive in your circumstances.
Go outperforms a tuned JVM in some benchmarks: https://www.techempower.com/benchmarks/#section=data-r11&hw=peak&test=db
The simplicity of the runtime is a feature and allows for high performance, because the runtime has less to do.
I disagree with the second takeaway:
Takeaway: If you can choose a more permissive license for your project than GPL or LGPL, please do.
The GPL is a political and ethical choice. If you agree with the principles of free (libre) software, then please do choose the GPL.
I understand that it is annoying to find a project or library that solves your problem beautifully and then not be able to use it in a commercial context (it has happened to me). Well, that’s life. At least you have the source as inspiration and to learn from.
There is the question of how common it is for someone to choose the GPL or LGPL without understanding that it is a political choice (and I do agree with you that it is). choosealicense.com, the site recommended by GitHub for the selection of an open source license when you create a new repository, describes the GPL as “…the most widely used free software license…” and I could definitely see people unfamiliar with it (students and those people new to open source) choosing it based on that line alone.
[Comment removed by author]
If your philosophy says to use GPL/LGPL then there it is, choose those. Otherwise, consider using a license that allows more people to use it.
But even this framing makes GPL/LGPL the minority position, and MIT the majority position. I could also say
If your philosophy says others can take your work and not give back, choose MIT. Otherwise, consider using a license that keeps your open code open.
Things are hard. :)
Exactly. If I decide to license a library under GPL or LGPL, it’s because I want compensation for my work. I will accept compensation in the form of code, and I’ll accept it in the form of money – I’ll happily cut you a release with a commercial license.
But either way you choose to pay for the GPL code: “Fuck you, pay me”.
This is my perspective as well: gratis non-GPL code (MIT, BSD, Apache, etc) is exploitative of the developer: it allows taking without contributing back. It is - in my frank opinion - a scourge on hackers that they have NOT gotten compensated for their work, or recognized. A great example is the guy running the core NTP infrastructure. He should have a solid sinecure somewhere where he is paid a hefty salary with benefits to keep the system running.
If you want to give out your code, you deserve to be recognized and compensated. Your labor has value! If it’s a valuable contribution, it should continue on past your control and the control of the corporate users. This to me means AGPL3 is the most ethical licence when taking the historical view. From a “PAY UP” view, the commercial license is the correct one; MIT et al lets you NOT get recognition or compensation in any form except for corporate charity, which is sketchy at best. I feel super strongly about this.
I was going to post this as well. Of course, many people end up choosing the GPL by default; the real takeaway for me is not necessarily “avoid the GPL”, but rather, “think about how you want your contributions to be built upon”. If you decide that the GPL is best aligned with your beliefs, then please choose it.
If you agree with the principles of free (libre) software, then please do choose the GPL.
Copyleft does not have a monopoly on the principles of free software. I absolutely agree with said principles, but also explicitly avoid the GPL because of its viral tendencies and its reliance on intellectual property laws to work. If a library is GPL’d, I won’t even use it in open source projects, because it (in practice) means that my project has to become GPL’d.
It can also be a commercial choice if you dual-license your code. The viral effect of the GPL means no commercial work can be built over it, so it can be a tool to undermine your competition (by offering a “free” product) and increase your market share (if you provide an easy transition from your GPL'ed code to the commercial one).
the viral effect of the GPL means no commercial work can be built over it,
Commercial work like github built on git or EducateWorkforce built on top of AGPLed edX?
There’s more than one way to make money besides hiding the source code and restricting your users.
Straight from the horse’s mouth: http://www.gnu.org/philosophy/selling.en.html
But those are services, not products. My point is your competition cannot take your GPL'ed code, maybe extend it, and sell it as if it were their own; thus they cannot (legally) steal your work. Plus, making a free alternative of the product available undermines their market.
There’s more than one way to make money besides hiding the source code and restricting your users
I was never arguing in favor of that… in fact, I’m arguing the opposite: dual-licensing your product under the GPL can be a good way to make money…
I, on the other hand, use and have used tons of GPL'ed software commercially and I bet you have too. Which company nowadays would avoid using git just because it’s GPL'ed?
There’s no need to be so afraid of the GPL, even for the most evil of plutocrats.
Of course, but the original blog post was “why I’m not using your open source software”. It sounded like they refused to even touch it if it was GPLed.
My point is that some people have more misgivings about the GPL than necessary.
Oh, and this is not purely academic. I know for example Apple has or had a policy to not touch anything GPL'ed. A friend of mine once got excited about Octave and installed it on her Apple-owned laptop. She got chewed out by her boss for installing GPLed software. As part of her admonition, she was told that if she needed a Matlab license she should request it, but never install anything GPLed.
There are libraries licensed under the GPLv3, and I cannot use those libraries with my MIT/BSD/ISC-licensed projects because anybody deploying the binaries would essentially violate the GPLv3 conditions. I wouldn’t mind if the GPL was protecting free software from commercial exploitation, but the GPL does too much for my tastes. It tries to control what can’t and shouldn’t be controlled. In the end, companies don’t care anyway.
Blog? Ah, that thing I used to have before I started wasting my time on Twitter? I used to have one… where did I put it? Ah… here it is! (blows off the dust) http://gabrielsw.blogspot.com You will find Scala, Java, Haskell, and silly stuff… (beware that I defaced it with AdWords, in the hope that in 5 years I’ll be able to afford a cup of coffee)
Leaning on “pragmatism” probably isn’t the angle you want to take given its (anti-)intellectual pedigree: http://en.wikipedia.org/wiki/Pragmatism
That aside, if we mean a focus on what is practical, it seems unlikely to be practical to make a human do the work a compiler could do. You can do a lot better than Go these days. You could do a lot better than Go 10 years ago.
There are things I like about Go. Binaries and cheap threading come to mind. But to ignore the work PL has done to make entire classes of problems history is to perpetuate the null-value “billion dollar mistake”. Golang’s type system is near useless and emblematic of the sort of type system that dyn-langers wanted to escape to begin with.
So, every time I see a comment like this about Go, two questions come to mind:
If the answers to those are not favorable, I assume you are talking out your ass.
That aside, if we mean a focus on what is practical, it seems unlikely to be practical to make a human do the work a compiler could do.
Unless making the human do it isn’t that big a deal, and dramatically simplifies the language. Writing Go incurs low cognitive overhead—its greatest advantage. Go is for writing code quickly, code that other people can read and understand quickly, and then modify quickly themselves.
Edited: removed comments on pragmatism because I don’t want to enter a philosophical debate I’m not qualified to have.
I don’t see what bitemyapp and you are talking about as mutually exclusive. In fact, it does really sound like you care about the same things.
As I understand it (please correct me!), Go programmers are interested in a language which is simple in the same way C was. You can look at some code and easily understand what is going on and how fast it will run. Perhaps without the gun aimed by default at your foot. Because the complexity and bugs have nowhere to hide, it’s easier to write code that you can trust. Something like Haskell doesn’t fit into this niche because simple alterations to the code can have serious impacts on its performance.
Now on the other side of things, you also have people that care about correctness and choose to push some of that work onto the compiler. They primarily set out to do that with types. By arming themselves with more expressive types, they contend that if the compiler accepts it, there’s a very high probability that it’s correct.
To me, both of these sound appealing. Honestly, they don’t seem exclusive either. I want to be able to look at 10 lines of code and understand what assembly it generates, and other times I look at code and check the types and go Oh, I get what’s happening here.
The reason they haven’t been unified is that the languages in the types design space tend to be much higher level, but that’s not always true. There’s an implementation of typed assembly that has identical performance and runtime semantics to normal assembly, but with safety guarantees. Rust’s borrow checker is also a derivative of linear logic applied to types (there’s a lot of other junk in there too). We really can have interesting types that don’t make the code harder to read! It’s just that the compilers community doesn’t intersect very heavily with the low-level types community, so implementations are scarce.
If you’re curious I’m happy to link to some research.
Absolutely. Your comment reminds me of this video where SPJ talks about the intersection of these two approaches to the same goal. Also the title of the video is snarky and misleading. =/
Rust is a big step forward there, but they are still working out how to do a lot of this stuff. In the interim, there isn’t anything higher level on the C-like side that performs well besides Java. That’s why Go is seeing so much popularity. That and an extremely useful standard library.
do you typically do systems programming, especially network programming?
I recently wrote a k-ordered distributed unique id service in Haskell that I am considering changing to make some direct syscalls. Does that count? It was also twice as fast as the Erlang version out of the box. A good chunk of the code is bit manipulation.
“Systems programming” is a pretty dodgy term. I’d consider anything with a GC to not be “systems programming” because that implies kernel dev / embedded work. There are methods for doing that kind of work in Haskell but this isn’t the time or place for talking about how the runtime is side-stepped.
how much Go have you actually written?
A couple of years ago I was starting to chafe with untyped languages. I wanted easily deployable binaries as well. Go was one of the languages I hacked around in before settling on Haskell. It’s not a particularly unique or interesting language; deeper experience doesn’t change anything paradigmatically the way learning Haskell/Agda/Idris/Coq would.
There are little aesthetic touches in Go that were neat, but it was very clearly not doing what it could be. A better choice would be Rust and even that lacks higher kinded types and other things I use regularly in order to work more efficiently.
A good comparison to demonstrate what I’m alluding to would be writing an HTTP parser in Golang, then in Haskell.
If the answers to those are not favorable, I assume you are talking out your ass.
Not getting things off on the right foot mate.
Unless making the human do it isn’t that big a deal, and dramatically simplifies the language.
There is stuff Haskell doesn’t do because it would complicate the language and we don’t know how to make it nice (yet), such as dependent typing. Haskell is headed in that direction, but there’s an insistence on having a solid implementation for practical use.
This is probably a case of blub-itis where you regard the nicer stuff as silly and unnecessary, but everything beneath you is clearly primitive. Haskell has the benefit of Agda and Coq to keep it honest about its place in the order of things. Languages like Go do not, so they can convince themselves they’re king of PL hill.
Go is for writing code quickly, code that other people can read and understand quickly, and then modify quickly themselves.
https://gist.github.com/paf31/9c4d402d400d61a49656
Particularly note:
Despite these challenges, I can report that I feel much more confident in my ability to learn new Haskell libraries than in any other language. Over the course of these two projects, I have used more than 20 libraries for the first time. I put this improvement down to the expressiveness of the type language, and the ability to “follow the types” in order to learn a new set of functions. Certainly, there is a steep learning curve, but I find the benefits quickly outweigh the effort required.
I’m going to stop here. Things have already taken an acrimonious turn.
Credentials: taught Haskell for the last year quite actively. Writing a book on Haskell. Do Haskell OSS work. Giving a local class on Haskell this Friday.
I wrote a k-ordered distributed unique id service in Haskell recently
That sounds like a pretty cool project! Is it open source? =)
Not getting things off on the right foot mate.
Maybe not, but I’m tired of people denigrating Go.
There are little aesthetic touches in Go that were neat, but it was very clearly not doing what it could be. A better choice would be Rust and even that lacks higher kinded types and other things I use regularly in order to work more efficiently.
As lousy as it may be, most programmers don’t really understand higher kinded types or have a lot of trouble reasoning about them.
As far as Rust, the Go standard library is huge and extremely useful for the type of work it’s designed for. Rust has a lot less there.
Languages like Go do not, so they can convince themselves they’re king of PL hill.
I don’t hold that opinion, nor do many Go users I’ve worked with. Go isn’t about being king of any PL hill; it’s about being an effective tool for a certain job. Rob Pike is up front about this, he wanted a programming language that did the kinds of things he does on a day to day basis.
Certainly, there is a steep learning curve, but I find the benefits quickly outweigh the effort required.
Maybe. But Go has a shallow learning curve and doesn’t require a lot of cognitive breaks from other languages that are widely used. Large portions of the Go user base are immigrants from Python and Ruby, it’s no surprise they find Go easier to learn than Haskell.
That sounds like a pretty cool project! Is it open source? =)
Yes, and it’s being used in production, albeit not by me: https://github.com/bitemyapp/blacktip
Maybe not, but I’m tired of people denigrating Go.
I’m not sure why people thought a language that roughly equates to ALGOL + C FFI + green threads created by people with a luddite streak would be warmly welcomed by the wider software community.
As lousy as it may be, most programmers don’t really understand higher kinded types or have a lot of trouble reasoning about them.
You saw the part where I teach Haskell right? Don’t bs me about what can/can’t be learnt that you haven’t even tried to learn. Higher-kinded types are one of the easier things to learn in programming. Not hard at all.
Rob Pike is up front about this, he wanted a programming language that did the kinds of things he does on a day to day basis.
So do I. That’s why I use Haskell. I don’t want to keep solving the same stupid problems over and over.
Large portions of the Go user base are immigrants from Python and Ruby, it’s no surprise they find Go easier to learn than Haskell.
I’m not going to disagree, except to note that this is only due to familiarity, not because Haskell is intrinsically more difficult.
It’s also worth considering that the “ramp-up” is a one-off cost whereas friction lasts for-ever-and-ever. Consider the immense cost that one simple language-design mistake, null, has cost us. Haskell eliminates that and countless more. It’s faster to write things in Haskell than Go because of the higher level constructs. It has a concurrent and parallel runtime. It gives you your choice of green threads, OS threads, CPU pinning, software transactional memory, promises, channels, transactional channels…
Blacktip is neat. But I do have to note that it isn’t exactly a tremendous amount of work, nor a particularly complex system. Now show me a Haskell implementation of chubby / zookeeper / etcd. I don’t know of one, but I would be interested if there was one.
would be warmly welcomed by the wider software community
It is though, people who complain about Go are a loud minority.
You saw the part where I teach Haskell right? Don’t bs me about what can/can’t be learnt that you haven’t even tried to learn. Higher-kinded types are one of the easier things to learn in programming. Not hard at all.
I know Haskell, and understand higher kinded types. Don’t assume what I do or don’t know. :P
I’m not going to disagree, except to note that this is only due to familiarity, not because Haskell is intrinsically more difficult.
Familiarity is extremely important. Personally, I would be thrilled if programming education increased emphasis on functional languages.
It’s faster to write things in Haskell than Go because of the higher level constructs.
But you have to know more of those higher level constructs. Green threads multiplexed onto OS threads and channels are easy to reason about and quite expressive on their own. There is a paradox of choice problem with having too many bells and whistles.
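For readers unfamiliar with the model being described, here is a toy Go fan-out/fan-in sketch (illustrative only, not production code): goroutines are the green threads, and a channel both carries the results and synchronizes the collection.

```go
package main

import "fmt"

func main() {
	jobs := []int{1, 2, 3, 4}

	// Buffered channel: each goroutine sends exactly one result.
	results := make(chan int, len(jobs))

	for _, n := range jobs {
		go func(n int) {
			results <- n * n // square the input concurrently
		}(n)
	}

	// Receiving len(jobs) values is the only synchronization needed.
	sum := 0
	for range jobs {
		sum += <-results
	}
	fmt.Println(sum) // 1+4+9+16 = 30
}
```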
FWIW, there is my implementation (work in progress, there is at least 1 known bug I’m fixing) of Raft in Ocaml https://github.com/orbitz/scow
A few thoughts on the experience:
I read most of the Go implementations as I did this. I found them hard to understand and often broken. The latter is not the fault of the language but I believe the first is. Being able to send pointers between goroutines, I believe, will be viewed as a mistake in the long term. It makes understanding a program significantly harder.
I saw Mutexes. I found this strange. This has to do with my first point.
I found much of the code difficult to test and poorly abstracted. This is a mix of the language and particular developers, I believe.
My code is much uglier than I like, I’m working on fixing it.
By providing functors, a compile-time abstraction tool, my Ocaml code is much easier to test and breaks into clear components. It’s also heavily modular without, IMO, being significantly more expensive to understand. It’s also type safe: even though the Log and the Transport are orthogonal components the user can implement, it guarantees that the Log stores things the Transport knows how to send. This is convenient, but admittedly it’s something anyone would identify quickly and fix in a language like Go (Bohr Bug).
I leverage the type system heavily in propagating errors. This means when I discover a new error case I can add the code that creates it and then the compiler tells me every place to fix it. This is much harder to identify in a language like Go (Python, Ruby, and Java to some degree) and leads to abolishing a whole class of difficult to find bugs (Heisenbugs). I think this is the more significant contribution in building complicated systems.
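The earlier point about sending pointers between goroutines can be shown with a contrived Go sketch: sending a pointer over a channel shares access to the value rather than transferring ownership, so both sides can still mutate (or race on) the same data.

```go
package main

import "fmt"

type state struct{ count int }

func main() {
	ch := make(chan *state)
	done := make(chan bool)
	s := &state{count: 1}

	go func() {
		p := <-ch   // receiver gets the same underlying struct...
		p.count = 99
		done <- true
	}()

	ch <- s // ...so the sender can still observe its mutations
	<-done
	fmt.Println(s.count) // prints 99: aliasing survived the channel send
}
```

Without discipline (or a convention of sending values, not pointers), nothing in the language prevents both goroutines from writing concurrently.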
Nice, I’ll definitely take a look. I’ve been meaning to get better at Ocaml. You are just implementing for kicks?
My code is much uglier than I like, I’m working on fixing it.
You’ll get there! :D
I read most of the Go implementations as I did this. I found them hard to understand and often broken.
Yeah, I don’t know why, but most of the Raft implementations I’ve seen in Go are just written poorly. That, and Raft isn’t that easy to understand to begin with.
I leverage the type system heavily in propagating errors. This means when I discover a new error case I can add the code that creates it and then the compiler tells me every place to fix it.
Indeed, this is really useful. But also sometimes not. When you have a lot of errors that have meaningful solutions, it’s definitely great. But when the errors pretty much always mean die, it doesn’t matter so much.
You are just implementing for kicks?
Yep.
But when the errors pretty much always mean die, it doesn’t matter so much
But I don’t have to propagate all errors with this style. With Ocaml I can enforce handling errors (with some discipline) or I can say an error isn’t really handleable and not force it (generally using exceptions). In the case of Go, I unfortunately do not have the ability to make this choice. For myself, this is important enough to trade off a rather crappy concurrency model (OCaml’s sucks) for better error handling (Go’s sucks), because errors are what really distinguish complex systems from simpler ones.
Blacktip is neat. But I do have to note that it isn’t exactly a tremendous amount of work, nor a particularly complex system. Now show me a Haskell implementation of chubby / zookeeper / etcd. I don’t know of one, but I would be interested if there was one.
That’s pretty rude. That said, https://github.com/NicolasT/kontiki exists. As does https://code.facebook.com/posts/302060973291128/open-sourcing-haxl-a-library-for-haskell/ and tons of other things.
If you’re going to assign homework, you’re doing it with me. Why don’t we both write HTTP parsers in our languages of choice and compare brevity, performance, and time to develop?
I know Haskell, and understand higher kinded types. Don’t assume what I do or don’t know. :P
I’m going to have a hard time believing that if you think they’re complicated. I taught them to my student who had never programmed before. Not a problem.
Familiarity is extremely important.
No it isn’t. It misleads people into local maxima that are far from the global maximum.
But you have to know more of those higher level constructs.
You have to take a driver’s test before driving a car too. You realize this stuff isn’t that hard to learn with the right pedagogy right?
Green threads multiplexed onto OS threads and channels are easy to reason about and quite expressive on their own.
If you want to program that way in Haskell, nobody’s stopping you. That’s the default for us anyway (green threads, MVars/channels, STM)
paradox of choice problem
Lol. No.
Civil engineers don’t struggle with having choices in building materials. Don’t be ridiculous.
This isn’t productive. You should ping me about learning Haskell. You’re probably going through what I did, where I knew a “bit” of Haskell and thought I knew enough to write it off. (I ignored Haskell for ~5 years) - I had no idea what I was talking about though.
https://github.com/bitemyapp/learnhaskell
Offer still stands to do the side-by-side with parsers in Go and Haskell.
That’s pretty rude.
I don’t mean to be rude, I mean to be accurate. Being a smaller project doesn’t mean it’s bad, it just means it’s small.
I’m going to have a hard time believing that if you think they’re complicated. I taught them to my student who had never programmed before. Not a problem.
I wonder what you told them then. It’s easy to demonstrate the value of first order type constructors, List<T>, done. I’ve never seen any demonstration of second order type constructors that wasn’t of dubious utility, or extremely niche.
First order generics are even of questionable utility in the problem domain Go purports to solve, as evidenced by the multitude of problems people have used Go to solve. Of course, Go maps, slices, and channels are all generic, and that seems to be enough for most.
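A sketch of that point in Go as it existed when this thread was written (before the language gained user-defined generics): the built-in composite types are parameterized, while user-defined containers fell back on interface{} and runtime type assertions.

```go
package main

import "fmt"

func main() {
	// The built-in composite types are generic over their element types...
	scores := map[string][]int{"alice": {1, 2, 3}}
	scores["alice"] = append(scores["alice"], 4)
	fmt.Println(scores["alice"]) // [1 2 3 4]

	// ...but a user-defined container had to erase the type and
	// recover it with a runtime type assertion.
	var box interface{} = 42
	n, ok := box.(int)
	fmt.Println(n, ok) // 42 true
}
```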
This isn’t productive.
I agree. But I’d like to point out that you don’t like my claims against Haskell any more than I like you complaining about Go. You assume I dislike Haskell, I don’t. The only reason I’m here is to address your claim that “you can do a lot better than Go these days,” which I clearly disagree with.
If Go is so bad, why has it gained tremendous traction in a few short years while Haskell remains niche? Because people try Go and find they can solve real problems quickly, especially the dynamic language folks. Meanwhile, people check out Learn You A Haskell, or other resources, and hit a brick wall.
Haskell aside, Go succeeds because it’s made for a specific type of programming, not in spite of it. The language itself, the standard libraries, the tooling, the ecosystem, the community, are all built around that purpose and the core principle of simplicity.
You want to do homework? Let’s up the ante from an http parser to an http server, that serves files in the current directory.
package main

import (
	"log"
	"net/http"
)

func main() {
	// log.Fatal reports the error if the listener fails to start
	log.Fatal(http.ListenAndServe(":8080", http.FileServer(http.Dir("."))))
}
My server is extremely concise, and has speed comparable to Nginx. I can configure the number of cores it uses by setting the GOMAXPROCS environment variable.
This is, of course, cheating. But at the same time, it’s perfectly legitimate within the Go philosophical system. Go is about tools that solve real problems. The net/http code is quite nice, not anywhere near as complicated as Nginx or Apache. Because Go was designed to solve that kind of problem.
Keep trying to convince people that Haskell is a great tool, I hope you succeed. There are a lot of useful ideas there that should be widespread. Just consider that Haskell being a good thing doesn’t invalidate the advantages of different approaches.
Haskell’s IO subsystem makes it faster than Nginx, not comparable to.
http://haskell.cs.yale.edu/wp-content/uploads/2013/08/hask035-voellmy.pdf
http://adit.io/posts/2013-04-15-making-a-website-with-haskell.html#serving-static-content
My server is extremely concise, and has speed comparable to Nginx.
Yeah so is that one.
I can configure the number of cores it uses by setting the GOMAXPROCS environment variable.
Haskell has RTS arguments. Either you’re presenting arguments in bad faith, knowing Haskell has equivalents, or you’ve overstated your knowledge. (I’m guessing the latter)
First order generics are even of questionable utility in the problem domain Go purports to solve, as evidenced by the multitude of problems people have used Go to solve. Of course, Go maps, slices, and channels are all generic, and that seems to be enough for most.
…no.
Haskell’s IO subsystem makes it faster than Nginx, not comparable to.
Same for Go, which I consider comparable because it’s not off by a factor of 2. So unless Haskell is 2x as fast as Nginx, I would consider it comparable as well.
Yeah so is that one.
But it’s not literally part of the standard library. The emphasis was on the nature of the ecosystem.
Haskell has RTS arguments. Either you’re presenting arguments in bad faith, knowing Haskell has equivalents, or you’ve overstated your knowledge.
Neither. Merely explaining the Go example further, since you only toyed with Go fleetingly years ago. You keep assuming that I’m making arguments out of malice. I’m still not trying to be rude. :P
…no.
Well you can take that one up with Rob Pike, I’m done arguing about generics with people who haven’t bothered to use a language without generics for any appreciable amount of time. It’s as sad to me as people who say IO in Haskell is unreasonably difficult because of the IO monad. You’re the one who believes “everything beneath you is clearly primitive,” in your words.
I’m done arguing about generics with people who haven’t bothered to use a language without generics for any appreciable amount of time
Mostly out of curiosity: do Pascal, C, C++, and Java (pre-1.5) count? I already did those and I’m not in a hurry to go back…
[Comment removed by author]
Post your HTTP parser in Go, I’ll post mine in Haskell, we’ll benchmark.
I’m traveling to two cities on business over the next five days. I’ll be back on Tuesday though, and I may have time then.
Even so, this benchmark is questionable at best. How much of HTTP? Do large file uploads need to be accounted for? Test suite? Does time reading the RFC count? How is that time measured? How do I know you don’t just have an HTTP parser you’ve written? Meaningful benchmarks are a hard problem in and of themselves.
Just parsing the bytes of the HTTP payload. The test suite can be static request payloads that get parsed. You’ll want to use a benchmark suite that measures statistical significance of benchmark results, like Criterion.
I know of an HTTP parser in Haskell but I planned to write one myself, albeit possibly with some help from friends that have done more parsing work than I.
No parsing of the body makes it fairly easy. From there it’s just parse the first line, and then parse the K/V pairs. Alright, you’re on—once I get back on Tuesday. ;)
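As a rough illustration of the shape of that task (a hedged sketch, not an entry in the contest), parsing the request line and then the header K/V pairs in Go might look like:

```go
package main

import (
	"fmt"
	"strings"
)

// parseRequest splits a raw HTTP request head into the request line
// and header key/value pairs. A toy sketch of the plan above: no body,
// no validation, no header continuation lines.
func parseRequest(raw string) (method, path, proto string, headers map[string]string) {
	lines := strings.Split(raw, "\r\n")

	// First line: "METHOD PATH PROTO"
	parts := strings.SplitN(lines[0], " ", 3)
	method, path, proto = parts[0], parts[1], parts[2]

	// Remaining lines until the blank line: "Key: Value"
	headers = make(map[string]string)
	for _, line := range lines[1:] {
		if line == "" {
			break // blank line ends the header section
		}
		kv := strings.SplitN(line, ":", 2)
		headers[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
	}
	return
}

func main() {
	m, p, v, h := parseRequest("GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n")
	fmt.Println(m, p, v, h["Host"]) // GET /index.html HTTP/1.1 example.com
}
```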
Cool. Criteria I’ll be considering are LOC, performance, time to develop, and correctness. Format the code using gofmt please. We track development time using contiguous screen-recording (video). Ping me next week when you’re ready. :)
Have you considered using ReaderT or any other approach to remove the use of unsafePerformIO https://github.com/bitemyapp/blacktip/blob/master/src/Database/Blacktip.hs#L52 ?
Have you considered using ReaderT or any other approach to remove the use of unsafePerformIO … ?
Yes, I’m intentionally not doing so. It’s a singleton service, you shouldn’t ever have multiple instances of it running. I’m considering adding code to lock it down further.
https://github.com/bitemyapp/blacktip#this-is-a-singleton-service-one-instance-per-servermac-address
I’m not going to make it ReaderT because that implies running multiple instances in a single process would somehow make sense. unsafePerformIO’d global is unmistakable.
I think that unsafePerformIO should be avoided since it is an impure construct in a pure language. Requiring it would indicate a deficiency in the ideals of the language. It isn’t needed, though: the obvious solution is to create the reference and pass it as a parameter to the functions that need it.
Of course this “pattern” of using unsafePerformIO can be shown to not violate referential transparency or anything but it would improve code quality to avoid dangerous functions. It doesn’t set a good example to beginners either.
As I’m sure you know (but other people reading might not) ReaderT just lets us pass a variable around without having to add an extra parameter to all your functions. It doesn’t imply that you should run multiple instances.
To make it a singleton service you could attempt to bind on a specific port, failing when someone (another instance) already binds there.
vanila asked in IRC if this pattern was okay.
16:22 < vanila> What are peoples thoughts on the NOINLINE unsafePerformIO create IORef/MVar trick?
16:23 < carter> its kosher
TL;DR: Haskellers aren’t condescending, because they don’t look down on you. Rather, they treat you like an educated adult fully capable of taking initiative.
Basically she’s saying Haskellers are academics. Everything about that describes my experience in grad school. Friendly bunch.
Since the comments mention the Socratic method, that’s something that resonates with me: the only time I went to #haskell IRC with a question, the attempt to use the method didn’t sound like treating me like an educated adult capable of taking initiative, so it felt condescending, and really, it was a waste of everybody’s time. Of course, I can understand if the person made wrong assumptions about my question, but what’s wrong with a straight answer? As an adult, I’m perfectly capable of looking for, say “blargher co-types conjecture”, and I don’t need hand-holding. My main takeaway was to not ask more questions in #haskell :)
I’ve had a very similar experience with getting advice off of IRC in general: sometimes you get the jerk that just wants to put you down for no reason, sometimes you get a very patient kind soul who will take half an hour to explain something to you. You just need to grow a thick skin and get used to not being reactive about the jerks.
There’s a high degree of variance with the quality of help you get in #haskell. Sometimes you get Cale…sometimes you don’t get Cale. I’m sure you don’t mean to paint all 1200 residents of #haskell with the same pedagogic brush, but it’s worth making it clear that not every experience is going to be the same. I’d also take the opportunity to highlight that #haskell isn’t singularly devoted to teaching as such, it’s the big tent. People who haven’t otherwise given effective teaching a moment’s thought could end up “giving it a try” and scaring/burning some new person in the process.
The variance in quality and noise problems are why #haskell-beginners exists.
Related post about teaching in #haskell:
I think it is good that there is a big-tent channel for a programming language, and you are right in pointing out that it is very generic ground. I’d like to say that even if you just read the logs of the meaningful exchanges, you can learn a lot without asking anything. A bit of that, plus lots of papers, books, and practice, will make you worthy of Thor’s mjolnir one day. Lastly, not all people learn at the same rate; true fast learners may be as “dysfunctional” as slow learners, so whoever is involved in teaching needs a lot of patience, good will, and a sunny mood.
I don’t think the author understands the detection kit as I do. He calls it an offensive weapon. For me, it is a mechanism to determine whether something I am being told is BS. It is not an argument tool. So no, my Baloney Detection Kit is working just fine.
As I understood it, the focus of the article is not the “baloney detection kit” itself but the use and abuse of logical fallacies in a debate: how they are used as an excuse not to listen to other people’s arguments, and how damaging they can be when used to exclude.
The last paragraph says:
The Baloney Detection Kit is a cache of offensive weapons, and for many discussions it’s better to leave it behind and go in unarmed.
So it sounds to me like he is including logical fallacies in the kit. But my statement applies to logical fallacies too. You don’t present them to someone, you use them as a tool to understand what they are saying and find weak spots to ‘attack’ their claim.
Agreed. It’s not something you throw at other people, it’s something you use in your own head to find the weak points of a claim. More of a “maybe you should examine this detail more carefully” kit.
That’s really good. OTOH, I’m not going to stop talking about abstraction because some people equate it with unnecessary indirection :) I maintain that abstraction is the single most important tool in software development.
I realize this is partly because the examples are in Scala, but none of this gets at what a Functor really is.
Functor is an algebra.
Functor is an algebra with one operation, usually called map.
That one operation has a type something like:
(a -> b) -> f a -> f b
That one operation should respect identity:
map id = id
And that one operation should preserve composition:
map (p . q) = (map p) . (map q)
That’s it people. That’s it. Functor is a very weak structure. Many things can be functor. Many of those things will not look anything like a “list”, “collection”, or even a “data structure”.
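The two laws can be spot-checked for the list functor in a few lines (in GHC the operation is spelled fmap):

```haskell
-- quick spot-check of the functor laws for lists
main :: IO ()
main = do
  let xs = [1, 2, 3] :: [Int]
      p = (+ 1)
      q = (* 2)
  print (fmap id xs == id xs)                      -- identity law: True
  print (fmap (p . q) xs == (fmap p . fmap q) xs)  -- composition law: True
```

This checks only one type and one pair of functions, of course; the laws themselves quantify over all of them.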
Understanding free objects, free versions of these algebraic structures, can lend a more faithful intuition for what these things are.
Glancing at Coyoneda (the free functor) should give one some idea of why you’re not dealing with something that has anything to do with lists.
Want to know more?
You know the drill: https://github.com/bitemyapp/learnhaskell
Edit:
Since I take great satisfaction in excising misunderstandings, I’m going to include a Functor instance that should help drop the “collections” oriented view of what they are.
-- (->) or -> is the type constructor for functions
-- a -> a, the identity function's type is a type of
-- -> taking two parameters of the same type (a and a)
-- (->) a a analogous to Either a b
instance Functor ((->) r) where
  fmap = (.) -- the class method is spelled fmap in GHC's Prelude
-- (.) or . is function composition
-- (.) :: (b -> c) -> (a -> b) -> a -> c
-- more on this Functor instance: http://stackoverflow.com/questions/10294272/confused-about-function-as-instance-of-functor-in-haskell
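A quick use of that instance, to make it concrete: fmap over a function is just composition, so mapping builds a pipeline.

```haskell
main :: IO ()
main = do
  -- fmap (+ 1) (* 2) is (+ 1) . (* 2), i.e. \x -> (x * 2) + 1
  let f = fmap (+ 1) (* 2)
  print (f 3)   -- prints 7
  print (f 10)  -- prints 21
```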
Bonus round for upvoting me:
http://www.haskellforall.com/2012/09/the-functor-design-pattern.html
http://hackage.haskell.org/package/kan-extensions-3.7/docs/Data-Functor-Coyoneda.html
http://oleksandrmanzyuk.wordpress.com/2013/01/18/co-yoneda-lemma/
My upvote does not express how strongly I appreciate this kind of comment that expands on the post, brings in new material, and links to more. So I’m saying thanks. :)
Thanks to your comment, it was my pleasure. I was cringing and expecting downvotes when I posted it.
Understanding free objects, free versions of these algebraic structures, can lend a more faithful intuition for what these things are.
This is a super great point; it also, meaningfully, applies to other structures like Monads, Applicatives, Monoids, Categories, and Arrows. Really quickly, here’s Yoneda and Coyoneda (the “two” free functors)
newtype Yoneda f a = Yoneda { runYoneda :: forall b . (a -> b) -> f b }
data Coyoneda f b where Coyoneda :: f a -> (a -> b) -> Coyoneda f b
In each case we see that functor tends to mean having a parametric structure (the f) and a method of transforming the parameter to something else (the functions a -> b). When we “collapse” this free view of a functor we get to decide if, how, when, and why we combine that structure and its mapping function. For lists we, well, map it. For something like
data Liar a = Liar -- note that `a` does not appear on the right side
we just throw the mapping function away.
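A sketch of that Liar instance, to show it really does throw the function away:

```haskell
-- no values of type a inside, so fmap has nothing to map over
data Liar a = Liar deriving (Show, Eq)

instance Functor Liar where
  fmap _ Liar = Liar  -- discard the function; both laws hold trivially

main :: IO ()
main = print (fmap (+ 1) (Liar :: Liar Int))  -- prints Liar
```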
(Another key point that’s a bit harder to see is that if you map the Yoneda/Coyoneda formulation repeatedly it does not store each and every mapping function but instead composes them all together and retains only that composition. This ensures that functors cannot “see” how many times fmap has been called. That would let you violate the functor laws!)
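That composition behaviour can be seen directly in a toy Coyoneda (needs the GADTs extension):

```haskell
{-# LANGUAGE GADTs #-}

data Coyoneda f b where
  Coyoneda :: f a -> (a -> b) -> Coyoneda f b

-- repeated fmaps compose into one function; nothing is stacked up
instance Functor (Coyoneda f) where
  fmap g (Coyoneda fa h) = Coyoneda fa (g . h)

-- collapse back into the underlying functor
lower :: Functor f => Coyoneda f a -> f a
lower (Coyoneda fa h) = fmap h fa

main :: IO ()
main = print (lower (fmap (+ 1) (fmap (* 2) (Coyoneda [1, 2, 3] id))))
  -- prints [3,5,7]
```

However many times you fmap, the value is always a single `Coyoneda fa g`, so the count of fmaps is unobservable.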
Do you have any reference of functor being an algebra? I’m intrigued
Since we’re clarifying what a functor is, I guess it’s worth noting that you’re talking about endofunctors in the (idealized) Hask category. In category theory, a functor is defined by two mappings, one for objects in the category and one for arrows, that must preserve identity and composition (the laws you mention). Since the mapping of objects is already given by the type constructor, here one needs to provide only the mapping of functions, but it kind of irks me when people say a functor is only defined by “map” :)
I didn’t want to force the thunk for categories, natural transformations, etc. None of the readers will care if they’re still at the stage of thinking Functor has something to do with List.
Monoid et al. are more properly thought of as an algebra; Functors are more accurately thought of as a mapping.
There’s a tradeoff to be made with lying minimally to avoid false intuitions and not forcing a bunch of thunks that aren’t pertinent to the topic under discussion.
I anticipated this comment.
The point was to emphasize the abstract-ness of Functor.
Yes, but why characterize functor as an algebra? I think a mapping is pretty clear and well understood (heck, it can even be more intuitive!)
I agree my point could be pedantic, but on the other hand, the basics of CT are pretty simple and should emphasize the abstractness of functor more than mentioning Coyoneda :)
Functor is definitely an algebra. Its rules mean that it has a tight relation to certain functors in CT.
Ok. To be honest, I need to familiarize myself with the definition of algebra, is just that I had never heard this before :)
It’s an incredibly overloaded term, tbh. In the context of abstract algebra you’d probably want to think of a (G, L)-algebra as a set inductively defined by generators G and laws L. For instance, here’s a “free” monoid algebra (note that this isn’t a free monoid, but a “free monoid algebra” or a “free algebra of the monoid type” or a “(monoid, {})-algebra” maybe)
data FMonoid where
  Fmempty :: FMonoid
  Fmappend :: FMonoid -> FMonoid -> FMonoid

instance Monoid FMonoid where -- this is wrong! doesn't follow laws!
  mempty = Fmempty
  mappend = Fmappend
note that it has all the “generators” of the typeclass Monoid but follows none of the rules (mempty <> mempty /= mempty). Typically we also want to add a set of constants to form the smallest free algebra over a set
data FMonoid a where
  Embed :: a -> FMonoid a
  Fmempty :: FMonoid a
  Fmappend :: FMonoid a -> FMonoid a -> FMonoid a
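One way to see why the laws matter: interpreting this free algebra into any real monoid quotients it by the laws. A sketch (the `interp` name is mine, not from the thread):

```haskell
{-# LANGUAGE GADTs #-}

data FMonoid a where
  Embed    :: a -> FMonoid a
  Fmempty  :: FMonoid a
  Fmappend :: FMonoid a -> FMonoid a -> FMonoid a

-- fold the free structure into a genuine monoid; the target's
-- laws erase the distinctions the free syntax tree makes
interp :: Monoid m => (a -> m) -> FMonoid a -> m
interp f (Embed a)      = f a
interp _ Fmempty        = mempty
interp f (Fmappend x y) = interp f x <> interp f y

main :: IO ()
main = putStrLn (interp id (Fmappend (Embed "ab") (Fmappend Fmempty (Embed "cd"))))
-- prints "abcd": the Fmempty in the middle vanishes under interpretation
```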
Really interesting, thanks a lot! Now I’m trying to see how this ties to the Functor typeclass: G are the instance constructors and the functor laws make L ? I think I’m missing an important piece of the puzzle here :)
You’re not, that’s basically it.
data FFunctor f a where
  EmbedFunctor :: f a -> FFunctor f a
  Ffmap :: (a -> b) -> FFunctor f a -> FFunctor f b
This lets you build the free (Functor, {})-algebra over some initial type f. If we translate it naively then it doesn’t follow the laws
instance Functor (FFunctor f) where -- wrong!
  fmap = Ffmap
but we can implement it properly if we’re a little more clever
instance Functor (FFunctor f) where
  fmap f x = case x of
    EmbedFunctor fa -> Ffmap f x
    Ffmap g fa      -> Ffmap (f . g) fa
We need one more function, though, since we can’t use EmbedFunctor directly without exposing information about whether or not we’ve ever fmapped this functor (which shouldn’t be possible to access; that’s what fmap id = id says)
embed :: f a -> FFunctor f a
embed fa = Ffmap id (EmbedFunctor fa)
And now, if we think about it, we can see that every value of FFunctor constructed using embed and fmap is of the form
Ffmap fun (EmbedFunctor fa)
And so that EmbedFunctor constructor is totally superfluous. Let’s remove it
data FFunctor f a where
  Ffmap :: (a -> b) -> f a -> FFunctor f b
embed :: f a -> FFunctor f a
embed fa = Ffmap id fa
And—well—this is just CoYoneda again!
lower :: Functor f => FFunctor f a -> f a
lower (Ffmap f fa) = fmap f fa
Nice! Haven’t digested it properly, but I see the trick is to capture the functor with a datatype (it’s the same thing with free monads, right?). Now it’s easier to see where CoYoneda comes from, thanks! (you did show me an important piece of the puzzle :P )
On a side note, I went back to a book about CT, and it defines categories as an algebraic structure, so I guess a category can be called an algebra :) (besides, identity and composition form a monoid). Since functors form categories too, it might not be that much of a stretch to call them algebras either.
When I traveled into Malaysia they had a thermal infrared camera pointed at the crowd looking for people with fever. This should be at every point of entry.
For EBOV that would have very low specificity. You’d end up quarantining people that had colds. It might work if you limited it to people that were in the epidemic area.
The good thing about EBOV is that it poses a lower risk of a major epidemic because it requires close contact with infected people. Bad for medical staff, bad for people without access to medicine or a distrust of the medical system, bad for people whose death rituals involve lots of contact… but less risky for the overall world.
Which they did in Malaysia. I don’t think this is a bad thing. Put people up in a hotel, give them HBO and room service.
Yeah, makes sense. I’d imagine a targeted approach like that would be useful for yellow fever/influenza/other nasty bug monitoring as well.
The thermal cameras are pretty common in east and southeast Asian airports when something is going around. They were well trained by SARS. When H1N1 was going around in 2009, there were thermal cameras in at least Narita, Hong Kong, Bangkok, and various Chinese airports.
Given the population densities involved in some of the region, they have to be very wary of something spreading too fast and out of control.
Just because you have a high temperature does not mean you have a contagious disease. What if you just finished a strenuous run? Or if you happen to have a higher body temperature than most people? Those undergoing chemo sometimes have fevers because of the treatment, not because of a contagious disease.
One of the best parts of the article is the “Operations Catalog” near the bottom, with concise pictorial representations of the discussed operations. http://martinfowler.com/articles/collection-pipeline/#op-catalog
I like how you can translate the symbols to types pretty straightforwardly (sort of):
-- Col representing a collection
filter :: (a -> Bool) -> Col a -> Col a
flatten :: Col (Col a) -> Col a
map :: (a->b) -> Col a -> Col b
reduce :: (a -> a -> a) -> Col a -> a
groupBy is a little more problematic: either the result of the function has an ordering and you return the same collection with that ordering or you lose the nesting collection and return a map as the article indicates, but I guess the symbol is somewhat understandable:
groupBy :: Ord b => (a -> b) -> Col a -> Col (Col a)
groupBy :: (a -> b) -> Col a -> Map b (Col a)
flatMap is the one where I don’t think the pictorial representation gives much grasp of what’s happening, as the “shape” of the function is what differentiates it from map
(I guess the pic should be something like f: o => [ x ... ] )
flatMap :: (a -> Col b) -> Col a -> Col b
The interesting part is when you replace Col with any arbitrary “thing that can have other things inside” (formally, any type constructor of kind * -> *, if I got it right :) )
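A sketch of the Map-returning groupBy over plain lists (`groupBy'` is my name for it; assumes the containers package):

```haskell
import qualified Data.Map as M

-- bucket elements by the key the function computes for each one
groupBy' :: Ord b => (a -> b) -> [a] -> M.Map b [a]
groupBy' f = foldr (\x -> M.insertWith (++) (f x) [x]) M.empty

main :: IO ()
main = print (groupBy' even [1 .. 6 :: Int])
  -- prints fromList [(False,[1,3,5]),(True,[2,4,6])]
```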
Is this a sign that more and more functional programming concepts are leaking into mainstream languages?
Yes, I think that’s been a big theme of language design and use in the last decade. (Lambdas in Java, Ruby style settling on filter/map/reduce rather than loops, many JS libraries like React/Elm/Om)
I think it’s been a big improvement, and I’m diving into functional programming (and submitting lots of lobste.rs stories) to get ahead of it. :)
On the side, I’m working on a category-theory-based generic programming experiment/library in Scala that’s a port of some Haskell code. https://github.com/gclaramunt/scala-reggen
Would love some feedback :)
Always cool to see somebody is using my guide to learn Haskell.
https://github.com/bitemyapp/learnhaskell
I hope they don’t drop out, author seems thoughtful.
Just wanted to say thanks for that guide – it’s very well thought-out, and packed with really good resources!
I’m using the guide as well, slowly though and mostly for personal use. At work I’ve switched all of my development from ruby to… wait for it… ye olde school C11 (well not old really but you get the point).
To be honest it’s not all that bad with clang+llvm and using the scan-build tools. The static analysis in llvm/clang has made me a clang fanboy.
But in either case, thanks for the Haskell guide. I’m working through some Project Euler questions with Haskell, and I also bought about every Haskell epub out there and am working through them all.
Learn You a Haskell, however, is not quite my cup of tea. Anyone have other books? I bought Beginning Haskell, which in my personal view is better, but other options are welcome.
LYAH wasn’t my cup of tea either, that’s why I point people at cis194 - that’s the main way I teach Haskell. I only use LYAH as a direct guide with people that haven’t programmed before. Otherwise, LYAH and RWH are references to supplement cis194.
If you haven’t done cis194 yet, that’s what I recommend you do, followed up by the NICTA course. This is outlined in the guide.
I’ve heard Beginning Haskell is good if you want practical project walkthroughs, but I haven’t validated it and am trying to avoid non-free resources in my guide or things I haven’t tried.
Generally speaking, if you’ve done cis194 and possibly also NICTA course, I aim people towards working on their own projects if none of the supplementary sections in my guide intrigue them.
Edit: I’ve edited the guide to include this guidance.
Yep yep, I’m working through cis194 actually and NICTA is next up on the list. Think I found your github from Hacker News actually. Have to say, Haskell has helped my C code as well. I am finding myself mutating state way less than when I look at old C code I have lying around. It is surprisingly refreshing and honestly it’s making me much more intrigued how I’ll view the same code once I get to walking speed in Haskell.
My biggest win so far however has been to dedicate ~30 minutes a day to just haskell time. Harder now that its summer but quite doable in general. Not that I don’t skip days.
BTW, what resources you will recommend to move to an intermediate level? Something past monads, transformers, combinators, etc.
RWH?
I guess at this point I have to go and write a shit-ton of code (which in Haskell means ~300 lines)
My guide https://github.com/bitemyapp/learnhaskell mentions plenty of intermediate topics. Have you mastered everything listed?
And go make things. yes.
I’m not sure what’s more incredible: that his job was that easy to automate or that it took them six years to realize he’d automated his entire job. From a management perspective, it’s very important to stay connected to your employees so that you know what they’re doing and how they can achieve more. If they’d known what he was able to do, they could have promoted him or had him audit their other processes. From his perspective, he could have used the time to come up with new ideas or sharpen his skills. Sounds like a failure on the part of both sides. To be clear though, it’s still hilarious.
In white collar America, the unofficial game at some orgs is “how little work can I do without getting caught?” In this case, he probably did too little so he ended up being fired out of spite.
If he’s as clever as he says, then he’ll be fine, he just needs to apply himself and find a job that challenges him more.
I’m not even sure of that. Pretty often, people start their jobs with good intentions, and later find out that the friction of doing work is too high. I’ve seen people in companies slacking off, and if you inquired as a freshman, they would bounce all your ideas with “yeah, tried that, it makes sense, doesn’t fly”.
Turns out they slack off because if they moved, there’s no sense of really moving the project. Some companies have real aggressive in-fighting going on. There’s two reactions to this: leave (introducing all kinds of problems like less security on the new job) or stay and find yourself a corner where it is silent.
The problem is often that they miss the point where things cool down and they could be effective again, because they still believe that all new things introduce this huge amount of friction.
I have seen this story many times. A company hires someone with an explicit mandate of “you’re here to shake things up.” Person enthusiastically joins, tries to shake things up, then “we’re sorry but we seem to have different values, you’re going to have to go.” Often said by the same person that hired them in the first place!
As a consultant, a PM introduced me to the project by saying, “We want you to improve quality in our legacy codebase.” A week after I started doing so, they said, “Why did you spend two days adding tests, I could’ve whipped up that change in 30m!”. A week after I hacked things out, “Why doesn’t this code have tests, don’t you know we’re trying to improve quality?”. A week later it was “This isn’t a good fit, we’re canceling the contract.”
I had a job like this and did no work for 18 months. They kept telling me things were changing in the org VERY soon and my project would be on track.
Didn’t happen, I left. It was career suicide to stay there, but lots of other people were willing to stick around for decades and collect a paycheck. Sad because if they get laid off for downsizing they have no real appreciable skills anymore so I don’t know how they could ever work again…
[Comment removed by author]
[Comment removed by author]
Emphasis added by me:
It sounds like you have a lot to learn from your European colleagues to me. Calling people lazy and/or apathetic for finishing work on time, and not letting it bleed into every hour of the day is to me a symptom of a healthy work/life balance, combined with sensible workers rights.
A few years ago (I left in 2012) I worked for a US company that insisted I opt out of the EU working time directive. Since I was working in the UK they couldn’t fire me for not opting out, but I would have to keep time sheets documenting in minute detail where my time was spent, to protect them from overworking me. Everyone I knew opted out because the timekeeping requirements were so onerous.
Yet, it feels to me that you label them “lazy” partly because they’re in a time zone that means their day ends during the middle of yours. As an anecdote, when it came to cross-Atlantic meetings at the US company I worked at, they somehow always happened during US work hours^1. And yet that was sort-of alright when I worked from London, since there’s a few hours’ overlap between NY and London. However, my eyes were really opened to this when I moved to Hong Kong with the same company. Since there was a 12-hour time difference between HK and NY (at least part of the year) there was no overlap of “business hours”. Many of my colleagues^2 over there were attending meetings with people in NY from home at 10pm HK local time, because it was never an option that the people in the US would get in slightly early for such meetings.
Footnotes
[1] Well into, actually, because they had “important business to attend to” before attending overseas meetings. We joked that our US colleagues couldn’t be bothered to get out of bed, but I don’t believe that. I think it was more ignorance / arrogance and forgetting to take time zones into account.
[2] I luckily escaped this because the rest of my team was in London. Though, my situation was similar: meetings with London was always well into London working hours, such that if the meeting overran in the slightest it would finish after I was meant to finish in HK, but this meant no time for chit-chat with the London contingent.
Edited to fix typo: “GK” is now more correctly “Hong Kong”, and it should be clear that remaining “HK” is that city. Also edited to fix the emphasis, that wasn’t showing up originally.
At least in the context of Europe, I’ve found having an orderly work schedule, set hours, etc. to even correlate pretty well with professionalism and quality. Companies where developers work 80-hour unpaid overtime at the whim of management can seem responsive (they’ll have a phone meeting any time of day or night you want), but tend not to have the highest-quality developers and produce the best work. Which is why Danish and German companies, with their fixed work hours and orderly work culture, are not massively bleeding business to Greek and Spanish companies, despite the latter being willing to work more hours for less money.
I was on an eng team with a similar breakdown: NY, LN (actually UTC+2), and HK and a standing weekly call. A couple of weeks into it, I asked my peers if we could roll our call so everyone would be inconvenienced twice per month. No objections but I don’t know that it ever got further than us to the larger team. When it was just me and my HK colleague on a project we would alternate 9am/9pm local time so neither of us was always the one put out. More often than not, I fielded an EU end-of-day call (about noon NY) and passed info on for the AP start-of-day (7-8pm NY). It just seemed like the right thing to do.
/u/stig, I think we worked in the same industry and employer…
Heh, easily verified: This particular job was with Morgan Stanley, working on various minor systems related to structured products. (Exchange Traded Funds, its predecessor Opals, and a derivative called Custom Baskets.)
Bingo. Enterprise Infrastructure, here.
Heh, when I saw NY, LN, HK, I thought about MS :) I was a contractor on Fixed Income (end of day risk calculation)
Naturally! ;)
So many good people passed through, you probably can’t swing the proverbial dead cat around a tech forum without hitting a few.
I think mattgreenrocks bases it on experience. And you? Bonus points for something other than ‘gut feeling’.
I think that this starts out as “How little risk can I take?” The upside of high performance is mediocre and the downsides of high performance and low performance are severe (getting fired, possibly even blacklisted). Corporate life teaches risk aversion. Eventually, people get enough tenure that the risks to low performance disappear; those of high performance never do.
Likable mediocrities never get fired and people eventually figure this out. The only people who end up doing any work are the somewhat broken people like me who can’t tolerate pretending to work for 8+ hours and who therefore try to do something real, just to pass the time.
I mostly agree.
I’d add that the upside of high performance combined with political acumen is very rapid promotion to highly paid, influential roles. Unfortunately, that’s a stunningly rare combination of skills in tech.
With luck, sure. However, average performance suffices for the politically adept. In fact, any bit of energy that is put into high performance is arguably better spent on political gain.