I want to install Arch on my 2018 MacBook Pro really badly, but I don't think it's a good idea after past experiences… at least in 2015 there were docs on the Arch wiki. Looks dead now.
I’m going to ask the boss for a non-Mac next year…
I don’t understand why they removed the Installation Guide from the Arch Wiki. It used to be such a comprehensive resource to guide you through the installation process. Now all that information is spread across multiple wiki articles and you have to somehow piece it together.
I think all the content is still there; it just got split up. It's a bit of a shame, since that install page used to be mostly standalone; now it's a bit more of a choose-your-own-adventure that branches off at various points.
I’m happy that Oil shell (the author) and Elvish (https://elvish.io/) are both innovating with shell development again.
This is definitely an area that had flatlined for the last decade outside of plugin development.
I've written tons of ZSH over the last two years as I basically developed my own ZSH framework as a hobby, and I spend most of that time fighting the language. Even beyond the learning curve I still run into so many strange issues.
I’d love a new modern language to work with for scripting. Plugins are just not enough to fill the gap.
Northpointe, the company that sells COMPAS, said in response that the test was racially neutral. To support that assertion, company officials pointed to another of our findings, which was that the rate of accuracy for COMPAS scores — about 60 percent — was the same for black and white defendants
Reading between the lines a little bit (so I could be wrong): threshold the scores at some level, dividing them into "low risk" and "high risk" scores. An accurate prediction is a case where either a defendant given a high-risk score is re-arrested in the following two years, or one given a low-risk score isn't re-arrested in that time. Accuracy is accurate predictions divided by total cases.
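In case it helps to see that calculation spelled out, here is a rough sketch in Go (the struct, field names, and the threshold of 5 are all made up for illustration):

package main

import "fmt"

type Case struct {
    Score      int  // risk score, say 1 to 10
    Rearrested bool // re-arrested within two years?
}

// accuracy counts a prediction as correct when a high-risk case was
// re-arrested or a low-risk case was not, then divides by the total.
func accuracy(cases []Case, threshold int) float64 {
    correct := 0
    for _, c := range cases {
        highRisk := c.Score >= threshold
        if highRisk == c.Rearrested {
            correct++
        }
    }
    return float64(correct) / float64(len(cases))
}

func main() {
    cases := []Case{{8, true}, {2, false}, {9, false}, {3, true}}
    fmt.Printf("accuracy: %.2f\n", accuracy(cases, 5)) // prints 0.50
}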
Of course, the unstated assumption is that the false positive rate for those arrests (arrested while actually innocent, but found guilty at trial) is 0%.
For low-income defendants, we know this to be unlikely because they are appointed a public defense attorney, who is likely to ask them to plead guilty for reduced charges. Public defense attorneys do this because they do not have time to adequately defend each case, so if they try to fight the charges they'll likely lose even if the client is innocent. This isn't a case of the defense attorney trying to get out of work; this is frequently, literally, the most advantageous path their client can take. This unfortunately clouds a lot of crime metrics, and until our court system improves we won't get good enough data to draw conclusions going forward.
Interesting. My partner is a lawyer and told me that murder trials can cost $100k (this is in Canada), so only the wealthiest people can afford a private defense attorney. Regardless, most of their murder cases are funded by the government (meaning the defendants make around ~$10k/yr in income). Otherwise they end up mortgaging the house and borrowing from family.
The bigger problem that often comes up is that most people talk to the police way too much. So the only defense they are often left with is challenging civil rights abuses around the search warrants or whatnot, as the police typically have enough evidence. I've watched a number of police interviews from their cases and every time the suspects end up saying something they shouldn't.
The third thing that really doesn't get enough attention in the media is how cell phones have made the police's job 10x easier. That data gets pulled in every case and most of the time it's useful. Not a good era to be a defense attorney!
These three combined contribute to the 90%(?) plea deal situation.
Hmm they’re giving people a big monetary incentive to find them rather than paying them to go away.
I'd imagine the type of company that would pay $250k to get important data back would have a professional set-up with back-ups. But I might be being too presumptive.
Another thing is getting that much BTC would be a huge hassle and probably end up adding a ~10% premium.
This seems like a bigger gamble than the typical ransomware fees.
But even if victims do reach deep into their pockets, the probability that the attackers will decrypt the files is small.
It's also critically important that they decrypt it, otherwise people will stop paying the fee entirely.
And the alternative of “finding them” ultimately won’t necessarily help the business recover, according to the article
Well, I imagine that the advanced resources it takes to hunt down hackers at the FBI and elsewhere require some big motivation, and each of these attacks is high profile by nature, more so than the usual rinky-dink ransomware. But I guess both types are pretty bad, just quality over quantity.
I'm mostly just not sure what the criminals' business model is here, because I don't see it working very well.
The article describes this as an attack on Ukraine's infrastructure. As you point out, criminals don't have a strong incentive to do that unless they're going to get money from it.
But there’s an APT for whom this makes perfect sense - the same one that has been using its associated physical military in that area for a few years now. State actors have motives that aren’t money.
Haskell and OCaml. I have been really interested in functional languages for the last 10 years. I have been dabbling on and off and reading papers here and there, but never have I had the chance to program a large professional app in one. I also don't have any desire to "hobby program" anymore since settling down.
I mainly program C++ (Python too, but I have no love for OOP-y Python) and I dabbled in Go a bit. Rust doesn't actually interest me; I haven't seen enough novelty to actually drive me away from C++. Maybe 15 years ago everyone was talking about D, and then half that many years ago they were all talking about Go. I'm sure there was a new hotness language before my time that never came to be. As C++ evolves, it's going to start addressing enough of the big concerns that moving to a new ecosystem might have more downsides than upsides.
Haskell is definitely worth the effort. It’s not easy but I’ve learned so much about FP and programming in general from it. Plus I love Quickcheck for testing functions.
It also exposed me to PureScript, which is great, although still in the early stages. The high learning curve will probably scare away most JS devs though.
D has completely changed in those 15 years. What is now called D really is a new language that was briefly called D2. You should check it out again! I really am enjoying it. It feels like a completely modern and sane C++.
Agreed, I’m curious why the author is taking this tone.
Was Schildt hostile to his feedback?
It would be understandable if he had written it back in the day without thinking much about it. But he seems to be doubling down here after becoming experienced in programming and (in his words) communication.
I don’t understand why anyone would feel the need to publish such a negative review to begin with, let alone defend it. I did read the first few pages of examples and found them to be valid points, and overall this is written without much exaggeration or irrelevant rhetoric. I guess there must have been a history of divisive argument and this was written to settle it.
I think the tone here's valid. Coding C properly means having a handle on this sort of minutiae, and documentation purporting to give a comprehensive overview ("The Complete Reference") that contains these sorts of errors has the potential to do real harm to people learning from it. Also, the author was on the standards committee - he should've done a better job in the first place - and is on his fourth edition of the book - there's been ample time to fix the errors. He deserves some flak for continuing to peddle known-bad docs.
I think that’s the key thing. The earlier criticism was based on the 3rd edition, so any glaring errors should really have been resolved by then.
I remember reading a few of Herb Schildt’s books when I was in high school (including C: The Complete Reference) and that’s how I picked up a lot of bad habits (void main(..) for one), simply because I didn’t know any better (remember, this was before the Internet was available to people outside academia so I had no frame of reference).
I’m curious how people use personal wikis? Is it for note taking and todos?
I keep a $HOME/notes folder with a bunch of .md files for my notes and .tasks files (using vim-tasks [1]) for todos. Then I created a tiny ZSH script, 'note', to either open a note if it already exists or create a new one, from anywhere in the terminal.
$ note my-project
Will automatically execute:
nvim $HOME/notes/my-project.md
OR, if the note doesn't exist yet:
echo "# My Project\n\n" >> $HOME/notes/my-project.md && nvim $HOME/notes/my-project.md
I like the simplest approaches. But I'm curious if these wikis help keep things organized better.
I keep a personal wiki with gollum, which works over git as well and can use all sorts of text formats, though I tend to stick to markdown. I figured I would always use plain text editing, but on some occasions I do use the in-browser, markdown preview editor.
Oh my! Looking at the example, that’s even worse. Now you need to wrap every computation in a closure! There are far more elegant solutions, even with Go’s limited type system. E.g., from the code:
Or switch to a language with sum types and monads/macros :). Though, I still have difficulty understanding why the Go team just doesn’t add some syntactic sugar to let errors bubble up, since error is an interface.
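For reference, the explicit bubbling that such sugar would abbreviate looks something like this today (just a sketch; the file and its format are made up):

package main

import (
    "fmt"
    "io/ioutil"
    "strconv"
    "strings"
)

// loadPort shows the check-annotate-return dance that hypothetical
// syntactic sugar could collapse into a single expression per call.
func loadPort(path string) (int, error) {
    data, err := ioutil.ReadFile(path)
    if err != nil {
        return 0, fmt.Errorf("reading %s: %v", path, err)
    }
    port, err := strconv.Atoi(strings.TrimSpace(string(data)))
    if err != nil {
        return 0, fmt.Errorf("parsing port: %v", err)
    }
    return port, nil
}

func main() {
    port, err := loadPort("port.txt")
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Println("port:", port)
}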
I basically only use Cloudflare for SSL as a stopgap until I have time to buy my own cert and add it to nginx (which is very easy to do).
I otherwise refuse to use them given that they basically block Tor by having captchas appear so often that most sites are unusable. They always sidestep confronting this issue by saying they don't 'technically' block Tor, which is even more infuriating.
Plus there's the whole MITM position and passive analytics collection they have the potential to be coerced into. They provide a great service, but the compromise is too much to hand over at such a large scale IMO.
I remember reading (on the Go blog maybe?) about doing a small “assert” function for things that can fail (like command line utilities), it’s been one of my favorites since then:
// assert logs the error and exits; it needs the standard "log" package imported.
func assert(err error) {
    if err != nil {
        log.Fatalf("Fatal error: %s\n", err.Error()) // or panic(err) if you want the debug info
    }
}
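For example, in a small command line tool the usage ends up looking like this (a minimal sketch; the input file is made up, and the helper is repeated so the snippet stands alone):

package main

import (
    "fmt"
    "io/ioutil"
    "log"
)

func assert(err error) {
    if err != nil {
        log.Fatalf("Fatal error: %s\n", err.Error())
    }
}

func main() {
    data, err := ioutil.ReadFile("input.txt") // any call that can fail
    assert(err)
    fmt.Printf("read %d bytes\n", len(data))
}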
Of course, if you’re doing web services or any sort of daemon you’d want to handle errors gracefully, without having them kill your app (unless you’re doing startup)
I felt the same way for a while but eventually you get over it. There are other things that in other languages would take a lot to express but are short in Go (e.g., interface behavior).
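For instance, interface satisfaction is implicit, so expressing "anything with this behavior" takes only a couple of lines. A small sketch with made-up names:

package main

import "fmt"

// Greeter is satisfied by any type with a Greet() string method;
// no "implements" declaration is needed anywhere.
type Greeter interface {
    Greet() string
}

type English struct{}

func (English) Greet() string { return "hello" }

type Spanish struct{}

func (Spanish) Greet() string { return "hola" }

func main() {
    for _, g := range []Greeter{English{}, Spanish{}} {
        fmt.Println(g.Greet())
    }
}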
Personally I prefer this feature to exceptions; it's more explicit, and not that cumbersome. Exceptions always make me worried that there may be some unhandled mess deep down the function call, so I have to wrap the call with a try/catch block, and finally it indents the main logic further and gets ugly.
EDIT:
Besides, it's extremely terrible to mix returned errors and thrown exceptions in the same application, which is very common if you use different libraries with different error handling styles, while Go makes it consistent.
Personally I prefer this feature to exceptions; it's more explicit, and not that cumbersome.
But it’s not a binary choice of nullable error types versus exceptions. Sum types solve this problem quite elegantly, especially combined with some facilities to ‘unwrap’ instances of such types safely (e.g. try! in Rust or the Maybe/Either monads in Haskell).
A lot of Go’s features are slick, but this error handling is indeed pretty ugly. I wonder what the underlying reasons were for them to implement it like this.
It's ugly, but it's clean enough that I miss it when working with other languages. I always find myself having to deal with C++ functions that might throw exceptions, PHP functions that either use exceptions or have their own get_last_error() function, and Haskell, which has just about a bajillion ways to do errors (and this 15-page guide only covers some of them).
Go's error handling is one of those things that make it boring as hell, but also what makes it so easy to be productive in. I just never overthink stuff as much as I do with other languages.
What about error handling in Rust? It's basically unwrap() if you want to ignore the error, or try! if you want to propagate it. Other things, like ignoring the error and using a default value X, are also possible.
The main benefit over Go is that you have to handle the error somehow. You can’t forget to ignore it, and you have to ignore it explicitly if you want to.
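To make the contrast concrete, both of these compile without any complaint in Go (a minimal sketch):

package main

import (
    "fmt"
    "os"
    "strconv"
)

func main() {
    // The error is explicitly blanked out; n silently ends up 0 when parsing fails.
    n, _ := strconv.Atoi("not a number")
    fmt.Println(n)

    // Here the returned error is simply dropped on the floor.
    os.Remove("file-that-does-not-exist.txt")
}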
Don’t forget that Rust still has panics, which for me show up most commonly on array and Vec indexing when I’ve got a bug. Granted, this kind of exception almost always shows up in testing for me, and I rarely encounter libraries which panic on their own, but it’s worth remembering that Result<T, E> isn’t the only way that Rust handles error conditions. I personally don’t worry nearly as much about panics as I do unchecked exceptions in Java or Python, but nonetheless it’s worth noting that Rust doesn’t offer a panacea here, just different tradeoffs (which I’m quite happy with).
I wasn’t attempting to differentiate Rust vs. Go though. If I wanted to differentiate Rust and Go re: error handling, I would point out that the Rust Result<T, E> type issues compiler warnings if you neglect to use it, whereas Go doesn’t appear to do that (although I haven’t written more than hello world in Go). I was pointing out that Result isn’t the whole error handling story in Rust.
I’d believe that for most other teams. But Go is really a language that has been designed. Even OCaml, a language I like very much, with tons of very fancy features, feels less cohesive and cleanly designed than Go.
That's quite possible, since MLs like OCaml consist of the core language and the module language, and these feel a bit different. There is 1ML, which tries to unify them into a coherent whole. Plus OCaml has a lot of syntax and type system features plastered over the underlying ML type system, which make it a more flexible language but also a more complex one.
More intellectual laziness. “Return values worked for me, why should I do anything else?” Thompson et al are certainly smart and accomplished, but definitely fossilized in their thinking.
Arguably the key aspect of djb's coding style that leads to fewer bugs in practice is extremely explicit error handling. I'm sure the OpenBSD folks handle errors the same way.
I will take the balance of good enough features plus fast builds over richer features and painful builds every time. This is definitely an area where personal preference is the deciding factor.
I have to disagree here. Have you seen the various Erlang web frameworks available or tried any of them?
I’ve spent quite a bit of time recently doing so combined with a modern JS front-end stack and my experience was very positive.
Building web apps with Erlang is a very reasonable choice. Building on top of the awesome multi-process web servers combined with web sockets is a perfect combination, and being able to use static typing (via Dialyzer) and OTP makes working with concurrency and multiple web services/APIs very clean and manageable.
I know Elixir isn’t exactly Erlang, but Phoenix (an Elixir web framework) is a delight to work in. IMO, you get the benefits of Erlang without actually needing to always write pure Erlang.
The reasoning behind Elixir is, in fact, to make Erlang's power easier to use, especially for web development. Plataformatec started building apps in Erlang, but began designing Elixir to make it easier to do common web stuff while leveraging the IO beast that Erlang is.
You did not reverse the list, and for some reason the first measurement of timer.tc can often be off. Plus, it might be that garbage collection triggered while benchmarking map_body. Benchee measures multiple runs and runs garbage collection in between. Might also be something else, though - as the Erlang page mentions, architecture can also have an impact.
In the shell it indeed seems that map_body is slower on the first run (at least with the input list you used, 1,000,000 elements; I ran the benchmark with 10,000 elements).
That might be it. Benchee runs a warmup of 2 seconds where it doesn't measure (to simulate a warm/running system) and then takes measurements for 5 seconds so that we have lots of data points.
Might also still be something with elixir and/or hardware :) Maybe I should retry it with Erlang, but not this morning :)
Not sure what exactly you mean; the original benchmark was run compiled, not in the shell. This here was just done for comparison with the reported Erlang benchmarks - with my Erlang-fu (which is not very strong at this point in time) I currently fail to do that and don't have the time to look it up atm :)
Ok, it's not an executable Elixir script like I'd want, but I wrote a benchmark function and then called that in the shell - good enough for now I guess. There it is all much faster, and map_body seems to be about as fast as the non-reversed TCO version or even faster. I'd still need a proper benchmark to determine it all though.
Ok, I ran your code in Erlang and I also get consistently faster results for the TCO version. I don't get it; it is the same function I wrote in Elixir. The interesting thing for me, comparing Erlang and Elixir, is that with the same list size and what I think are equivalent implementations, map_body seems to be much slower in Erlang. E.g., compare the numbers here to the other post where I do the same in Elixir and iex. In Elixir map_body settles at around 490k microseconds; the Erlang version is between 904k and 1500k microseconds.
Static types only catch the errors they are designed to catch. Type systems with an escape hatch catch even fewer (although hopefully use of the escape hatch is widely criticized and avoided, like unsafePerformIO in Haskell).
The ability of a type system to express constraints is great, but not all projects will use the type system to encode all constraints. Only those constraints which get encoded into the type system can be caught by it. If they aren't, it's not the fault of the type system for not catching them, but the fault of the programmer for not allowing it to.
As an addendum to that: the biggest rebuttal to this point is that many type systems are hard to use, or result in tedious fiddling to get things to compile. While this may be true, and there are certainly things that can be done to make some type systems friendlier, there’s also the question of whether it’s reasonable to ignore the type system when it’s telling you you’re doing something incorrect. In my experience, the type system is right more often than the programmer.
Essentially, it is only reasonable to evaluate the success of a type system by:
- Whether it actually catches the errors it claims to catch.
- Whether it's easy enough that programmers will actually use it.
- Whether it catches enough different kinds of errors to be worth the additional complexity.
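To make the point about encoding constraints concrete, here is a small Go sketch (all names are made up): the constraint "this string has been validated" only gets caught by the compiler because it was pushed into a type.

package main

import (
    "errors"
    "fmt"
    "strings"
)

// Email can only be constructed via ParseEmail, so any function taking
// an Email is protected from unvalidated strings by the compiler.
type Email struct{ addr string }

func ParseEmail(s string) (Email, error) {
    if !strings.Contains(s, "@") {
        return Email{}, errors.New("not an email address")
    }
    return Email{addr: s}, nil
}

func sendWelcome(to Email) {
    fmt.Println("sending welcome mail to", to.addr)
}

func main() {
    e, err := ParseEmail("user@example.com")
    if err != nil {
        fmt.Println(err)
        return
    }
    sendWelcome(e)
    // sendWelcome("raw string") would not compile; if plain strings were
    // passed around instead, the type system could not catch the mistake.
}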
There are other benefits to type systems, such as improving code readability, making it easier to find applicable stdlib functions, and forcing you to think hard about how you are modelling data and to think through the constraints of certain design decisions ahead of time - things that I never did very often until I began programming in typed languages.
Plus dependent types will add even more value to static typing, which I look forward to using.
Yes, rereading my post, I perhaps gave the wrong impression. I am very much a fan of static typing! My issue is that people often attack static typing with silly/unreasonable arguments about it not doing things it never claimed to be able to do, or by pulling out weak static type systems (like C) and then using their weakness as an argument against static typing in general.
I disagree with this - one of the first things I've seen systems supporting type inference do is implement a mechanism for removing the explicit statement of type (var in C#, C++, etc.).
That hurts readability; alternatively, if it improves it, we see an argument for using a language like JS or Ruby.
Five years ago, I tried to tackle Haskell first, and found it way too weird. OCaml was a lovely way of getting used to a lot of concepts in the space while being able to be quickly productive in it (which is kinda necessary if you want to really learn the thing).
Learning OCaml then motivated some issues that Haskell addressed, so I was quite ready and enthused to learn about Haskell’s take.
I've used both in significant anger and Haskell in production. Both are good next steps and both will take some serious getting used to coming from Erlang. Your personal goals will dictate the right next moves, so it's worth noting that Haskell will introduce you much more forcefully to ideas of "purity" which can be very useful. OCaml is made somewhat simpler to understand by avoiding this issue. OCaml will introduce you instead to the idea of "modularity" - which is not precisely what you might think it is based on prior experience - and I'd say you should take a detour through learning that concept regardless of which one you end up investing more time into.
If your goals are more project-oriented then either can work, but Haskell’s library ecosystem is vastly better developed today. Haskell in production is tricky due to laziness—you will have to learn new techniques to deal with it. OCaml I’ve not used in production, but the challenges there seem more likely to be around getting the right initial momentum as you may have to write some significant portion of library code and work around a bit of ecosystem incompatibility.
Speaking of ecosystem again, OCaml has a standard library problem in that its genuine standard library is often not what the doctor ordered and there’s a choice of “extended” standard libraries. This choice sometimes feels a little forced which is slightly ironic due to the aforementioned modularity goals of OCaml, but c'est la vie.
The Haskell community has issues or benefits in that you will undoubtedly run into someone sooner or later who will happily try to explain the solution to a problem as a mere application of recognizing your algorithm as being a catamorphism over an implicit data type or will suggest refactoring your effect stack according to mtl or a Free monad. It might even be me that does this. Jargon aside there’s some significant insight that Haskell can be a gateway drug to. On the other hand, this can be the absolute last thing you want to hear when trying to get your project working. Mileage varies here.
The OCaml community has issues in that it’s small and sometimes speaks a lot of French.
Both communities are great in that they’re full of passionate and damn smart folks, though.
I’ll throw my hat in for Haskell since you’ve gotten mostly “OCaml then Haskell”, “OCaml”, and “Both” answers.
Learn Haskell. It can be easier and less noisy than OCaml too. The ability to do ad-hoc polymorphism with typeclasses in a way that is more conventionally what people want out of polymorphism without giving up type-safety is pretty nice. If your goal is to learn new things, the overlap-but-different bits between Haskell and OCaml won’t do a lot of good either. Haskell’s built on a simpler core semantically as well.
tl;dr I got tired of writing .mli files, I prefer my types to be next to my code, and I found modules tedious for everyday work. YMMV.
You won’t harm yourself by learning both, but you’ve got to make a choice of where to start. I think there’s a lot to be said for learning Haskell first, hitting some limitations of typeclasses, then playing with modules in OCaml or SML.
Disclosure: I am quite biased, but this is stuff I actually use in my day to day, not night-time kicking around of pet projects. Haskell is my 9-5 and I wouldn’t replace it with OCaml. That said, if Haskell disappears tomorrow the first thing I am doing is writing a Haskell compiler in OCaml and I can’t say I wouldn’t enjoy it :)
I think these points are totally spot on. I learned Haskell first then learned OCaml for a university compilers course. OCaml was trivial to learn if you’ve already learned Haskell. That sentiment is expressed in the introduction to Haskell for OCaml programmers. The OCaml for Haskell programmers really only covers syntactic differences.
Haskell is not really much more difficult to learn than OCaml, I estimate, but I don’t really know because I learned Haskell first.
I’ve programmed a bit in both, and I would say either one would do. Personally I’m doing a bit more with Haskell lately, and typeclasses are a nice feature (as others have mentioned). On the other hand, perhaps the very rigid way in which Haskell deals with effects can be a little overwhelming at times, and OCaml would be a little less rigid on this front. Two other things I can think of which might also push you towards OCaml may be the recent excellent O'Reilly book Real World OCaml by Minsky et al (the same author as the blog post), and the similarity between OCaml and other ML'ish languages such as F#. F# is a fully open-source project and Visual F# is a first-class citizen within the .NET ecosystem, if that happens to be of interest to you. In short I would say you are choosing between two excellent and well-designed languages and really can’t go too wrong. You’ll probably want to check out the “other one” (as I have) irrespective of which way you step now. Good luck!
The good news is that there is experimental work ongoing by Leo White, Frédéric Bour and Jeremy Yallop to provide something like that - more like modular implicits.
This is relevant to my interests. I’ve been exploring some similar ideas for my pet language, and while I don’t think I’d take this exact approach, it looks like a very good read.
Coming from Erlang, getting used to either one will require a bit of mental rewiring - Haskell more than OCaml, since with OCaml you're allowed to throw printfs (not literally) around. OCaml's type system is not as expressive as Haskell's, but it is certainly very advanced in its own right and it probably is easier to learn.
Fundamentally, both belong to the ML family, so learning either one first will help in learning the other. If I could choose I would learn Haskell first, because it is stricter in the type system sense. Moving to OCaml will feel like loosening a belt, so to speak. That said, OCaml isn’t a simpler language by any means: it has a powerful OOP system, module functors (think parametrized static classes) and a great record system. Recent versions added GADTs. Concurrent and parallel programming on OCaml is a sadder story, but work is being done on it.
Both Haskell and OCaml are very similar in many ways: type signatures, HM-like type inference, etc. However, in Haskell, many things are more "magic", whereas in OCaml, code is typically more explicit.
It depends on what you want to accomplish. I would feel much more comfortable deploying OCaml to production than Haskell. OCaml (currently) is very easy to reason about in terms of how it will perform, as it does not do any super interesting optimizations. The computation model is also much closer to how I feel comfortable thinking (eager evaluation). Haskell, on the other hand, can be deployed to production (Facebook is doing it), but they also have Simon Marlow on the team, so I'm not sure if that is an unfair advantage. That being said, if you want to expand your world of how computation can be, Haskell is well worth diving into.
I've personally found that the bigger issue I have with Javascript's tooling is that there are too many competing tools and approaches. That, coupled with the speed at which projects seem to start, develop and die, leaves me feeling utterly lost in the ecosystem. I don't think having an optimizing compiler would really help Javascript at this point. Many developers are more than content with sending down whatever sized bundle over the wire. In my opinion, Javascript needs to calm whatever storm it's been having since node.js came out and start to focus on building for the long haul.
That's because the tools are rarely good - the developers of them are just as quick to abandon them for the next cool toolset. Good takes time. They may be interesting or an evolution over previous tools, but they rarely maintain momentum past the growth stage. It's funny how many times the same stuff has to get reimplemented for each new tool. How much time is wasted? Just look at how many *-sass plugins there are for the thousands of static site generators and other tools.
I have to agree, that was a confusing explanation for a simple difference. I personally like Erlang’s minimalism, where:
Example = "one". % binding variable
Example = "one". % pattern match variable (true)
The runtime errors are usually sufficient to catch these mistakes in most situations.
I would like to see the implications of Elixir’s auto-rebinding in a broader scope rather than the narrow example of case statements. While I know those are used heavily in Erlang programs, I’m curious about the risks introduced by rebinding in a larger more complex application with lots of moving parts.
This is likely something that would require using Elixir for a period of time, which I haven’t yet tried, rather than being easy to communicate through blog post code samples.
Regardless, in those cases I would prefer the stricter immutability of variables that Erlang offers. Especially considering the programming style of using multiple processes and the use of concurrency in even basic programs.
I love the idea of having an SQL database just be a file read by a library. However, I can't fathom why dynamic typing in a database is a good idea. If a column is declared as an integer, I don't want random string data to silently appear there. I don't want a subtle bug in my application silently putting garbage in my database. I can to some degree understand the appeal of dynamically typed languages, but surely you want your database to protect you from storing broken data?
The obvious solution is to write your application in C and have a struct type for each row. Then if you have an int in your struct, you can only put an int in the database. :)
To the extent that it makes sense for a language it makes sense for a database too, I’d say. If you’re following the principle of alternating hard and soft layers, then if you have all your validation and typing in a relatively strict language then it may make sense to use a relatively loose storage layer.
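A rough sketch of that kind of hard layer in Go rather than C: a struct per row plus validation before anything reaches the loose SQLite storage. The table, the driver (mattn/go-sqlite3) and all names here are assumptions for illustration:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-sqlite3" // assumed SQLite driver
)

// User is the strict, typed layer; SQLite underneath stays dynamically typed.
type User struct {
    ID   int
    Name string
}

func insertUser(db *sql.DB, u User) error {
    if u.Name == "" {
        return fmt.Errorf("user %d has an empty name", u.ID)
    }
    _, err := db.Exec("INSERT INTO users (id, name) VALUES (?, ?)", u.ID, u.Name)
    return err
}

func main() {
    db, err := sql.Open("sqlite3", "app.db")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    if _, err := db.Exec("CREATE TABLE IF NOT EXISTS users (id INTEGER, name TEXT)"); err != nil {
        log.Fatal(err)
    }
    if err := insertUser(db, User{ID: 1, Name: "Ada"}); err != nil {
        log.Fatal(err)
    }
}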
I want to install Arch on my 2018 MacBook Pro really badly, but I don't think it's a good idea after past experiences… at least in 2015 there were docs on the Arch wiki. Looks dead now.
I’m going to ask the boss for a non-Mac next year…
I don’t understand why they removed the Installation Guide from the Arch Wiki. It used to be such a comprehensive resource to guide you through the installation process. Now all that information is spread across multiple wiki articles and you have to somehow piece it together.
Is it not this?
https://wiki.archlinux.org/index.php/Installation_guide
It is. I meant to say it’s much less comprehensive now than it used to be.
I think all the content is still there; it just got split up. It's a bit of a shame, since that install page used to be mostly standalone; now it's a bit more of a choose-your-own-adventure that branches off at various points.
Yep, exactly. It’s up to you now to piece together all the info spread across multiple pages.
I’m happy that Oil shell (the author) and Elvish (https://elvish.io/) are both innovating with shell development again.
This is definitely an area that had flatlined for the last decade outside of plugin development.
I've written tons of ZSH over the last two years as I basically developed my own ZSH framework as a hobby, and I spend most of that time fighting the language. Even beyond the learning curve I still run into so many strange issues.
I’d love a new modern language to work with for scripting. Plugins are just not enough to fill the gap.
What does accuracy mean here?
Reading between the lines a little bit (so I could be wrong): threshold the scores at some level, dividing them into "low risk" and "high risk" scores. An accurate prediction is a case where either a defendant given a high-risk score is re-arrested in the following two years, or one given a low-risk score isn't re-arrested in that time. Accuracy is accurate predictions divided by total cases.
Of course, the unstated assumption is that the false positive rate for those arrests (arrested while actually innocent, but found guilty at trial) is 0%.
For low-income defendants, we know this to be unlikely because they are appointed a public defense attorney, who is likely to ask them to plead guilty for reduced charges. Public defense attorneys do this because they do not have time to adequately defend each case, so if they try to fight the charges they'll likely lose even if the client is innocent. This isn't a case of the defense attorney trying to get out of work; this is frequently, literally, the most advantageous path their client can take. This unfortunately clouds a lot of crime metrics, and until our court system improves we won't get good enough data to draw conclusions going forward.
Interesting. My partner is a lawyer and told me that murder trials can cost $100k (this is in Canada), so only the wealthiest people can afford a private defense attorney. Regardless, most of their murder cases are funded by the government (meaning the defendants make around ~$10k/yr in income). Otherwise they end up mortgaging the house and borrowing from family.
The bigger problem that often comes up is that most people talk to the police way too much. So the only defense they are often left with is challenging civil rights abuses around the search warrants or whatnot, as the police typically have enough evidence. I've watched a number of police interviews from their cases and every time the suspects end up saying something they shouldn't.
The third thing that really doesn't get enough attention in the media is how cell phones have made the police's job 10x easier. That data gets pulled in every case and most of the time it's useful. Not a good era to be a defense attorney!
These three combined contribute to the 90%(?) plea deal situation.
nor is it a good time to look guilty but be innocent :(
Hmm they’re giving people a big monetary incentive to find them rather than paying them to go away.
I'd imagine the type of company that would pay $250k to get important data back would have a professional set-up with back-ups. But I might be being too presumptive.
Another thing is getting that much BTC would be a huge hassle and probably end up adding a ~10% premium.
This seems like a bigger gamble than the typical ransomware fees.
It's also critically important that they decrypt it, otherwise people will stop paying the fee entirely.
Well, I imagine that the advanced resources it takes to hunt down hackers at the FBI and elsewhere require some big motivation, and each of these attacks is high profile by nature, more so than the usual rinky-dink ransomware. But I guess both types are pretty bad, just quality over quantity.
I'm mostly just not sure what the criminals' business model is here, because I don't see it working very well.
The article describes this as an attack on Ukraine's infrastructure. As you point out, criminals don't have a strong incentive to do that unless they're going to get money from it.
But there’s an APT for whom this makes perfect sense - the same one that has been using its associated physical military in that area for a few years now. State actors have motives that aren’t money.
Haskell and OCaml. I have been really interested in functional languages for the last 10 years. I have been dabbling on and off and reading papers here and there, but never have I had the chance to program a large professional app in one. I also don't have any desire to "hobby program" anymore since settling down.
I mainly program C++ (Python too, but I have no love for OOP-y Python) and I dabbled in Go a bit. Rust doesn't actually interest me; I haven't seen enough novelty to actually drive me away from C++. Maybe 15 years ago everyone was talking about D, and then half that many years ago they were all talking about Go. I'm sure there was a new hotness language before my time that never came to be. As C++ evolves, it's going to start addressing enough of the big concerns that moving to a new ecosystem might have more downsides than upsides.
Haskell is definitely worth the effort. It’s not easy but I’ve learned so much about FP and programming in general from it. Plus I love Quickcheck for testing functions.
It also exposed me to PureScript, which is great, although still in the early stages. The high learning curve will probably scare away most JS devs though.
Yeah. From what I’ve seen, PureScript is more for Haskell devs moving to the web than it is for JavaScript devs moving to FP.
D has completely changed in those 15 years. What is now called D really is a new language that was briefly called D2. You should check it out again! I really am enjoying it. It feels like a completely modern and sane C++.
The tone is a bit abrasive but the information seems to be important.
Agreed, I’m curious why the author is taking this tone.
Was Schildt hostile to his feedback?
It would be understandable if he had written it back in the day without thinking much about it. But he seems to be doubling down here after becoming experienced in programming and (in his words) communication.
I don’t understand why anyone would feel the need to publish such a negative review to begin with, let alone defend it. I did read the first few pages of examples and found them to be valid points, and overall this is written without much exaggeration or irrelevant rhetoric. I guess there must have been a history of divisive argument and this was written to settle it.
I hope it did end the argument.
I think the tone here's valid. Coding C properly means having a handle on this sort of minutiae, and documentation purporting to give a comprehensive overview ("The Complete Reference") that contains these sorts of errors has the potential to do real harm to people learning from it. Also, the author was on the standards committee - he should've done a better job in the first place - and is on his fourth edition of the book - there's been ample time to fix the errors. He deserves some flak for continuing to peddle known-bad docs.
I think that’s the key thing. The earlier criticism was based on the 3rd edition, so any glaring errors should really have been resolved by then.
I remember reading a few of Herb Schildt's books when I was in high school (including C: The Complete Reference) and that's how I picked up a lot of bad habits (void main(..) for one), simply because I didn't know any better (remember, this was before the Internet was available to people outside academia, so I had no frame of reference).
That's a good point.
I’m curious how people use personal wikis? Is it for note taking and todos?
I keep a $HOME/notes folder with a bunch of .md files for my notes and .tasks files (using vim-tasks [1]) for todos. Then I created a tiny ZSH script, 'note', to either open a note if it already exists or create a new one, from anywhere in the terminal.
Running $ note my-project will automatically execute nvim $HOME/notes/my-project.md, creating the file with a heading first if it doesn't already exist.
I like the simplest approaches. But I'm curious if these wikis help keep things organized better.
[1] vim-tasks plugin: https://github.com/irrationalistic/vim-tasks
You’re basically accomplishing the same thing as this project, except he includes git.
I keep a personal wiki with gollum, which works over git as well and can use all sorts of text formats, though I tend to stick to markdown. I figured I would always use plain text editing, but on some occasions I do use the in-browser, markdown preview editor.
The pain of if err != nil { has never been more real.
There are packages that make these sections more readable. For example: https://github.com/whitecypher/work
Oh my! Looking at the example, that’s even worse. Now you need to wrap every computation in a closure! There are far more elegant solutions, even with Go’s limited type system. E.g., from the code:
Can be converted into:
Or just write a small function:
Or switch to a language with sum types and monads/macros :). Though, I still have difficulty understanding why the Go team just doesn't add some syntactic sugar to let errors bubble up, since error is an interface.
It's things like this that make Go feel like a spectacular waste of an opportunity to have improved on C. That API could have been more like:
Is there no way to avoid this without breaking the DRY principle?
I basically only use Cloudflare for SSL as a stopgap until I have time to buy my own cert and add it to nginx (which is very easy to do).
I otherwise refuse to use them given that they basically block Tor by having captchas appear so often that most sites are unusable. They always sidestep confronting this issue by saying they don't 'technically' block Tor, which is even more infuriating.
Plus there's the whole MITM position and passive analytics collection they have the potential to be coerced into. They provide a great service, but the compromise is too much to hand over at such a large scale IMO.
Is this worth learning for a self-taught programmer?
I just started using Go recently and really find this to be annoying.
I remember reading (on the Go blog maybe?) about doing a small “assert” function for things that can fail (like command line utilities), it’s been one of my favorites since then:
then you can just do something like:
Of course, if you’re doing web services or any sort of daemon you’d want to handle errors gracefully, without having them kill your app (unless you’re doing startup)
I felt the same way for a while but eventually you get over it. There are other things that in other languages would take a lot to express but are short in Go (e.g., interface behavior).
Personally I prefer this feature to exceptions; it's more explicit, and not that cumbersome. Exceptions always make me worried that there may be some unhandled mess deep down the function call, so I have to wrap the call with a try/catch block, and finally it indents the main logic further and gets ugly.
EDIT: Besides, it's extremely terrible to mix returned errors and thrown exceptions in the same application, which is very common if you use different libraries with different error handling styles, while Go makes it consistent.
Personally I prefer this feature to exceptions; it's more explicit, and not that cumbersome.
But it’s not a binary choice of nullable error types versus exceptions. Sum types solve this problem quite elegantly, especially combined with some facilities to ‘unwrap’ instances of such types safely (e.g. try! in Rust or the Maybe/Either monads in Haskell).
A lot of Go’s features are slick, but this error handling is indeed pretty ugly. I wonder what the underlying reasons were for them to implement it like this.
Explicit over magic.
This, so much this.
It's ugly, but it's clean enough that I miss it when working with other languages. I always find myself having to deal with C++ functions that might throw exceptions, PHP functions that either use exceptions or have their own get_last_error() function, and Haskell, which has just about a bajillion ways to do errors (and this 15-page guide only covers some of them).
Go's error handling is one of those things that make it boring as hell, but also what makes it so easy to be productive in. I just never overthink stuff as much as I do with other languages.
What about error handling in Rust? It's basically unwrap() if you want to ignore the error, or try! if you want to propagate it. Other things, like ignoring the error and using a default value X, are also possible.
The main benefit over Go is that you have to handle the error somehow. You can't forget to ignore it, and you have to ignore it explicitly if you want to.
Haven’t had to deal with Rust yet, so I really don’t know what you’re talking about, sorry!
Maybe I should try it again at some point (I never seriously considered it in the past due to the constantly changing API).
The API has been stable for over a year now, so if you’re interested, now would be a good time to try it out!
Don't forget that Rust still has panics, which for me show up most commonly on array and Vec indexing when I've got a bug. Granted, this kind of exception almost always shows up in testing for me, and I rarely encounter libraries which panic on their own, but it's worth remembering that Result<T, E> isn't the only way that Rust handles error conditions. I personally don't worry nearly as much about panics as I do unchecked exceptions in Java or Python, but nonetheless it's worth noting that Rust doesn't offer a panacea here, just different tradeoffs (which I'm quite happy with).
Go also has panics though, so it's not really a differentiator.
I wasn't attempting to differentiate Rust vs. Go though. If I wanted to differentiate Rust and Go re: error handling, I would point out that the Rust Result<T, E> type issues compiler warnings if you neglect to use it, whereas Go doesn't appear to do that (although I haven't written more than hello world in Go). I was pointing out that Result isn't the whole error handling story in Rust.
Laziness, more like.
I’d believe that for most other teams. But Go is really a language that has been designed. Even OCaml, a language I like very much, with tons of very fancy features, feels less cohesive and cleanly designed than Go.
That's quite possible, since MLs like OCaml consist of the core language and the module language, and these feel a bit different. There is 1ML, which tries to unify them into a coherent whole. Plus OCaml has a lot of syntax and type system features plastered over the underlying ML type system, which make it a more flexible language but also a more complex one.
Maybe it's also because languages which can be completely understood feel more "designed", because it is easier to grasp the design and see how things fit together?
More intellectual laziness. “Return values worked for me, why should I do anything else?” Thompson et al are certainly smart and accomplished, but definitely fossilized in their thinking.
Arguably the key aspect of djb's coding style that leads to fewer bugs in practice is extremely explicit error handling. I'm sure the OpenBSD folks handle errors the same way.
Learning the Maybe monad in Haskell started my love affair with the language. After my mixed experience with Go it was particularly prescient.
I will take the balance of good enough features plus fast builds over richer features and painful builds every time. This is definitely an area where personal preference is the deciding factor.
Erlang is awesome for building things like RabbitMQ/messaging/WhatsApp etc. But for a web app it's a terrible language to use.
I have to disagree here. Have you seen the various Erlang web frameworks available or tried any of them?
I’ve spent quite a bit of time recently doing so combined with a modern JS front-end stack and my experience was very positive.
Building web apps with Erlang is a very reasonable choice. Building on top of the awesome multi-process web servers combined with web sockets is a perfect combination, and being able to use static typing (via Dialyzer) and OTP makes working with concurrency and multiple web services/APIs very clean and manageable.
I know Elixir isn’t exactly Erlang, but Phoenix (an Elixir web framework) is a delight to work in. IMO, you get the benefits of Erlang without actually needing to always write pure Erlang.
The reasoning behind Elixir is, in fact, to make Erlang's power easier to use, especially for web development. Plataformatec started building apps in Erlang, but began designing Elixir to make it easier to do common web stuff while leveraging the IO beast that Erlang is.
Thanks for sharing! Phoenix looks really slick actually. I’m going to give it a try.
Not sure I want to upvote a site that hijacks your scroll bar, even though the prices are good :/
When I tested this myself the tail recursive version was substantially faster.
[code and Erlang shell output omitted]
You did not reverse the list, and for some reason the first measurement of timer.tc can often be off. Plus, it might be that garbage collection triggered while benchmarking map_body. Benchee measures multiple runs and runs garbage collection in between. Might also be something else, though - as the Erlang page mentions, architecture can also have an impact.
Here it is with also reversing it afterwards, and it's still faster. There is a consistent 2 second difference; this is not random fluctuation.
In the shell it indeed seems that map_body is slower on the first run (at least with the input list you used, 1,000,000 elements; I ran the benchmark with 10,000 elements).
However, running it more often in the same iex session, map_body gets faster.
That might be it. Benchee runs a warmup of 2 seconds where it doesn't measure (to simulate a warm/running system) and then takes measurements for 5 seconds so that we have lots of data points.
Might also still be something with elixir and/or hardware :) Maybe I should retry it with Erlang, but not this morning :)
Never benchmark in the shell, it’s an interpreter. Compile the module with the benchmarker included and run that.
Not sure what exactly you mean; the original benchmark was run compiled, not in the shell. This here was just done for comparison with the reported Erlang benchmarks - with my Erlang-fu (which is not very strong at this point in time) I currently fail to do that and don't have the time to look it up atm :)
Ok, it's not an executable Elixir script like I'd want, but I wrote a benchmark function and then called that in the shell - good enough for now I guess. There it is all much faster, and map_body seems to be about as fast as the non-reversed TCO version or even faster. I'd still need a proper benchmark to determine it all though.
[code omitted]
Ok, I ran your code in Erlang and I also get consistently faster results for the TCO version. I don't get it; it is the same function I wrote in Elixir. The interesting thing for me, comparing Erlang and Elixir, is that with the same list size and what I think are equivalent implementations, map_body seems to be much slower in Erlang. E.g., compare the numbers here to the other post where I do the same in Elixir and iex. In Elixir map_body settles at around 490k microseconds; the Erlang version is between 904k and 1500k microseconds.
I reran your benchmark and got similar results (consistently over 10 runs): the TCO version is around 35-40% faster.
Does it have something to do with the fact that he used Elixir?
Sigh, this argument again.
Here are the problems: static types only catch the errors they are designed to catch, and type systems with an escape hatch catch even fewer (although hopefully use of the escape hatch is widely criticized and avoided, like unsafePerformIO in Haskell).
Essentially, it is only reasonable to evaluate the success of a type system by whether it actually catches the errors it claims to catch, whether it's easy enough that programmers will actually use it, and whether it catches enough different kinds of errors to be worth the additional complexity.
There are other benefits to type systems, such as improving code readability, making it easier to find applicable stdlib functions, and forcing you to think hard about how you are modelling data and to think through the constraints of certain design decisions ahead of time - things that I never did very often until I began programming in typed languages.
Plus dependent types will add even more value to static typing, which I look forward to using.
Yes, rereading my post, I perhaps gave the wrong impression. I am very much a fan of static typing! My issue is that people often attack static typing with silly/unreasonable arguments about it not doing things it never claimed to be able to do, or by pulling out weak static type systems (like C) and then using their weakness as an argument against static typing in general.
Part of this comes from an issue of language. A lot of the terms we use to talk about types (outside of academia) are fluid and may mean different things to different people.
I disagree with this - one of the first things I've seen systems supporting type inference do is implement a mechanism for removing the explicit statement of type (var in C#, C++, etc.).
That hurts readability; alternatively, if it improves it, we see an argument for using a language like JS or Ruby.
Question for you Lobsters. Which should I learn next (after Erlang)… OCaml or Haskell? I can’t decide… I’ve heard good things about both.
Five years ago, I tried to tackle Haskell first, and found it way too weird. OCaml was a lovely way of getting used to a lot of concepts in the space while being able to be quickly productive in it (which is kinda necessary if you want to really learn the thing).
Learning OCaml then motivated some issues that Haskell addressed, so I was quite ready and enthused to learn about Haskell’s take.
You may find this route works for you.
I've used both in significant anger and Haskell in production. Both are good next steps and both will take some serious getting used to coming from Erlang. Your personal goals will dictate the right next moves, so it's worth noting that Haskell will introduce you much more forcefully to ideas of "purity" which can be very useful. OCaml is made somewhat simpler to understand by avoiding this issue. OCaml will introduce you instead to the idea of "modularity" - which is not precisely what you might think it is based on prior experience - and I'd say you should take a detour through learning that concept regardless of which one you end up investing more time into.
If your goals are more project-oriented then either can work, but Haskell’s library ecosystem is vastly better developed today. Haskell in production is tricky due to laziness—you will have to learn new techniques to deal with it. OCaml I’ve not used in production, but the challenges there seem more likely to be around getting the right initial momentum as you may have to write some significant portion of library code and work around a bit of ecosystem incompatibility.
Speaking of ecosystem again, OCaml has a standard library problem in that its genuine standard library is often not what the doctor ordered and there’s a choice of “extended” standard libraries. This choice sometimes feels a little forced which is slightly ironic due to the aforementioned modularity goals of OCaml, but c'est la vie.
The Haskell community has issues or benefits in that you will undoubtedly run into someone sooner or later who will happily try to explain the solution to a problem as a mere application of recognizing your algorithm as being a catamorphism over an implicit data type, or will suggest refactoring your effect stack according to mtl or a Free monad. It might even be me that does this. Jargon aside, there's some significant insight that Haskell can be a gateway drug to. On the other hand, this can be the absolute last thing you want to hear when trying to get your project working. Mileage varies here.
The OCaml community has issues in that it's small and sometimes speaks a lot of French.
Both communities are great in that they’re full of passionate and damn smart folks, though.
Speak of the devil: https://www.reddit.com/r/haskell/comments/42sf41/i_had_trouble_understanding_church_encoding_so_i/czdpnv1
I’ll throw my hat in for Haskell since you’ve gotten mostly “OCaml then Haskell”, “OCaml”, and “Both” answers.
Learn Haskell. It can be easier and less noisy than OCaml too. The ability to do ad-hoc polymorphism with typeclasses in a way that is more conventionally what people want out of polymorphism without giving up type-safety is pretty nice. If your goal is to learn new things, the overlap-but-different bits between Haskell and OCaml won’t do a lot of good either. Haskell’s built on a simpler core semantically as well.
tl;dr I got tired of writing .mli files, I prefer my types to be next to my code, and I found modules tedious for everyday work. YMMV.
You won't harm yourself by learning both, but you've got to make a choice of where to start. I think there's a lot to be said for learning Haskell first, hitting some limitations of typeclasses, then playing with modules in OCaml or SML.
Disclosure: I am quite biased, but this is stuff I actually use in my day to day, not night-time kicking around of pet projects. Haskell is my 9-5 and I wouldn’t replace it with OCaml. That said, if Haskell disappears tomorrow the first thing I am doing is writing a Haskell compiler in OCaml and I can’t say I wouldn’t enjoy it :)
I think these points are totally spot on. I learned Haskell first, then learned OCaml for a university compilers course. OCaml was trivial to learn if you’ve already learned Haskell. That sentiment is expressed in the introduction to “Haskell for OCaml programmers”; the “OCaml for Haskell programmers” guide really only covers syntactic differences.
Haskell is not really much more difficult to learn than OCaml, I estimate, but I don’t really know because I learned Haskell first.
Why not dip your toes into both and see which you prefer? You’ll only be better for it.
I’ve programmed a bit in both, and I would say either one would do. Personally I’m doing a bit more with Haskell lately, and typeclasses are a nice feature (as others have mentioned). On the other hand, perhaps the very rigid way in which Haskell deals with effects can be a little overwhelming at times, and OCaml would be a little less rigid on this front. Two other things I can think of which might also push you towards OCaml may be the recent excellent O'Reilly book Real World OCaml by Minsky et al (the same author as the blog post), and the similarity between OCaml and other ML'ish languages such as F#. F# is a fully open-source project and Visual F# is a first-class citizen within the .NET ecosystem, if that happens to be of interest to you. In short I would say you are choosing between two excellent and well-designed languages and really can’t go too wrong. You’ll probably want to check out the “other one” (as I have) irrespective of which way you step now. Good luck!
I personally prefer OCaml, although I wish it had Haskell-style typeclasses.
The good news is that there is experimental work ongoing by Leo White, Frédéric Bour and Jeremy Yallop to provide something like that – more like modular implicits.
This is relevant to my interests. I’ve been exploring some similar ideas for my pet language, and while I don’t think I’d take this exact approach, it looks like a very good read.
Coming from Erlang, getting used to either one will require a bit of mental rewiring. Haskell more than OCaml, since with OCaml you’re allowed to throw printfs around (not literally). OCaml’s type system is not as expressive as Haskell’s, but it is certainly very advanced in its own right and it is probably easier to learn.

Fundamentally, both belong to the ML family, so learning either one first will help in learning the other. If I could choose, I would learn Haskell first, because it is stricter in the type system sense. Moving to OCaml will feel like loosening a belt, so to speak. That said, OCaml isn’t a simpler language by any means: it has a powerful OOP system, module functors (think parametrized static classes) and a great record system. Recent versions added GADTs. Concurrent and parallel programming on OCaml is a sadder story, but work is being done on it.
Both, but start with OCaml.
Both Haskell and OCaml are very similar in many ways: type signatures, HM-like type inference, etc. However, in Haskell, many things are more “magic”, whereas in OCaml, code is typically more explicit.
It depends on what you want to accomplish. I would feel much more comfortable deploying OCaml to production than Haskell. With OCaml it is (currently) very easy to reason about how your code will perform, as the compiler does not do any super interesting optimizations. The computation model is also much closer to how I feel comfortable thinking (eager evaluation). Haskell, on the other hand, can be deployed to production (Facebook is doing it), but they also have Simon Marlow on the team, so that may be an unfair advantage. That being said, if you want to expand your view of what computation can be, Haskell is well worth diving into.
Haskell, or F# instead of OCaml. You’ll find that F# is more widely used because of Microsoft’s backing.
I’ve personally found that the bigger issue I have with JavaScript’s tooling is that there are too many competing tools and approaches. Coupled with the speed at which projects seem to start, develop and die, I feel utterly lost in the ecosystem. I don’t think having an optimizing compiler would really help JavaScript at this point. Many developers are more than content with sending down whatever sized bundle over the wire. In my opinion, JavaScript needs to calm whatever storm it’s been having since node.js came out and start to focus on building for the long haul.
That’s because the tools are rarely good: their developers are just as quick to abandon them for the next cool toolset. Good takes time. They may be interesting, or an evolution over previous tools, but they rarely maintain momentum past the growth stage. It’s funny how many times the same stuff has to get reimplemented for each new tool. How much time is wasted? Just look at how many Sass plugins there are for the thousands of static site generators and other tools.
I have to agree, that was a confusing explanation for a simple difference. I personally like Erlang’s minimalism, where:
The runtime errors are usually sufficient to catch these mistakes in most situations.
I would like to see the implications of Elixir’s auto-rebinding in a broader scope than the narrow example of case statements. While I know those are used heavily in Erlang programs, I’m curious about the risks introduced by rebinding in a larger, more complex application with lots of moving parts.
This is likely something that would require using Elixir for a period of time, which I haven’t yet tried, rather than being easy to communicate through blog post code samples.
Regardless, in those cases I would prefer the stricter immutability of variables that Erlang offers. Especially considering the programming style of using multiple processes and the use of concurrency in even basic programs.
I would personally prefer it if Elixir had Erlang’s matching by default, but optional rebinding with ^ rather than what they’re doing right now.
I love the idea of having an SQL database just be a file read by a library. However, I can’t fathom why dynamic typing in a database is a good idea. If a column is declared as an integer, I don’t want random string data to silently appear there. I don’t want a subtle bug in my application silently putting garbage in my database. I can to some degree understand the appeal of a dynamically typed language, but surely you want your database to protect you from storing broken data?
The obvious solution is to write your application in C and have a struct type for each row. Then if you have an int in your struct, you can only put an int in the database. :)
To the extent that it makes sense for a language, it makes sense for a database too, I’d say. If you’re following the principle of alternating hard and soft layers, and you have all your validation and typing in a relatively strict language, then it may make sense to use a relatively loose storage layer.
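A rough sketch of that hard/soft layering in Haskell (the UserRow type and helper names are hypothetical): the strict application layer validates and types the data, and the loose storage layer only ever receives values that have already passed through it.

    -- 'UserRow', 'mkUserRow' and 'toStorage' are made-up names for illustration.
    data UserRow = UserRow
      { userId  :: Int
      , userAge :: Int
      } deriving Show

    -- Validation happens once, in the strict layer.
    mkUserRow :: Int -> Int -> Maybe UserRow
    mkUserRow uid age
      | age >= 0 && age < 150 = Just (UserRow uid age)
      | otherwise             = Nothing

    -- The loose storage layer only ever sees already-validated values.
    toStorage :: UserRow -> [String]
    toStorage (UserRow uid age) = [show uid, show age]

    main :: IO ()
    main = print (fmap toStorage (mkUserRow 1 42))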
I can’t imagine relying on a database for type safety. That is a much higher level problem IMO.
This connects well with the RethinkDB article posted earlier https://lobste.rs/s/dqxcch/jepsen_rethinkdb_2_1_5/comments/wlk4z5#c_wlk4z5: