Nah, the whole reason things like Flash and Java applets failed and were ultimately bug ridden is that they represented totally different sub-platforms from the web itself.
In 2018, Javascript is as much a part of the web as HTML, so WebAssembly gets to benefit from all the incredibly hard work that goes into sandboxing and security around the browser platform.
Nah, the whole reason things like Flash and Java applets failed and were ultimately bug ridden is that they represented totally different sub-platforms from the web itself.
That is one of the points that I attempted to make in the article.
I am totally guilty of posting before reading, but I just read it and it’s an excellent article!
As an adjunct to your point around both adding additional paradigms and platforms to the already perhaps too rich web platform, Java applets had the disadvantage that the Java folks kept trying to switch frameworks. Started with AWT, then hey let’s implement Swing in applets, then let’s try that other thing whose name I can’t remember … JavaX maybe?
I usually link to Betteridge’s Law when I write a post like this, but didn’t this time.
Apparently a significant portion of people found the title to be clickbait-y, but I thought it was a pretty straightforward question. Oh well!
This knee-jerk reaction against “clickbait” kind of annoys me. Imo there is nothing wrong with an article having a title that attempts to engage a reader and pique their interest. I would also much rather a title pose a question and answer it in the article, rather than containing the answer in the title itself. (The latter can lead to people just reading the title and missing any nuance the article conveys).
I agree. Clickbait really implies that the article has no meaningful content. If the article is actually worth reading, it’s not clickbait, it’s catchy.
“WebAssembly is not the return of Java Applets and Flash.”
Edit: I did enjoy the article, however.
Edit2: As a site comment:
I had no idea what the “kudos” widget was, moved my mouse to it, saw some animation happening, and realized I had just “upvoted” a random article, with no way to undo it. Wonderful design. >.<
It’s one of the styles of English possessive for singular words that end in an ‘s’. When making a plural word that ends in ‘s’ into a possessive, all authorities agree that you just add an apostrophe (“the employees’ salaries”). But when it’s a singular word that happens to end in an ‘s’, some styles prefer that you treat it the same way as any other singular word and add apostrophe-s (“Alger Hiss’s trial”), while others prefer that you treat it in the same way as plural words ending in ‘s’, and add just the apostrophe (“Alger Hiss’ trial”). Both styles have been pretty common for a few centuries now, I think. I tend to use the apostrophe-s style because it’s how I would speak (I’d say “hiss-es trial”, or in this case, “boats-es personal barricade”, to indicate the possessive). I guess this one is extra-weird because the person’s handle, boats, is a plural English word, but adopted as a handle for a single individual.
I’ll add a citation in honor of @mjn’s fine reply: Wikipedia (Wikisource) has the rule from the original Strunk & White text - Strunk and White is one of the better (and readable) style guides that most people should use for the English language.
Strunk and White is one of the better (and readable) style guides that most people should use for the English language.
It really depends who you ask. See, for example, the paper linked in https://www.washingtonpost.com/news/volokh-conspiracy/wp/2015/04/21/against-strunk-whites-the-elements-of-style/
Agreed. If you are at the point where you disagree based on an actual reason, like in the linked rebuttal, or are even aware of other style guides, then weigh the pros and cons appropriately. If your discipline/profession/place of work doesn’t have one and you aren’t being supervised by a professor, this is a pretty good default.
I actually hesitated at wording it as a rule and would have preferred “guideline”, but my link had it titled as a rule, so take things with a grain of salt.
In practice, I would guess most authors do something simpler than S&W and just stick to either the apostrophe-only or the apostrophe-s form, though I have no data on that. Seems a bit fiddly to recommend apostrophe-s almost always, but then carve out an exception for “ancient proper names ending in -es and -is”, a second exception specifically for Jesus, and a third one for traditional punctuation of phrases like “for righteousness’ sake”. I could imagine that working as a publication’s house style that their copyeditors enforce, but I would be surprised to find it much in the wild.
Please note this is from April; a lot has happened.
This week, the networking WG is supposed to be making some posts overviewing the state of play as it is today. We have landed futures in the core library as well as the first implementation of async/await in the compiler. Still more work to do, though!
This Week in Rust collects the latest news, upcoming events and a week-by-week account of changes in the Rust language and libraries.
The Rust Blog is where the Rust team makes announcements about major developments.
And nearly everything happening in Rust is discussed on the unofficial subreddit, /r/rust.
Probably following the network working group newsletter; one should be out this week with a summary of where we’re at with everything, and what is left to be done. I’ll make sure it gets posted to lobste.rs.
Here’s an update on this: https://lobste.rs/s/anlitu/futures_0_3_0_alpha_1
So what’s the current state regarding implicit vs. explicit execution? Last time I checked there were both explicit executors and poll.
I think you’re asking about Tokio; the standard library doesn’t provide an executor. The executor is implicit. You still call tokio::run, but that’s it. See here: https://tokio.rs/blog/2018-03-tokio-runtime/
poll is still the core of how futures work. https://doc.rust-lang.org/nightly/std/future/trait.Future.html
So future.map(...).filter(...) won’t start executing until it is polled explicitly? I found the documentation to be somewhat silent on that.
So why is Rav1e as slow as it is? How can it be sped up to real-time? Threads? Chunk-based encoding?
From what I understood from T. Daede’s presentation when it was new, its speed advantage comes from being brutally simple, such as only using the smallest transform.
The reference implementation is purely for accuracy, and doesn’t care about speed at all.
They also don’t have --release in their README; I wonder if this number was created without it. If so, doing just that should show anywhere from 2x-100x improvements.
EDIT: sent a PR for the README; they said their test scripts use it, so that must not be the case.
Maybe our idea of the “web” is what’s too small! Web Assembly is more than appropriately named in my opinion for how well it works for the transfer and immediate evaluation of procedures over networks, but the imagination of many for the idea of the web is lost beyond the horizon of their web browser. There’s a lot of our thinking that could use a bit of Imagination Fuel
I’m very much on that team. I’m giving a full conference talk about it in Barcelona next week. Different framings work well for different people, so I picked this one for this post, as I wanted it to be short, and this framing is shorter.
Didn’t mean it to be nitpicking, it’s too easy to accidentally write posts in that style lol. Very good article and I’m excited for any future ones about this topic!
Friday: https://jscamp.tech/schedule/
It says TBD but it’s a wasm talk.
Yeah, I think web browsers have become too big, monolithic, and homogeneous. I would like to see more diversity in web clients. Those clients could use WebAssembly outside the context of the browser.
The browser has a very specific and brittle set of security policies, and WebAssembly doesn’t change that. It will inherit the same problems that JavaScript has.
Sort of! You know at parse time every function a wasm program could call. This is extremely useful in a security context.
Imagination Fuel
Love it. That captures a lot of meetings I have been trying to have with folks at work. They are thinking about low-level performance fixes for things, which, while necessary, leave them with a huge problem jumping up a couple of abstraction levels and thinking transformatively.
I agree 100% with the quote you pulled; I’ve found it a really interesting and useful way of framing stuff.
I wonder if the Firefox build team has considered exploring Nix for allowing the builders to be internet-free, but without bundling dependencies in the repo.
Does Nix work on Windows? The Firefox build team must produce Windows binaries; in fact, Windows is the most important build in terms of users.
Yep, this is how I figured out monads too, but when using Rust! There is more to them though - the laws are important, but it’s sometimes easier to learn them by examples first!
Can you show an example where a monad is useful in a Rust program?
(I’m not a functional programmer, and have never knowingly used a monad)
I learned about monads via Maybe in Haskell; the equivalent in Rust is called Option.
Option<T> is a type that can hold something or nothing:
enum Option<T> {
    None,
    Some(T),
}
Rust doesn’t have null; you use Option instead.
Options are a particular instance of the more general Monad concept. Monads have two important operations; Haskell calls them “return” and “bind”. Rust isn’t able to express Monads as a general abstraction, and so doesn’t have particular names for them. For Option<T>, return is the Some constructor, that is,
let x = Option::Some("hello");
return takes some type T, in this case a string slice, and creates an Option<T>. So here, x has the type Option<&str>.
bind takes two arguments: something of the monad type, and a function. This function takes something of a type, and returns an instance of the monad type. That’s… not well worded. Let’s look at the code. For Option<T>, bind is called and_then. Here’s how you use it:
let x = Option::Some("Hello");
let y = x.and_then(|arg| Some(format!("{}!!!", arg)));
println!("{:?}", y);
this will print Some("Hello!!!"). The trick is this: the function it takes as an argument only gets called if the Option is Some; if it’s None, nothing happens. This lets you compose things together, and reduces boilerplate when doing so. Let’s look at how and_then is defined:
fn and_then<U, F>(self, f: F) -> Option<U>
    where F: FnOnce(T) -> Option<U>
{
    match self {
        Some(x) => f(x),
        None => None,
    }
}
So, and_then takes an instance of Option and a function, f. It then matches on the instance, and if it’s Some, calls f, passing in the information inside the option. If it’s None, then it’s just propagated.
How is this actually useful? Well, these little patterns form building blocks you can use to easily compose code. With just one and_then call, it’s not that much shorter than the match, but with multiple, it’s much more clear what’s going on. But beyond that, other types are also monads, and therefore have bind and return! Rust’s Result<T, E> type, similar to Haskell’s Either, also has and_then and Ok. So once you learn the and_then pattern, you can apply it across a wide array of types.
Make sense?
Make sense?
It absolutely does! I’ve used and_then extensively in my own Rust code, but never known that I was using a monad. Thanks for the explanation, Steve.
But there’s one gap in my understanding now. Languages like Haskell need monads to express things with side-effects like IO (right?). What’s unique about a monad that allows the expression of side effects in these languages?
No problem!
This is also why Rust “can’t express monads”: we can have instances of individual monads, but can’t express the higher concept of monads themselves. For that, we’d need a way to talk about “the type of a type”, which is another phrasing for “higher-kinded types”.
So, originally, Haskell didn’t have monads, and IO was done another way. So it’s not required. But, I am about to board a flight, so my answer will have to wait a bit. Maybe someone else will chime in too.
A monad has the ability to express sequence, which is useful for imperative programming. It’s not unique, e.g. you can write many imperative programs using just monoid, functor, applicative or many other tools.
The useful function you get out of realising that IO forms a Monad is:
(>>=) :: IO a -> (a -> IO b) -> IO b
An example of using this function:
getLine >>= putStrLn
I should say Monad is unique in being able to express that line of code, but there are many imperative programs which don’t need Monad. For example, just Semigroup can be used for things like this:
putStrLn "Hello" <> putStrLn "World"
Or we could read some stuff in with Applicative:
data Person = Person { firstName :: String, lastName :: String }
liftA2 Person getLine getLine
So Monad isn’t about side-effects or imperative programming, it’s just that imperative programming has a useful Monad, among other things.
You are way ahead of me here and I’m probably starting to look silly, but isn’t expressing sequence in imperative languages trivial?
For example (Python):
x = f.readline()
print(x)
x must be evaluated first because it is an argument of the second line. So sequence falls out of the hat.
Perhaps in a language like Haskell where you have laziness, you can never be sure if you have guarantees of sequence, and that’s why a monad is more useful in that context? Even then, surely data dependencies somewhat impose an ordering to evaluation?
For me, the utility of Steve’s and_then example wasn’t only about sequence, it was also about being able to (concisely) stop early if a None arose in the chain. That’s certainly useful.
but isn’t expressing sequence in imperative languages trivial?
Yes.
In Haskell it is too:
(>>=) :: IO a -> (a -> IO b) -> IO b
But we generalise that function signature to Monad:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
We don’t have a built in idea of sequence. We just have functions like these. A generalisation which comes out is Monad. It just gives code reuse.
Maybe is an instance of a monad, and there are many different kinds of monads. If you think of Maybe as “a monad that uses and_then for sequencing”, then “vanilla” sequencing can be seen as “a monad that uses id for sequencing” (and Promises in JavaScript can be seen as “a monad that uses Promise#flatMap for sequencing”).
Yes, expressing sequence in eager imperative languages is trivial because you can write statements one after the other. Now imagine a language where you have no statements, and instead everything is expressions. In this expression-only language, you can still express sequence by using data dependencies (you hit this nail right on the head). What would that look like? Probably something like this (in pseudo-JavaScript):
function (next2) {
    (function (next) {
        next(f.readline())
    })(function (readline_result) {
        next2(print(readline_result))
    })
}
with additional plumbing so that each following step has access to the variables bound in all steps before it (e.g. by passing a dictionary of in-scope variables). A monad captures the spirit of this, so instead of doing all the plumbing yourself, you choose a specific implementation of >>= that does your plumbing for you. The “vanilla” monad’s (this is not a real thing, I’m just making up this name to mean “plain old imperative sequences”) implementation of >>= just does argument plumbing for you, whereas the Maybe monad’s implementation of >>= also checks whether things are None, and the Promise monad’s implementation of >>= also calls Promise#then and flattens any nested promises for you.
What’s useful here is the idea that there is this set of data structures (i.e. monads) that capture different meanings of “sequencing”, and that they all have a similar interface (e.g. they all have an implementation of >>= and return with the same signature) so you can write functions that are generic over all of them.
Does that make sense?
There is a comment below saying it pretty succinctly:
A monad is basically defined around the idea that we can’t always undo whatever we just did (…)
To make that concrete, readStuffFromDisk |> IO.andThen (\stuff -> printStuff stuff) - in the function after andThen, the “stuff” is made available to you - the function runs after the side effect happened. You can say it needed a specific API, and the concept of monads satisfies that API.
Modelling IO with monads allows you to run functions a -> IO b (take a pure value and do an effectful function on it). Compare that to functions like a -> b (Functor). These wouldn’t cut it - let’s say you’d read a String from the disk - you could then only convert it to another string, but not do an additional effect.
EDIT: I might not have got the wording entirely right. I omitted a part of the type annotation that says the a comes from an effect already. With Functor you could not reuse values that came from an effect; with Monad you can.
I’m a student of philosophy, biology, and mathematics.
quotes Deleuze in an article about functional programming
Are you me lmao. Although I’m a math student:) How did you get into Deleuze and Guattari?
Accidentally :) I wrote my undergrad thesis on Levinas, so I was exposed to the French thinkers and naturally read bits of Foucault and D&G.
Wow that’s so great to hear!:D Deleuze is one of the primary reasons behind the gust of motivation for mathematics I’ve gained in the past year. It’s really great how it made me sprawl in so many directions, from dynamical systems through formal logic to abstract algebra. It was also really awesome to see these intersect when, in the middle of studying for my analysis 2 exam, I stumbled upon an article connecting Taylor series and major analysis topics to algebraic types:D
It’s like SSH, but more secure, and with cool modern features.
And less portable and will take forever to compile 😕
As someone involved in the packaging team on FreeBSD: I’m compiling all the time, and we have lots of users that prefer to compile ports instead of use packages for various reasons as well.
I meant, after you compile, how often do you then use the resulting compiled artifact? I submit that the ratio of time spent compiling against time spent using approaches zero for most anyone, regardless of how long it takes to compile the thing being used.
That depends on various factors. This is an OS with rolling-release packages. If I compile my own packages and update regularly, I will be re-compiling Oxy every time a direct dependency of Oxy gets updated in the tree.
I’m familiar with FreeBSD ports :)
It sounds like all you’re saying is, “All Rust programs take an unacceptably long time to compile,” which, fine, but you can see how that sounds when it’s laid out plainly.
To be fair to @feld, compile times continue to be a number one request from users, and something we’re constantly working at improving.
It’s appreciated. My #2 complaint as someone involved in packaging echoes the problems with the Go ecosystem: the way dependencies are managed is not great. Crates are only a marginal improvement over Go’s “you need a thousand checkouts from github of these exact hashes” issue we encounter.
We want a stable ecosystem where we can package up the dependencies and lots of software can use the same dependencies with stable SEMVER release engineering. Unfortunately that’s just not the reality right now, so each software we package comes with a huge laundry list of distfiles/tarballs that need to be downloaded just to compile. As a consequence it also isn’t possible for someone to install from packages all dependencies for some software so they could do their own local development.
Note: we can’t just cheat and use git as a build dependency (or whatever other tooling that wallpapers over git). Our entire package building process has to happen in a cleanroom environment without any network access. This is intentionally done for security and reproducibility.
edit: here’s a particularly egregious example in Go. Look at how many dependencies we have to download that cannot be shared with other software. This all has to be audited and tracked by hand as well, which makes even minor updates of the software a daunting task.
https://svnweb.freebsd.org/ports/head/security/vuls/distinfo?revision=455595&view=markup
That use-case should be well supported; it’s what Firefox and the Linux distros do. They handle it in different ways; Firefox uses vendoring, while Debian/Fedora convert cargo packages to .deb/.rpm and use them like any other dependency.
Reproducibility has been a goal from day 1; that’s why lockfiles exist. Build scripts are the tricky bit, but most are good about it. I don’t know of any popular package that’s not well behaved in this regard.
I’m fairly certain feld wants the OS packager to manage the dependencies, not just a giant multi-project tarball.
Application authors should just publish release tarballs with vendored dependencies.
Check out this port: https://bugs.freebsd.org/bugzilla/attachment.cgi?id=194079&action=diff
It looks like any normal port, just with BUILD_DEPENDS=cargo:lang/rust. One single distfile. That contains all the Rust stuff.
Rust’s trait object syntax is one that we ultimately regret.
I appreciate that they’re willing to admit their mistakes.
I recently was at JSConf EU, where Ryan gave a talk about his regrets with node, ten years later. I think it was discussed here.
I immediately started dreaming about my own talk, seven years from now…
I think I’ve watched that talk on YouTube. I’m not a JS developer, but that was a fantastic presentation that I think really highlighted lessons that apply equally to all programmers.
Rust was eliminated for lack of nested functions, which is entirely fair. Although I understand why it was not included in Rust (because of safety problems), I sometimes miss it.
Clueless people on HN are arguing why closures are not sufficient. Well, because it’s a different feature.
How is it different? Other than the lack of mutual exclusion enforcement in D, Rust closures seem the same to me. What am missing?
A nested function can modify a variable in the enclosing scope. I don’t think Rust can do that.
https://dlang.org/spec/function.html#nested
Edit: Whoa, whoa, RESF, just one of you was enough.
Rust can do that: http://play.rust-lang.org/?gist=cbedf929dc6ee3a45b8e5fa5787460a4&version=stable&mode=debug
I wrote it a little differently than D’s example because of the borrow checker. It’s just an example after all, so I don’t feel that bad about it, but if you want to be a stickler to match D’s example more precisely, I’d probably just use interior mutability: http://play.rust-lang.org/?gist=16d817b9a819518d2c51436730b48e75&version=stable&mode=debug
Go can also do this as well.
Man, I really have a hard time reading Rust. Why did they have to pick such a weird syntax that looks like nothing else?
Looks fine to me. I was never bothered by it, even when I first started, when the syntax was considerably noisier (before 1.0).
But then again, I can’t remember the last time I ever bothered to complain about the syntax of anything. I realize people disagree with me, but as long as it’s reasonableish, I don’t think it matters very much.
I also think discussions about syntax are mildly annoying, primarily because most of it probably isn’t going to change. Either you’re willing to live with it or you’re not.
edited to soften my language
So it turns out that OCaml is a strong inspiration for Rust and that’s why it looks foreign to me. I didn’t know that.
I don’t know OCaml either, but I have done a fair amount of work in Standard ML. That got me used to the tick marks used as lifetime parameters (type parameters in SML).
Types after names I think is in Go, which I’ve also done a fair amount of work in. I don’t know if Go originated that though. I’d guess not.
Most of the rest of the syntax is pretty standard IMO. Some things are too abbreviated for some folks’ taste, but I don’t really mind, because once you start writing code, those sorts of things just disappear. They are really only weird to virgin eyes. Some other oddities include using ! in macros and perhaps the different syntaxes for builtin pointer types, e.g., &mut T. (Probably defining macros themselves looks really strange too, but there are a relatively small number of rules.)
Maybe the closure syntax is weird too, I don’t know. I think it’s similar to Ruby’s syntax? I definitely appreciate the terseness, as found in SML, Haskell and heck, even Javascript’s arrow functions, particularly when compared to Python’s or Lua’s longer syntax. Go’s is also mildly annoying because there’s no type inference, and you need to write out the return keyword. But, you don’t use closures as much in Go as you do in Rust as parameters to higher order functions (in my experience) because of the lack of generics. It’s for loops all the way down.
As long as it’s within a broad realm of reasonableness, syntax mostly just bleeds into the background for me.
It’s also worth mentioning that the use of interior mutability can lead to more noise in the code, especially in an example as terse as the one up-thread.
You can if the closure borrows a reference to the value (which is necessarily how it works in D). Sharing a mutable reference requires an explicit Cell/RefCell/UnsafeCell, though. UnsafeCell would presumably have the same semantics as in D.
Edit: Whoa, whoa, RESF, just one of you was enough.
Instead of being a troll about it, you might consider that we all started responding around the same time and didn’t know others had already done so. (e.g., when I clicked post, Steve’s comment wasn’t on my page.)
Well, the RESF is pretty strong. I make one off-hand wrong comment and three of you, whether intentionally or not, jump on it. I practically never get that many comments in quick succession in Lobsters. I obviously struck a nerve by speaking a falsehood that must be corrected immediately.
I’m also slightly unhappy that we can’t ever talk about D without “I would like to interject for a moment, what you are referring to as D is in fact Rust or as I’ve taken to calling it, CRATE+Rust”.
I called out the RESF effect here. It hit Twitter immediately, haha. I’ve been keeping an eye out since. I’ve seen no evidence of an active RESF here or on HN. Just looks like a mainstream language with a lot of fans, areas of usefulness, and therefore lots of comments. I see more comments about Go than Rust.
And then there’s Pony. Now, that one comes across as having an evangelism team present. ;)
that one comes across as having an evangelism team present
Where? :) I’d like to read more actual articles about Pony, but all I’ve seen so far are links to minor release announcements…
It was an inside joke about the Pony fan club we have here on Lobsters. Sometimes I see more Pony articles than those for mainstream languages. They usually have plenty of detail to be interesting enough for the site. Only exceptions are release submissions. I’m against release submissions in general being on this site since people that care about that stuff will likely find the release anyway. Might as well use that slot for something that can teach us something or otherwise enjoyable.
The OP mentions Rust, and you were talking about it too. Scanning the posts tagged with D, I see exactly one substantive discussion involving Rust other than this thread. So I’m going to have to call shenanigans.
Well, the RESF is pretty strong.
Just stop. If you have an issue, then address it head-on instead of passive-aggressively trolling.
D and Rust are used to solve similar sets of problems. This invites comparisons, not just on the Rust side but on the D side too. A lot of people who haven’t used Rust get their claims mixed up or are just outright wrong. I see nothing bad about politely correcting them. That’s what I did for your comment. What do I get in exchange? Whining about syntax and some bullshit about being attacked by the RESF. Please. Give me a break.
Do you want to know why you don’t see me talking about D? Because I don’t know the language. I try not to talk about things that I don’t know about. And if I did, and I got something wrong, I’d hope someone would correct me, not just for me, but for anyone else who read what I said and got the wrong idea.
As a junior developer doing my best to learn as much as I can, both technically and in terms of engineering maturity, I’d love to hear what some of the veterans here have found useful in their own careers for getting the most out of their jobs, projects, and time.
Anything from specific techniques as in this post to general mindset and approach would be most welcome.
Several essentials have made a disproportionate benefit on my career. In no order:
These have had an immense effect on my abilities. They’ve helped me navigate away from burnout and cultivated a strong intrinsic motivation that has lasted over ten years.
Thank you for these suggestions!
Would you mind expanding on the ‘be political’ point? Do you mean to be involved in the ‘organizational politics’ where you work? Or in terms of advocating for your own advancement, ensuring that you properly get credit for what you work on, etc?
Being political is all about everything that happens outside the editor. Working with people, “managing up”, figuring out the “real requirements”, those are all political.
Being political is always ensuring you do one-on-ones, because employees who do them are more likely to get higher raises. It’s understanding that marketing is often reality, and you are your only marketing department.
This doesn’t mean put anyone else down, but be your best you, and make sure decision makers know it.
Basically, politics means having visibility in the company and making sure you’re managing your reputation and image.
A few more random bits:
start a habit of programming to learn for 15 minutes a day, every day
Can you give an example? So many days I sit down after work or before in front of my computer. I want to do something, but my mind is like, “What should I program right now?”
As you can probably guess nothing gets programmed. Sigh. I’m hopeless.
Having a plan before you sit down is crucial. If you sit and putter, you’ll not actually improve, you’ll do what’s easy.
I love courses and books. I also love picking a topic to research and writing about it.
Some of my favorite courses:
I’ve actually started SICP and even bought the hard copy a couple weeks ago. I’ve read the first chapter and started the problems. I’m on 1.11 at the moment. I also started the Stanford 193P course as something a bit easier and “fun” to keep variety.
One thing that I’ve applied in my career is that saying, “never be the smartest person in the room.” When things get too easy/routine, I try to switch roles. I’ve been lucky enough to work at a small company that grew very big, so I had the opportunity to work on a variety of things: backend services, desktop clients, mobile clients, embedded libraries. I was very scared every time I asked, because I felt like I was in over my head. I guess change is always a bit scary. But every time, it put some fun back into my job, and I learned a lot from working with people with entirely different skill sets and expertise.
I don’t have much experience either but to me the best choice that I felt in the last year was stop worrying about how good a programmer I was and focus on how to enjoy life.
We have one life don’t let anxieties come into play, even if you intellectually think working more should help you.
This isn’t exactly what you’re asking for, but, something to consider. Someone who knows how to code reasonably well and something else are more valuable than someone who just codes. You become less interchangeable, and therefore less replaceable. There’s tons of work that people who purely code don’t want to do, but find very valuable. For me, that’s documentation. I got my current job because people love having docs, but hate writing docs. I’ve never found myself without multiple options every time I’ve ever looked for work. I know someone else who did this, but it was “be fluent In Japanese.” Japanese companies love people who are bilingual with English. It made his resume stand out.
I got my current job because people love having docs, but hate writing docs.
Your greatest skill in my eyes is how you interact with people online as a community lead. You have a great style for it. Docs are certainly important, too. I’d have guessed they hired you for the first set of skills rather than docs, though. So, that’s a surprise for me. Did you use one to pivot into the other or what?
Thanks. It’s been a long road; I used to be a pretty major asshole to be honest.
My job description is 100% docs. The community stuff is just a thing I do. It’s not a part of my deliverables at all. I’ve just been commenting on the internet for a very long time; I had a five digit slashdot ID, etc etc. Writing comments on tech-oriented forums is just a part of who I am at this point.
Four things:
People will remember you for your big projects (whether successful or not) as well as tiny projects that scratch an itch. Make room for the tiny fixes that are bothering everyone; the resulting lift in mood will energize the whole team. I once had a very senior engineer tell me my entire business trip to Paris was worth it because I made a one-line git fix to a CI system that was bothering the team out there. A cron job I wrote in an afternoon at an internship ended up dwarfing my ‘real’ project in terms of usefulness to the company and won me extra contract work after the internship ended.
Pay attention to the people who are effective at ‘leaving their work at work.’ The people best able to handle the persistent, creeping stress of knowledge work are the ones who transform as soon as the workday is done. It’s helpful to see this in person, especially seeing a deeply frustrated person stand up and cheerfully go “okay! That’ll have to wait for tomorrow.” Trust that your subconscious will take care of any lingering hard problems, and learn to be okay leaving a work in progress to enjoy yourself.
Having a variety of backgrounds is extremely useful for an engineering team. I studied electrical engineering in college and the resulting knowledge of probability and signal processing helped me in environments where the rest of the team had a more traditional CS background. This applies to backgrounds in fields outside engineering as well: art, history, literature, etc will give you different perspectives and abilities that you can use to your advantage. I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.
Learn about the concept of the ‘asshole filter’ (safe for work). In a nutshell, if you give people who violate your boundaries special treatment (e.g. a coworker who texts you on your vacation to fix a noncritical problem gets their problem fixed), then you are training people to violate your boundaries. You need to make sure that people who do things ‘the right way’ (in this case, waiting for when you get back, or finding someone else to fix it) get priority, so that over time you train people to respect you and your boundaries.
I once saw a presentation about using art critique principles to guide your code reviews. Inspiration can come from anywhere; the more viewpoints you have in your toolbelt the better.
The methodology from that talk is here: http://codecrit.com/methodology.html
I would change “If the code doesn’t work, we shouldn’t be reviewing it”. There is a place for code review of not-done work, of the form “this is the direction I’m starting to go in…what do you think”. This can save a lot of wasted effort.
The biggest mistake I see junior (and senior) developers make is key mashing. Slow down, understand a problem, untangle the dependent systems, and don’t just guess at what the problem is. Read the code, understand it. Read the code of the underlying systems that you’re interacting with, and understand it. Only then, make an attempt at fixing the bug.
Stabs in the dark are easy. They may even work around problems. But clean, correct, and easy to understand fixes require understanding.
Another thing that helps is the willingness to dig into something you’re obsessed with, even if it is deemed not super important by everyone around you. E.g. if there’s a library, language, or project you find fun and get obsessed with, that’s great; keep going at it and don’t let the existential “should I be here” or “is everyone around me doing this too / recommending this” questions slow you down. You’ll probably end up on some interesting adventures.
Never pass up a chance to be social with your team/other coworkers. Those relationships you build can benefit you as much as your work output.
(This doesn’t mean you compromise your values in any way, of course. But the social element is vitally important!)
This is very cool! Congrats on shipping.
Using control-click to open in a new tab is broken for some reason; fixing that would be great!
Thank you, and thank you for taking a look :)
Odd that the control-click issue is still present. I thought I fixed it for both Ctrl and Cmd (for Macs); unfortunately I only have a Mac with me, so I can’t verify the Ctrl issue. I’m checking for ctrlKey, but I wonder if it isn’t true for some reason.
So, I went to look at this today and it works perfectly now. I don’t know if anything changed, but it seems to work just fine.
Does this mean Rust emits a lot more assembly code than C by providing extra features on the developer side?
Please consider my above thought a humble question; I haven’t written a single program in Rust yet, but I’ve found lots of good words about it from the community, and maybe in the future I’ll give it a try.
But I’ve found that FF is fast now, yet takes huge memory (> 750 MB while running with a couple of tabs) and CPU (54% of the total) on my Pentium system. It’s just my thought that Rust provides good abstraction by giving an easy way to write system code, but emits lots of code that makes it huge and a processor lover! Don’t consider me negative, I may be wrong; I’m just asking so I can explore further.
There’s still so little Rust in Firefox compared to the whole codebase that it can’t be the sole cause of something like this.
In general, it should be roughly the same as C or C++, not significantly more. Sometimes it’s less!
Nice article. How do you feel about the size of the language? One thing that keeps me off from looking at rust seriously is the feeling that it’s more of a C++ replacement (kitchen & sink) vs a C replacement.
The Option example feels like it drops off a bit too early, too: you started by showing an example that fails, then jumped to a different code snippet to show nicer compiler error messages, without ever going back and showing how the error path is handled with the Option type.
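For what it’s worth, here’s a hypothetical sketch (my own example, not the post’s actual snippet) of what that missing error path might look like, with the `None` case handled explicitly instead of panicking:

```rust
fn main() {
    let scores = vec![90, 85, 72];

    // Vec::get returns Option<&T> instead of panicking on an
    // out-of-bounds index, so the failure case must be handled.
    match scores.get(10) {
        Some(score) => println!("score: {}", score),
        None => println!("no score at that index"),
    }

    // Alternatively, supply a fallback value with unwrap_or:
    let score = scores.get(10).copied().unwrap_or(0);
    println!("score with fallback: {}", score);
}
```

The nice part is that the compiler forces you to write both arms, which is exactly the follow-through the post’s example stops short of showing.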
You should also add Ada to the list of your languages to explore, you will be surprised how many of the things you found nice or interesting were already done in the past (nice compiler errors, infinite loop semantics, very rich type system, high level language yet with full bare metal control).
Thank you for commenting! I agree that Rust’s standard library feels as big as C++’s, but I haven’t been too bothered by the size of either one. To quote Bjarne Stroustrup’s “Foundations of C++” paper: “C++ implementations obey the zero-overhead principle: What you don’t use, you don’t pay for [BS94]. And further: What you do use, you couldn’t hand code any better.”

I haven’t personally noticed any drawbacks to having a larger standard library (aside from perhaps binary size constraints, but you would probably end up including a similar amount of code anyway, just code that you wrote yourself). Beyond the performance of well-tested implementations of common data structures, my take is that having a standardized interface to them improves readability quite a bit: when you go off to look through a codebase, the semantics of something like a hash map shouldn’t be surprising. It’s a minor draw, but I feel like I have to learn a new hash map interface whenever I go off to grok a new C codebase.
I’ll definitely take a look at Ada, seems like a very promising language. Do you have any recommendations for books? I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.
Also, thank you for pointing out the issue with the Option example, I’ll make an edit to the post at some point today.
It’s funny how perspectives change; to C and JavaScript people, we have a huge standard library, but to Python, Ruby, Java, and Go people, our standard library is minuscule.
I remember when someone in the D community proposed to include a basic web server in the standard library. Paraphrased:
“Hell no, are you crazy? A web server is a huge complex thing.”
“Why not? Python has one and it is useful.”
What you don’t use, you don’t pay for [BS94]
That is true; however, you have little impact on what others use. Those features will leak into your code via libraries or teammates using features you might not want. Additionally, when speaking about kitchen & sink I didn’t only mean the standard library; the language itself is much larger than C.
I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.
Last I did anything related to Ada was somewhere around 2012. I recall the Barnes books were well regarded but I don’t know if that changed in any significant way.
For casual reading, the Ada Gems from AdaCore are fun and informative reads.
I’ll definitely take a look at Ada, seems like a very promising language. Do you have any recommendations for books? I think my friend has a copy of John Barnes’ Programming in Ada 2012 I can borrow, but I’m wondering if there’s anything else worth reading.
I recommend Building High Integrity Applications in SPARK. It covers enough Ada to get you into the meat of SPARK (the compile time proving part of Ada) and goes through a lot of safety features that will look familiar after looking at Rust. I wrote an article converting one of the examples to ATS in Capturing Program Invariants in ATS. You’ll probably find yourself thinking “How can I do that in Rust” as you read the book.
All the points mentioned in the post also apply to C, except the latest language standard revision. And even there, C has C11.
Why am I pointing out C? Because I am still not a fan of C++ syntax.
I think it is a stretch to say C is in active development. It is at best in maintenance mode.
C++ is in active development.
It looks like C is on track to possibly get a new published standard around 2021/2022. It also seems to me that C has always been a significantly simpler language than C++. Where C++ is getting everything and the kitchen sink, making an already complex language even more complex, C has less to change and therefore changes less frequently.
One barrier here is that Microsoft has seemingly decided to stop working on C compatibility with MSVC; it doesn’t even fully support C99 yet, let alone C11. A new standard doesn’t matter much if one of the largest platforms in the world won’t support it.
A new standard doesn’t matter much if one of the largest platforms in the world won’t support it.
These days I would not be much surprised if Microsoft replaced MSVC with Clang or even GCC.
Why? My impression is that the MSVC compiler is quite good. I only use the linker daily, not the compiler itself, but especially recently, I’ve only heard good things. Very different than ten or even five years ago.
Why?
A project manager making their numbers look better on the compiler side by using fewer programmers and moving at higher velocity. The reason: Clang or GCC would be doing most of the work, with MSVC as a front end for them.
I’m sorry, I’m finding this reply really hard to parse.
Are you saying, people will move compilers because they want to use the new standard, which brings benefits?
And what’s this about MSVC being a front-end for Clang?
You asked why Microsoft would ditch their proprietary compiler that they pay to maintain in favor of a possibly-better one others maintain and improve. I offered cost cutting or possibly-better aspects as reasons that a Microsoft manager might cite for a move. Business reasons for doing something should always be considered if wondering what business people might do.
As far as the front-end part, that was just speculation about how they might keep some MSVC properties if they swapped it out for a different compiler. I’ve been off MSVC for a long time, but I’d imagine there are features in there their code might rely on which GCC or Clang might not have. If so, they can front-end that stuff into whatever other compiler can handle it. If not, and the code is purely portable, then they don’t need a front end at all.
Glad you liked TRPL! I agree that some of the examples can be too abstract at times; it’s tough. Definitely open issues if you want to talk about it; I’m open to improving things, but I’m also pretty particular about the book.
Glad to hear confirmation that the book is open to those kinds of suggestions :) And I can totally understand being particular about something like that; hopefully I’ll be able to offer some analogies that are up to snuff.
I’m not particularly thrilled with either the tone or content of this essay.
First, there’s some revisionism: The essay goes on to redefine “success” in some super-niche way and qualify it, but this is rhetorically dishonest. I might as well write in an essay that “The first reason I disagree with them is that @steveklabnik is a fascist” and then go on to explain that by that I mean a fascist who supports government control of resources and careful policing of speech…sure, under that tortured definition I am both consistent and not wrong, but you wouldn’t be faulted for observing that I’m not right either.
JVM was never standardized.
Flash was not built on ES3/ES4 standards.
Unfortunately, just because something is standardized and went through the wringer doesn’t mean it isn’t hot steaming garbage.
Most of those features didn’t exist when Java applets first came out in 1995. Most of the useful stuff that Flash was good at didn’t exist in browsers for most of the 00’s. The essay is trying to sell you on a state of history that didn’t exist in any meaningful way.
“…unless it gets you marketshare” is the rest of the sentence that the author leaves out. Browser vendors (and hell, let’s be real here: we mean Google, Apple, Microsoft, and that’s basically it these days) break things all the time and don’t care. https://caniuse.com/ is a monument to this fact.
Then there’s a seeming misunderstanding about how browser development works: “Making profit” is how IE managed to deliver the first working version of CSS. Similarly, it’s how they delivered a little thing called XMLHttpRequest, a small API of passing utility later adopted by every other browser manufacturer.
Google Chrome delivered lots of neat features ahead of standardization specifically so they could feed the ad machine. And Mozilla happily rode those coattails for a good long time.
I think the notion of “let’s make the web better” ultimately–intentionally or not–boils down to “let’s serve ads better”, once you look at things in context.
…and how companies work… So, two ad-driven companies, a company known specifically for locking down platforms (and goofing around in standards and industry groups, if one’s familiar with OpenGL or the development of the Cell processor), and a company that is moving as much of its stuff into the cloud–where it can be taxed and leased to users without fear of reprisal. I see why they might want to support WASM.
…and how maintenance works… Runtimes don’t all need to be integrated, and we handily managed to keep the runtimes for the JVM and Flash maintained for more than a decade, by letting interested parties support them.
…and how language proliferation and the Tower of Babel work… Picking one language is better for longevity’s sake. Standardization–the shibboleth touched on here and there in this essay–would suggest that we take one language and adopt and update it as needed.
Wasm, as I’ve railed about many a time, is likely to make frontend dev skills even less portable than they already are. It’s an opportunity for language writers and well-meaning developers to rediscover every mistake in their language, find some way of introducing it into the browser, and then saddle devs with their shortcomings!
…and how security works. And yet it directly enables malware and side-channel attacks, while its proponents kinda ignore the issue and forge ahead.
~
We’re all going to be stuck with the outcome of this, and folks promoting the technology for their own career advancement or aesthetics are doing so, seemingly to me, without enough care for the ramifications of what they’re pushing.
EDIT: Cleanups to soften this up a bit…my annoyance got in the way of my politeness.
EDIT: Removed many “actually”’s in various headers, since I read them in my head as “well ak-shu-alee”
I’m going to give you one reply, but that’s it; we have such a divergence of opinion so often that I don’t feel like we’re really going to agree, but I would like to respond to some things.
I tried to be super clear here that this is from an implementor’s perspective. Success in this context means “becoming part of the web platform.” I don’t feel that’s dishonest, it’s about being clear about exactly what I’m trying to talk about.
That’s a specification, not a standard.
“built on” does not mean “conforms with”; ActionScript and ECMAScript are similar, but different.
This is true, but that’s why I qualified what this post is about. The user’s perspective is something else entirely.
CSS was in development at the time, and shipped in 1996, it’s true. But regardless of the start, that kept being true. They could have added support, but they did not. It’s effectively the same.
CanIUse, to me, is more about when you can adopt new features. They ship in different browsers at different times, but eventually a lot of stuff is the same. Cross-platform web development has never been easier.
Yet they still were full of vulnerabilities. This was due to the disconnect I talk about in the article. It also doesn’t address the inherent complexity in coordinating two runtimes compared to one.
This was already possible with JS, wasm doesn’t fundamentally change things here
This article is clickbait nonsense. It says that this could happen, but ignores that everyone involved with WebAssembly is acutely aware of these issues, and is not moving forward until they’re addressed. Heck, before Meltdown/Spectre was announced, I was in a room with one of the main designers of WebAssembly, and someone brought up SharedArrayBuffer for some reason. He said “yeah so you shouldn’t rely on that and I can’t talk about why”. Then we all found out why.
You’re letting your disdain bias you against the facts.
TC39 is not the body that standardizes WebAssembly.
This kind of slander is why I rarely post to lobste.rs anymore. I’m out.
It makes things easier, and the general increase in performance allows more interesting and obtrusive categories of malware than we saw with JS.
There’s no sane way I’m aware of to have multiple threads and shared-memory primitives without enabling timing attacks, at this time. The only option seems to be to remove them entirely, and the GitHub link shows that at least a few people are hesitant to do that.
Thank you for the correction–I don’t know if there is a lot of bleedover between the groups. If there is, my concern remains.
That’s my honest opinion, slander wasn’t my intent. This pattern is being repeated everywhere in our industry. If you don’t think it applies in your case, please correct me.
Making an unsubstantiated accusation in a public forum is slander, even if you happen to believe the accusation to be true.
And to be clear, you accused promoters of wasm of self-interestedly putting career/aesthetics above the common good. You made no allowance for the idea that they might actually have decent reasons for believing and acting as they do. If they disagree with you on wasm, then they are simply a bad person.
Putting all of that behind a “it seems to me” doesn’t actually change what you are saying. If you meant something else, I strongly suggest rewording. If not, then please don’t post such attacks on Lobsters.
Please consider the original phrasing–there’s a reason I picked it:
I state one thing as fact: we’re stuck with the outcome of the wasm debate.
I state the rest as opinion: the set of people promoting the technology for their own career advancement or aesthetics seem, to me, to be doing so without enough care for the ramifications of widespread wasm adoption.
There is a much more direct way of slandering people if that’d been my goal.
Sidestepping the question of whether it’s slander: it’s non-constructive and counterproductive to speculate about the intentions and motivations of the people you are discussing with. It’s poisoning the well.
It’s an accusation, not a matter of opinion.
I’m sure you know you’re writing to a Mozilla employee. Does saying “you’re not even a real browser vendor” really help your argument?
(I’ve worked at Mozilla. Loved it, but competing against some of the largest and most well-funded corporations in the world is hard, and it can be frustrating to see what they get away with. Jabs like this pile up and demoralize you.)
In my haste I plain forgot to list Mozilla. One of many glaring oversights I’ve made today.
That said, my point is that there are really only 4 vendors that matter, since the vast, vast majority of browsers are powered using one of 3 or 4 engines.
That makes more sense. Thanks for clarifying.