@MiraWelner you mentioned that one of your posts hit the front page of HN for an hour or so. How many visits did you get? And the site served everyone with no downtime? It always amuses me to see a site go down from the HN hug of death, knowing that other sites (such as yours) successfully serve all the traffic from literally a hobbyist computer in someone’s house
This is actually a really funny story - when this post and the git post were on the front page, my site was fine. However, with this post in particular on HN, the site did go down and commenters assumed it was the hug of death.
But actually I don't think it was the hug of death because it went down exactly at 2am EST - which is when it updates and reboots if necessary!
Murphy strikes again
Is there a simple way to have a backup server for the site? E.g. can I point one of the DNS A records to my IP and another one to my GitHub-Pages-hosted version of the site?
Not to my knowledge using the stack I described, although there probably is a way. But I doubt you would be able to use simple tools like I am
This is one of my dream projects but I could never figure out how to expose the site to the public internet. I will try the port forwarding thing but I have a feeling my Xfinity router has locked that down (please correct me if I'm wrong)
I think it works on Xfinity?
https://www.xfinity.com/support/articles/port-forwarding-xfinity-wireless-gateway
Yes, that looks promising! Thanks for the research
Got it working with like 10 minutes of effort, lol. Don’t know why I struggled previously so much
One thing I previously was stumped about was getting my public IP address. Kinda surprising that I just go to a site like whatsmyip.com and get the value from there. I thought that wouldn’t work because Xfinity always rotates my public IP
Try Cloudflare tunnel (like a comment above suggested).
It creates a private connection between your home network and cloudflare, which won’t expose your home IP or network to the outside.
It's a compromise to have cloudflare MITM your self-hosted website, but it's better than burning through your (very generous, sarcastically speaking) xfinity monthly cap.
Does Cloudflare Tunnel help with your bandwidth cap? Do they offer caching or something?
In theory, yes. You get access to their CDN when using a tunnel. You can set up custom caching rules to serve content from their edge network and reduce your outgoing bandwidth.
In practice, no one visits my site so I can’t test it. lol.
The type of person the author talks about exists I’m sure, but I think most of these people are just… tired. Again, I’m not trying to defend the bad ones, just throwing out some counter-examples for the ones who get caught up in it all.
that engineer who guards the code against anyone else’s changes, wanting all the credit for themselves
Maybe they were told to prevent all but the absolutely necessary changes by leadership, and are tired of constantly having to argue with other teams whose leaders told their engineers “this is top priority, it has to go in.” Neither side of engineers cares that much, it’s the leaders who don’t agree. They are tired.
The part about “wanting all the credit” is unnecessary, and quite rude if the situation is more like what I said above. They may just be doing their exact jobs, too.
that engineer who refuses to add anything else to their codebase, because they don’t want to maintain it
Maybe it’s the situation above, the ancient massive core they have been told to keep running, which sprint-driven product teams are told by their leaders to go muck with. Maybe they are tired of code being merged that is abandoned as the authoring team is “on a new sprint” or “has new quarterly objectives” or whatever. It’s the authors who “don’t want to maintain it” in this case, but the old guard doesn’t want to either, because their objective was “prevent all but the most necessary changes” and they constantly have to break that. They are tired.
that engineer who tries to gain code ownership over more areas, to control more of the code and more of the company
Sure, this type of person probably exists. It’s also very likely one of the direct, measurable things considered for promotion, and… come on, it’s (likely) a for-profit software company, everyone should be going for promotion all the time. They want to retire sooner, because they are tired.
—
To firmly reiterate, I’m just throwing some different viewpoints out there because it’s way too easy to jump on the strawman sometimes. Of course the jerks are out there too, but some of these people are just doing their job, and have different responsibilities than you do. When in doubt, approach them as humans who are tired first, instead of just assuming things about them.
All assumptions about internal state are on shaky ground IMO. Assuming tiredness is a little better than jealousy because it’s assuming good intent versus less-good, but it’s still an assumption. Analyzing external incentives, like you did, is probably the most productive way forward.
I would suggest cloudflared (cloudflare proxy) versus opening a port on your home router and port forwarding.
cloudflare regularly blocks my access to sites, from both home and work, so I am not a fan of cloudflare services…
Cloudflare Tunnel is free and a good solution for those behind CG-NATs or an ISP firewall. It also offers effortless DoS protection.
I will admit, however, that I think it’s slightly “cooler” in some sense to host your site directly from your home, with no assistance from Cloudflare or other giant tech companies, even if you don’t really get much tangible benefit from doing it that way.
(By these standards of course, my personal site is rather lame because it's just your standard Jekyll + GitHub Pages site.)
Can the cloudflare proxy reach the server without opening a port, etc.?
Ah, I did not read closely enough. This thing creates a tunnel: https://github.com/cloudflare/cloudflared
What are the risks of port forwarding and hosting on home network? I get the general risk of giving the public internet direct access to my home devices. But how do people specifically exploit this? It depends on me misconfiguring or not properly locking down the web server, right?
Pretty much, but nobody has ever made an unhackable server. So even if you “properly” configure the server it’s not 100% secure because nothing is.
I did get my router hacked - it had third-party malicious software installed on it and didn't function until I got the NetGear people to fix it - which is why I installed fail2ban, which has worked so far. But nothing is foolproof.
Let’s assume you forward port 443 to your Pi running Apache. You’re basically exposing the following bits of software to the Internet:
Your kernel’s TCP/IP stack
Apache
Any software you may choose to place behind an Apache reverse proxy
The biggest risk is an RCE in any of those pieces, because you’re truly pwned, but I’d lay pretty long odds against an RCE in the Linux network stack, and I don’t think your average Apache config is at much risk either – these things have both been highly battle-tested. Some sort of denial-of-service exploit is more likely but again, Linux+Apache have powered a huge chunk of the Internet for the last 25+ years. Now, if you write an HTTP server which executes arbitrary shell commands from the body of POST requests and proxy it behind Apache, you have only yourself to blame…
I expose HTTP and a few other services from my home network via port forwarding. I don't lose sleep over it.
Oh I didn't know they had a free tier but it looks like they do! I'll look into it.
Also are you the same whalesalad on HN that gave me the advice on the browser text width?
Insightful.
I worry that if the steward still has the responsibility of maintenance and bug fixes, they may burn out if there aren’t other members.
The house metaphor is powerful. But I'm not sure it matches well. Everyday people have a generally good sense of how to keep the floor clean, the coffee machine working, etc. But where do the stewards find the support needed to keep their projects going? Even well-intentioned support by developers who don't understand the nuances or consequences of different design decisions can be a burden for the steward. Ultimately, I worry that stewardship is a noble idea, but without a more concrete formula and a strong team, it'll lead to burnout or going back to ownership.
Yes, I was surprised that they used a house metaphor and not the most obvious stewardship metaphor: steward of a nature sanctuary. Very easy to imagine a sense of service to a greater cause when imagining that you’re protecting the last remaining land where a certain species of animal can thrive.
The house metaphor is also ambiguous. In the USA, owner-occupied homes are generally in better condition than renter-occupied ones. Pride of ownership is a real phenomenon. But you also see a lot of that shitty ownership behavior as described in the article. E.g. go on Nextdoor and look at all the paranoid and petty comments. Renter-occupied neighborhoods have their own problems, but they are of a different nature.
Side note: that expansion board for the Pico is phenomenal. Really ergonomic
Note that the name of the display is SSD1306, not SDD1306, as the article incorrectly says a few times. Definitely makes it easier to find examples when you get the name right!
There are also a few Rust crates for it, including one that's compatible with embedded-hal
I'm pretty sure that there is no separate datasheet for the 128x32 variant; the datasheet you're looking at is correct. I recall it saying near the start that there are a few different size variants, but the underlying I2C spec is the same for all of them
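For the curious, driving the 128x32 variant over I2C from Rust looks roughly like this with the ssd1306 crate. This is a sketch from memory, so treat the exact names as approximate; `i2c` stands in for whatever I2C peripheral your board's HAL gives you:
use ssd1306::{prelude::*, I2CDisplayInterface, Ssd1306};
// `i2c` comes from your HAL (e.g. rp-pico / rp2040-hal on a Pico board).
let interface = I2CDisplayInterface::new(i2c);
let mut display = Ssd1306::new(interface, DisplaySize128x32, DisplayRotation::Rotate0)
    .into_buffered_graphics_mode();
display.init().unwrap();
// ...draw via embedded-graphics, then push the buffer out over I2C:
display.flush().unwrap();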
Documentation as Code (Docs as Code) refers to a philosophy that you should be writing documentation with the same tools as code: Issue Trackers, Version Control (Git), Plain Text Markup (Markdown, reStructuredText, Asciidoc), Code Reviews, Automated Tests
This means following the same workflows as development teams, and being integrated in the product team. It enables a culture where writers and developers both feel ownership of documentation, and work together to make it as good as possible.
I wonder what the first “X as Code” term was? I think Docs as Code started around 2015
Week 4 of baby bonding leave! Besides learning how to be a good dad I am finding a fair amount of time to learn embedded Rust and work on a personal project related to automatically updating documentation.
I am a proper Rust n00b ramping up as we speak. Even before getting into conceptual challenges of understanding lifetimes, I think it’s important to mention that the syntax was truly jarring for me. In literally every other language I’ve learned, a single unclosed quote means that I have written my program incorrectly in a very fundamental way. I’ve been programming for over 10 years so there’s a lot of muscle memory to undo here. Sorry if this bikeshed-y topic has been discussed to death, but since the article explicitly covers why lifetimes are hard to learn and doesn’t mention this point, I figured it’s fair game to mention again.
I personally like the notation, but I could see how it looks jarring. FWIW, Rust probably borrowed the notation from ML, where single quotes are used to denote type variables. For example, the signature of map in Standard ML is:
val map : ('a -> 'b) -> 'a list -> 'b list
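In Rust the tick marks lifetime parameters specifically, while ordinary type parameters go bare, which keeps the two kinds of parameter visually distinct. A minimal sketch (plain Rust, nothing beyond the standard library):
// 'a is a lifetime parameter, T is an ordinary type parameter;
// only the lifetime carries the ML-style tick.
fn first<'a, T>(items: &'a [T]) -> Option<&'a T> {
    items.first()
}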
I got nerd-sniped thinking about how old the use of ' as a sigil might be. It's used in lisp; I think its oldest use might go back to MACLISP in 1966. I think older dialects of lisps required that you say (quote foo) instead of 'foo. See section 1.3 "Notational Conventions" on page 3 of the MACLISP manual (large PDF warning).
Is there a reason that it’s only for lifetimes and not all type parameters? My guess would be because it makes them easy to distinguish (and since type parameters are more common, they get the shorter case), but I could be wrong of course.
I recently saw it while reading the various Cyclone papers, where they introduced it for identifying different regions, with those having different lifetimes. However, I believe Cyclone was itself drawing inspiration from ML (or Caml).
Cyclone also mentions "borrowing" in what may be a similar fashion.
The ' is an odd one. I had another skim of the history of Standard ML but it isn’t very interested in questions of notation. However, it reminded me that ML type variables are often rendered in print as α, β instead of 'a, 'b, which made me think this might be a kind of stropping. “Stropping” comes from apostrophe, and prefix ' was one form it could take. (Stropping varied a lot depending on the print and punch equipment available at each site.) But this is a wild guess.
I got nerd-sniped thinking about how old the use of ’ as a sigil might be. It’s used in lisp
Oh, yeah, it took me a good six months to stop typing a ( right after the '. The joke about Greenspun’s tenth rule practically writes itself at this point :-).
I’m only mentioning this for the laughs. As far as I’m concerned, any syntax that’s sufficiently different from that of a Turing tarpit is fine. I mean, yeah, watching the compiler try to figure out what the hell I had in mind there was hilarious, but hardly the reason why I found lifetimes so weird to work with.
I haven’t heard this particular challenge before. I came to Rust long after I learned Lisp and Standard ML, so it never occurred to me that it would be jarring, but if you’ve only worked in recent members of the ALGOL family I can see that being the case.
What do you mean muscle memory? Do you usually double your quotes manually? Does your IDE not do it for you?
Not trying to “attack” you or anything, genuinely curious as these kind of syntactical changes are more-or-less invisible to me when writing code due to a good IDE “typing” what I intend based on context for the most part.
As for a single single quote being jarring, I believe it does have some history in LISPs for "quote" or for symbols. Possibly, the latter case was the inspiration in Rust's case?
Edit: I see there is a much better discussion in the sibling thread regarding its origin.
Ah yes, now that you mention it, “muscle memory” is not the right phrase here. I didn’t mean muscle memory in the (proper) sense of a command that you’ve been using for years, and then now you need to use slightly differently. What I meant was that for years, whenever I saw a single quote, I expected another quote to close it somewhere later in the code. And now I have to undo that expectation.
it has no room for chapters with non-reference docs, like tutorials or getting-started guides. That can be stuffed into module-level docs, but that's not ideal.
it sorts types alphabetically, not in order of importance, nor even order of definition. All types in a module are thrown into one list. That makes it hard to find how to use it if you don’t already know what to search for. In practice libraries have some main type you use to initialize them, but good luck finding it among all the minor error types, newtype wrapper types, iterator types, and all kinds of helpers.
it takes skill to understand the trait implementation sections. There's a lot of boilerplate and noise there. Traits repeat all their methods even when this is redundant. It's important to know that a type is an Iterator, but not to list the same 100 built-in iter methods every time.
Apart from blanket impls (which are boring noise with too much prominence), there’s no UI distinction between standard traits, crate-local traits, foreign traits. These are usually implemented for very different reasons.
if you see a Pattern arg in std, you won't know it can be a closure with various argument types. Figuring this out requires diving deep into the trait impls and their bounds.
it doesn't handle big types with lots of methods well. There is no explicit support for grouping them (str has search methods, splitting methods, case changes, but they're all mixed together in the nav). It doesn't even de-emphasise deprecated or nightly methods. I'd like _mut() and non-mut grouped as two flavors of the same method. I'd like to see "static" functions distinguished from self methods. (A partial workaround is sketched after the next paragraph.)
It’s like cargo – good enough, and all the value is in having it consistently for every crate.
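On the grouping point above: a partial workaround I've seen, assuming I'm remembering rustdoc's rendering correctly, is to split a big type's inherent methods across several impl blocks, since rustdoc renders each impl block (and its doc comment) as its own section:
pub struct Text(String);

/// # Searching
impl Text {
    /// Finds the first occurrence of `needle`.
    pub fn find(&self, needle: &str) -> Option<usize> {
        self.0.find(needle)
    }
}

/// # Splitting
impl Text {
    /// Splits the text at `index`, which must lie on a char boundary.
    pub fn split_at(&self, index: usize) -> (&str, &str) {
        self.0.split_at(index)
    }
}
It's nowhere near the explicit grouping being asked for, but it at least breaks the wall of methods into labelled sections.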
it sorts types alphabetically, not in order of importance, nor even order of definition
We debate this all the time on pigweed.dev. Among Pigweed contributors opinion is roughly split: one half prefers organizing alphabetically, the other half prefers organizing by order-of-importance.
If I was benevolent dictator of rustdoc I personally would stick to alphabetical organization and would never allow alternate organization schemas. As you said, the value is the consistency. Alphabetical may be sub-optimal in terms of finding what you need as quickly as possible, but it’s a system that can be unambiguously enforced in every library in the entire ecosystem. That’s a really powerful level of consistency. Also, the most logical organization for the crate owners is sometimes not the most logical organization for crate users, and in those cases the order-of-importance organization may actually be less effective than the highly predictable alphabetical organization. From the perspective of hypothetical rustdoc benevolent dictator, there’s no way for me to guarantee that the order-of-importance that the crate owner has decided upon is the same as the order-of-importance for crate users. At least, much more difficult to guarantee than alphabetical organization.
Reference material is useful when it is consistent. Standard patterns are what allow us to use reference material effectively. Your job is to place the material that your user needs to know where they expect to find it, in a format that they are familiar with.
Very interesting to see where exactly rustdoc falls short from someone who has obviously looked over many real, non-trivial API references extensively. Thanks for the details.
I think a lot of appeal of alphabetical sorting is merely a comfort of sticking with something we've been always doing, even though the original reason for it — making it possible to search on paper — is gone. It's consistency for the sake of consistency, rather than solving a user problem.
Rustdoc has an instant search. It can match more than just the prefix, and even supports name aliases. It’s almost a disservice to users to make a UI that suggests they can search lists manually, limited to just the prefix (is it under file_open, open_file, load_file, try_get_file?).
Alpha sorting creates implicit grouping of names with common prefixes, but that has systemic failures — new, from_, with_, and builder constructors are all over the place. as_, into_ and to_ are scattered. try_ methods are divorced from their panicking alternatives.
As a library author, I think I have a pretty good idea of what is most important in my library. Even if there can be different views, it’s not going to be the type starting with A. I’d rather have the ordering usually helpful than consistently irrelevant.
In my experience, a conceptually-grouped list takes a lot of thought to do well, and requires careful maintenance. It’s often the case that there are several categorizations that cut at different angles, so functions might need to be listed in multiple groups. Generally I like to see that kind of thing in handwritten overview section, in addition to a comprehensive automatically maintained alphabetically sorted reference section.
I like to add cross-references to closely-related functions. If functions are grouped, maybe the groups should be explicit tags in each function’s description, so a reader can jump to the list of functions tagged in the same group.
Reference documentation can be too DRY. If two different ways of organizing it are at odds, the tools should probably help us to produce both ways, not just one.
merely a comfort of sticking with something we’ve been always doing
The value IMO is that everyone is familiar with this organizational scheme and can recognize it very quickly, à la Don’t Make Me Think.
As a library author, I think I have a pretty good idea of what is most important in my library
Yes, this is a fair point. I wrote the library, I know the core use cases.
How do you imagine the mechanics of order-of-importance organization working though? Marking up each API item with an attribute would be toilsome. I guess rustdoc could spit out a flat JSON list of all API items, and then it’s just a matter of re-organizing the list…
(Or a nested dict, to indicate grouping of API items)
That same Diataxis page I linked to before does however make an argument along your lines:
The way a map corresponds to the territory it represents helps us use the former to find our way through the latter. It should be the same with documentation: the structure of the documentation should mirror the structure of the product, so that the user can work their way through them at the same time.
It doesn’t mean forcing the documentation into an unnatural structure. What’s important is that the logical, conceptual arrangement of and relations within the code should help make sense of the documentation.
Rustdoc, like a lot of tooling in Rust, was actually a significant step up from what other languages had when it was being created (pre-1.0). In 2025, after years of using it, the novelty and amazement have worn off and are a distant memory.
My favorite aspect of it immediately was that it was unobtrusive. I could just write stupid Markdown after three slashes and get decent-looking, automatically generated API documentation. Seems … basic and obvious, now, but in 2014 it was mind-blowing how simple and convenient it was, when Java or C++ would require some @doc or other obnoxious and tedious syntax.
Now, I really love Rust, and I love Rustdoc, but since me and most posters will keep praising it I will specifically focus on the negative side (cuz I’m Slavic, and complaining is our hobby).
The way inline documentation tests are handled (which is, and even more so was when introduced, an amazing idea) is kind of limited and inconvenient. You need to execute them with special syntax, fixing them is weird because tooling reports them weirdly, and at least my text editor (Helix) can't recognize them as nested Rust code.
LSP does not highlight broken links in documentation. If there’s a way to enable it, I’d love to know, and it really should be the default. After refactoring, I often have to run rustdoc and fix up all the links “manually”.
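If I'm not mistaken, rustdoc itself can at least be made to fail on dangling links via a crate-level lint, which helps in CI even if it does nothing for the in-editor experience. A minimal sketch:
// At the crate root (lib.rs): `cargo doc` then errors out, rather than
// warning, whenever an intra-doc link no longer resolves.
#![deny(rustdoc::broken_intra_doc_links)]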
Integrating with larger documentation could be better. In our project we generate and publish (from the CI) rustdoc documentation with some embedded extra documentation, turning it all into a kind of mdbook. It works, but it's limited and required quite a bit of hacking things together. Basically, even customizing the landing page is an unstable feature, IIRC. I feel like rustdoc is so good at generating API documentation that it makes sense to "embed it" into other, larger bodies of documentation in serious larger projects, and it would be easier if it exposed more hooks to tie into.
LSP does not highlight broken links in documentation. If there’s a way to enable it, I’d love to know, and it really should be the default. After refactoring, I often have to run rustdoc and fix up all the links “manually”.
If this is a comment about rust-analyzer: I can find a few open and closed issues about intra-links support in r-a, but nothing that quite describes that. Could you file a bug?
The way inline documentation tests are handled (which is and even more so was when introduced an amazing idea), is kind of limited and inconvenient.
One thing I’m wondering about for pigweed.dev is how good these docs tests work in an embedded Rust context. E.g. some of our code examples are intended to be run on physical RP2350s. Is there a mechanism to cross-compile for the RP2350 and then actually run the docs tests on physical RP2350s as hardware-in-the-loop tests? Based on this comment it sounds like rustc and rustdoc may have no problem with this, but I could also imagine it being quite a can of worms to get working correctly.
LSP does not highlight broken links in documentation. If there’s a way to enable it, I’d love to know, and it really should be the default.
Yes, considering all the craftsmanship in the rest of rustdoc I’m surprised that it has overlooked this core, easily verifiable aspect of docs quality!
Tangentially related, does rustdoc have cross-reference syntactic sugar like Sphinx/reStructuredText? E.g. I can create a section with an ID like this:
.. _guide:
==================
How to foo the bar
==================
And then in other docs to link to this section all I need to do is:
For more information, see :ref:`guide`.
And at “compile-time” it gets rendered as:
For more information, see <a href="…">How to foo the bar</a>.
Integrating with larger documentation could be better.
Definitely know this struggle! I need to figure out how to unify Pigweed’s auto-generated Python, C++, and Rust references with the rest of the site in a consistent manner.
Is there a mechanism to cross-compile for the RP2350 and then actually run the docs tests on physical RP2350s as hardware-in-the-loop tests?
You could make it work, but I believe it doesn’t work OOB. IIRC, the test harness (a program that the test framework generates that calls the test functions) requires std, so you are out of luck on platforms that do not have std.
There’s a few projects out there for this kind of stuff (I think there’s no difference between doctests and regular tests here that matters), but we haven’t found yet anything that ticks our boxes. Embassy is using teleprobe, and if you start to survey the major Rust embedded projects you’ll find a few more, but I don’t think there’s any ready to consume in a nice way yet.
Yeah, the visual difference between our generated rustdoc and the rest of our docs site is pretty jarring… I’ve started to look into theming the rustdoc output. My thinking right now is to create a shared CSS file that the main site and the rustdoc subsite both rely on. E.g. in this shared CSS we would define fonts, colors, and stuff like that which can be safely shared to make the visuals between the two sites look more cohesive.
I can generate rustdoc for a project of mine without the need to install additional tools
Yeah, this is a big differentiator. When I migrated pigweed.dev to Bazel it was quite a lot of work to get Doxygen working correctly within a hermetic environment whereas integrating Rust was trivial. To be fair, however, we have a lot more C/C++ libraries than Rust libraries currently.
What other languages provide auto-generated API references as a built-in feature?
Okay, rustdoc is actually the reason I came to Rust about 12 years ago. I joined at about 0.4. All that I'm writing about has existed roughly since then, because rustdoc is a tool built by people who understood what can be useful for programming in the real world, not in the abstract. Also, I won't repeat @ssokolow.
On the surface, it's a documentation generator like any other, with a reasonably nice HTML template (it comes out of a browser vendor, after all), which is very readable. They have an eye on how docs are consumed. It's built for deep-linking in all aspects - want to link to a function? Sure. A header in one of the comments? Sure. A line of code in the underlying source code? No problem. It follows the rules of a good document.
It goes the extra mile by trying to make some of the weirder bits of Rust more accessible. A lot of functions that can be called on String are not from String, but from &str through deref. Of course, there's a header called Methods from Deref<Target = str> that still lists them all on String's page. A return type that is really "some value defined by the traits (think interfaces) it implements"? There's a small (i) next to it to tell you on hover, and so on.
Before I go into those features, a little nerdery at first: rustdoc and rustc are intertwined in the sense that rustc is predominantly a library with multiple binaries on top, one of them is the compiler, one of them a linter (clippy) and one of them rustdoc. That means rustdoc always sees everything the compiler sees and is always built with the compiler. That means it can resolve connections between types properly, etc, etc. You can see a status of tools working over rustc nightly here: https://rust-lang.github.io/rustup-components-history/
Another interesting thing to know is that Rust comments are actually understood by the compiler much more than in other languages. They are not just commented out lines.
Something like:
/// This lobster is angry
fn pinch(&mut self, other: &mut Other) {
}
Docs in Rust are attributes of the thing they document, so they can be picked up through the AST. That means that rustdoc always has a very clear view of what a comment belongs to.
What I’m getting at: documentation in Rust is much more a first-class citizen since about forever than you may realise.
That leads to a number of interesting features.
Rustdoc has had, since forever, the ability to parse code examples out of your documentation comments and run them as tests. If a code example is marked as Rust code (```rust), rustdoc can extract it, compile it in the context of the library, and run it as a test. This can be controlled: e.g. you can also tell it to only compile the code, not run it (say, for an example that starts a server and never returns), or even that the example should fail. rustdoc is more than a renderer.
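A minimal sketch of what that looks like in practice (the crate name my_crate is just a placeholder):
/// Adds two numbers.
///
/// ```
/// assert_eq!(my_crate::add(2, 2), 4);
/// ```
pub fn add(a: u64, b: u64) -> u64 {
    a + b
}
cargo test compiles and runs the fenced example as its own little program; fence attributes like ```no_run (compile only) and ```should_panic cover the cases mentioned above.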
That leads to a culture where Rust documentation examples are often very correct. Particularly all the stdlib examples are run through that process.
The second interesting thing is that rustdoc actually nowadays can be used to output a JSON dump of its data that holds enough meta data to write an extremely good semver checker in it (https://crates.io/crates/cargo-semver-checks).
I really like that it stuck to its guns and uses Markdown as its text flavour. That's a great choice because, while Markdown lacks certain features one may want (I love rst as well), it has a very low bar, is well understood, and is the right thing for writing a quick documentation comment.
I'm actually not a huge fan of the search function. It's good, but much better things are possible. I used to be a search engineer though, so I appreciate that this is hard work that may not be worth it in the end.
I think rustdoc is not cool because it's in any way fancy. rustdoc is cool because it's your reliable worker, doing all the annoying work of producing docs in good detail for you.
It’s built for deep-linking in all aspects - want to link to a function? Sure.
Just a note: As with non-lexical lifetimes, this is a feature that wasn’t present in Rust v1.0, felt like it took far too long to arrive for us old-timers, and now feels so natural that you’d never imagine it wasn’t there from the start.
Originally, you just had to generate the docs, see what the URL would be, and then reference it as an ordinary relative hyperlink.
I think you’re both saying different things. @skade was saying the anchors in the HTML were there from a very long time. You were saying the ability for rustdoc to compute and insert the link to a symbol is relatively more recent.
I don’t remember when the second ability was added but I remember reading it in release notes and being surprised it wasn’t possible yet! I’m pretty sure it had a nice dedicated section to highlight it.
EDIT: it was in Rust 1.48, released Nov. 19, 2020.
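For reference, the current syntax lets you link to items by path straight from a doc comment (the Config type here is just illustrative):
pub struct Config;

impl Config {
    pub fn apply(&self) {}
}

/// Builds a [`Config`] and hands it to [`Config::apply`].
/// Paths like [`std::vec::Vec`] or [the crate root](crate) also resolve.
pub fn configure() {
    Config.apply();
}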
Thanks, this is exactly the kind of “in-depth info from people who have worked extensively with the tool for a long time” that I was looking for.
Docs in Rust are attributes of the thing they document, so they can be picked up through the AST.
Fascinating, did not know this. Do Doxygen and Javadoc not do this? I suppose the main difference is perhaps that Doxygen and Javadoc need to do their own work a lot more to associate the comments with the code, whereas in Rust the compiler itself is doing all that work by default.
rustdoc actually nowadays can be used to output a JSON dump of its data
Does this mean that it would be feasible to write a library like Breathe but for Rust? As commented here I’m pretty sure it’s not actually a good idea. Just curious if it’s possible. One thing I am very interested in, however, is whether it’s possible to pull in rustdoc metadata to improve Sphinx’s built-in, client-side search. E.g. when you go to pigweed.dev and press Ctrl+K to open the in-site search, ideally you can search by Rust type signatures from that UI.
I’m actually not a huge fan of the search function. It’s good, much better things are possible.
Fascinating, did not know this. Do Doxygen and Javadoc not do this? I suppose the main difference is perhaps that Doxygen and Javadoc need to do their own work a lot more to associate the comments with the code, whereas in Rust the compiler itself is doing all that work by default.
Yes, it’s entirely up to the doc tool. This is what the Java Specification has to say about Comments:
This is what the Rust/Ferrocene spec has to say about comments. It goes into much more detail about what a comment applies to and how it is transformed at the language level.
rustdoc is just the agent that collects those things and puts them into nicer form.
Does this mean that it would be feasible to write a library like Breathe but for Rust? As commented here I’m pretty sure it’s not actually a good idea. Just curious if it’s possible. One thing I am very interested in, however, is whether it’s possible to pull in rustdoc metadata to improve Sphinx’s built-in, client-side search. E.g. when you go to pigweed.dev and press Ctrl+K to open the in-site search, ideally you can search by Rust type signatures from that UI.
Yes, though the format is unstable and may break between versions (it is indeed behind a barrier so that people are aware of the instability). But yes, you could totally use it.
Would love to hear more about this.
It's very much "left to right". So e.g. if I search for "Vec Push", I find something; if I search for "push vec", I do not. I would really love to be able to search for return values that implement a certain trait, etc. etc.
I like it a lot. My first similar tool was Javadoc and I loved it. Never found a good Python tool that clicked for me.
rustdoc has a couple of interesting things to me:
It works very well out of the box without being served in a web server. IIRC, other equivalent tools have issues.
Some things in Markdown bother me, but it’s the pragmatic choice. Yesterday we learned that Vale supports Rust comments in Markdown pretty well, so I am trying to set up our project for that. I could get live spell checking in Emacs working (and most browsers should support it).
At first, I didn't get why cargo doc generates documentation for your dependencies (because it's slow). The other day I realized this means your docs can cross-reference the documentation of your dependencies… in a painless way, with automatic matching of versions. Even if the dependency does not publish their docs!
Crossrefs in general work pretty well and I find them more intuitive to get working than other systems I have used.
Also documentation tests work quite well OOB. I’m prone to abusing them, though.
The only thing I don’t like so far is that doing one-sentence-per-line does not seem easy to get working well.
I may be ignorant of other tools with those virtues, but rustdoc is polished and nice, I like it.
The only major issue I see is that it seems you have to do weird stuff for including documents not tied to an API.
At first, I didn't get why cargo doc generates documentation for your dependencies (because it's slow). The other day I realized this means your docs can cross-reference the documentation of your dependencies… in a painless way, with automatic matching of versions. Even if the dependency does not publish their docs!
It's also tremendously useful when developing locally, because cargo doc gives you the API docs of your project and everything you're using without having to jump around the internet, and you can query through the entire set from one location.
Sadly, for some of the bigger crates a fair amount of the docs really lives in mdBooks, so e.g. having serde's rustdoc locally is nearly useless because it doesn't document any of the #[serde] attributes, or have any of the guides or advanced examples.
The only major issue I see is that it seems you have to do weird stuff for including documents not tied to an API.
That’s exactly because it’s designed specifically and almost exclusively for API docs. So abusing it for non-API docs (howto, guides) is a hack.
it doesn’t document any of the #[serde] attributes
Very interesting! Is it not possible to auto-document these with rustdoc?
it’s designed specifically and almost exclusively for API docs. So abusing it for non-API docs (howto, guides) is a hack
Your serde example demonstrates the problem of project information getting fragmented across many sources, however. As mentioned here I have a teammate who argues that all of our Rust docs should be generated via rustdoc. I am empathetic to his argument in the sense that it would reduce information fragmentation, like what you’ve described with serde.
But really I’ve never had to produce really nice Python docs. Sphinx is what I’ve used most.
In my current job we’re supposed to sell some Rust crates, so although high quality docs may not be strictly needed, we’re trying to get into the swing of having “commercial-grade” docs.
It works very well out of the box without being served in a web server.
A few people have mentioned this. It’s very surprising to me that any auto-generated API references would require a web server. I.e. the docs are not generated as a static website. Can anyone point me to specific languages that have this problem? (I believe you that the problem exists, I just want to dig into the issue more.)
I know a lot of technical writers that are obsessed with Vale. Pretty cool that it works in a rustdoc context!
makes your docs be able to crossreference the documentation of your dependencies… in a painless way, with automatic matching of versions. Even if the dependency does not publish their docs!
That sounds super powerful and was not on my radar whatsoever. Thanks for sharing.
it seems you have to do weird stuff for including documents not tied to an API.
Can anyone point me to specific languages that have this problem?
I may be wrong about this. I had the habit of always using python3 -m http.server for this kind of thing, or looking for a “serve” command in tools. This may have been caused by some static website generator which may have been misconfigured.
I know a lot of technical writers that are obsessed with Vale. Pretty cool that it works in a rustdoc context!
I have my ups and downs with it, but likely I'd have them with the alternatives too. The cool thing is that it does a lot of the correct things by default, e.g. skipping literals (stuff in backticks) for spell checking, making it possible to specify the right case for stuff, etc. And you can create quite fancy rules to match your style.
At work, we are making a bigger focus on documenting our APIs, and any tool that helps address the low-hanging fruit (e.g. obvious typos) gives us more time to focus on the difficult stuff. I was very pleased when I realized now I could get a squiggly line if I made a typo while writing a comment!
(Now, if there was a good equivalent for identifiers in code. I think I’ve heard that JetBrains has such a thing. It feels like it would need a ton of finetuning to work well, but typos in identifiers are common, and very frustrating for users.)
As someone who uses both rustdoc and Sphinx, dabbles in Doxygen for C retro-hobby projects, and used to use Epydoc before Python 2.x went too far EOL, I think the most important detail to consider is what kind of documentation tool rustdoc is.
The thing I love most about rustdoc is that it’s an apidoc tool first and foremost. You have to go out of your way to not have at least the bare minimum auto-generated documentation for something in your code. Epydoc was also this way. (Epydoc went as far as to both parse and import your Python code to discover things that showed up in one representation but not the other… such as compiled-only modules for the import or things that aren’t runtime-introspectable for the parsing.)
Sphinx, on the other hand, isn’t a replacement for JavaDoc-style tools… it’s a replacement for how the Python upstream used to use LaTeX for Python 2.x… and you can really feel that, for better and for worse. (Also, in my experience, it lends itself to people designing themes which prioritize nice-looking non-apidoc over readable apidoc. The linked page is my best effort without patching either the theme or the autodoc module.)
A lot of people use Sphinx for non-apidoc purposes since it’s friendlier than LaTeX and can export to formats like EPUB… but, with Sphinx, I regularly run into Python projects where the docs are incomplete because someone forgot to add an autodoc directive while refactoring their code. I complained about this on IRC maybe ten years ago and got “You should be spending 30% of your time on writing docs anyway” as an answer… which makes no sense for the same reason Rust has a borrow checker… not all activities you could be spending your time on are created equal. (An autopackage directive to encapsulate what the sphinx-apidoc utility does and complement automodule would go a long way to solving that, but would require some design work to lay out how to customize what gets generated to the level that Sphinx people like.)
However, I’d say that, when it comes to generating a “TODO index”, rustdoc is worse than Sphinx autodoc and Sphinx autodoc is worse than ePyDoc.
Furthermore, rustdoc is only an API documentation tool (Doxygen lets you specify a list of Markdown files to also render and add to the Table of Contents) —probably because mdBook exists, even if, as far as I know, there’s no way to integrate mdBook and rustdoc into a docs.rs build— and that’s why things like Clap create trees of dummy Rust modules just to write prose-form documentation sections.
Sphinx, on the other hand, isn’t a replacement for JavaDoc-style tools…
That’s… not entirely true.
It's not quite automatic because sphinx is not just a replacement for a javadoc-style tool, but in my experience autodoc generally does a good job (and does indeed "parse and import your Python code" in order to both discover what there is to document and extract the docstrings to format in).
The linked page is my best effort without patching either the theme or the autodoc module.
The problem with the page you link (and a large number of "modern" sphinx themes) is that the theme itself does not lend itself to autodoc, as the bodies are way too narrow: your theme has a content section that tops out at just 660 px wide, while rustdoc's main section has a max-width of 960 px, 45% wider.
And sphinx documents things module-at-a-time, whereas rustdoc works symbol-at-a-time, which is a lot less busy (but requires having a lot more tabs open to get all the information).
Finally, Sphinx will display the repr of top-level constants (and default values for that matter), which rustdoc simply does not bother with. You can instruct sphinx to not do that (via :meta hide-value:), sadly it’s per-name and I don’t think it has a global toggle.
It's not quite automatic because sphinx is not just a replacement for a javadoc-style tool, but in my experience autodoc generally does a good job (and does indeed "parse and import your Python code" in order to both discover what there is to document and extract the docstrings to format in).
Sorry but, from my perspective, that feels like a “You don’t need Rust. You’re just holding C++ wrong” answer.
“It’s not quite automatic” is the deal-breaker. If I don’t explicitly slap a #[doc(hidden)] on something, it should either show up in the docs or fail with a parse error. Anything less is a source of footguns and can’t claim to be a proper apidoc tool in my books.
It’s bad enough that rustdoc doesn’t have Epydoc’s @todo annotations which are guaranteed to be collected into a single “TODO Index”.
The problem with the page you link (and a large number of "modern" sphinx themes) is that the theme itself does not lend itself to autodoc, as the bodies are way too narrow: your theme has a content section that tops out at just 660 px wide, while rustdoc's main section has a max-width of 960 px, 45% wider.
When you take window tiling and a tabs sidebar into account, my Firefox’s content pane is roughly 1024px wide, and there’s maybe a centimetre of gutter on either side of the page in that theme… and yet Epydoc and rustdoc are both much more readable in it and Epydoc is even readable on narrower displays.
…plus, if anything, I'd say the biggest problem with that theme is that it's drunk the flat-design Kool-Aid a little too far for even a "print" style and doesn't provide proper visual separation and grouping.
Even a simple dl.class, dl.function { margin-top: 3em; } in the DOM inspector makes it more readable… I just haven’t had time to go searching for other themes or to do the kind of QA I insist on for hacking up someone else’s theme.
As mentioned here I'm also coming to the conviction that the "not-quite-automatic" nature of the system is a deal-breaker. Many times I've seen a Pigweed contributor make an honest effort to document their code, but the docs never got published because they forgot about the extra glue step of adding autodoc or doxygen* directives into reStructuredText files.
Another issue with this approach: the ability to inject API reference content anywhere, at any level of granularity, leads to inconsistent organization. Some of our API references are organized alphabetically. Others are organized by order of importance. E.g. the most popular class is listed first, then the second-most popular, etc.
Thank you for kicking off the convo with this thoughtful comment!
I personally work in Sphinx a lot so we have a lot of shared experience. I’m docs lead for pigweed.dev, which is powered by Sphinx.
Epydoc went as far as to both parse and import your Python code to discover things that showed up in one representation but not the other
Interesting! I had not heard of Epydoc before. We use autodoc. Does it also not parse and import like this?
Sphinx, on the other hand, isn’t a replacement for JavaDoc-style tools
Agreed. In the parlance of technical writers, Sphinx is optimized for tutorial, guide, and explanation content, whereas tools like rustdoc and Javadoc are optimized for reference content.
I regularly run into Python projects where the docs are incomplete because someone forgot to add an autodoc directive while refactoring their code
I know that feel. We use Doxygen to auto-generate our C/C++ API references and then insert the reference content into our Sphinx site with Breathe. Contributors usually remember to mark up their C/C++ code with Doxygen comments, but they sometimes forget to also add a doxygen* directive into a reStructuredText doc. It's very frustrating to make an honest effort to document your library, and then discover 2-3 months later that the docs never actually got published because you forgot about that extra setup step. I think I'm also developing the conviction that the not-quite-automatic nature of the setup is a deal-breaker for large projects with many contributors.
rustdoc is only an API documentation tool
One of my Pigweed teammates loves rustdoc and thinks that all Rust content should be handled within rustdoc. They say that there are many examples of Cargo crates that handle all forms of documentation (tutorials, guides, references, explanations) within rustdoc. I'm keeping an open mind but my impression is similar to what you're saying. rustdoc is an API reference tool first and foremost. I'm not sure it's the right tool for the job when it comes to other forms of content. This is a big open question for us: we either 1) figure out how to seamlessly bridge the gap between Sphinx and rustdoc, or 2) go all-in on rustdoc for all Rust content.
Interesting! I had not heard of Epydoc before. We use autodoc. Does it also not parse and import like this?
I was more intending to draw a contrast with rustdoc on that one, but the documentation for sphinx.ext.autodoc seems to indicate that it relies exclusively on importing the code.
I know that feel. We use Doxygen to auto-generate our C/C++ API references and then insert the reference content into our Sphinx site with Breathe.
I think I remember Breathe from my list of “in case I ever need this” resources but I have no experience with it because, for my retro-hobby Doxygen use-cases, the big question right now is “Is it feasible to write these docs in a way that they’ll work with modern Doxygen and also work with the last version of Doxygen with a DPMI DOS build?” and the previous question was how much C preprocessor do I need to use to hide the near and far keywords from Doxygen?
We either 1) figure out how to seamlessly bridge the gap between Sphinx and rustdoc 2) go all-in on rustdoc for all Rust content.
Have you evaluated mdBook yet? I generally try to avoid mixing reStructuredText and Markdown in documentation for my single-language projects and from what I remember, Sphinx’s Markdown plugin doesn’t benefit from how heavily Sphinx leverages reST’s native support for syntax extensions, even if it does use a Markdown parser that aims to be that. (That’s also why I go README.rst for my Python repos. All Markdown or all reStructuredText unless it’s something where PyO3 is in the picture and, in that case, I still try to keep the markdown and reST separated based on which language they’re describing to minimize the chance of typos due to imperfect mental context switching.)
Can you at least give a small indicator of what you need the answer for? Improving rustdoc, assessing rustdoc, writing a new tool inspired by rustdoc? That would really help me answer.
Sorry if it was too vague. I was trying to be neutral and not steer the conversation in any particular direction.
My main goal is simply that I want to make my fellow technical writers (TWs) more aware of Rust's approach to auto-generated docs. TWs at large have some familiarity with how it's done in other languages (Doxygen for C++, Javadoc for Java, autodoc for Python, etc.) but I have a hunch that they're not as aware of Rust's approach. I've dabbled with rustdoc but would like to hear more from people who have extensively worked with it, and especially how it stacks up to how it's done in other languages. I will eventually probably write blog posts on https://technicalwriting.dev comparing/contrasting how different languages approach auto-generated docs. E.g. I know that being able to search by type signatures is a particularly beloved feature of rustdoc and I have a hunch that a lot of TWs aren't aware that SWEs often heavily use this search strategy.
I agree that the article employed poor rhetoric. I didn’t really understand what I wanted to argue, if anything at all. And I agree the main value is the fun anecdote (see my comment here).
I think you’re being a little hard on yourself. It doesn’t sound like it now but ten years from now you’ll look back on this blog post and find that your words failed you in that you were trying to get at something a little deeper than you could articulate in an anecdote.
Being able to reach out for the right theoretical tool is just the first layer, and the most superficial form, of instrumentalising engineering education. I’m not trying to minimise its importance, it’s just not the final form of this skill yet :).
The bigger deal, which your anecdote captures only indirectly, is the ability to formulate a viable, formal model for an informal problem whose vital features aren’t immediately obvious. Formal CS education is not the only way to hone this skill; but it helps, and it gives you very adequate tools for it. The real skill isn’t recognising that a real world problem sounds suspiciously like a theoretical CS problem. The real skill is formulating a theoretical CS problem that’s suspiciously like a real-world problem.
That's why "will I ever use this in real life" is the wrong way to look at it. You spend, what, four years in university, maybe a few more with a masters. If six years are all it takes to learn everything you'll need to apply for the next forty years of your career, modulo things like the version control tool du jour, that's going to be the most boring career ever. Quit while you still can!
Those four years are valuable as a tour of what lies ahead – what you don't know, and what you'll spend the next forty years of your life trying to figure out.
There is an inherent risk that a good chunk of what you’ll learn in uni is going to be useless. That’s true for any educational format. I thought about half of what I learned was going to be completely useless – I was mostly right about the percentage but holy shit did I get the distribution completely wrong.
Now that you mention it, yeah, I guess it could be. Although the content is certainly not beyond the abilities of an LLM, Googling some sentences from those pages suggests that it's original content – so I guess a human wrote these?
@MiraWelner you mentioned that one of your posts hit the front page of HN for an hour or so. How many visits did you get? And the site served everyone with no downtime? It always amuses me to see a site go down from the HN hug of death, knowing that other sites (such as yours) successfully serve all the traffic from literally a hobbyist computer in someone’s house
This is actually a really funny story - when this post and the git post were on the front page my site was fine. However with this post in particular on HN the site did go down and commenters assumed it was the hug of death.
But actually I don’t think it was the hug of death because it went down exactly at 2am EST - which is when it updates and reboots if necessary!
Murphy strikes again
Is there a simple way to have a backup server for the site? E.g. can I point one of the DNS A records to my IP and another one to my GitHub-Pages-hosted version of the site?
Not to my knowledge using the stack I described although there probably is a way. But I doubt you would be able to use simple tools like i am
This is one of my dream projects but I could never figure out how to expose the site to the public internet. I will try the port forwarding thing but I have a feeling my Xfinity router has locked that down (please correct me if I’m wrong)
I think it works on Xfinity?
https://www.xfinity.com/support/articles/port-forwarding-xfinity-wireless-gateway
Yes, that looks promising! Thanks for the research
Got it working with like 10 minutes of effort, lol. Don’t know why I struggled previously so much
One thing I previously was stumped about was getting my public IP address. Kinda surprising that I just go to a site like whatsmyip.com and get the value from there. I thought that wouldn’t work because Xfinity always rotates my public IP
Try Cloudflare tunnel (like a comment above suggested).
It creates a private connection between your home network and cloudflare, which won’t expose your home IP or network to the outside.
It’s a compromise to have cloudflare MITM your self-hosted website, but it’s better than burning through your (very generous—sarcastic) xfinity monthly cap.
Does Cloudflare Tunnel help with your bandwidth cap? Do they offer caching or something?
In theory, yes. You get access to their CDN when using a tunnel. You can setup custom caching rules to serve content from their edge network and reduce your outgoing bandwidth.
In practice, no one visits my site so I can’t test it. lol.
The type of person the author talks about exists I’m sure, but I think most of these people are just… tired. Again, I’m not trying to defend the bad ones, just throwing out some counter-examples for the ones who get caught up in it all.
Maybe they were told to prevent all but the absolutely necessary changes by leadership, and are tired of constantly having to argue with other teams whose leaders told their engineers “this is top priority, it has to go in.” Neither side of engineers cares that much, it’s the leaders who don’t agree. They are tired.
The part about “wanting all the credit” is unnecessary, and quite rude if the situation is more like what I said above. They may just be doing their exact jobs, too.
Maybe it’s the situation above, the ancient massive core they have been told to keep running, which sprint-driven product teams are told by their leaders to go muck with. Maybe they are tired of code being merged that is abandoned as the authoring team is “on a new sprint” or “has new quarterly objectives” or whatever. It’s the authors who “don’t want to maintain it” in this case, but the old guard doesn’t want to either, because their objective was “prevent all but the most necessary changes” and they constantly have to break that. They are tired.
Sure, this type of person probably exists. It’s also very likely one of the direct, measurable things considered for promotion, and… come on, it’s (likely) a for-profit software company, everyone should be going for promotion all the time. They want to retire sooner, because they are tired.
—
To firmly reiterate, I’m just throwing some different viewpoints out there because it’s way too easy to jump on the strawman sometimes. Of course the jerks are out there too, but some of these people are just doing their job, and have different responsibilities than you do. When in doubt, approach them as humans who are tired first, instead of just assuming things about them.
All assumptions about internal state are on shaky ground IMO. Assuming tiredness is a little better than jealousy because it’s assuming good intent versus less-good, but it’s still an assumption. Analyzing external incentives, like you did, is probably the most productive way forward.
I would suggest cloudflared (the Cloudflare tunnel proxy) rather than opening and forwarding a port on your home router.
cloudflare regularly blocks my access to sites, from both home and work, so I am not a fan of cloudflare services…
Cloudflare Tunnel is free and a good solution for those behind CG-NATs or an ISP firewall. It also offers effortless DoS protection.
I will admit, however, that I think it’s slightly “cooler” in some sense to host your site directly from your home, with no assistance from Cloudflare or other giant tech companies, even if you don’t really get much tangible benefit from doing it that way.
(By these standards of course, my personal site is rather lame because it’s just your standard Jekyll + GitHub Pages site.)
Can the Cloudflare proxy reach the server without opening a port, etc.?
Ah, I did not read close enough. This thing creates a tunnel: https://github.com/cloudflare/cloudflared
What are the risks of port forwarding and hosting on home network? I get the general risk of giving the public internet direct access to my home devices. But how do people specifically exploit this? It depends on me misconfiguring or not properly locking down the web server, right?
Pretty much, but nobody has ever made an unhackable server. So even if you “properly” configure the server it’s not 100% secure because nothing is.
I did get my router hacked once: it had third-party malicious software installed on it and didn't function until I got the Netgear people to fix it, which is why I installed fail2ban. It has worked so far, but nothing is foolproof.
Let's assume you forward port 443 to your Pi running Apache. You're basically exposing the following bits of software to the Internet:
- the Linux network stack on the Pi
- Apache itself (and your config for it)
- whatever application Apache serves or proxies to
The biggest risk is an RCE in any of those pieces, because you’re truly pwned, but I’d lay pretty long odds against an RCE in the Linux network stack, and I don’t think your average Apache config is at much risk either – these things have both been highly battle-tested. Some sort of denial-of-service exploit is more likely but again, Linux+Apache have powered a huge chunk of the Internet for the last 25+ years. Now, if you write an HTTP server which executes arbitrary shell commands from the body of POST requests and proxy it behind Apache, you have only yourself to blame…
I expose HTTP and a few other services from my home network via port forwarding. I don’t lose sleep over it.
Oh I didn’t know they had a free tier but it looks like they do! I’ll look into it.
Also are you the same whalesalad on HN that gave me the advice on the browser text width?
Insightful.
I worry that if the steward still has the responsibility of maintenance and bug fixes, they may burn out if there aren’t other members.
The house metaphor is powerful. But I’m not sure it matches well. Everyday people have a generally good sense of how to keep the floor clean, coffee machine working etc. But where do the stewards find the support needed to keep their projects going? Even well intentioned support by developers who don’t understand the nuances or consequences of different design decisions can be a burden for the steward. Ultimately, I worry the stewardship is a noble idea, but without a more concrete formula and a strong team, it’ll lead to burnout/going back to ownership.
Yes, I was surprised that they used a house metaphor and not the most obvious stewardship metaphor: steward of a nature sanctuary. Very easy to imagine a sense of service to a greater cause when imagining that you’re protecting the last remaining land where a certain species of animal can thrive.
The house metaphor is also ambiguous. In the USA, owner-occupied homes are generally in better condition than renter-occupied ones. Pride of ownership is a real phenomenon. But you also see a lot of that shitty ownership behavior as described in the article. E.g. go on Nextdoor and look at all the paranoid and petty comments. Renter-occupied neighborhoods have their own problems, but they are of a different nature.
Discovered this while watching Shane Mattner’s ESP32-C3 embedded Rust tutorials: https://youtu.be/vUSHaogHs1s
Possibly love at first sight. I just did `sudo apt install neovim`, then followed the install instructions (https://docs.astronvim.com/#-installation) and finally enabled the community Rust pack (https://docs.astronvim.com/#-astrocommunity), and now I have a very feature-rich Rust coding environment.
What editor did you use previously?
I love that little display. I have one too. It came with a Pico sensor kit I got on Amazon: https://www.amazon.com/gp/aw/d/B09C3NW8DX
Side note: that expansion board for the Pico is phenomenal. Really ergonomic
Note that the name of the display is SSD1306, not SDD1306, as the article incorrectly says a few times. Definitely makes it easier to find examples when you get the name right!
There’s also an example for this display in pico-examples: https://github.com/raspberrypi/pico-examples/tree/master/i2c/ssd1306_i2c
It’s a very capable little device
There are also a few Rust crates for it, including one that's compatible with embedded-hal.
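For anyone curious, here's roughly what driving it from Rust looks like with the `ssd1306` and `embedded-graphics` crates (a sketch, not a drop-in example: exact API details differ between crate versions, and `i2c` stands in for whatever embedded-hal I2C peripheral your board's HAL gives you):

```rust
use embedded_graphics::{
    mono_font::{ascii::FONT_6X10, MonoTextStyle},
    pixelcolor::BinaryColor,
    prelude::*,
    text::Text,
};
use ssd1306::{prelude::*, I2CDisplayInterface, Ssd1306};

// `i2c` is the I2C peripheral handed out by your board's HAL (rp2040-hal, esp-hal, ...).
let interface = I2CDisplayInterface::new(i2c);
let mut display = Ssd1306::new(interface, DisplaySize128x32, DisplayRotation::Rotate0)
    .into_buffered_graphics_mode();
display.init().unwrap();

// Draw into the framebuffer, then push the whole buffer to the panel over I2C.
let style = MonoTextStyle::new(&FONT_6X10, BinaryColor::On);
Text::new("Hello, SSD1306!", Point::new(0, 16), style)
    .draw(&mut display)
    .unwrap();
display.flush().unwrap();
```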
I’m pretty sure that there is no separate datasheet for the 128x32 variant; the datasheet you’re looking at is correct. I recall it saying towards the start of the datasheet that there are a few different size variants but the underlying I2C spec is the same for all variants
The definition for Docs as Code is easier for me to grok: https://www.writethedocs.org/guide/docs-as-code/
I wonder what the first “X as Code” term was? I think Docs as Code started around 2015
Terraform was advertising itself “infra as code” and their initial release was in 2014.
Week 4 of baby bonding leave! Besides learning how to be a good dad I am finding a fair amount of time to learn embedded Rust and work on a personal project related to automatically updating documentation.
Congratulations! The first few months are a dream. 🥰
We're on month nine … and I only just got back to my side projects.
I am a proper Rust n00b ramping up as we speak. Even before getting into conceptual challenges of understanding lifetimes, I think it’s important to mention that the syntax was truly jarring for me. In literally every other language I’ve learned, a single unclosed quote means that I have written my program incorrectly in a very fundamental way. I’ve been programming for over 10 years so there’s a lot of muscle memory to undo here. Sorry if this bikeshed-y topic has been discussed to death, but since the article explicitly covers why lifetimes are hard to learn and doesn’t mention this point, I figured it’s fair game to mention again.
I personally like the notation, but I could see how it looks jarring. FWIW, Rust probably borrowed the notation from ML, where single quotes are used to denote type variables. For example, the signature of `map` in Standard ML is `('a -> 'b) -> 'a list -> 'b list`.
I got nerd-sniped thinking about how old the use of `'` as a sigil might be. It's used in Lisp; I think its oldest use might go back to MACLISP in 1966. I think older dialects of Lisp required that you say `(quote foo)` instead of `'foo`. See section 1.3 "Notational Conventions" on page 3 of the MACLISP manual (large PDF warning).
I am the one who proposed the notation (can't find the reference now, but it's there, trust me) and yes, it is from ML.
Is there a reason that it’s only for lifetimes and not all type parameters? My guess would be because it makes them easy to distinguish (and since type parameters are more common, they get the shorter case), but I could be wrong of course.
Yes, because types and lifetimes are different kinds; and yes, because types are more common, they are the ones left unmarked.
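A tiny illustration of that distinction (the function is made up):

```rust
// `'a` is a lifetime parameter and carries the sigil; `T` is a type parameter and is unmarked.
fn first<'a, T>(items: &'a [T]) -> Option<&'a T> {
    items.first()
}
```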
Is it?
I recently saw it in reading the various Cyclone papers, where they introduced it for identifying differing regions, with those having different lifetimes. However, I believe Cyclone was itself drawing inspiration from ML (or Caml).
It also has mention of “borrowing” in what may be a similar fashion.
http://www.cs.umd.edu/~mwh/papers/ismm.pdf
Also the syntax there seems to evolve over various papers.
The `'` is an odd one. I had another skim of the history of Standard ML but it isn't very interested in questions of notation. However, it reminded me that ML type variables are often rendered in print as α, β instead of `'a`, `'b`, which made me think this might be a kind of stropping. "Stropping" comes from apostrophe, and a prefix `'` was one form it could take. (Stropping varied a lot depending on the print and punch equipment available at each site.) But this is a wild guess.
Oh, yeah, it took me a good six months to stop typing a `(` right after the `'`. The joke about Greenspun's tenth rule practically writes itself at this point :-).
I'm only mentioning this for the laughs. As far as I'm concerned, any syntax that's sufficiently different from that of a Turing tarpit is fine. I mean, yeah, watching the compiler try to figure out what the hell I had in mind there was hilarious, but hardly the reason why I found lifetimes so weird to work with.
Is there also any precedent for this kind of notation in math?
ML is on my list of ur-languages to study: https://news.ycombinator.com/item?id=35816454
I can't think of instances of `'x` in math, but ofc `x'` is used frequently for the derivative, as well as for any kind of transformed version of `x`.
Or if you just want another variable with a name "like" `x`. This is also how it's used in Haskell.
I haven't heard this particular challenge before. I came to Rust long after I learned Lisp and Standard ML, so it never occurred to me that it would be jarring, but if you've only worked in recent members of the ALGOL family I can see that being the case.
What do you mean muscle memory? Do you usually double your quotes manually? Does your IDE not do it for you?
Not trying to “attack” you or anything, genuinely curious as these kind of syntactical changes are more-or-less invisible to me when writing code due to a good IDE “typing” what I intend based on context for the most part.
As for a single single quote being jarring, I believe it does have some history in Lisps for "quote" or for symbols. Possibly the latter case was the inspiration in the case of Rust?
Edit: I see there is a much better discussion in the sibling thread regarding its origin.
Ah yes, now that you mention it, “muscle memory” is not the right phrase here. I didn’t mean muscle memory in the (proper) sense of a command that you’ve been using for years, and then now you need to use slightly differently. What I meant was that for years, whenever I saw a single quote, I expected another quote to close it somewhere later in the code. And now I have to undo that expectation.
I like it, but it has some flaws:
- it has no room for chapters with non-reference docs, like tutorials or getting started. That can be stuffed into module-level docs, but that's not ideal.
- it sorts types alphabetically, not in order of importance, nor even order of definition. All types in a module are thrown into one list. That makes it hard to find how to use the library if you don't already know what to search for. In practice libraries have some main type you use to initialize them, but good luck finding it among all the minor error types, newtype wrappers, iterator types, and all kinds of helpers.
- it takes skill to understand the trait implementation sections. There's a lot of boilerplate and noise there. Traits repeat all their methods even when this is redundant. It's important to know if a type is an Iterator, but not to list the same 100 built-in iter methods every time.
- apart from blanket impls (which are boring noise with too much prominence), there's no UI distinction between standard traits, crate-local traits, and foreign traits. These are usually implemented for very different reasons.
- if you see a Pattern arg in std, you won't know it can be a closure with various arguments. Figuring this out needs diving deep into the trait impls and their bounds.
- it doesn't handle big types with lots of methods well. There is no explicit support for grouping them (`str` has search methods, splitting, case changes, but they're all mixed together in the nav). It doesn't even de-emphasise deprecated or nightly methods. I'd like `_mut()` and non-mut methods grouped as two flavors of the same method. I'd like to see "static" functions distinguished from self methods.

It's like cargo – good enough, and all the value is in having it consistently for every crate.
We debate this all the time on pigweed.dev. Among Pigweed contributors opinion is roughly split: one half prefers organizing alphabetically, the other half prefers organizing by order-of-importance.
If I were benevolent dictator of `rustdoc`, I personally would stick to alphabetical organization and would never allow alternate organization schemas. As you said, the value is the consistency. Alphabetical may be sub-optimal in terms of finding what you need as quickly as possible, but it's a system that can be unambiguously enforced in every library in the entire ecosystem. That's a really powerful level of consistency. Also, the most logical organization for the crate owners is sometimes not the most logical organization for crate users, and in those cases order-of-importance organization may actually be less effective than the highly predictable alphabetical organization. From the perspective of the hypothetical `rustdoc` benevolent dictator, there's no way for me to guarantee that the order-of-importance that the crate owner has decided upon is the same as the order-of-importance for crate users. At least, it's much more difficult to guarantee than alphabetical organization.
From Diataxis:
Very interesting to see where exactly `rustdoc` falls short, from someone who has obviously looked over many real, non-trivial API references extensively. Thanks for the details.
I think a lot of the appeal of alphabetical sorting is merely the comfort of sticking with something we've always been doing, even though the original reason for it — making it possible to search on paper — is gone. It's more consistency for consistency's sake than solving a user problem.
Rustdoc has an instant search. It can match more than just the prefix, and even supports name aliases. It's almost a disservice to users to make a UI that suggests they can search lists manually, limited to just the prefix (is it under `file_open`, `open_file`, `load_file`, `try_get_file`?).
Alpha sorting creates implicit grouping of names with common prefixes, but that has systemic failures — `new`, `from_`, `with_`, and `builder` constructors are all over the place. `as_`, `into_` and `to_` are scattered. `try_` methods are divorced from their panicking alternatives.
As a library author, I think I have a pretty good idea of what is most important in my library. Even if there can be different views, it's not going to be the type starting with A. I'd rather have the ordering usually helpful than consistently irrelevant.
In my experience, a conceptually grouped list takes a lot of thought to do well, and requires careful maintenance. It's often the case that there are several categorizations that cut at different angles, so functions might need to be listed in multiple groups. Generally I like to see that kind of thing in a handwritten overview section, in addition to a comprehensive, automatically maintained, alphabetically sorted reference section.
I like to add cross-references to closely-related functions. If functions are grouped, maybe the groups should be explicit tags in each function’s description, so a reader can jump to the list of functions tagged in the same group.
Reference documentation can be too DRY. If two different ways of organizing it are at odds, the tools should probably help us to produce both ways, not just one.
The value IMO is that everyone is familiar with this organizational scheme and can recognize it very quickly, à la Don’t Make Me Think.
Yes, this is a fair point. I wrote the library, I know the core use cases.
How do you imagine the mechanics of order-of-importance organization working, though? Marking up each API item with an attribute would be toilsome. I guess `rustdoc` could spit out a flat JSON list of all API items, and then it's just a matter of re-organizing the list… (Or a nested dict, to indicate grouping of API items.)
That same Diataxis page I linked to before does however make an argument along your lines:
Rustdoc, like a lot of tooling in Rust, was actually a significant step up from what other languages had when it was being created (pre-1.0). In 2025, after years of using it, the novelty and amazement have worn off and are a distant memory.
My favorite aspect of it immediately was that it was unobtrusive. I could just write plain Markdown after three slashes and get decent-looking, automatically generated API documentation. Seems… basic and obvious now, but in 2014 it was mind-blowing how simple and convenient it was, when Java or C++ would require some `@doc` or whatever obnoxious and tedious syntax.
Now, I really love Rust, and I love Rustdoc, but since I and most posters will keep praising it, I will specifically focus on the negative side (cuz I'm Slavic, and complaining is our hobby).
The way inline documentation tests are handled (which is, and was even more so when introduced, an amazing idea) is kind of limited and inconvenient. You need to execute them with special syntax, fixing them is weird because tooling reports them weirdly, and at least my text editor (Helix) can't recognize them as nested Rust code.
LSP does not highlight broken links in documentation. If there’s a way to enable it, I’d love to know, and it really should be the default. After refactoring, I often have to run rustdoc and fix up all the links “manually”.
Integrating with larger documentation could be better. In our project we generate and publish (from the CI) rustdoc documentation with some embedded extra documentation, turning it all into a kind of mdBook. It works, but it is limited and required quite a bit of hacking things together. Basically, even customizing the landing page is an unstable feature, IIRC. I feel like rustdoc is so good at generating API documentation that it makes sense to "embed it" into other, larger bodies of documentation in serious, larger projects, and it would be easier if it exposed more hooks to tie into.
If this is a comment about rust-analyzer: I can find a few open and closed issues about intra-links support in r-a, but nothing that quite describes that. Could you file a bug?
One thing I'm wondering about for pigweed.dev is how well these doc tests work in an embedded Rust context. E.g. some of our code examples are intended to be run on physical RP2350s. Is there a mechanism to cross-compile for the RP2350 and then actually run the doc tests on physical RP2350s as hardware-in-the-loop tests? Based on this comment it sounds like `rustc` and `rustdoc` may have no problem with this, but I could also imagine it being quite a can of worms to get working correctly.
Yes, considering all the craftsmanship in the rest of `rustdoc`, I'm surprised that it has overlooked this core, easily verifiable aspect of docs quality!
Tangentially related, does `rustdoc` have cross-reference syntactic sugar like Sphinx/reStructuredText? E.g. in Sphinx I can create a section with an ID using a label like `.. _my-section:`, then in other docs all I need to do is reference that label with the `:ref:` role, and at "compile-time" it gets rendered as a link whose text is that section's title.
Definitely know this struggle! I need to figure out how to unify Pigweed’s auto-generated Python, C++, and Rust references with the rest of the site in a consistent manner.
You could make it work, but I believe it doesn't work OOB. IIRC, the test harness (a program that the test framework generates that calls the test functions) requires `std`, so you are out of luck on platforms that do not have `std`.
There are a few projects out there for this kind of stuff (I think there's no difference between doctests and regular tests here that matters), but we haven't yet found anything that ticks our boxes. Embassy is using teleprobe, and if you start to survey the major Rust embedded projects you'll find a few more, but I don't think any of them are ready to consume in a nice way yet.
I have always found it extremely ugly but the search bar is great
Yeah, the visual difference between our generated rustdoc and the rest of our docs site is pretty jarring… I’ve started to look into theming the rustdoc output. My thinking right now is to create a shared CSS file that the main site and the rustdoc subsite both rely on. E.g. in this shared CSS we would define fonts, colors, and stuff like that which can be safely shared to make the visuals between the two sites look more cohesive.
What I really like about rustdoc:
Yeah, this is a big differentiator. When I migrated pigweed.dev to Bazel it was quite a lot of work to get Doxygen working correctly within a hermetic environment whereas integrating Rust was trivial. To be fair, however, we have a lot more C/C++ libraries than Rust libraries currently.
What other languages provide auto-generated API references as a built-in feature?
Okay, rustdoc is actually the reason I came to Rust about 12 years ago. I joined at about 0.4. All that I'm writing about has existed roughly since then, because rustdoc is a tool built by people who understood what can be useful for programming in the real world, not in the abstract. Also, I won't repeat @ssokolow.
On the surface, it's a documentation generator like any other, with a reasonably nice HTML template (it comes out of a browser vendor, after all), which is very readable. They have an eye on how docs are consumed. It's built for deep-linking in all aspects - want to link to a function? Sure. A header in one of the comments? Sure. A line of code in the underlying source code? No problem. It follows the rules of a good document.
It goes the extra mile by trying to make some of the weirder bits of Rust more accessible. A lot of functions that can be called on String are not from String, but from &str, reached through Rust's deref mechanism. Of course, there's a header called Methods from Deref<Target = str> that still lists them all on String's page. Is a return type really more "some value implementing these traits (think interfaces)" than a concrete type? There's a small (I) next to it to tell you on hover, and so on.
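A quick illustration of the String/str case:

```rust
fn main() {
    let s = String::from("  hello  ");
    // `trim` is defined on `str`, not `String`; the call works via deref coercion,
    // and rustdoc lists it on String's page under "Methods from Deref<Target = str>".
    assert_eq!(s.trim(), "hello");
}
```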
Before I go into those features, a little nerdery first: rustdoc and rustc are intertwined in the sense that rustc is predominantly a library with multiple binaries on top; one of them is the compiler, one a linter (clippy), and one rustdoc. That means rustdoc always sees everything the compiler sees and is always built with the compiler, so it can resolve connections between types properly, etc., etc. You can see the status of tools working over rustc nightly here: https://rust-lang.github.io/rustup-components-history/
Another interesting thing to know is that Rust comments are actually understood by the compiler much more than in other languages. They are not just commented out lines.
A `///` doc comment, for example, is the same as writing a `#[doc = "..."]` attribute directly.
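A minimal illustration (the function names are made up):

```rust
/// Adds one to the given number.
fn plus_one(x: i32) -> i32 {
    x + 1
}

// The doc comment above is sugar for the attribute form the compiler actually sees:
#[doc = "Adds one to the given number."]
fn plus_one_attr(x: i32) -> i32 {
    x + 1
}
```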
Docs in Rust are attributes of the thing they document, so they can be picked up through the AST. That means that rustdoc always has a very clear view what a comment belongs to.
What I’m getting at: documentation in Rust is much more a first-class citizen since about forever than you may realise.
That leads to a number of interesting features.
Rustdoc, since forever, e.g. has the ability to parse comments from your documentation and run them as code. If a code example is marked as rust code (```rust), rustdoc can extract it, compile it in the context of the library and run it as a test. This can be controlled, e.g. you can also tell it to compile the code, not run it (e.g. if it’s an example that starts a server and never returns) or even that the example should fail. rustdoc is more than a renderer.
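A sketch of what that looks like (the crate and function names are made up; `no_run` is one of the real modifiers rustdoc accepts on doc-test code fences):

````rust
/// Doubles a number.
///
/// ```
/// assert_eq!(my_crate::double(4), 8);
/// ```
///
/// ```no_run
/// // Compiled as part of `cargo test`, but never executed; handy for
/// // examples that would block forever.
/// loop {}
/// ```
pub fn double(x: u64) -> u64 {
    x * 2
}
````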
That leads to a culture where Rust documentation examples are often very correct. Particularly all the stdlib examples are run through that process.
The second interesting thing is that rustdoc actually nowadays can be used to output a JSON dump of its data that holds enough meta data to write an extremely good semver checker in it (https://crates.io/crates/cargo-semver-checks).
I really like that it stuck to its guns and uses Markdown as its text flavour. That's a great choice because, while Markdown lacks certain features one may want (I love rst as well), it has a very low bar, is well understood, and is the right thing for writing a quick documentation comment.
I'm actually not a huge fan of the search function. It's good, but much better things are possible. I used to be a search engineer, though, so I appreciate that this is hard work that may not be worth it in the end.
I think rustdoc is not cool because it's in any way fancy. rustdoc is cool because it's your reliable worker, doing all the annoying work of producing detailed docs for you.
Just a note: As with non-lexical lifetimes, this is a feature that wasn’t present in Rust v1.0, felt like it took far too long to arrive for us old-timers, and now feels so natural that you’d never imagine it wasn’t there from the start.
Originally, you just had to generate the docs, see what the URL would be, and then reference it as an ordinary relative hyperlink.
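Roughly the difference, with made-up types:

```rust
/// A widget.
pub struct Widget;

/// Old style: a hand-written relative link into the generated HTML.
///
/// See [Widget](struct.Widget.html) for details.
pub struct OldStyle;

/// Intra-doc links: just name the item and rustdoc resolves
/// (and checks) the link for you.
///
/// See [`Widget`] for details.
pub struct NewStyle;
```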
I think you’re both saying different things. @skade was saying the anchors in the HTML were there from a very long time. You were saying the ability for rustdoc to compute and insert the link to a symbol is relatively more recent.
I don’t remember when the second ability was added but I remember reading it in release notes and being surprised it wasn’t possible yet! I’m pretty sure it had a nice dedicated section to highlight it.
EDIT: it was in Rust 1.48, released Nov. 19, 2020.
This is correct, I talked about deep-linking (linking into any aspect of the document), not intra-doc-links, which indeed came late and were missed.
Thanks, this is exactly the kind of “in-depth info from people who have worked extensively with the tool for a long time” that I was looking for.
Fascinating, did not know this. Do Doxygen and Javadoc not do this? I suppose the main difference is perhaps that Doxygen and Javadoc need to do a lot more of their own work to associate the comments with the code, whereas in Rust the compiler itself is doing all that work by default.
Does this mean that it would be feasible to write a library like Breathe but for Rust? As commented here, I'm pretty sure it's not actually a good idea. Just curious if it's possible. One thing I am very interested in, however, is whether it's possible to pull in rustdoc metadata to improve Sphinx's built-in, client-side search. E.g. when you go to pigweed.dev and press `Ctrl+K` to open the in-site search, ideally you can search by Rust type signatures from that UI.
Would love to hear more about this.
Yes, it’s entirely up to the doc tool. This is what the Java Specification has to say about Comments:
https://docs.oracle.com/javase/specs/jls/se11/html/jls-3.html#jls-3.7
This is what the Rust/Ferrocene spec has to say about comments. This goes very much into details about what a comment applies to and how it is transformed on a language level.
https://spec.ferrocene.dev/lexical-elements.html#comments
rustdoc is just the agent that collects those things and puts them into nicer form.
Yes, though the format is unstable and may break between versions (it is indeed behind a barrier so that people are aware of the unstableness). But yes, you could totally use it.
It's very much "left to right". So e.g. if I search for "Vec Push", I find something; if I search for "push vec", I do not. I would really love to be able to search for return values that implement a certain trait, etc., etc.
I like it a lot. My first similar tool was Javadoc and I loved it. Never found a good Python tool that clicked for me.
rustdoc has a couple of interesting things to me:
`cargo doc` generates documentation for your dependencies (which is why it's slow). The other day I realized this means that your docs are able to cross-reference the documentation of your dependencies… in a painless way, with automatic matching of versions. Even if the dependency does not publish their docs!
Also, documentation tests work quite well OOB. I'm prone to abusing them, though.
The only thing I don’t like so far is that doing one-sentence-per-line does not seem easy to get working well.
I may be ignorant of other tools with those virtues, but rustdoc is polished and nice, I like it.
The only major issue I see is that it seems you have to do weird stuff for including documents not tied to an API.
It's also tremendously useful when developing locally, because `cargo doc` gives you the API doc of your project and everything you're using without having to jump around the internet, and you can query through the entire set from one location.
Sadly, for some of the bigger crates a fair amount of doc really lives in mdbooks, so e.g. having serde's rustdoc locally is nearly useless because it doesn't document any of the `#[serde]` attributes, or have any of the guides or advanced examples.
That's exactly because it's designed specifically and almost exclusively for API docs. So abusing it for non-API docs (howtos, guides) is a hack.
Very interesting! Is it not possible to auto-document these with `rustdoc`?
Your serde example demonstrates the problem of project information getting fragmented across many sources, however. As mentioned here, I have a teammate who argues that all of our Rust docs should be generated via `rustdoc`. I am sympathetic to his argument in the sense that it would reduce information fragmentation, like what you've described with serde.
That looks nice!
But really I’ve never had to produce really nice Python docs. Sphinx is what I’ve used most.
In my current job we’re supposed to sell some Rust crates, so although high quality docs may not be strictly needed, we’re trying to get into the swing of having “commercial-grade” docs.
A few people have mentioned this. It’s very surprising to me that any auto-generated API references would require a web server. I.e. the docs are not generated as a static website. Can anyone point me to specific languages that have this problem? (I believe you that the problem exists, I just want to dig into the issue more.)
I know a lot of technical writers that are obsessed with Vale. Pretty cool that it works in a rustdoc context!
That sounds super powerful and was not on my radar whatsoever. Thanks for sharing.
ssokolow and I are discussing this idea here
I may be wrong about this. I had the habit of always using `python3 -m http.server` for this kind of thing, or looking for a "serve" command in tools. This may have been caused by some static website generator which may have been misconfigured.
At work, we are making a bigger focus on documenting our APIs, and any tool that helps address the low-hanging fruit (e.g. obvious typos) gives us more time to focus on the difficult stuff. I was very pleased when I realized now I could get a squiggly line if I made a typo while writing a comment!
(Now, if there was a good equivalent for identifiers in code. I think I’ve heard that JetBrains has such a thing. It feels like it would need a ton of finetuning to work well, but typos in identifiers are common, and very frustrating for users.)
As someone who uses both rustdoc and Sphinx, dabbles in Doxygen for C retro-hobby projects, and used to use Epydoc before Python 2.x went too far EOL, I think the most important detail to consider is what kind of documentation tool rustdoc is.
The thing I love most about rustdoc is that it's an apidoc tool first and foremost. You have to go out of your way to not have at least the bare minimum auto-generated documentation for something in your code. Epydoc was also this way. (Epydoc went as far as to both parse and `import` your Python code to discover things that showed up in one representation but not the other… such as compiled-only modules for the `import`, or things that aren't runtime-introspectable for the parsing.)
Sphinx, on the other hand, isn't a replacement for JavaDoc-style tools… it's a replacement for how the Python upstream used to use LaTeX for Python 2.x… and you can really feel that, for better and for worse. (Also, in my experience, it lends itself to people designing themes which prioritize nice-looking non-apidoc over readable apidoc. The linked page is my best effort without patching either the theme or the autodoc module.)
A lot of people use Sphinx for non-apidoc purposes since it's friendlier than LaTeX and can export to formats like EPUB… but, with Sphinx, I regularly run into Python projects where the docs are incomplete because someone forgot to add an autodoc directive while refactoring their code. I complained about this on IRC maybe ten years ago and got "You should be spending 30% of your time on writing docs anyway" as an answer… which makes no sense for the same reason Rust has a borrow checker… not all activities you could be spending your time on are created equal. (An `autopackage` directive to encapsulate what the `sphinx-apidoc` utility does and complement `automodule` would go a long way to solving that, but would require some design work to lay out how to customize what gets generated to the level that Sphinx people like.)
However, I'd say that, when it comes to generating a "TODO index", rustdoc is worse than Sphinx autodoc, and Sphinx autodoc is worse than Epydoc.
Furthermore, rustdoc is only an API documentation tool (Doxygen lets you specify a list of Markdown files to also render and add to the Table of Contents) —probably because mdBook exists, even if, as far as I know, there’s no way to integrate mdBook and rustdoc into a docs.rs build— and that’s why things like Clap create trees of dummy Rust modules just to write prose-form documentation sections.
That’s… not entirely true.
It's not quite automatic, because Sphinx is not just a replacement for a javadoc-style tool, but in my experience autodoc generally does a good job (and does indeed "parse and import your Python code" in order to both discover what there is to document and extract the docstrings to format).
The problem with the page you link (and a large number of "modern" Sphinx themes) is that the theme itself does not lend itself to autodoc, as the bodies are way too narrow: your theme has a content section that tops out at just 660 px wide, while rustdoc's main section has a max-width of 960 px, 45% wider.
And sphinx documents things module-at-a-time, whereas rustdoc works symbol-at-a-time, which is a lot less busy (but requires having a lot more tabs open to get all the information).
Finally, Sphinx will display the repr of top-level constants (and default values, for that matter), which rustdoc simply does not bother with. You can instruct Sphinx to not do that (via `:meta hide-value:`); sadly it's per-name and I don't think it has a global toggle.
Sorry but, from my perspective, that feels like a "You don't need Rust. You're just holding C++ wrong" answer.
"It's not quite automatic" is the deal-breaker. If I don't explicitly slap a `#[doc(hidden)]` on something, it should either show up in the docs or fail with a parse error. Anything less is a source of footguns and can't claim to be a proper apidoc tool in my books.
It's bad enough that rustdoc doesn't have Epydoc's `@todo` annotations, which are guaranteed to be collected into a single "TODO Index".
…plus, if anything, I'd say the biggest problem with that theme is that it's drunk the flat-design Kool-Aid a little too far for even a "print" style and doesn't provide proper visual separation and grouping.
Even a simple `dl.class, dl.function { margin-top: 3em; }` in the DOM inspector makes it more readable… I just haven't had time to go searching for other themes or to do the kind of QA I insist on for hacking up someone else's theme.
As mentioned here, I'm also coming to the conviction that the "not-quite automatic" nature of the system is a deal-breaker. Many times I've seen a Pigweed contributor make an honest effort to document their code, but the docs never got published because they forgot about the extra glue step of adding `autodoc` or `doxygen*` directives into reStructuredText files.
Another issue with this approach: the ability to inject API reference content anywhere, at any level of granularity, leads to inconsistent organization. Some of our API references are organized alphabetically. Others are organized by order of importance. E.g. the most popular class is listed first, then the second-most popular, etc.
Thank you for kicking off the convo with this thoughtful comment!
I personally work in Sphinx a lot so we have a lot of shared experience. I’m docs lead for pigweed.dev, which is powered by Sphinx.
Interesting! I had not heard of Epydoc before. We use autodoc. Does it also not parse and import like this?
Agreed. In the parlance of technical writers, Sphinx is optimized for tutorial, guide, and explanation content, whereas tools like rustdoc and Javadoc are optimized for reference content.
I know that feel. We use Doxygen to auto-generate our C/C++ API references and then insert the reference content into our Sphinx site with Breathe. Contributors usually remember to mark up their C/C++ code with Doxygen comments, but they sometimes forget to also add a `doxygen*` directive into a reStructuredText doc. It's very frustrating to make an honest effort to document your library, and then discover 2-3 months later that the docs never actually got published because you forgot about that extra setup step. I think I'm also developing the conviction that the not-quite-automatic nature of the setup is a deal-breaker for large projects with many contributors.
One of my Pigweed teammates loves rustdoc and thinks that all Rust content should be handled within rustdoc. They say that there are many examples of Cargo crates that handle all forms of documentation (tutorials, guides, references, explanations) within rustdoc. I'm keeping an open mind, but my impression is similar to what you're saying: rustdoc is an API reference tool first and foremost, and I'm not sure it's the right tool for the job when it comes to other forms of content. This is a big open question for us. We either 1) figure out how to seamlessly bridge the gap between Sphinx and rustdoc, or 2) go all-in on rustdoc for all Rust content.
I was more intending to draw a contrast with rustdoc on that one, but the documentation for `sphinx.ext.autodoc` seems to indicate that it relies exclusively on importing the code.
nearandfarkeywords from Doxygen?Have you evaluated mdBook yet? I generally try to avoid mixing reStructuredText and Markdown in documentation for my single-language projects and from what I remember, Sphinx’s Markdown plugin doesn’t benefit from how heavily Sphinx leverages reST’s native support for syntax extensions, even if it does use a Markdown parser that aims to be that. (That’s also why I go
README.rstfor my Python repos. All Markdown or all reStructuredText unless it’s something where PyO3 is in the picture and, in that case, I still try to keep the markdown and reST separated based on which language they’re describing to minimize the chance of typos due to imperfect mental context switching.)Can you at least give a small indicator of what you need the answer for? Improving rustdoc, assessing rustdoc, writing a new tool inspired by rustdoc? That would really help me answer.
Sorry if it was too vague. I was trying to be neutral and not steer the conversation in any particular direction.
My main goal is simply that I want to make my fellow technical writers (TWs) more aware of Rust's approach to auto-generated docs. TWs at large have some familiarity with how it's done in other languages (Doxygen for C++, Javadoc for Java, autodoc for Python, etc.) but I have a hunch that they're not as aware of Rust's approach. I've dabbled with `rustdoc` but would like to hear more from people who have worked with it extensively, and especially how it stacks up against how it's done in other languages. I will eventually probably write blog posts on https://technicalwriting.dev comparing/contrasting how different languages approach auto-generated docs. E.g. I know that being able to search by type signatures is a particularly beloved feature of `rustdoc`, and I have a hunch that a lot of TWs aren't aware that SWEs often heavily use this search strategy.
I enjoyed the article but this sentence seems out-of-place:
The main takeaway from the anecdotes seems to be that “real-world problem X is actually very similar to theoretical CS problem Y”.
I agree that the article employed poor rhetoric. I didn’t really understand what I wanted to argue, if anything at all. And I agree the main value is the fun anecdote (see my comment here).
I think you’re being a little hard on yourself. It doesn’t sound like it now but ten years from now you’ll look back on this blog post and find that your words failed you in that you were trying to get at something a little deeper than you could articulate in an anecdote.
Being able to reach out for the right theoretical tool is just the first layer, and the most superficial form, of instrumentalising engineering education. I’m not trying to minimise its importance, it’s just not the final form of this skill yet :).
The bigger deal, which your anecdote captures only indirectly, is the ability to formulate a viable, formal model for an informal problem whose vital features aren’t immediately obvious. Formal CS education is not the only way to hone this skill; but it helps, and it gives you very adequate tools for it. The real skill isn’t recognising that a real world problem sounds suspiciously like a theoretical CS problem. The real skill is formulating a theoretical CS problem that’s suspiciously like a real-world problem.
That's why "will I ever use this in real life" is the wrong way to look at it. You spend, what, four years in university, maybe a few more with a master's. If six years are all it takes to learn everything you'll need to apply for the next forty years of your career, modulo things like the version control tool du jour, that's going to be the most boring career ever. Quit while you still can!
Those four years are valuable as a tour of what lies ahead – what you don't know, and what you'll spend the next forty years of your life trying to figure out.
There is an inherent risk that a good chunk of what you’ll learn in uni is going to be useless. That’s true for any educational format. I thought about half of what I learned was going to be completely useless – I was mostly right about the percentage but holy shit did I get the distribution completely wrong.
This is a pretty cool glossary, frankly. Not sure this needs to be tagged as satire really.
https://www.lenovo.com/us/en/glossary/
The Lenovo stock inside my head went up a bit.
I thought it was a quality glossary too! Re: satire, I erred on the side of caution because the meaning of PEBKAC itself is quite jestful.
It's pretty weird what they have in it: https://www.lenovo.com/us/en/glossary/minecraft-mods/ - feels like an SEO thing?
Now that you mention it, yeah, I guess it could be. Although the content is certainly not beyond the abilities of an LLM, Googling some sentences from that page suggests that it's original content – so I guess a human wrote these?