By this I mean books on any topic of computer science or mathematics (ideally computer science adjacent mathematics) that opened up a new way of thinking about problems and exposed you to new ideas.
Some of my recommendations:
Why Programs Fail, Andreas Zeller
Introduction to the Analysis of Algorithms, Flajolet and Sedgewick
The Art and Craft of Mathematical Problem Solving, Paul Zeitz
+1 for Data and Reality, but the first edition is better, imo. IIRC, the second edition is posthumous, with a few chapters removed, and interspersed with comments from another author, which I found more distracting than enlightening. Anyway, a great read, highly recommended, and mandatory if you’re into data modeling!
IIRC, the second edition is posthumous, with a few chapters removed, and interspersed with comments from another author, which I found more distracting than enlightening.
You’re thinking of the third edition. As Hillel wrote in that newsletter post, second edition was by Kent only.
This is wildly off topic, but I guess I’m going to shoot my shot.
Once more: we are not modeling reality, but the way information about reality is processed, by people. — Bill Kent
I’ve been wondering for a while if the confusion about quantum physics in public discourse is because quantum physics is about modeling physics at scales where measurement affects what is measured, and therefore the observer is inextricably linked to the observation.
Is the probabilistic nature of quantum physics just modeling that accounts for the large relative size of the tools of observation relative to the object of observation? (E.g., when you observe a billiard ball with light, the mass of the light is far smaller than the mass of the billiard ball, so it doesn’t alter the speed or positioning of the ball. If you observed a billiard ball with another billiard ball, it would affect the speed or positioning of the observed billiard ball.) Or is there something more fundamental about the nature of the model that I’m misunderstanding? Does “objective reality” exist beyond the capabilities of the model?
(That last question is squishy as hell.)
Nobody that I’ve talked to who’s into quantum physics seems to know much about it except as a vehicle to support their brand of metaphysics, and I don’t know how to parse what you find on the internet. (Of course, the only person I personally know who I consider to be qualified to speak on the subject, a graduate-level physics student, said she really didn’t know that much about it.)
Also, I’m semi-afraid I won’t understand the answers that I receive.
Coincidentally enough, I have a degree in physics, though I haven’t looked at it in a while, so this answer is based on a nine-year-old memory:
It’s not a measurement error, the information literally doesn’t exist at that level of granularity. At a quantum level, particles are also waves, right? The energy of a wave correlates with its frequency, while the energy of a particle is in its kinetic energy, which correlates with momentum. The position of the particle correlates with the peaks and troughs of the wave: if the wave peaks at a certain point, there’s a higher probability the particle “is there”.
Now, to have a precise frequency, you need a lot of wavelengths, so there’s a lot of places the particle can be. If you add together lots of different waves with different frequencies, you can get a sum wave with a single enormous peak, and the particle is localized. But now there’s many different frequencies, so the kinetic energy is uncertain.
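To make that picture concrete, here’s a rough numerical sketch (my own toy illustration, with made-up numbers, not anything from the comment above): averaging many nearby frequencies produces one localized peak, while a single frequency is spread over the whole domain.

```python
import numpy as np

x = np.linspace(-50, 50, 2001)

# One pure frequency: the wave extends over the whole domain.
single = np.cos(2 * np.pi * 0.5 * x)

# Average of 200 nearby frequencies: the components cancel almost everywhere
# and reinforce near x = 0, producing a single localized peak.
freqs = np.linspace(0.4, 0.6, 200)
packet = np.cos(2 * np.pi * np.outer(freqs, x)).mean(axis=0)

def weight_near_zero(psi, radius=2.0):
    # Fraction of the squared amplitude (the "where is it" weight) near x = 0.
    p = psi ** 2
    return p[np.abs(x) < radius].sum() / p.sum()

print(weight_near_zero(single))  # small: one frequency, wave is everywhere
print(weight_near_zero(packet))  # much larger: many frequencies, localized packet
```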
Mathematically, any physical observation corresponds to a transformation of the wave vector. And the position and momentum transformations don’t commute, so mathematically measuring position-then-momentum must give a different result than measuring momentum-then-position.
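For reference, the textbook way of writing the relations behind the last two paragraphs (my addition, standard notation, not part of the original comment) is

$$E = \hbar\omega, \qquad p = \hbar k, \qquad \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar \;\Rightarrow\; \Delta x \, \Delta p \ge \frac{\hbar}{2}.$$

The non-zero commutator is exactly the “measurements don’t commute” statement, and the uncertainty bound follows from it.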
It’s all weird and messy, I know, and I’m not doing a good job explaining it. The important thing is that this all holds regardless of how we measure it. And it leads to all sorts of weird consequences like quantum tunneling: if you put a barrier in front of a particle and measure the momentum very precisely, the position can get uncertain enough that it could be on the other side of the barrier.
First off, thank you for taking the time to respond to this! I don’t think your reply was weird or messy at all. If you want weird and messy, keep reading!
Now, to have a precise frequency, you need a lot of wavelengths, so there’s a lot of places the particle can be. If you add together lots of different waves with different frequencies, you can get a sum wave with a single enormous peak, and the particle is localized. But now there’s many different frequencies, so the kinetic energy is uncertain.
This is great and I think it is core to my question. You are talking about how there is a probabilistic way to calculate the frequency, which, if I’m understanding correctly, means you can determine where the particle is as it is traveling down the path of the wave. So this is a model of the behavior of the particle.
However, it makes intuitive sense to me that the actual, objective location of any particular particle isn’t probabilistic. (It might, however, be completely unknowable.) It just exists at some point in space at some particular time. Instead, we are using the model as a way to understand the behavior of a particle. Which is why the quote above sparked my question: are we modeling reality or the way we think about reality?
You also said this:
It’s not a measurement error, the information literally doesn’t exist at that level of granularity.
Which might imply that my intuition is wrong and at the physical level particles fundamentally don’t behave the way I think they should. Or maybe you mean that the information is unknowable because of how we model the problem.
And it leads to all sorts of weird consequences like quantum tunneling: if you put a barrier in front of a particle and measure the momentum very precisely, the position can get uncertain enough that it could be on the other side of the barrier.
Again, is this an issue with the model or with reality? Are we essentially dealing with mathematical errors in the model that build up, but the particles actually have smooth trajectories in space? Or is there something more fundamentally random occurring?
I know that at a certain point we are dealing with questions so far beyond concrete human abilities that, as far as we can tell, the model is reality, so maybe this is really a philosophical point. Furthermore, I hope that my line of reasoning doesn’t feel totally unreasonable and unsatisfiable.
Mathematically, any physical observation corresponds to a transformation of the wave vector. And the position and momentum transformations don’t commute, so mathematically measuring position-then-momentum must give a different result than measuring momentum-then-position.
That’s a neat detail!
However, it makes intuitive sense to me that the actual, objective location of any particular particle isn’t probabilistic. (It might, however, be completely unknowable.) It just exists at some point in space at some particular time. Instead, we are using the model as a way to understand the behavior of a particle. Which is why the quote above sparked my question: are we modeling reality or the way we think about reality?
That’s the crazy part: when it’s not being interacted with, the particle is a wave. The particle is nowhere and the wave is everywhere. If something, anything, then interacts with the wave, it collapses back into being a particle, with either a more-defined position and a less-defined velocity or a more-defined velocity and less-defined position. It’s weird and wonderful and makes no sense whatsoever, but it’s also what’s borne out in experiment after experiment. The universe just doesn’t care about what makes sense to humans.
@wizeman already mentioned the double-slit experiment, where a series of individual particles diffracts just like waves do. But there are other, wilder experiments, like quantum bomb-testing. If a live bomb is guaranteed to be set off by a single photon, you can cleverly exploit wave interference to determine whether a bomb is live or not without setting it off (50% of the time).
People have tried to reconcile QM with human intuition, and a lot of those attempts fall under what we call Bell experiments, which look for more “rational” explanations. In the end, the weird, unintuitive QM rules always win.
However, it makes intuitive sense to me that the actual, objective location of any particular particle isn’t probabilistic. (It might, however, be completely unknowable.) It just exists at some point in space at some particular time.
Wouldn’t that contradict the observations of the double-slit experiment? Especially the ones that were performed by firing a single photon/electron/atom/molecule at a time.
I don’t think I made this completely clear before, but I don’t mean that something making intuitive sense to me makes it correct.
Is the probabilistic nature of quantum physics just modeling that accounts for the large relative size of the tools of observation relative to the object of observation? … Does “objective reality” exist beyond the capabilities of the model?
This is dealt with by the field ‘foundations of quantum mechanics,’ and the experiments are fascinatingly subtle. It turns out that not only is there no such thing as negligible measurement, there isn’t a value of an observable beforehand. We generally get at that via measurements on entangled systems that we have manipulated in some clever way.
If you really don’t want probability in the underlying theory, the usual route is to go to Everett’s interpretation, which does away with a classical observer entirely. In that case the random chance we see is because we perceive ourselves as on one lobe of a bifurcating wave function.
If you really don’t want probability in the underlying theory, the usual route is to go to Everett’s interpretation, which does away with a classical observer entirely. In that case the random chance we see is because we perceive ourselves as on one lobe of a bifurcating wave function.
Hence my fear of not understanding the replies!
Oh, no! My fears have been realized!
My issue isn’t really with having probability in the theory.* It’s that when people talk about quantum physics I can’t tell if they are talking about the model of quantum physics or how the world actually works. I know that might sound crazy: theory is supposed to be about how the world works! But sometimes theory is an incomplete picture of how the world works. It’s a particular view that solves particular problems. That’s not necessarily how theory is always talked about and it may be completely inaccurate here.
You have all sorts of pop sci articles that claim that objective reality doesn’t exist. I can’t tell how seriously I should take such claims. Are they misinterpreting experiments that demonstrate the defects of the model or is there actually a deeper more fundamental question being raised here?
I don’t want to sit here and ignore what might be very fundamental deviations from how I view the world, but I also feel the need to clarify how I should interpret these claims before accepting them.
* I don’t really have an issue here besides my own ignorance.
talking about the model of quantum physics or how the world actually works
Given we haven’t (to my knowledge) got any single theory that describes all observed phenomena (from small to large scales), I assume that anything other than experimental results is “about the model” (which is not to dismiss it!).
I read a bit from the quantum foundations Wikipedia article. It seems like all of this is still under debate and I’m not crazy.
From the wiki page:
A physical property is epistemic when it represents our knowledge or beliefs on the value of a second, more fundamental feature. The probability of an event to occur is an example of an epistemic property. In contrast, a non-epistemic or ontic variable captures the notion of a “real” property of the system under consideration.
There is an on-going debate on whether the wave-function represents the epistemic state of a yet to be discovered ontic variable or, on the contrary, it is a fundamental entity.
Thank you @madhadron for giving me the right field to look under. Thank you @hwayne for entertaining my questions. Apologies to @teymour for so thoroughly derailing the conversation.
The pop sci articles are hard to interpret because they’re trying to bring something back to colloquial English that is deeply unfamiliar. But the “objective reality does not exist” parts are some of the experimental facts that any theory has to account for rather than quirks of a particular theory.
Ideas That Created the Future is a collection of 46 extremely influential computer science papers. My favorites in here include:
Mathematical Problems (1900) - David Hilbert
On Computable Numbers, with an Application to the Entscheidungsproblem (1936) - Alan Turing
Computing Machinery and Intelligence (1950) - Alan Turing
The Perceptron (1958) - Frank Rosenblatt
The Unix Time-Sharing System (1974) - Dennis Ritchie & Kenneth Thompson
Quantum Computing Since Democritus by Scott Aaronson was pretty detailed, funny, and eye-opening in a lot of ways. Scott’s style of writing is, imo, perfect for communicating the insanely technical topics this book introduces. If you’re at all interested in quantum computing but have 0 experience, this is a good introduction to the ideas that brought qc into existence.
The Little Typer is a lovely introduction to dependent type theory. The entire book is framed as a conversation, teaching you how to build programs using a little lisp-like language.
The Art of Computer Programming, Knuth
The Third Manifesto, Date and Darwen
Specifying Systems, Lamport
Types and Programming Languages, Pierce
Homotopy Type Theory, The Univalent Foundations Program
The Calculi of Lambda-Conversion, Church
Out of all these, Specifying Systems really hit me in a way that’s stuck with me deeply. After the first chapter, I felt like I understood software on a whole new level.
Program = Proof, Samuel Mimram
Program Analysis (An Appetizer), Nielson & Nielson
Formal Reasoning about Programs, Adam Chlipala
The other books by Nielson & Nielson are very much worth looking into. They make up a nice sequence, but only this one is free. Likewise, Certified Programming with Dependent Types, also by Adam Chlipala, is worth a look; I did not list it above as I feel it is much better known. I feel FRAP is really good for beginners, plus it offers some nice and practical insights which are missing from e.g. Concrete Semantics.
+1 to Program = Proof. Samuel introduces things in a very accessible way. Section 4.4 is especially gold as a reference for implementing unification-based type systems.
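In case it helps anyone reading along, here is a rough sketch of what such a unification routine looks like (my own generic first-order unification toy in Python, not code from Program = Proof; the type representation is made up for illustration):

```python
# Types are either a type variable (a string starting with "'") or a tuple
# (constructor, arg1, arg2, ...), e.g. ("->", "'a", ("int",)) for 'a -> int.

def is_var(t):
    return isinstance(t, str) and t.startswith("'")

def resolve(t, subst):
    # Follow variable bindings until we reach a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: refuse cyclic solutions such as 'a = 'a -> 'b.
    t = resolve(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, arg, subst) for arg in t[1:])

def unify(t1, t2, subst=None):
    # Return a substitution (dict from variables to types) that makes t1 and t2
    # equal, or None if no such substitution exists.
    subst = {} if subst is None else subst
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if is_var(t2):
        return None if occurs(t2, t1, subst) else {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # constructor mismatch

# Unifying 'a -> int with bool -> 'b binds 'a to bool and 'b to int.
print(unify(("->", "'a", ("int",)), ("->", ("bool",), "'b")))
```

Real implementations layer union-find and type schemes on top, but the core loop has this shape.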
Also in this genre, I recently read The Little Typer, which covers dependent types, and greatly enjoyed it.
The Little Books are pretty great. Even lesser known ones like A Little Java, A Few Patterns or The Little MLer.
It’s interesting that you say that FRAP is good for beginners. I consider myself pretty intermediate in level wrt formal methods, and the wording and notation in FRAP is still way too dense for me. I feel like while it is an overview of a lot of different areas, it’s way too advanced for most people to grok.
In fact, I really wish there were a great book for beginners that covered like, 75% of FRAP in a more digestible way.
Adam recommends Software Foundations as a much slower paced alternative to FRAP. Perhaps the version that is being written by Wadler using Agda is also worth taking into consideration, as Agda is friendlier than Coq.
I really liked FRAP as it answered lots of questions I had concerning practical usage of theorem provers. I did a whole MSc in Formal Methods and our coverage of theorem provers from that angle was suboptimal.
You are right in the sense that FRAP is very dense, like some mathematics books. So it might take an hour or two per page. Sometimes more. It does not require a lot of background, though.
This other book might be a better suggestion as a more reasonable prequel to FRAP: https://link.springer.com/book/10.1007/978-3-030-05156-3. The epilogue points to Certified Programming with Dependent Types as a possible follow-up read. FRAP is a bit less advanced.
Interestingly I find Software Foundations similarly dense / opaque.
An example of a book in a similar vein that I was able to align with is Concrete Semantics. That is pretty close to what I’m talking about as a good beginner book for formal methods. The only thing left to be desired is that it’s very focused on language semantics and not so much an overview of formal methods in general.
Yes, I have gone through Concrete Semantics too. The issue you mentioned annoyed me as there are no good Isabelle followups on practical usage. So I switched to Coq.
I later read that Nipkow & Klein just aimed at formalizing Winskel, a very well-known semantics book, when writing Concrete Semantics. Most results in Winskel are fairly trivial, so Concrete Semantics is a lot of work that doesn’t take you that far.
I don’t have any alternative suggestions aside from the Nielson & Nielson Springer book I linked, which is way simpler and more practical than Concrete Semantics. Perhaps Agda? Some entries here look promising, especially Stump’s book: https://agda.readthedocs.io/en/v2.6.2.1/getting-started/tutorial-list.html. However, I’m not sure Agda is mature enough to be able to do some of the things Chlipala is teaching.
John Carmack recommended “Computer Architecture: A Quantitative Approach” by John L. Hennessy and David A. Patterson to me. Good recommendation.
Also recommended by Graydon!
https://graydon2.dreamwidth.org/233585.html
I came up through BASIC, Java, JavaScript, and other mainstream languages. I took as essential all their trappings like native types, OO patterns, and shared mutable state. Starting over with Structure and Interpretation of Computer Programs has been excellent for clarifying what’s essential to computation and abstraction, versus the tradeoffs made when adding concepts like mutating assignment. I chose the 2nd edition in Scheme, though there is a JS edition too.
The two books that most affected me / inspired me directly are:
The first is about how we should think about and describe computation via state machines, and the second is about how to define programming language semantics in a proof assistant. There is nothing more important than our model of computation, because everything else stems from that. And I agree with Leslie Lamport that state machines are the right model to place at the foundation.
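A minimal sketch of that state-machine view, assuming a Python rendering of the hour-clock example from Specifying Systems (my own loose translation, not TLA+):

```python
# A state is an assignment of values to variables; here just {"hr": ...}.

def init_states():
    # Init: hr can start as any value in 1..12.
    return [{"hr": h} for h in range(1, 13)]

def next_states(state):
    # Next: the clock ticks from hr to (hr % 12) + 1.
    return [{"hr": state["hr"] % 12 + 1}]

def reachable(init, next_fn):
    # Enumerate every state reachable from Init by repeatedly applying Next.
    # This brute-force exploration is essentially what a model checker does.
    seen = set()
    frontier = [tuple(sorted(s.items())) for s in init]
    while frontier:
        s = frontier.pop()
        if s in seen:
            continue
        seen.add(s)
        for t in next_fn(dict(s)):
            frontier.append(tuple(sorted(t.items())))
    return seen

print(len(reachable(init_states(), next_states)))  # 12 reachable clock states
```

The point is only the shape: some variables, an initial condition, and a next-state relation; a system described that way can then be checked against properties over its reachable states.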
Once we have that, our main tools are programming languages, and understanding how they work and what exactly is the semantics of a language is extremely important. Using a proof assistant is just an added benefit, as we can actually have a tool check our work when building complex logical systems. That’s actually invaluable, and I’m extremely regretful that I didn’t learn how to use these tools sooner.