1. -1

    Eventually we will stop investing in chemical rocketry and do something really interesting in space travel. We need a paradigm shift; chemical rockets are a dead end.

    1. 7

      I can’t see any non-scifi future in which we give up on chemical rocketry. Chemical rocketry is really the only means we have of putting anything from the Earth’s surface into Low Earth Orbit, because the absolute thrust needed for that is very high compared to what you’re presumably alluding to (electric propulsion, lasers, sails), which only works once you’re in space, where you can do useful propulsion orthogonally to the local gravity gradient (or just in weak gravity). But getting to LEO is still among the hardest bits of any space mission, and getting to LEO gets you halfway to anywhere in the universe, as Heinlein said.

      Beyond trying to reuse the first stage of a conventional rocket, as SpaceX are doing, there are some other very interesting chemical technologies that could greatly ease space access, such as the SABRE engine being developed for the Skylon spaceplane. The only other way I know of that’s not scifi (e.g. space elevators) is nuclear thermal rockets, in which a working fluid (like hydrogen) is heated by a fissioning core and accelerated out of a nozzle. The performance is much higher than chemical propulsion’s, but the appetite to build and fly such machines is understandably very low, because an explosion on ascent or a breakup on reentry would spread a great deal of radioactive material through the high atmosphere over a very large area.

      But in summary, I don’t really agree with your point (or, more charitably, don’t think I’ve understood it) and would be interested to hear what you actually meant.

      1. 3

        I remember being wowed by Project Orion as a kid.

        Maybe Sagan had a thing for it? The idea in that case was to re-use fissile material (after making it as “clean” as possible to detonate) for peaceful purposes instead of for military aggression.

        1. 2

          Atomic pulse propulsion (i.e. Orion) can theoretically reach 0.1c, which puts the nearest star about 40 years away (Proxima Centauri is roughly 4.2 light-years out, and 4.2/0.1 ≈ 42 years). If we can find a source of fissile material in the solar system that doesn’t have to be launched from Earth, and refine it there, interstellar travel could really happen.

          1. 1

            The moon is a candidate for fissile material: https://www.space.com/6904-uranium-moon.html

        2. 1

          The problem with relying on a private company funded by public money, like SpaceX, is that they won’t be risk takers; they will squeeze every last drop out of existing technology. We won’t know what reasonable alternatives could exist because we are not investing in researching them.

          1. 2

            I don’t think it’s fair to say SpaceX won’t be risk takers, considering this is a company that has almost failed financially pursuing its vision, and has very ambitious goals for the next few years (which, I should mention, require tech development/innovation and are risky).

            Throwing money at research doesn’t magically create new tech; intelligent minds do. Most of our revolutionary advances in tech have been brainstormed without public or private funding: one or more people have had a bright idea and pursued it. This isn’t something people can just do on command. It’s also important to consider that people who fail to bring their ideas to fruition have still paved the path for others’ future development.

            1. 1

              I would say that they will squeeze everything out of existing approaches; «existing technology» sounds a bit too narrow. And unfortunately, improving the technology by combining well-established approaches is a stage that cannot be too cheap, because they do need to build and break full-scale vehicles.

              I think that the alternative approaches for getting from inside atmosphere into orbit will include new things developed without any plans to use them in space.

          2. 2

            What physical effects would be used?

            I think that relying on some new physics, or on contiguous objects a few thousand kilometers in size more than 1 km above the ground, is not just a paradigm shift; anything like that would be nice, but its absence doesn’t make what currently exists a disappointment.

            The problem is that we want to go from «immobile inside the atmosphere» to «very fast above the atmosphere». By continuity, this needs to pass either through «quite fast in the rarefied upper atmosphere» or through «quite slow above the atmosphere».

            I am not sure there is a currently known effect that would allow hovering above the atmosphere without orbital speed.
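            For scale, a quick back-of-the-envelope with textbook values (my numbers, not the parent’s): a circular orbit just above the atmosphere, at roughly 400 km altitude, needs

              v = \sqrt{GM/r} \approx \sqrt{3.99 \times 10^{14} / 6.77 \times 10^{6}}\ \mathrm{m/s} \approx 7.7\ \mathrm{km/s}

            so anything staying up there without continuous thrust must be moving sideways at about that speed; «hovering» means spending energy against the full gravity gradient indefinitely.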

            As for accelerating through the atmosphere — and I guess chemical air-breathing jet engines don’t count as a move away from chemical rockets — you either need to accelerate the gas around you, or need to carry reaction mass.

            In the first case, as you need to overcome drag, some of the air you push back has to end up flying backwards relative to Earth. So you need to accelerate some amount of gas to multiple kilometers per second; I am not sure there are any promising ideas for hypersonic propellers, especially in a rarefied atmosphere. I guess once you reach the ionosphere, something large and electromagnetic could work, but there is a gap between the height where anything aerodynamic has flown (actually, a JAXA aerostat, so maybe «aerodynamic» is the wrong term) and the height where ionisation starts rising. So it could be feasible or infeasible, and maybe a new idea would first have to be developed for some kind of in-atmosphere transportation.

            And if you carry your reaction mass with you, you then need to eject it fast. Presumably, you would want to make it gaseous and heat it up, and you want high throughput. I think that even if you assume you have a lot of electrical energy, splitting water into hydrogen and oxygen, liquefying these, then burning them in flight is actually pretty efficient. But then the vehicle itself will be a chemical rocket anyway, and will use chemical rocket engineering as practiced today. Modern methods of isolating nuclear fission from the atmosphere via double heat exchange reduce throughput. Maybe some kind of nuclear fusion with electromagnetic redirection of the heated plasma could work, and maybe it could even be more efficient than running a reactor on the ground to split water, but nobody yet knows at what scale energy-positive nuclear fusion can run.

            All in all, I agree there are directions that could maybe become a better idea for starting from Earth than chemical rockets, but I think there are many scenarios where the current development path of chemical rockets will be more efficient to reuse and continue.

            1. 2

              What do you mean by “chemical rockets are a dead end”? In order to escape planetary orbits, there really aren’t many options. However, for travel beyond Earth orbit, ion drives and solar sails have already been tested and deployed, and they have strengths and weaknesses. So there are multiple options here depending on the use case.

              1. 1

                Yeah right after we upload our consciousness to a planetary fungal neural network.

              1. 4

                I initially thought this would be one of those blog posts that were in vogue a few years ago on HN (brand-conscious minimalism, meditation), but the overriding theme is the benefit of eating less and moving more, and the positive effects of this for him and people close to him. I have had a similar experience, though I work in an office, mostly from getting a sit-stand desk and being a bit more organised and strict with my food shopping and meal prep. It didn’t take much to change the sign of my mass derivative from small +ve to small -ve, and the effect that’s had over 18 months has been transformative.

                1. 37

                  The “downsides” list is missing a bunch. I mean, I use Makefiles too, probably too much, but they do have some serious downsides, e.g.

                  • The commands are interpreted first by make, then by $(SHELL), giving some awful escaping at times
                  • If you need to do things differently on different platforms, or package things for distros, you pretty quickly have to learn autoconf or even automake, which adds greatly to the complexity (or reinvent the wheel and hope you didn’t forget some edge-case with DESTDIR installs or whatever that endless generated configure script is for)
                  • The only way to safely (i.e. parallel-safely) do multiple outputs is by using the GNU pattern match extension, which is extremely limited (rules with multiple inputs to multiple outputs are hard to write without lots of redundancy) – see the sketch after this list
                  • GNU make 4 has different features from macOS’s (pre-GPLv3) make 3.81, which has different features from the various BSD makes
                  • You really have to understand how make works to avoid doing things like possibly_failing_command | sed s/i/n/g > $@ (which will create $@ and trick make into thinking the rule succeeded because sed exited with 0 even though the first command failed). And do all your devs know how to have multiple goals that each depend on a temp dir existing, without breaking -j?

                  and there’s probably lots more. OTOH, make been very useful to me over the years, I know its quirks, and it’s available on all kinds of systems, so it’s typically the first thing I reach for even though I’d love to have something that solves the above problems as well.
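                  On the multiple-outputs and temp-dir points, for what it’s worth, a minimal sketch of the usual idioms (GNU make; the file names here are made up, and recipe lines start with a tab as usual):

                    # A pattern rule with several targets runs its recipe once
                    # for the whole group, so it stays correct under -j:
                    %.tab.c %.tab.h: %.y
                    	bison -d $<

                    # Order-only prerequisite (right of the |): every object wants
                    # the directory to exist, but its timestamp never triggers rebuilds:
                    build/%.o: %.c | build
                    	$(CC) -c -o $@ $<

                    build:
                    	mkdir -p $@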

                  1. 14

                    Your additional downsides make it sound like maybe the world needs a modern make. Not a smarter build tool, but one with fewer 40-year-old-Unix design sensibilities: a nicer, more robust language; a (small!) handful of missing features; and possibly a library of common functionality to limit misimplementations and cut down on the degree to which every nontrivial build is a custom piece of software itself.

                    1. 7

                      mk?

                      1. 3

                        I’ve also thought of that! For reference: https://9fans.github.io/plan9port/man/man1/mk.html

                      2. 10

                        I think the same approach as Oil vs. bash is necessary: writing something highly compatible with Make, separating the good parts and bad parts, and fixing the bad parts.

                        Most of the “make replacements” I’ve seen make the same mistake: they are better than Make with respect to the author’s “pet peeve”, but worse in all other dimensions. So “real” projects that use GNU Make, like the Linux kernel, Debian, Android, etc., can’t migrate to them.

                        To really rid ourselves of Make, you have to implement the whole thing and completely subsume it. [1]

                        I wrote about Make’s overlap with shell here [2] and some general observations here [3], echoing the grandparent comment – in particular how badly Make’s syntax collides with shell.

                        I would like for an expert in GNU Make to help me tackle that problem in Oil. Probably the first thing to do would be to test if real Makefiles like the ones in the Linux kernel can be statically parsed. The answer for shell is YES – real programs can be statically parsed, even though shell does dynamic parsing. But Make does more dynamic parsing than shell.

                        If there is a reasonable subset of Make that can be statically parsed, then it can be converted to a nicer language. In particular, you already have the well-tested sh parser in OSH, and parsing Make’s syntax is 10x easier than that. It’s basically the target line, indentation, and $() substitution, and then some top-level constructs like define, if, include, etc.
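                        To make that concrete, here’s a toy file using only that core (illustrative, not from any real project):

                          # Statically parseable: assignments, $() substitution,
                          # conditionals/includes, and target lines with tab recipes.
                          CFLAGS = -O2 -Wall

                          ifeq ($(DEBUG),1)
                            CFLAGS += -g
                          endif

                          %.o: %.c
                          	$(CC) $(CFLAGS) -c -o $@ $<

                          include rules.mk

                        The murky part is anything constructed at parse time with $(shell …) or $(eval …), which is presumably where static parsing would have to give up.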

                        One way to start would be with the “parser” in pymake [4]. I hacked on this project a little. There are some good things about it and some bad, but it could be a good place to start. I solved the problem of the Python dependency by bundling the Python interpreter. Although I haven’t solved the problem of speed, there is a plan for that. The idea of writing it in a high-level language is to actually figure out what the language is!

                        The equivalent of “spec tests” for Make would be a great help.

                        [1] https://lobste.rs/s/ofu5yh/dawn_new_command_line_interface#c_d0wjtb

                        [2] http://www.oilshell.org/blog/2016/11/14.html

                        [3] http://www.oilshell.org/blog/2017/05/31.html

                        [4] https://github.com/mozilla/pymake

                        1. 6

                          Several more modern make-style tools exist - e.g. ninja, tup and redo.

                          1. 2

                            We need a modern make, not make-style tools. It needs to be mostly compatible so that someone familiar with make can use “modern make” without learning another tool.

                            1. 8

                              I think anything compatible enough with make to not require learning the new tool would find it very hard to avoid recreating the same problems.

                          2. 2

                            The world does, but

                            s/standards/modern make replacements/g

                          3. 5

                            Do most of these downsides also apply to the alternatives?

                            The cross-platform support of grunt and gulp can be quite variable. Grunt and gulp and whatnot have different features. The make world is kinda fragmented, but the “not make” world is pretty fragmented, too.

                            My personal experience with the JavaScript ecosystem is nil, but during my foray into Ruby I found tons of Rakefiles that managed to be Linux-specific, or Mac-specific, or whatever, but definitely not universal.

                            1. 5

                              I recommend looking at BSD make as its own tool, rather than ‘like gmake but missing this one feature I really wanted’. It does a lot of things people want without an extra layer of confusion (automake).

                              Typical bmake-only makefiles rarely include shell script fragments piping output around; instead they will use ${VAR:S/old/new/} or match contents with ${VAR:Mmything*}. In conditionals you can use empty() (for strings) or exists() (for files).

                              Deduplication is good, and good mk fragments exist; here’s an example done with bsd.prog.mk. This one’s from pkgsrc, which is a package manager written primarily in bmake.
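                              To give a flavour, a minimal bmake sketch (all names invented):

                                # Variable modifiers instead of shell pipelines:
                                SRCS=     main.c util.c my_extra.c
                                OBJS=     ${SRCS:.c=.o}          # suffix replacement
                                RENAMED=  ${SRCS:S/my_/our_/}    # substitution per word
                                MINE=     ${SRCS:Mmy_*}          # keep words matching a glob

                                .if exists(/usr/local)
                                PREFIX=   /usr/local
                                .endif

                                show:
                                	@echo ${OBJS} ${RENAMED} ${MINE} ${PREFIX}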

                              1. 2

                                Hey! Original author here :). Thanks a bunch for this feedback. I’m pretty much a Make noob still, so getting this type of feedback from folks with more experience is awesome to have!

                                1. 2

                                  You really have to understand how make works to avoid doing things like possibly_failing_command | sed s/i/n/g > $@ (which will create $@ and trick make into thinking the rule succeeded because sed exited with 0 even though the first command failed).

                                  Two things you need to add to your Makefile to remedy this situation:

                                  1. SHELL := bash -o pipefail. Otherwise, the exit status of a shell pipeline is the exit status of its last element; with pipefail, the pipeline fails if any element fails (you get the status of the rightmost failing command). ksh would work here too, but the default shell for make, /bin/sh, won’t cut it – it lacks pipefail.
                                  2. .DELETE_ON_ERROR:. This is a GNU Make extension that causes failed targets to be deleted. I agree with @andyc that this behavior should be the default. It’s surprising that it isn’t.
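                                  Putting both together, a minimal sketch (possibly_failing_command is the placeholder from the comment above):

                                    SHELL := bash -o pipefail   # a failing element now fails the pipeline
                                    .DELETE_ON_ERROR:           # remove the target if its recipe fails

                                    out.txt: in.txt
                                    	possibly_failing_command < $< | sed s/i/n/g > $@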

                                  Finally, for total safety you’d want make to write to .$@.$randomness.tmp and use an atomic rename if the rule succeeded, but afaik there’s no support in make for that.
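                                  You can approximate it by hand in the recipe, though (a sketch; note a stray temp file can linger if the command fails):

                                    out.txt: in.txt
                                    	tmp=$$(mktemp $@.XXXXXX) && possibly_failing_command < $< > $$tmp && mv $$tmp $@

                                  Since mv within one filesystem is atomic, readers never observe a half-written $@.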

                                  So yes, “you really have to understand how make works [to avoid very problematic behavior]” is an accurate assessment of the state of the things.

                                  1. 1

                                    Your temp-directories dependency problem makes me think a GUI to create and drag-and-drop your rules around could be useful. It could have “branching” and “merging” steps to indicate parallelism and joining, too.

                                  1. 4

                                    I remember asking someone if he used OCaml for an ultra-high-reliability system (with hard deadlines) that he was building, and he said that he preferred to use C, proven when possible. If you use OCaml, you may be less likely to have errors in user code (or, more precisely, some very low user error rate deemed acceptable is achieved more quickly in OCaml than in C), but you’re also sitting on top of a very good but complex runtime. Often, such projects don’t even like to use dynamic memory management, and malloc/free is a much simpler runtime than what OCaml or Haskell provides.

                                    I think that functional languages are very good when you need regular high reliability (e.g. six 9s, not “never goes down”) and your time budget supports a merely-long thorough project, but when you need near-absolute reliability and have an almost unlimited time/resource budget, proven C (or, at least, C that has been checked and discussed and viewed by many pairs of eyes, as I’m not aware of formally-proven numerical algos being a thing beyond trivial cases) still wins.

                                    1. 8

                                      I’m not sure about your analysis on the functional side for most high-integrity niches. The C one is right but incomplete. The proof is that the vast majority of safety-critical products are done in the C language. A niche of them are done in Ada and embedded Java for more safety or maintainability.

                                      The C preference comes from a combo of available talent, available compilers, tons of tools for verifying C in various ways, and standardized subsets of C like MISRA that work well with top tools. So, it’s actually easy to eliminate almost all coding errors with the amount of talent and tooling they put into those products. Most failures are bad specs or requirements.

                                      Still haven’t learned functional programming, but I follow what its practitioners say. OCaml and Haskell are popular in non-real-time apps for easily boosting QA and maintainability. In high-assurance, they’re great for verified, reference implementations since they work well with provers. What seems lacking is predictability of execution patterns (e.g. real-time), simple/zero runtimes like C/Ada/RT-Java, easy manipulation of bits, easy interrupt handling, tooling for analysis/testing like C/Ada has, great IDEs, or certifying compilers. I’ve seen pieces of each in various work, but most of this needs to get integrated before they’ll get used widely in high-assurance or do six 9s in real-world situations.

                                      Erlang is closest to goal given its capabilities plus successful deployments in real-time, high-integrity applications.

                                      1. 6

                                        On this topic, I found this video very interesting. It is a talk given by Gerard Holzmann, head of the Lab for Reliable Software at JPL, author of the Spin model checker, and head of the team that wrote the software for the Curiosity Mars rover. The talk addresses how they wrote it, with particular emphasis on automated checking and static analysis to help them with code reviews, and he touches on the ‘why C?’ question:

                                        https://vimeo.com/84991949

                                        1. 4

                                          That was a nice vid. The part that jumped out at me, aside from the picture at the end, was the triaging of bugs when they were getting overloaded. That was fine. Then he said that when there were hardly any bugs left, because the team was doing a great job, he would turn a knob to hit them with more. This was to keep them from getting too comfortable. It came off as both a wise precaution and a cruel reward for progress. ;)