1. 11

    I do this too. I find it really helps me, but I can totally understand how it wouldn’t help others. I like being able to sit just about anywhere, pull up my shell sessions and get lost in writing or design. Sometimes I write code locally using play.js or even iSH.

    I really like how integrated everything in iOS is. I miss having native compilers sometimes, but overall I feel like I really gain a lot more by having a much more minimal device. I get system-wide autocorrect for free. I get a password manager built into the device without having to pay anyone. I can reboot in a minute. I even have a hardware keyboard and can easily type in French, Japanese or Esperanto if I want to. It’s really not for everyone, but I like it all the same.

    1. 1

      Which hardware keyboard do you use? I have not been entirely convinced by the foldable keyboards I tried.

      1. 2

        I use the default smart keyboard. It’s the only one that can handle my typing speed.

        1. 1

          Wow. So you’re touch typing on the touchscreen? Or do I misunderstand?

          1. 2
            1. 1

Are you saying there exist physical keyboards that can’t react fast enough for squishy human fingers typing?

              1. 1

                Yes, sadly.

                1. 1

I’ve sometimes had this issue with the Magic Keyboard while connected to and playing music on a Bose QC35ii. The solution for me is to plug the keyboard in for a short while (which is easy thanks to USB-C on my iPad Pro) and then unplug it.

                2. 1

                  Ah, so it is physical. Thanks for the pic.

        1. 2

          So-named because normal Kalman filters stink, according to the creator of the UKF. I don’t know, I use the EKF a lot in state estimation problems and it works gratifyingly well.

          1. 0

Here’s a much simpler, more intuitive way…

            Take a bucket (cylindrical / vertical walls).

            Make a small hole in the bottom.

Pour water into the bucket (just don’t overflow it) at a varying rate.

            The rate that water flows out is proportional to the water level in the bucket.

Lo and behold, you have a Kalman filter.

If your input is a Dirac delta… (i.e., take another bucket and dump the whole thing all at once into your leaky bucket), the output flow rate is a decreasing exponential.
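(A toy simulation of exactly that, if you want to watch it happen; the hole coefficient, time step and impulse size are all made-up numbers, just enough to show the behaviour:)

# Toy leaky bucket: outflow is proportional to the water level.
# k (hole size), dt and the impulse are arbitrary illustration values.
k, dt = 0.5, 0.1
level = 0.0
for step in range(80):
    inflow = 10.0 if step == 0 else 0.0       # "dump a whole bucket at once" impulse
    level += (inflow - k * level) * dt        # Euler step of d(level)/dt = in - k*level
    if step % 10 == 0:
        print(f"t={step * dt:4.1f}  level={level:7.4f}  outflow={k * level:7.4f}")

The printed outflow decays exponentially after the impulse, as described above.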

            When somebody says “Kalman filter” at you, shrug and say “leaky bucket” back at them.

            1. 3

              Not relevant to Kalman filters at all, but I got nerd-sniped by:

              The rate that water flows out is proportional to the water level in the bucket.

              That’s intuitively how it works, but surprisingly not true once you have more than a little bit of water pressure - more pressure will not make the water flow faster.

              The outflow rate is governed by the speed of sound in your medium (water, in this case) and the size of the hole, and no amount of pressure will increase it (until it tears a bigger hole in the bucket).

              As the maximum flow rate is reached, water molecules hitting the side of the outflow hole and bouncing back create a standing wave with an equal (sans the maximum flow pressure), opposite force to that exerted by the internal pressure.

              1. 3

Certainly it’s a “linear approximation” to reality… and you can, as you observe, easily wander out of that linear region (higher pressures, bucket overflowing, differently shaped buckets…).

…but the main appeal of Kalman filters is their simplicity and hence their amenability to analysis.

i.e. We don’t necessarily use them because they are an accurate representation of reality, but because they’re sufficiently simple that we can reason about them.

The other very common place they are used is in analyzing resistor/capacitor circuits… but again, at too high a voltage or too high a frequency or with low-quality components… you rapidly get out of the linear region and shit happens.

                (Partly that’s a recursive definition… a “Good” component is one that has a “largish” linear region not because it’s better, but because we know how to reason about it.)

                All of which is why I like the “leaky bucket” analogy… it makes it clear that this is not deep magic worthy of arcane jargon. It’s the simplest thing we can make headway reasoning about.

                1. 1

                  Oh yeah, wasn’t trying to argue about kalman filters, just reminded me of a fascinating (to me) bit of physics.

                  1. 1

                    No problem…

                    I remember my fluid mechanics lecturer talking about “dry” water….

                    Incompressible, irrotational laminar flow…..

                    Completely unlike real water… but the stuff we could make some progress on analyzing!

                    Real fluids have all kinds of weird and exotic (and fun) behaviours.

              2. 2

Post author here, could you elaborate on this? I fail to see the link between the two.

                1. 2

                  You are right to not see the link, there isn’t one in any meaningful sense, the grandparent has made an analogy to a single-pole low pass filter (an RC filter), not a Kalman Filter. It will only confuse you if you’re trying to understand the Kalman Filter.

                  1. 1

From the Wikipedia article…

                    https://en.wikipedia.org/wiki/Kalman_filter#Underlying_dynamical_system_model

The Kalman filter model assumes the true state at time k is evolved from the state at (k − 1) according to

x_k = F_k x_{k−1} + B_k u_k + w_k

where

• F_k is the state transition model, which is applied to the previous state x_{k−1};
• B_k is the control-input model, which is applied to the control vector u_k;
• w_k is the process noise, which is assumed to be drawn from a zero-mean multivariate normal distribution, w_k ~ N(0, Q_k).

                    Now let’s pare that way way down to the basics….

                    Assume the system we’re looking at isn’t changing, just the inputs and the outputs…

                    So now F doesn’t depend on k.

Assume we’re just dealing with one dimension now, so these are just plain values, not vectors and matrices. (I’d argue making it multidimensional doesn’t alter the intuition, merely the complexity.)

Now call the control-input term, “B_k applied to the control vector u_k”, “the amount of water we put into the bucket” at step k.

Now call F x_{k−1} the amount we’d have in the next step assuming we didn’t add any water.

If F is 1… we’re at steady state; the hole is plugged.

Obviously if F is > 1, water is coming in through the hole and the bucket is going to get exponentially fuller. In dynamical control situations we’d say this thing is unstable: “She’s gonna Blow!”

Obviously if F is < 1, water is leaking out of the hole and the bucket is going to get exponentially emptier.

If F is < 0… things get pretty weird, oscillating on each step.

So how much is leaking out in each step? x_{k−1} − x_k = delta = x_{k−1} − F x_{k−1} = (1 − F) x_{k−1}

i.e., as I said, the amount leaking out of the bucket is proportional to the amount inside the bucket.
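(A quick numeric sanity check of that last line, with an arbitrary F of 0.8 and an arbitrary starting level:)

# Scalar x_k = F x_{k-1} with no inflow; the leaked amount should equal (1 - F) * x.
F = 0.8
x = 100.0
for k in range(1, 6):
    x_next = F * x
    print(f"k={k}  x={x_next:6.2f}  leaked={x - x_next:5.2f}  (1-F)*x={(1 - F) * x:5.2f}")
    x = x_next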

                  2. 1

                    Pour water into the bucket, (just don’t overflow it) at a varying rate.

                    Presumably the varying rate bit is there to be an analogy for noise in the system state estimation?

                    1. 2

If you think in terms of signal processing… the rate at which you pour water in is your input signal; the rate it comes out is your output signal.

                      If your input signal is noisy, it’s going to get smoothed out.

                      1. 2

The point of Kalman filters isn’t smoothing, I thought… isn’t it improved modeling of a system with noisy measurements?

                        1. 1

                          Ok, I will note a lack of accuracy in what I said….

                          The leaky bucket describes the Kalman Filter dynamic model. The Kalman Filter would be what you’d choose to use to control a tap to keep the bucket filled to some desired level.
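(To make that distinction concrete, here’s a minimal one-dimensional Kalman filter sketch in Python. All the constants, F, Q, R and the true level, are invented for illustration; it estimates the bucket’s level from noisy readings, which is the estimation half you’d then hand to a controller for the tap.)

import random

# Toy 1-D Kalman filter: estimate a leaky bucket's level from noisy readings.
# F, Q, R and the "true" dynamics are made-up numbers for illustration.
F, Q, R = 0.9, 0.01, 0.5       # state transition, process noise var, measurement noise var
x_est, P = 0.0, 1.0            # initial estimate and its variance
true_level = 10.0
for k in range(20):
    true_level = F * true_level                  # the real (hidden) bucket
    z = true_level + random.gauss(0, R ** 0.5)   # noisy measurement
    # Predict
    x_pred = F * x_est
    P_pred = F * P * F + Q
    # Update
    K = P_pred / (P_pred + R)                    # Kalman gain
    x_est = x_pred + K * (z - x_pred)
    P = (1 - K) * P_pred
    print(f"k={k:2d}  true={true_level:6.3f}  measured={z:6.3f}  estimated={x_est:6.3f}")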

                  1. 9

This is a clickbaity summary of Edward Tufte’s [excellent] work, which you should read in preference to this (in fairness, the author links to it too). Available here:

                    https://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001yB

                    1. 18

                      One man’s “clickbait” is another man’s succinct summary.

                      1. 2

                        If the other man comes away concluding that seven people died because of a powerpoint slide, as stated in the “summary”’s title, I can only hope his succinct engineering lessons are applied to a field where lives don’t matter. Tufte’s original work is more nuanced and therefore more appropriate for actual engineers making actual engineering decisions that can hurt actual people. It’s worth having to read a jpeg or whatever enormous hardship the sibling post complains about.

                      2. 14

                        I disagree. I find Edward Tufte’s website basically unreadable.

                        Problems that I have reading it

                        • Two giant images advertising courses at the very top, taking up most of my screen
                        • Body of content is actually pictures, not text
                        • Body has multiple columns and jumps between layouts making it hard to follow
                        • There are slides embedded with tiny print in the references column
                        • The block quotes are in a smaller font than the surrounding text
                        • Where does the content for “PowerPoint Does Rocket Science–and Better Techniques for Technical Reports” end? This page is so tall that Superman would have trouble leaping over it.
                        1. 4

                          I understand where you’re coming from, but it’s worth pointing out that the webpage here is nothing but a holder of pages of the books - which all the formatting you reference is oriented towards. He spends a lot of thought on his layout and I don’t begrudge him not redoing his book as a webpage (with all the accompanying differences), or just presenting the text by itself.

                          1. 2

C’mon now, “basically unreadable”? If you don’t care enough to go through the text, don’t. I guess that’s why super-summarised articles are read more often than the actual papers.

                            1. 2

                              Have you tried reading it on a phone? I imagine desktop is more pleasant, but for mobile, I have to agree that it is basically unreadable.

                          2. 1

                            I’ve now read the original. This summary is far, far better.

The original puts the emphasis on a particular way of constructing high-information, low-comprehension slides, with all the nesting layers, and titles only those opposed to decisions could love. That’s not a problem just with PowerPoint.

The original cites this way of thinking as the reason PowerPoint always sucks for technical content. He is wrong: PowerPoint “simply” has to be used with low-density slides. He is right that I won’t read a dense slide now or ever. That doesn’t condemn all PowerPoint presentations.

                            I think this slide alone could have been just 3 bullet points, with no sub points. And it could have made the important point that had to be made.

                            The problem is not with any particular tool. It’s with engineers failing to communicate what they clearly know, and are trying to say. The summary focuses on the slide design and why it sucks, and that’s why I prefer it.

                          1. 5

                            they identified the dongle as a microprocessor, almost as powerful as the Rasberry Pi itself: the nRF52832-MDK

                            If I understand correctly, that’s a Cortex M4-based microcontroller, not a processor almost as powerful as the Raspberry Pi.

                            1. 1

The Cortex M4 is an armv7-m processor, as opposed to the armv7-a in some of the Pi models. The biggest performance handicap is the Thumb-only instruction set, which qualifies it as “almost as powerful” in my view.

                              1. 3

The M4 on the dongle runs at 64MHz, has 64kB of RAM and 512kB of flash. The Pi, depending on which gen, is over 1GHz clock speed, possibly even multi-core, with 1GB of RAM and as much flash as the SD card you put in. There is an orders-of-magnitude difference in performance between the Pi and the dongle.

                                mwcampbell’s claim is correct.

                            1. 2

                              It also depends heavily on what kind of software you write. Compilers have few dependencies and will work for a long time. Anything with a GUI probably does not.

                              1. 8

                                Anything with a GUI probably does not.

                                So, interestingly, if you use the Windows APIs, for quite a long time the software you wrote really did last a long time. Like, it’s only the internalized braindamage of the Web and Linux that’ve led people to mistrust their native OS’s GUI offerings.

                                1. 4

I learned a valuable lesson a few months ago when I tried to resuscitate a custom bit of laboratory equipment built for a one-off experiment in 2008. Fearing the worst, I found that the computer (GUI) interface was one giant single Python file, using Tk (part of the Python standard library), which someone had saved from the old CentOS box that was scrapped along with the rest of the experiment.

So anyway, I just typed `python2 $SCRIPT_NAME` into my macbook and lo and behold, up came the GUI (ugly but perfectly functional), and it had comms with the box over pySerial and everything just worked first time. I was surprised, and I was so, so grateful for this unfashionable huge mono-file thing, rather than, I don’t know, something built using whichever long-dead PyQt was around at the time, managed with whatever package manager was definitely the right answer that year, like pyenv is now, and so on and so on.

                                  1. 2

Backward compatibility is part of their lock-in strategy. So it makes sense they’d keep things working somehow to keep the cash flowing.

                                    1. 10

                                      I wouldn’t say “lockin strategy” so much as “value proposition”, though that’s very much “you say potato, I say potato”. :)

                                      1. 2

                                        Yeah, two sides of same coin here. :)

                                      2. 2

                                        That makes no sense. Backwards compatibility increases the utility of existing systems relative to the alternative.

                                        And if Microsoft did not value back-compat your indictment would be the “upgrade train”.

                                        In any case, back-compat is extremely valuable and is the prime directive of Linux kernel itself.

                                        1. 1

The negative part comes with them making it hard for others to be backward compatible, on top of hard-to-copy formats and protocols. Then companies wishing to switch to competitors with better offerings can’t without risking critical stuff going down.

Backward compatibility is one tool of many, especially alongside obfuscated closed source, that combine to create the lock-in. It also lets them get away with barely updating the product, or otherwise pissing off users.

                                      3. 1

                                        internalized braindamage of the Web and Linux

                                        You’re going to have to explain this one to me. Linux, in my experience, has been perfectly backwards compatible for me, and I don’t just mean the kernel. Meanwhile every time I upgraded my Windows computer it broke.

                                        1. 1

                                          Think of it as a developer and not as a user–there are numerous competing APIs for doing the same GUI (and sound, christ!) work on Linux…this situation is not nearly so bad on Windows.

                                          1. 0

                                            That doesn’t make it backwards-incompatible. It makes it a good platform that isn’t directed by some horrible company for its own purposes.

                                            1. 3

                                              The web (and linux) are backwards-compatible, but lack an overarching plan for their API design (unlike windows).

                                              Since APIs were added as they were invented but never removed, both are kind of a mess to develop for.

                                              1. 0

                                                I don’t know whether the web is a mess to develop for but Linux certainly isn’t

                                    1. 51

                                      This is a personal update on somebody whose work, quite possibly, everyone reading this has used. By all means let’s normalize this. We should be thinking of our colleagues as people, not just in terms of what they can do for us right now.

                                      (This is my personal opinion, not a site policy, as you can tell from the fact that I’m posting this without my hat. Since it’s an opinion about how the site should be used, I wanted to be extra-explicit about that.)

                                      1. 5

                                        Who determines the threshold for being worthy of sharing personal updates on this site?

                                        1. 12

                                          The voters, I guess. 78% of the people who voted on this article voted it up, so it’s apparently worthwhile.

                                          1. 1

Is this really the correct way to interpret vote counts here? I think that in general it’s easier to vote something up than to vote it down. In the case of this site it is even more so: voting up is just one click, but to vote down you have to provide a reason. This likely skews counts in favour of upvotes.

                                            1. 4

                                              Actually, you’re right, looking at the percentage of votes which were upvotes is not a good way to analyze things here on lobste.rs. We do try to discourage downvotes, and that’s intentional.

                                              Instead, it would be more meaningful to compare the total count of upvotes to the rest of the front page. As I write this, this article has 48 upvotes, and there are only two other articles in the 40s. This is no substitute for a rigorous analysis, but I think it’s at least a good first pass.

                                              1. 4

                                                Actually, you’re right, looking at the percentage of votes which were upvotes is not a good way to analyze things here on lobste.rs. We do try to discourage downvotes, and that’s intentional.

                                                Instead, it would be more meaningful to compare the total count of upvotes to the rest of the front page. As I write this, this article has 48 upvotes, and there are only two other articles in the 40s. This is no substitute for a rigorous analysis, but I think it’s at least a good first pass.

Understanding the bin, sbin, usr/bin, usr/sbin split has 30 upvotes and zero downvotes; at the time of writing, this story has 75 upvotes and 26 ‘off-topic’ downvotes.

That’s a good indication that there are stories which have unanimous ‘good for lobste.rs’ vibes, as well as stories which are highly contentious like this one.

                                                1. 1

                                                  Ah wow, I’d actually missed the downvote count until you pointed it out (the version of the header that I see is a bit verbose). Thanks for that.

                                                  Yes, there are definitely some stories that are a lot more contentious than others. I don’t think that we should strive for everything to be uncontroversial; sometimes it’s precisely the contentious stuff that leads to the most interesting discussions. It would be surprising and upsetting if everyone agreed all the time.

                                        2. 7

                                          What do you think we should be normalizing?

To be fair, I didn’t exactly explain in great detail either; I just complained about an article that contains like two datapoints (so-and-so is leaving Mozilla and is uncertain about the future; curl is still being maintained by them) and that has no actionable information (it is highly, highly doubtful anybody here is in a position to change the outcome of the departure, and it is equally unlikely anybody is going to switch away from libcurl if they’re already using it) other than being an update on a colleague.

                                          My concern is that while I totally support the use of the person tag for honoring our dead and sharing historical figures, there are other uses I’m skeptical of.

                                          I’m not sure that it’s useful to let the person tag be used to backdoor in smear campaigns, no matter how richly deserved.

                                          I’m similarly unsure about its use to normalize breathless clickbait, or free advertising.

                                          Finally, though I do enjoy reading them, I think there is something to be had in perhaps separating interviews and just people’s own essays–and yes, I’ve benefited myself from this fuzziness.

                                          1. 5

                                            Personal updates from people who are significant to this community, whether they’re already well-known by name or not, and whether or not there’s anything actionable. I think it’s humanizing, and I think it’s important to make sure there’s space for that.

Debating the merits of the person tag more generally is really a separate question, but I do think it’s a useful question, so here are my opinions on the concerns you raised. Thank you for asking.

                                            I do generally dislike smear campaigns, but I also see the case that, when the allegations are true or at least credible and well-backed-up, they can be important warnings - as in the case of the Larry Ellison one you linked.

                                            I’m opposed to clickbait and advertising on lobste.rs.

                                            I think personal opinion pieces can be interesting sometimes. If anything, I think they’re more likely to be relevant to a highly technical audience than interviews are, since the journalist is usually trying to orient their piece towards a non-technical audience.

                                            All of these are judgement calls, and I trust the people here to make reasonable decisions on them and to vote accordingly.

                                            1. 3

                                              from people who are significant to this community,

I think this is somewhat circular reasoning. I know of the people because of their work; it’s primarily their work that I’m interested in, and I’m tangentially interested in life changes that affect their work. Say, for example, someone’s pet project got them hired by $WELL_FUNDED_CORP, who wish to officially back their pet project; then I might be more likely to use it because it now has a little more long-term security. But if it’s just life news (‘I am moving jobs, this won’t affect curl’), then I’m not really any more interested in that than I am in the life news of my dentist. Which is to say, I am interested insofar as any compassionate human would be, and I’d be polite and congratulatory in person, but it is absolutely not what I come to lobsters for. To be clear, there is absolutely tonnes of space for that already in other places.

Other people may come to lobsters for that, of course, but they could just as easily get it from twitter or reddit or HN. I quite like the fact that there’s usually something here that improves me as an engineer (e.g. this site got me onto TLA+, mostly through seeing formal-methods related stuff on the front page several times a week), rather than just valley news, and I would like it to remain that way. I’m not sure I agree with the ‘well, if people upvote it then by definition it’s appropriate for lobsters’ line of thought, because we’d all just be watching gladiatorial combat on TV if that were true. I fully support the idea that this can be a non ‘just let the market decide’ space for purer, more cerebral, mostly more CS things, even if others accuse me of policing or gate-keeping or whatever, and that’s why I’ve written this comment.

                                              1. 2

                                                Thank you for the good writeup! It’s obviously my position that civil public discussion on community norms like we’re having here is really important to keeping Lobsters functioning correctly. :)

I agree with you on the humanizing aspect, but I’m a little worried that that sort of stuff will tend to clutter up the site–personal updates are, by definition, just another form of (human-interest, people-focused) news. The fact that it would involve people we’re probably already predisposed to have some sort of emotional opinion on (good or bad) would seem to me to make the potential for abuse and spam even larger than for normal news.

                                                1. 4

                                                  Yes, your point is taken. As always, I’m glad we’re talking about it.

                                          1. 1

                                            Sorry to hear it. Random crazy thought - it’s a shame he died after that Falcon Heavy launch, because wouldn’t it be cool if his ashes were in that Tesla, circling the solar system for ~eternity?

                                            1. 2

                                              I think it’s nicer that he was able to see the Falcon Heavy launch happen, knowing that the world is indeed making progress in space exploration.

                                              1. 4

In Hawking’s lifetime, he witnessed people landing on the Moon, probes landing on Venus, multiple rovers on Mars, a probe on Titan, visits to all the outer planets and Pluto, and a landing on a comet… It’s nice that some (but not all) missions would be a bit cheaper if launched on a Falcon Heavy now, but I don’t see how a great physicist would see FH as a milestone in space exploration compared to all the above. I don’t think he’d especially yearn for his urn to be part of an Elon Musk PR stunt either.

                                            1. -1

                                              Eventually we will stop investing in chemical rocketry and do something really interesting in space travel. We need a paradigm shift in space travel and chemical rockets are a dead end.

                                              1. 7

I can’t see any non-scifi future in which we give up on chemical rocketry. Chemical rocketry is really the only means we have of putting anything from the Earth’s surface into Low Earth Orbit, because the absolute thrust to do that must be very high compared to what you’re presumably alluding to (electric propulsion, lasers, sails), which only work once in space, where you can do useful propulsion orthogonally to the local gravity gradient (or just in weak gravity). But getting to LEO is still among the hardest bits of any space mission, and getting to LEO gets you halfway to anywhere in the universe, as Heinlein said.

Beyond trying to reuse the first stage of a conventional rocket, as SpaceX are doing, there are some other very interesting chemical technologies that could greatly ease space access, such as the SABRE engine being developed for the Skylon spaceplane. The only other way I know of that’s not scifi (e.g. space elevators) is nuclear rockets, in which a working fluid (like hydrogen) is heated by a fissioning core and accelerated out of a nozzle. The performance is much higher than chemical propulsion, but the appetite to build and fly such machines is understandably very low, because of the risk of explosions on ascent or breakup on reentry spreading a great deal of radioactive material in the high atmosphere over a very large area.

But in summary, I don’t really agree with (or, more charitably, don’t think I’ve understood) your point, and would be interested to hear what you actually meant.

                                                1. 3

                                                  I remember being wowed by Project Orion as a kid.

                                                  Maybe Sagan had a thing for it? The idea in that case was to re-use fissile material (after making it as “clean” as possible to detonate) for peaceful purposes instead of for military aggression.

                                                  1. 2

Atomic pulse propulsion (i.e. Orion) can theoretically reach 0.1c, so that’s the nearest star in 40 years. If we can find a source of fissile material in the solar system (so it doesn’t have to be launched from Earth) that can be refined, interstellar travel could really happen.

                                                    1. 1

                                                      The moon is a candidate for fissile material: https://www.space.com/6904-uranium-moon.html

                                                  2. 1

The problem with relying on a private company funded by public money, like SpaceX, is that they won’t be risk-takers; they will squeeze every last drop out of existing technology. We won’t know what reasonable alternatives could exist because we are not investing in researching them.

                                                    1. 2

I don’t think it’s fair to say SpaceX won’t be risk-takers, considering this is a company that has almost failed financially pursuing its visions, and has very ambitious goals for the next few years (which, I should mention, require tech development/innovation and are risky).

Throwing money at research doesn’t magically create new tech; intelligent minds do. Most of our revolutionary advances in tech have been brainstormed without public or private funding: one or more people had a bright idea and pursued it. This isn’t something people can just do on command. It’s also important to consider that people who failed to bring their ideas to fruition have still paved the path for others’ future development.

                                                      1. 1

I would say that they will squeeze everything out of existing approaches; «existing technology» sounds a bit too narrow. And unfortunately, improving the technology by combining well-established approaches is a stage that cannot be too cheap, because they do need to build and break full-scale vehicles.

I think that the alternative approaches for getting from inside the atmosphere into orbit will include new things developed without any plans to use them in space.

                                                    2. 2

                                                      What physical effects would be used?

I think that relying on some new physics, or on contiguous objects a few thousand kilometers in size more than 1km above the ground, is not just a paradigm shift; anything like that would be nice, but its absence doesn’t make what there currently is a disappointment.

The problem is that we want to go from «immobile inside the atmosphere» to «very fast above the atmosphere». By continuity, this needs to pass either through «quite fast in the rarefied upper atmosphere» or through «quite slow above the atmosphere».

I am not sure there is a currently known effect that would allow hovering above the atmosphere without orbital speed.

                                                      As for accelerating through the atmosphere — and I guess chemical air-breathing jet engines don’t count as a move away from chemical rockets — you either need to accelerate the gas around you, or need to carry reaction mass.

In the first case, as you need to overcome the drag, you need some of the air you push back to fly back relative to Earth. So you need to accelerate some amount of gas to multiple kilometers per second; I am not sure there are any promising ideas for hypersonic propellers, especially for a rarefied atmosphere. I guess once you reach the ionosphere, something large and electromagnetic could work, but there is a gap between the height where anything aerodynamic has flown (actually, a JAXA aerostat, so maybe «aerodynamic» is the wrong term) and the height where ionisation starts rising. So it could be feasible or infeasible, and maybe a new idea would have to be developed first for some kind of in-atmosphere transportation.

And if you carry your reaction mass with you, you then need to eject it fast. Presumably, you would want to make it gaseous and heat it up. And you want high throughput. I think that even if you assume you have a lot of electrical energy, splitting water into hydrogen and oxygen, liquefying these, then burning them in flight is actually pretty efficient. But then the vehicle itself will be a chemical rocket anyway, and will use chemical rocket engineering as practiced today. Modern methods of isolating nuclear fission from the atmosphere via double heat exchange reduce throughput. Maybe some kind of nuclear fusion with electromagnetic redirection of the heated plasma could work, and maybe it could even be more efficient than running a reactor on the ground to split water, but nobody knows yet what scale is required to run energy-positive nuclear fusion.

                                                      All in all, I agree there are directions that could maybe become a better idea for starting from Earth than chemical rockets, but I think there are many scenarios where the current development path of chemical rockets will be more efficient to reuse and continue.

                                                      1. 2

What do you mean by “chemical rockets are a dead end”? In order to escape planetary orbits, there really aren’t many options. However, for interstellar travel, ion drives and solar sails have already been tested and deployed, and they have strengths and weaknesses. So there are multiple use cases here, depending on the option.

                                                        1. 1

                                                          Yeah right after we upload our consciousness to a planetary fungal neural network.

                                                        1. 4

I initially thought this would be one of those blog posts that were in vogue a few years ago on HN (brand-conscious minimalism, meditation), but the overriding theme is the benefit of eating less and moving more, and the positive effects of this for him and people close to him. I have had a similar experience, though I work in an office, mostly because of getting a sit-stand desk and being a bit more organised and strict with my food shopping and meal prep. It didn’t take much to change the sign of my mass derivative from small +ve to small -ve, and the effect that’s had over 18 months has been transformative.

                                                          1. 37

                                                            The “downsides” list is missing a bunch. I mean, I use Makefiles too, probably too much, but they do have some serious downsides, e.g.

                                                            • The commands are interpreted first by make, then by $(SHELL), giving some awful escaping at times
                                                            • If you need to do things differently on different platforms, or package things for distros, you pretty quickly have to learn autoconf or even automake, which adds greatly to the complexity (or reinvent the wheel and hope you didn’t forget some edge-case with DESTDIR installs or whatever that endless generated configure script is for)
• The only way to safely (i.e., parallelizably) do multiple outputs is by using the GNU pattern-match extension, which is extremely limited (rules with multiple inputs and multiple outputs are hard to write without lots of redundancy)
• GNU make 4 has different features from macOS’s (pre-GPLv3) make 3.81, which has different features from the various BSD makes
• You really have to understand how make works to avoid doing things like possibly_failing_command | sed s/i/n/g > $@ (which will create $@ and trick make into thinking the rule succeeded, because sed exited with 0 even though the first command failed). And do all your devs know how to have multiple goals that each depend on a temp dir existing, without breaking -j? (One sketch of an answer below.)

and there’s probably lots more. OTOH, make has been very useful to me over the years, I know its quirks, and it’s available on all kinds of systems, so it’s typically the first thing I reach for even though I’d love to have something that solves the above problems as well.
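For the temp-dir question at the end of that list, the answer I know of is GNU make’s order-only prerequisites; a sketch with placeholder names (recipe lines must start with a tab):

# `build` after the | is an order-only prerequisite: it must exist before
# either target's recipe runs, but its timestamp never forces rebuilds,
# and -j won't race two mkdirs (make builds it exactly once).
all: build/a.txt build/b.txt
.PHONY: all

build:
	mkdir -p $@

build/a.txt build/b.txt: | build
	echo hello > $@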

                                                            1. 14

                                                              Your additional downsides makes it sound like maybe the world needs a modern make. Not a smarter build tool, but one with less 40-year-old-Unix design sensibilities: a nicer, more robust language; a (small!) handful of missing features; and possibly a library of common functionality to limit misimplementations and cut down on the degree to which every nontrivial build is a custom piece of software itself.

                                                              1. 7

                                                                mk?

                                                                1. 3

                                                                  i’ve also thought of that! for reference: https://9fans.github.io/plan9port/man/man1/mk.html

                                                                2. 10

                                                                  I think the same approach as Oil vs. bash is necessary: writing something highly compatible with Make, separating the good parts and bad parts, and fixing the bad parts.

                                                                  Most of the “make replacements” I’ve seen make the same mistake: they are better than Make with respect to the author’s “pet peeve”, but worse in all other dimensions. So “real” projects that use GNU Make like the Linux kernel and Debian, Android, etc. can’t migrate to them.

                                                                  To really rid ourselves of Make, you have to implement the whole thing and completely subsume it. [1]

                                                                  I wrote about Make’s overlap with shell here [2] and some general observations here [3], echoing the grandparent comment – in particular how badly Make’s syntax collides with shell.

                                                                  I would like for an expert in GNU Make to help me tackle that problem in Oil. Probably the first thing to do would be to test if real Makefiles like the ones in the Linux kernel can be statically parsed. The answer for shell is YES – real programs can be statically parsed, even though shell does dynamic parsing. But Make does more dynamic parsing than shell.

If there is a reasonable subset of Make that can be statically parsed, then it can be converted to a nicer language. In particular, you already have the well-tested sh parser in OSH, and parsing Make’s syntax is 10x easier than that. It’s basically the target line, indentation, and $() substitution. And then some top-level constructs like define, if, include, etc.

                                                                  One way to start would be with the “parser” in pymake [4]. I hacked on this project a little. There are some good things about it and some bad, but it could be a good place to start. I solved the problem of the Python dependency by bundling the Python interpreter. Although I haven’t solved the problem of speed, there is a plan for that. The idea of writing it in a high-level language is to actually figure out what the language is!

                                                                  The equivalent of “spec tests” for Make would be a great help.

                                                                  [1] https://lobste.rs/s/ofu5yh/dawn_new_command_line_interface#c_d0wjtb

                                                                  [2] http://www.oilshell.org/blog/2016/11/14.html

                                                                  [3] http://www.oilshell.org/blog/2017/05/31.html

                                                                  [4] https://github.com/mozilla/pymake

                                                                  1. 6

Several more modern make-style tools exist, e.g. ninja, tup and redo.

                                                                    1. 2

                                                                      We need a modern make, not make-style tools. It needs to be mostly compatible so that someone familiar with make can use “modern make” without learning another tool.

                                                                      1. 8

                                                                        I think anything compatible enough with make to not require learning the new tool would find it very hard to avoid recreating the same problems.

                                                                    2. 2

                                                                      The world does, but

                                                                      s/standards/modern make replacements/g

                                                                    3. 5

                                                                      Do most of these downsides also apply to the alternatives?

                                                                      The cross platform support of grunt and gulp can be quite variable. Grunt and gulp and whatnot have different features. The make world is kinda fragmented, but the “not make” world is pretty fragmented, too.

My personal experience with the javascript ecosystem is nil, but during my foray into ruby I found tons of rakefiles that managed to be Linux-specific, or Mac-specific, or whatever, but definitely not universal.

                                                                      1. 5

                                                                        I recommend looking at BSD make as its own tool, rather than ‘like gmake but missing this one feature I really wanted’. It does a lot of things people want without an extra layer of confusion (automake).

Typical bmake-only makefiles rarely include shell script fragments piping output around; instead they will use ${VAR:S/old/new/} or match contents with ${VAR:Mmything*}. You can also test with empty() (for strings) or exists() (for files).
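A hypothetical flavour of that (file names invented; a sketch rather than anything canonical):

# bmake's built-in modifiers and conditionals, instead of shelling out:
FILES=   foo.c bar.c notes.txt
CFILES=  ${FILES:M*.c}          # keep only the words matching *.c
OBJS=    ${CFILES:.c=.o}        # suffix substitution: foo.o bar.o
RENAMED= ${FILES:S/foo/main/}   # :S/old/new/ substitution, per word

.if exists(/usr/local/include)
CPPFLAGS+=      -I/usr/local/include
.endif
.if !empty(CFILES)
# ... compile rules using ${CFILES} and ${OBJS} here ...
.endif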

Deduplication is good, and good mk fragments exist; here’s an example done with bsd.prog.mk. This one’s from pkgsrc, which is a package manager written primarily in bmake.

                                                                        1. 2

                                                                          Hey! Original author here :). Thanks a bunch for this feedback. I’m pretty much a Make noob still, so getting this type of feedback from folks with more experience is awesome to have!

                                                                          1. 2

                                                                            You really have to understand how make works to avoid doing things like possibly_failing_command | sed s/i/n/g > $@ (which will create $@ and trick make into thinking the rule succeeded because sed exited with 0 even though the first command failed).

                                                                            Two things you need to add to your Makefile to remedy this situation:

                                                                            1. SHELL := bash -o pipefail. Otherwise, the exit status of a shell pipeline is the exit status of the last element of the pipeline, not the exit status of the first element that failed. ksh would work here too, but the default shell for make, /bin/sh, won’t cut it – it lacks pipefail.
                                                                            2. .DELETE_ON_ERROR:. This is a GNU Make extension that causes failed targets to be deleted. I agree with @andyc that this behavior should be the default. It’s surprising that it isn’t.

                                                                            Finally, for total safety you’d want make to write to .$@.$randomness.tmp and use an atomic rename if the rule succeeded, but afaik there’s no support in make for that.
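Roughly, combining those (possibly_failing_command and the file names are placeholders, and the rename is hand-rolled since make has no built-in for it; recipe lines must start with a tab):

# 1: fail the recipe when any element of a pipeline fails.
SHELL := bash -o pipefail
# 2: delete targets whose recipe failed (GNU extension).
.DELETE_ON_ERROR:

target.txt:
	possibly_failing_command | sed s/i/n/g > $@.tmp
	mv $@.tmp $@    # the rename only runs if the pipeline above succeeded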

                                                                            So yes, “you really have to understand how make works [to avoid very problematic behavior]” is an accurate assessment of the state of the things.

                                                                            1. 1

Your temp-directories dependency problem makes me think a GUI to create and drag-and-drop your rules around could be useful. It could have “branching” and “merging” steps that indicate parallelism and joining, too.

                                                                            1. 4

I remember asking someone if he used OCaml for an ultra-high-reliability system (with hard deadlines) that he was building, and he said that he preferred to use C, proven when possible. If you use OCaml, you may be less likely to have errors in user code (or, it is more accurate to say that some very low user error rate deemed acceptable is achieved more quickly in OCaml than in C), but you’re also sitting on top of a very good but complex runtime. Often, they don’t even like to use dynamic memory management, and malloc/free is a much simpler runtime than what OCaml or Haskell provides.

I think that functional languages are very good when you need regular high reliability (e.g. six 9s, not “never goes down”) and your time budget supports a merely long, thorough project; but when you need near-absolute reliability and have an almost unlimited time/resource budget, proven C (or, at least, C that has been checked, discussed and viewed by many pairs of eyes, as I’m not aware of formally proven numerical algos being a thing beyond trivial cases) still wins.

                                                                              1. 8

I’m not sure about your analysis on the functional side for most high-integrity niches. The C one is right but incomplete. The proof is that the vast majority of safety-critical products are done in the C language. A niche of them are done in Ada and embedded Java for more safety or maintainability.

                                                                                The C preference comes from a combo of available talent, available compilers, tons of tools for verifying C in various ways, and standardized subsets of C like MISRA that work well with top tools. So, it’s actually easy to eliminate almost all coding errors with the amount of talent and tooling they put into those products. Most failures are bad specs or requirements.

Still haven’t learned functional programming, but I follow what its practitioners say. OCaml and Haskell are popular in non-real-time apps for easily boosting QA and maintainability. In high-assurance, they’re great for verified, reference implementations since they work well with provers. What seems lacking is predictability of execution patterns (e.g. real-time), simple/zero runtimes like C/Ada/RT-Java, easy manipulation of bits, easy interrupt handling, tooling for analysis/testing like C/Ada, great IDEs, or certifying compilers. I’ve seen pieces of each in various work, but most of this needs to get integrated before they’ll get used widely in high-assurance or do six 9s in real-world situations.

Erlang is closest to the goal, given its capabilities plus successful deployments in real-time, high-integrity applications.

                                                                                1. 6

                                                                                  On this topic, I found this video very interesting. It is a talk given by Gerard Holzmann, head of the Lab for Reliable Software at JPL, author of the spin model checker, and head of the team that wrote the software for the Curiosity Mars rover. The talk addresses how they wrote it, with particular emphasis on automated checking and static analysis to help them with code reviews, and he touches on the ‘why C?’ question:

                                                                                  https://vimeo.com/84991949

                                                                                  1. 4

That was a nice vid. The part that jumped out at me, aside from the picture at the end, was the triaging of bugs when they were getting overloaded. That was fine. Then he said that when there were hardly any, because the team was doing a great job, he would turn a knob to hit them with more. This was to keep them from getting too comfortable. It came off as both a wise precaution and a cruel reward for progress. ;)