Threads for Teckla

  1. 20

    After I learned about “ci” in vim I got hooked. All of a sudden replacing text in quotes became as simple as ci” and now I’m having a hard time using other editors. Sometimes a little detail is all that it takes.

    1. 8

      This was extremely helpful thanks.

      Just to clarify to others. In vim if you are on a word “c” starts a change and the next keystroke determines what will be changed. For example, “c$” removes text from where the cursor is to the end of the line.

      Now what is new for me is that vim has a concept of “inner text”: things in quotes, or in between any two symmetric symbols. The text between those two symbols is the “inner text”.

      For example, in this line, we want to change the “tag stuff” to “anything”.

      <tag style="tag stuff">Stuff</tag>
      

      Move the cursor anywhere between the quotes and type ci then a quote and you are left with

      <tag style="">Stuff</tag>
      
      1. 8

        This is a good example of why to me learning vi is not worth the trouble. In my normal editor, which does things the normal way, and does not have weird modes that require pressing a key before you are allowed to start typing and about which there are no memes for how saving and quitting is hard, I would remove the stuff in the quotes by doing cmd-shift-space backspace. Yes, that technically is twice as many key presses as Vi. No, there is no circumstance where that would matter. Pretty much every neat Vi trick I see online is like “oh if you do xvC14; it will remove all characters up to the semicolon” and then I say, it takes a similar number of keystrokes in my editor, and I even get to see highlight before it completes, so I’m not typing into a void. I think the thing is just that people who like to go deep end up learning vi, but it turns out if you go deep in basically any editor there are ways to do the same sorts of things with a similar number of keystrokes.

        1. 14

          The difference is not only in the number of keystrokes but, more importantly, in ergonomics. In Vim I don’t need to hold 4 keys at once; I can achieve this in the usual flow of typing. Also, things are coherent and mnemonic.

          E.g. to change the text within the quotes I type ci” (change inner “) as the parent already explained. However, this is only one tiny thing. Everything you can do with “change (c)” you can also do with “delete (d)” or “yield (y)”, and they behave the same way.

          ci”: removes everything within the quotes and goes to insert mode
          di”: deletes everything within the quotes
          yi”: copies everything within the quotes

          d3w, c3w, y3w would, for example, delete, replace or copy the next 3 words.
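
          To make the counts concrete, here’s a quick made-up sketch. Say the cursor sits on the first word of

              one two three four five

          Typing d3w deletes the next three words and leaves

              four five

          c3w removes the same three words but drops you into insert mode so you can type the replacement, and y3w copies them without touching the line.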

          These are just the basics of Vim but they alone are so powerful that it’s absolutely worth learning them.

          1. 3

            Just a small correction; I think you meant “yank(y)” instead of “yield(y)”.

            1. 1

              Haha yes thanks I really got confused :)

            2. 2

              And if you want to remove the delimiters too, you use ‘a’ instead of ‘i’ (I think the logic is that it’s a variation around ‘i’ like ‘a’ alone is).

              Moreover, you are free to choose the pair of delimiters: ”, ’, {}, (), [], and probably more. It even works when nested, and even when the nesting involves the same delimiter: if you have foo(bar(“baz”)) and your cursor is on baz, then c2i) will let you change bar(“baz”) at once. You want visual mode stuff instead? Use v instead of c.

              This goes on for a long time.

            3. 6

              One difference is that if you are doing the same edit in lots of places in your editor you have to do the cmd-shift-space backspace in every one, while in vi you can tap a period which means “do it again!” And the “it” that you are doing can be pretty fancy, like “move to the next EOL and replace string A with string B.”
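
              A made-up illustration: say you want to replace the first word of several lines with foo. On the first line you would type

                  cwfoo<Esc>

              and after that, j0 to move to the start of the next line and . to repeat the whole change, as many times as needed.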

              1. 2

                Sublime Text: ctrl+f search, ctrl+alt+enter select all results, then type your replacement.

                1. 2

                  Yeah I just do CMD-D after selecting a line ending if I need to do something like that.

              2. 3

                I would remove the stuff in the quotes by doing cmd-shift-space backspace

                What is a command-shift-space? Does it always select stuff between quotes? What if you wanted everything inside parentheses instead?

                and then I say, it takes a similar number of keystrokes in my editor, and I even get to see highlight before it completes, so I’m not typing into a void

                You can do it that way in vim too if you’re unsure about what you want; it’s only one keypress more (instead of ci" you do vi"c; after the " and before the c, the stuff you’re about to replace will be highlighted). You’re not forced to fly blind. Hell, if your computer is less than 30 years old you can probably just use the mouse to select some stuff and press the delete key, and that will work too.

                The point isn’t to avoid those modes and build strength through self-flagellation; the point is to enable a new mode of working where something like “replace this string’s contents” or “replace this function parameter” become part of your muscle memory and you perform them with such facility that you don’t need feedback on what you’re about to do because you’ve already done it and typed in the new value faster than you can register visual feedback. Instead of breaking it into steps, you get feedback on whether the final result is right, and if it isn’t, you just bonk u, which doesn’t even require a modifier key, and get back to the previous state.

                1. 2

                  What if you wanted everything inside parentheses instead?

                  It is context sensitive and expands to the next context when you do it again.

                  Like I appreciate that vi works for other people but literally none of the examples I read ever make me think “I wish my editor did that”. It’s always “I know how I would do that in my editor. I’d just make a multiselection and then do X.” The really powerful stuff comes from using an LSP, which is orthogonal to the choice of editors.

                2. 2

                  I do not disagree. For vim, as for your editor, the process is somewhat complex in both places.

                  Like you I feel I only want to learn one editor really well. So I choose the one which is installed by default on every system I touch.

                  For that I give up being able to preview what happens, and some other niceties. Everything is a tradeoff in the end.

                3. 2

                  In a similar way, if you want to change the actual tag contents from “Stuff” to something else:

                  <tag style="tag stuff">Stuff</tag>
                  

                  you can use cit anywhere on the line (between the first < and the last >) to give you this (| is the cursor):

                  <tag style="tag stuff">|</tag>
                  

                  Or yit to copy (yank) the tag contents, dit to delete them, etc. You can also use the at motion instead of the it motion to include the rest of the tag: yat will yank the entire tag <tag style="tag stuff">Stuff</tag>.

                  Note that this only works in supported filetypes (html, xml, etc.), where vim knows how to parse markup tags.

                4. 2

                  I really like that I keep stumbling on tidbits like this one that continue to improve my workflow even further.

                1. 52
                  • Stream start. (Again, may be lulls in when I can comment.)

                  • Tim. Apple TV+. More TV. Apple’s funding it! And the critics like it! Trailers for movies funded by them.

                  • Sports in TV+? Baseball. Friday Night.

                  • iPhone. Green iPhone 13. Pro one is slightly different shade. Preorders Friday, Avail 18th. Silicon. A15 in… iPhone SE. Same chassis as the SE 2 it seems, no bezelless front. iPhone 13 sold more than projected?

                  • Francesca. Recap on A15 and iOS 15, comparing with older models that someone buying an SE might consider. Dark, light, red colours. 5G. New camera? Or at least, old sensor with new ISP. Flaunting update lifecycles. 429$ base. Preorders Friday, 18th launch.

                  • Tim. Recommended as a smol option or for new iPhone users. Now for iPad. iPad Air.

                  • Angelina. Performance. M1 in iPad Air. 60% faster than A14 in previous model, 2x powerful graphics. Still faster than most PC laptops that are thicker. 500 nits true-tone, pretty nice panel. New front camera, 12MP ultrawide w/ centre stage. 5G, improved Type C connectivity. 2x fast; is this USB 3 now? Supports the keyboards and pencils. Reminder you can develop apps on iPad OS now. iMovie improvements. A lot of 100% recycled materials in many components. (Same for the SE too.) Promo video. Many new colours. Same price, $599. 64 or 256 configs, wi-fi/5G configs. Friday, March 18th like others.

                  • Tim. Mac. Mostly everything is ARM now, and enables new possibilities.

                  • John. More extreme performance. One more M1 family chip. M1 Ultra for desktops.

                  • Johny. Performance, efficiency, UMA. Physical limitations with larger dies. One approach is two chips on the motherboard, but latency/BW/power are concerns. Starts with M1 Max, which was even more capable. A secret? Die-to-die interconnect to connect another M1 Max die with special packaging. UltraFusion is their cute name. Interposer with double density; 10k signals. 2.5TB/s low-latency between the two dies. 4x BW of leading multi-chip solution. Like a single chip to software, preserves UMA. 114B transistors, most in a PC. Higher bandwidth memory; 800 GB/s. 10x leading PC desktop. Double channels means 128 GB of RAM. 20 core CPU, 16 big, 4 little. 64-core GPU. 8x perf of M1. 32 neural, 22T ops/s. Twice the media engines for ProRes and friends. Performance per watt is preserved. Much better performance and efficiency of 10/16-core desktop CPUs. M1 Ultra seems to match the high-end GPUs on the market with much better efficiency. Not sure what they’re comparing to but I’m guessing maybe Alder Lake desktop chips and 3080/3090.

                   • John. OS integration with hardware. Yes, you can run iOS apps. Desktops talking about Ultra. Macs live there. Very much performance, especially for 3D. Where will they put it? Studio? They want more power than iMac and mini. Performance, connectivity, and modularity. Promo video. “Mac Studio”. Looks like a tall mini, Max or Ultra. “Studio Display”.

                   • Colleen. Looks like two Type C and maybe SD on the front? Unibody, 7.7in by 2, 3.3in tall? Double-sided blower pulling air in across the circumference of the base. Guides air over the heatsinks and PSU. Rear exhaust with slow fans? Very quiet as a result. I/O. Rear has four TB4 ports. 10G Ethernet. Two Type A, HDMI, and high-impedance headphones. Wi-Fi 6 and BT. As mentioned, SD and two Type C on the front, TB4 on Ultra. Four displays over Type C and one over HDMI. Compared to 27” Intel iMac and Mac Pro… M1 Max version is 2.5X faster than the 27” iMac. 50% faster than Mac Pro in 16-core config in CPU. 3.4x faster graphics than a maxed out 27” iMac. Outperforms W5700X by a lot. CPU w/ Ultra is 3.8x faster than maxed out iMac, 90% faster than a maxed out Mac Pro. With the 28-core Pro, Ultra is 60% faster. Graphics is 4.5x faster than the 27” iMac on Ultra, 80% more in W5700X comparison. UMA means more VRAM - 48GB on the current pro cards on the market, but 64/128 for Max/Ultra. 7.4GB/s SSD, up to 8 TB capacity. 18 streams of 8K ProRes 422. Scenarios it can be used for like real-time instrument DAWs, 3D rendering, particle simulations, massive environments, software development, 1000MP photo studios, video editing with real-time grading and multiple previews, etc… Of course, environment. Far more power efficient. Recycled materials.

                  • Studio Display with Nicole. Features. Design. Narrow bezels, Al enclosure. Looks like the 21” iMac, just bigger. Tilt and adjustable stand w/ counterbalancing arm as an option. VESA adapter w/ rotation. 27” active area, 14.7M pixels, 218PPI. 5K Retina. 600 nits, P3 gamut, 1B colours. True tone. Anti-reflective coating. Nano-texture glass option for further anti-glare. A13 inside? For the camera and audio. 12MP ultrawide camera w/ centre stage, first on Mac. Three-mic array. Low noise floor. Six speaker sound system. Four force-cancelling woofers for bass. Two tweeters for mids and highs. Spatial audio, Atmos. Works with other devices. Three type C ports as a hub. Thunderbolt port for single-cable display and hub. 96W of power, for charging a laptop, even with fast charge. Connect three of them up to a laptop. Why not? Silver and black keyboard/mouse/trackpad options. Environment! Plug it into any Mac you want. And probably PCs too. Promo video showing the Studio dinguses in action. It’s SDXC!

                   • John. $2000 base model - 32 GB, 512 GB? Ultra is $4000, with 64 GB of RAM and 1 TB SSD in the base config. $1599 base for the display. Both can be ordered right now, ships 18th. Transition is nearly complete - Mac Pro is next, but that’s another day.

                  • Tim. Recap. Fin.

                  1. 8

                    Good grief. Looking forward to seeing some application benchmarks. What on earth is the Mac Pro going to be like?

                    1. 3

                       I was hoping for a successor to the M1, as honestly single-threaded performance still matters more for most things, and short of a higher clock enabled by the giant fan I don’t see that happening here.

                      1. 2
                        • Apple M1 Ne Plus Ultra
                        • Apple M1X
                        • Apple M1S
                        • Apple M2
                        • 2x Apple M1 Ultra == 4x M1 Max == 8x M1 == 2048x 68040 ==
                        • Surprise! Rosetta 3! Apple Silicon P2 brings the shift to PowerPC! All of the architectures, just to spite the people who dared model macOS architecture decision as a boolean x86_64 or arm64!
                        1. 1

                          Apple should move to pico 8 but extremely overclocked.

                      2. 6

                        @calvin didn’t disappoint yet again. Thanks!

                        1. 2

                          Mac Pro is next, but that’s another day.

                          Definitely the thing that surprised me most in this presentation, I assumed the Mac Studio was the Mac Pro replacement - if they bring out a Mac Pro with, like, two M1 Ultras, I feel like there’d be very few people who could really even make use of that machine, it’d have 128 cores, up to 256GB RAM at 1.2TB/s throughput (boom), some stupid number of thunderbolt ports. like, what is anyone going to be able to do with all that power? It feels like more than you could need for even the highest end video editing workflows (how often are Hollywood studios using 10 8k streams of ProRes in a scene at any one time?).

                          Also, I wonder how long it’ll be before someone starts shucking the Mac Studios into 1U rack, with maybe four of each inside. Could be very interesting to see if that would significantly improve energy usage for ARM workloads on high performance chips.

                          1. 2

                            My take too. The only thing that seems missing from the Studio is expandability — maybe the Pro will be a Studio in a big case with (gasp) slots?

                            1. 2

                               They still “need” a machine with real PCI-e expansion. Hopefully that’ll be the Mac Pro. Check out NeilParfittMusic on YouTube to see what I mean. He has a fully loaded cheese-grater Mac Pro.

                              1. 1

                                 The stuff Asahi Linux has been putting out suggests a max of two CPUs receiving interrupts on a system.

                                I’d be surprised if two of them could really happen.

                              2. 2

                                Wait, what?

                                You can develop apps on iPadOS now?

                                How?

                                1. 3

                                   The Swift Playgrounds app supports developing and publishing apps.

                              1. 2

                                Wrote my first contract program on an Apple IIe. Good times.

                                1. 2

                                  I think you mean the Apple //e

                                  ;-) :-)

                                1. 1

                                  Love. It.

                                  Thanks, Lobsters team! <3

                                  1. 1

                                    Would using IPFS be an option for you?

                                    1. 1

                                      Well I need something accessible by normal people. I think you know what I mean.

                                    1. 6

                                      Also a great workaround for paywalls, as long as you click early enough… @frenkel: What’s up with the huge spaces next to the apostrophes? Screenshot at https://imgur.com/K7lF5dU

                                      1. 3

                                        If you do it too late, just reload the page while still in Reader Mode. You usually get the full article.

                                        1. 1

                                          If you’re on firefox, Open in Reader View can be wonderful.

                                          1. 1

                                            Sadly not available for Android Fx either, but yes.

                                        2. 2

                                          Wow, that’s weird, thanks for the screenshot. What browser and OS are you using? It seems a fallback font is used, maybe it’s a font issue.

                                          1. 6

                                            For me it’s because the font-family is defined as Microsoft YaHei,微软雅黑,宋体,STXihei,华文细黑,Arial,Verdana,arial,sans-serif. This looks to be coming from your Jekyll theme. YaHei is a Simplified Chinese font, so it’s not really great for displaying content primarily written using the Roman alphabet.

                                            1. 1

                                              Thank you, I’ve removed YaHei and it fixes the problem indeed. A hard-refresh might be required.

                                            2. 1

                                              Firefox Nightly, Ubuntu Linux. It falls back to sans-serif.

                                              1. 2

                                                That’s weird, the problem was caused by Microsoft YaHei. Is it gone now? A hard-refresh might be required.

                                                1. 1

                                                  confirmed fixed. thanks!

                                              2. 1

                                                Same problem on Chrome and Ubuntu MATE.

                                                1. 1

                                                  Should be fixed! A hard-refresh might be required.

                                            1. 23

                                              What I also find frustrating on macOS is the fact you need to download Xcode packages to get basic stuff such as Git. Even though I don’t use it, Xcode is bloating my drive on this machine.

                                              We iOS developers are also not pleased with the size on disk of an Xcode installation. But you only need the total package if you are using Xcode itself.

                                              A lighter option is to delete Xcode.app and its related components like ~/Library/Developer, then get its command line tools separately with xcode-select --install. Git is included; iOS simulators are not.

                                              1. 7

                                                I’m always surprised when I see people complain about how much space programs occupy on disk. It has been perhaps a decade since I even knew (off the top of my head) how big my hard drive was, let alone how much space any particular program required. Does it matter for some reason that I don’t understand?

                                                1. 20

                                                  Perhaps you don’t, but some of us do fill up our drives if we don’t stay on top of usage. And yes, Xcode is one of the worst offenders, especially if you need to keep more than one version around. (Current versions occupy 18-19GB when installed. It’s common to have at least the latest release and the latest beta around, I personally need to keep a larger back catalogue.)

                                                  Other common storage hogs are VM images and videos.

                                                  1. 4
                                                    $ df -h / /data
                                                    Filesystem      Size  Used Avail Use% Mounted on
                                                    /dev/nvme0n1p6  134G  121G  6.0G  96% /
                                                    /dev/sda1       110G   95G  9.9G  91% /data
                                                    

                                                     I don’t know how large Xcode is; a quick internet search reveals it’s about 13GB, and someone else mentioned almost 20GB in another comment there. Neither would fit on my machine unless I delete some other stuff. I’d rather not do that just to install git.

                                                    The MacBook Pro comes with 256GB by default, so my 244GB spread out over two SSDs isn’t that unusually small. You can upgrade it to 512GB, 1TB, or 2TB, which will set you back $200, $400, or $800 so it’s not cheap. You can literally buy an entire laptop for that $400, and quite a nice laptop for that $800.

                                                    1. 6

                                                      $800 for 2TB is ridiculous. If I had to use a laptop with soldered storage chips as my main machine, I’d rather deal with an external USB-NVMe adapter.

                                                      1. 2

                                                         I was about to complain about this, but actually checked first (for a comment on the internet!) and holy heck, prices have come down since I last had to buy an SSD.

                                                      2. 1

                                                        I guess disk usage can be a problem when you have to overpay for storage. On the desktop I built at home my Samsung 970 EVO Plus (2TB NVMe) cost me $250 and the 512GB NVMe for OS partition was $60. My two 2TB HDDs went into a small Synology NAS for bulk/slow storage.

                                                      3. 4

                                                         It matters because a lot of people’s main machines are laptops, and even at 256 GB (base storage of a MacBook Pro), without storing media or anything, you can easily fill that up.

                                                         When I started working I didn’t have that much disposable income; I bought an Air with 128GB, and later “upgraded” with a 128GB SD-card-slot thing. Having stuff like Xcode (but to be honest even stuff like a debug build of certain kinds of Rust programs) would take up so much space. Docker images and stuff are also an issue, but at least I understand that. Lots of dev tools are ginormous and it’s painful.

                                                        “Just buy a bigger hard drive from the outset” is not really useful advice when you’re sitting there trying to do a thing and don’t want to spend, what, $1500 to resolve this problem

                                                        1. 1

                                                           I don’t know. Buying laptops for Unix and Windows (gaming), size hasn’t really been an issue since 2010 or so? These days you can buy at least 512GB without making much of a dent in the price. Is Apple that much more expensive?

                                                          (I’ll probably buy a new one this year and would go with at least a 512GB SSD and 1TB HDD.)

                                                          1. 3

                                                            Apple under-specs their entry level machines to make the base prices look good, and then criminally overcharges for things like memory and storage upgrades.

                                                            1. 1

                                                              Not to be too dismissive but I literally just talked about what I experienced with my air (that I ended up using up until…2016 or so? But my replacement was still only 256GB that I used up until last year). And loads of people buy the minimum spec thing (I’m lucky enough now to be able to upgrade beyond my needs at this point tho)

                                                              I’m not lying to prove a point. Also not justifying my choices, just saying that people with small SSDs aren’t theoretical

                                                        2. 1

                                                          Yup, it’s actually what is written on the homebrew website and what I used at first.

                                                        1. 9

                                                          I find the answers to the question “What language critical functions do you need that are not available in Go?” weird. Most of these seem to be convenience features (with the exception of Null Safety). I have the feeling this is because people want to force an existing style of programming onto a different language, instead of learning how to write idiomatic code in Go itself.

                                                          1. 5

                                                            I’m not sure. You can get far without generics (and I hope we’ll continue to write primarily without them), but I don’t see, say, data structures, as unidiomatic.

                                                             Better error handling may or may not mean something unidiomatic. For instance, personally I’d be very fine with allowing one-line if-statements. One-line functions are allowed, so why not a one-line error check? You might disagree, but I would not consider that unidiomatic despite the loss of indentation.

                                                            Stronger type system is something that’s pushed for in every language I’ve seen, and I’m not surprised to find it here too. I actually find that to be the biggest weakness of Go. Out of all the languages I use, Go has the weakest type system.

                                                            1. 2

                                                              I think you’re right, better error handling and stronger type safety are probably seen in the same way as null safety.

                                                            2. 4

                                                              Another possibility, and I’m not saying this is a majority of the responses or anything, is that people now have large systems written in Go that need maintenance and moving away from what is considered “idiomatic” Go is, for them, just pragmatism.

                                                              I worked for a company that had a lot of Go code and most of the developers had pretty much decided they didn’t like Go because of the experiences they’d had writing and maintaining that software. Most new stuff was being written in Java and Kotlin instead, but the Go code still needed to be maintained. I could see those folks asking for new features that they feel will help ease their maintenance burden (whether they will or not, who knows).

                                                              That being said, are those really the people the Go team should be listening to? Probably not. But I don’t know how to tell who is who.

                                                              1. 1

                                                                How is it pragmatic? I’m genuinely curious what this would mean, as I haven’t had that much experience with large Go code-bases yet.

                                                                1. 3

                                                                  It’s pragmatic to want to change a language if you don’t like working in that language but you also can’t NOT work in that language (because you’ve got legacy code to maintain). That’s all I meant. My first sentence was a little unclear, in retrospect.

                                                                  1. 2

                                                                    Go is refactoring-hostile. Java is refactoring-friendly. In my opinion.

                                                              1. 11

                                                                This seems like a kind of arbitrary list that skips, among other things, iOS and Android, and that compares a list of technologies invented over ~40 years to a list that’s in its twenties.

                                                                1. 7

                                                                  I noticed that Go was mentioned as a post-1996 technology but Rust was not, which strikes me as rather a big oversight! Granted at least some of the innovations that Rust made available to mainstream programmers predate 1996, but not all of them, and in any case productizing and making existing theoretical innovations mainstream is valuable work in and of itself.

                                                                  In general I agree that this is a pretty arbitrary list of computing-related technologies and there doesn’t seem to be anything special about the 1996 date. I don’t think this essay makes a good case that there is a great software stagnation to begin with (and for that matter, I happened to be reading this twitter thread earlier today, arguing that the broader great stagnation this essay alludes to is itself fake, an artifact of the same sort of refusal to consider as relevant all the ways in which technology has improved in the recent past).

                                                                  1. 2

                                                                    It’s also worth noting that Go is the third or fourth attempt at similar ideas by an overlapping set of authors.

                                                                    1. 1

                                                                      The author may have edited their post since you read it. Rust is there now in the post-1996 list.

                                                                    2. 3

                                                                      I find this kind of casual dismissal that constantly gets voted up on this site really disappointing.

                                                                      1. 2

                                                                        It’s unclear to me how adding iOS or Android to the list would make much of a change to the author’s point.

                                                                        1. 3

                                                                          Considering “Windows” was on the list of pre-1996 tech, I think iOS/Android/touch-based interfaces in general would be a pretty fair inclusion of post-1996 tech. My point is that this seems like an arbitrary grab bag of things to include vs not include, and 1996 seems like a pretty arbitrary dividing line.

                                                                          1. 2

                                                                            I don’t think the list of specific technologies has much of anything to do with the point of how the technologies themselves illustrate bigger ideas. The article is interesting because it makes this point, although I would have much rather seen a deeper dive into the topic since it would have made the point more strongly.

                                                                            What I get from it, and having followed the topic for a while, is that around 1996 it became feasible to implement many of the big ideas dreamed up before due to advancements in hardware. Touch-based interfaces, for example, had been tried in the 60s but couldn’t actually be consumer devices. When you can’t actually build your ideas (except in very small instances) you start to build on the idea itself and not the implementation. This frees you from worrying about the details you can’t foresee anyway.

                                                                             Ideas freed from implementation and maintenance breed more ideas. So there were a lot of them from the 60s into the 80s. Once home computing really took off with the Internet and hardware got pretty fast and cheap, the burden of actually rolling out some of these ideas caught up with them. Are they cool and useful? In many cases, yes. They also come with side effects and details not really foreseen, which is expected. Keeping them going is also a lot of work.

                                                                            So maybe this is why it feels like more radical ideas (like, say, not equating programming environments with terminals) don’t get a lot of attention or work. But if you study the ideas implemented in the last 25 years, you see much less ambition than you do before that.

                                                                            1. 2

                                                                              I think the Twitter thread @Hail_Spacecake posted pretty much sums up my reaction to this idea.

                                                                          2. 2

                                                                            I think a lot of people are getting woosh’d by it. I get the impression he’s talking from a CS perspective. No new paradigms.

                                                                            1. 3

                                                                              Most innovation in constraint programming languages and all innovation in SMT are after 1996. By his own standards, he should be counting things like peer-to-peer and graph databases. What else? Quantum computing. Hololens. Zig. Unison.

                                                                              1. 2

                                                                                Jonathan is a really jaded guy with interesting research ideas. This post got me thinking a lot but I do wish that he would write a more thorough exploration of his point. I think he is really only getting at programming environments and concepts (it’s his focus) but listing the technologies isn’t the best way to get that across. I doubt he sees SMT solvers or quantum computing as something that is particularly innovative with respect to making programming easier and accessible. Unfortunately that is only (sort of) clear from his “human programming” remark.

                                                                            2. 2

                                                                               It would strengthen it. PDAs - with touchscreens, handwriting recognition (whatever happened to that?), etc. - were around in the 90s too.

                                                                              Speaking as someone who only reluctantly gave up his Palm Pilot and Treo, they were in some ways superior, too. Much more obsessive focus on UI latency - especially on Palm - and far less fragile. I can’t remember ever breaking a Palm device, and I have destroyed countless glass screened smartphones.

                                                                              1. 3

                                                                                The Palm Pilot launched in 1996, the year the author claims software “stalled.” It was also created by a startup, which the article blames as the reason for the stall: “There is no room for technology invention in startups.”

                                                                                They also didn’t use touch UIs, they used styluses: no gestures, no multitouch. They weren’t networked, at least not in 1996. They didn’t have cameras (and good digital cameras didn’t exist, and the ML techniques that phones use now to take good pictures hadn’t even been conceived of yet). They couldn’t play music, or videos. Everything was stored in plaintext, rather than encrypted. The “stall” argument, as if everything stopped advancing in 1996, just doesn’t really hold much water to me.

                                                                                1. 1

                                                                                  The Palm is basically a simplified version of what already existed at the time, to make it more feasible to implement properly.

                                                                          1. 1

                                                                            A microkernel.

                                                                            1. 16

                                                                               Having recently introduced a “please explain to me how a | is used in a bash shell” question in my interviews, I am surprised by how many people with claimed “DevOps” knowledge can’t answer that elementary question given examples and time to think it out (granted, on a ~60 sample size).

                                                                              Oh, this is a gem! It will go right next to “why stack the memory area has the same name as stack the data structure” into the pile of most effective interview questions.

                                                                              1. 12

                                                                                 Do these questions even work? Seriously. I remember interviewing someone who didn’t have the best concepts of Linux, shell, etc. but he knew the tools that were needed for the DevOps role and he got the job done; knowing things like what a shell pipeline is doesn’t factor in for me.

                                                                                 In terms of the article itself, like I said above, people know AWS and know how to be productive with the services and frameworks for AWS. That alone is value that’s hard to quantify. Sure, I could save money bringing all the servers back internally or using cheaper datacenters, but I worked at a company that operated that way. You end up doing a lot of busy work chucking bad drives, making tickets to the infrastructure group and waiting for the UNIX Admin group to add more storage to your server. With AWS I can reasonably assume I can spin up as many c5.12xlarge machines as I want, whenever I want, with whatever extras I want. It costs an eighth of a million a year, roughly. I see it as an eighth of a million that cuts out a lot of busy work I don’t care about doing and an eighth of a million that simplifies finding people to do the remaining work I don’t care about doing. The author says money wasted; I see it as money spent so I don’t have to care, and not caring is something I like; hell, it isn’t even my money.

                                                                                1. 4

                                                                                   I remember interviewing someone who didn’t have the best concepts of Linux, shell, etc. but he knew the tools that were needed for the DevOps role and he got the job done

                                                                                  I have to admit, I’ve never interviewed devops, only engineers. And in my experience, it’s more important for an engineer to dig into fundamental processes that he’s working with, and not just to know ready-made recipes to “get the job done”.

                                                                                  1. 7

                                                                                     I agree completely with this statement, and I think this is exactly what the article mentions as one of the lock-in steps. That the person can “get the job done” because “they know the tools” is exactly the issue - the person picked up the vendor-specific tools and is efficient with them. But in my experience, when shit hits the fan, the blind copy-pasting of shell commands starts because the person doesn’t understand the pipe properly.

                                                                                     Now, I don’t mean by that that the commenter above you is wrong. You may still be saving money in the long run. I’m just saying that it also definitely increases that vendor lock-in.

                                                                                  2. 3

                                                                                     I feel like saving your company, of whatever scale, $15,000 a year per big server is worthwhile, as long as it doesn’t end up changing your working hours. I know that where I work, if I found a way to introduce massive savings, I would be rewarded for it. Shame SIP infrastructure is so streamlined already…

                                                                                    1. 2

                                                                                       It is optimized for accuracy, not recall. This question may have some positive correlation with good DevOps. It may just have positive correlation with years of experience and, hence, good DevOps. Hard to quantify.

                                                                                    2. 2

                                                                                      Too bad the author didn’t specify how many is “many”. I would expect some of the interviewees not answering because of interview stress, misunderstanding the question etc.

                                                                                      1. 25

                                                                                        This is not an answer in vogue, but I don’t want ops people who get too stressed to be able to explain shell pipelines.

                                                                                        1. 12

                                                                                          In my experience, a lot of people that get stressed during interviews don’t have any stress problems when on the job.

                                                                                          1. 6

                                                                                            Indeed. I once interviewed an engineer who was completely falling apart with stress. I was their first interview, and I could tell within minutes they had no chance whatsoever of answering my question. So I pivoted the interview to discuss how they were feeling, and why they were having trouble getting into the problem. We ended up abandoning my technical question entirely and chatting for the rest of the interview.

                                                                                            Later, in hiring review, the other interviewers said the candidate nailed every question. Strong hire ratings across the board. Had I pressed on with my own question instead of spending my hour helping them de-stress and get comfortable, we likely never would have hired one of the best I’ve ever worked with.

                                                                                          2. 7

                                                                                            I quite disagree with this, perhaps because I’m the type of person that gets very stressed out by interviews. What you’re saying makes sense if we assume that all stressors are uniform for all people, but that doesn’t really match reality at all.

                                                                                            For me, social situations (and interviews count as social situations) are incredibly, sometimes cripplingly stressful. At worst, I’ve had panic attacks during interviews. However, throughout my entire ops career I’ve worked oncall shifts, and had incidents with millions of dollars on the line, and those are not anywhere near the same. I can handle myself very well during incidents because it’s entirely a different type of stressor.

                                                                                            1. 4

                                                                                               Same in my company. All engineering is on-call for a financial system and it’s very hard to hire someone who gets stressed out during the interview when this person would have to respond to incidents with billions in transit.

                                                                                              1. 4

                                                                                                Yep. I have a concern that in our push to improve interviewing we are overcorrecting.

                                                                                            2. 5

                                                                                               I’m helping my company interview some people in that area. We have a small automated list of questions (around 10 to 12) that we send to candidates that apply, so nobody loses time on things that we’ve agreed interviewees should know.

                                                                                              Less than 10% manage to answer questions like “Which command can NOT show the content of a file?” (with a list having grep/cat/emacs/less/ls/vim).

                                                                                              When candidates pass this test, we interview them, and less than 5% can answer questions like the author mentions, at least in a

                                                                                              1. 3

                                                                                                 Kinda unrelated to the article; it was just an anecdote to say “there’s a load of people that can’t really use a classic server and need more modern IaaS to operate”.

                                                                                                 For the sake of defending my practices though, I did give people 5 minutes to think of the formulation and gave examples via text of how one would use it (e.g. ps aux | grep some_name). I think the number of people that couldn’t answer was ~2/5. As in, I don’t think I did it in an assholish “I want you to fail” way.

                                                                                                It’s basically just a way to figure out if people are comfortable~ish working with a terminal, or at least that they used one more than a few times in their lives.

                                                                                                1. 5

                                                                                                  On the other hand, I can operate a “classic server”, but struggle with k8s and, to some degree, even with AWS. Although I’m sure I can learn, I simply never bothered to do so as I never had a reason or interest. I suppose it’s the same with many who were raised on AWS: they simply never had a reason to learn about pipes.

                                                                                                  1. 1

                                                                                                    I didn’t imply malpractice, rather statistical error. That said, anywhere close to 2/5 in conditions you described… It’s way higher than what I would expect. I didn’t hire any DevOps recently tho, so maybe I’m just unaware how bad things got.

                                                                                                  2. 1

                                                                                                    This is always true for interviews, but this is a measurement error that would be present for any possible interview question.

                                                                                                    1. 1

                                                                                                      Yeah that was my point.

                                                                                                1. 4

                                                                                                  The article reminds me of how we ended up with <> for generics and how newer languages are still copying this mistake.

                                                                                                     That some languages got it right in the case of & precedence is certainly good news; it feels like there is an increasing number of languages lately that divest from <> too.

                                                                                                  1. 4

                                                                                                    Can you please elaborate on why you believe <> is a mistake for generics?

                                                                                                    1. 12

                                                                                                       There’s nothing wrong with using < and > for generics; the problem is using < and > for generics and using < and > for greater-than and less-than in the same language. If I parse identifier < identifier, am I parsing a generic or an expression? In C++, if you encounter a {, (, or a [, then you know that the program is only syntactically well-formed if (after macro expansion) it has a matching }, ), or ] as the next close-some-kind-of-grouped-thingy character. If you parse a <, then you have a lot more that you need to parse to get it right.

                                                                                                       This can lead to some very weird things like std::conditional<a > b, int, float>. This is not syntactically valid, but you need to parse the definition of std::conditional and resolve the template instantiation before you know whether it’s valid syntax. That makes any other tooling that wants to interact with C++ code (highlighting and so on) painful. In that example, you can tell that float > is not an expression, but there are more complicated variants.
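
                                                                                                       To make that concrete, here is a small sketch (a and b are just illustrative constexpr values): the usual workaround is to parenthesise the comparison so that the first > cannot be read as closing the template argument list.

                                                                                                           #include <type_traits>

                                                                                                           constexpr int a = 2;
                                                                                                           constexpr int b = 1;

                                                                                                           // The '>' after 'a' would be taken as the end of the template argument
                                                                                                           // list, so this line does not parse at all:
                                                                                                           //   std::conditional<a > b, int, float>::type x;
                                                                                                           //
                                                                                                           // Parenthesising the comparison removes the ambiguity (C++17 for is_same_v):
                                                                                                           std::conditional<(a > b), int, float>::type y = 0;
                                                                                                           static_assert(std::is_same_v<decltype(y), int>, "a > b here, so int is selected");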

                                                                                                      In contrast, imagine if the language had used []: now it would be std::conditional[a > b, int, float]. Something with no awareness of the surrounding context could figure out this is a generic expression.

                                                                                                      In Verona, we are using square brackets for generics. We’re also requiring brackets for sequences of different operators, so a + b * c is syntax error, a + b + c is allowed and trivially applied left to right, (a + b) * c or a + (b * c) is also allowed and has explicit precedence. This simplifies parsing if you allow user-defined infix operators (e.g. a and b and c is an unambiguous parse) and means that you are never bitten by not understanding the precedence rules (which really don’t make sense to have if you have operator overloading).

                                                                                                      1. 1

                                                                                                        Sounds like a very reasonable language!

                                                                                                        I have also investigated some approaches to cut down operator complexity to the bare minimum required, but interestingly ended up exploring other options:

                                                                                                        • not having unary pre-/post-fix operators (++, --, +, -, !, ~)
                                                                                                        • not having compound operators (+=, -=, &=, …)
                                                                                                        • reducing the number of binary operators (<<, >>, >>>)

                                                                                                         (Though I would argue that there are two things wrong with < and > on their own:

                                                                                                         • Even if < and > weren’t used for comparisons, that would not undo people spending a decade in schools using < and > with that meaning.
                                                                                                         • < and > being “lowercase” in most fonts leads to poor readability compared to [ and ], which are “uppercase” in practically all fonts.)
                                                                                                      2. 4

                                                                                                        We have superior solutions that avoid the issues of <> (no sensible allocation of brackets, hard to read, hard to parse¹, …).

                                                                                                        “C++ ran out of usable brackets” should not be the standard of language design these days; fortunately Eiffel, Scala, Python, Nim and Go seem to agree.


                                                                                                        ¹ 30 years after its introduction in C++, no language has managed to parse templates/generics without minor or major absurdities: required whitespace, unlimited look-ahead, intentional syntactic inconsistencies, ::<>, …

                                                                                                    1. 2

                                                                                                      I think this is interesting, and should perhaps be applied to programming languages as well.

                                                                                                      Hyper-inefficient programming languages like Ruby, Python, Haskell, etc. produce far more CO2 than, for example, C.

                                                                                                      1. 5

                                                                                                        Hyper-inefficient programming languages like Ruby, Python, Haskell

                                                                                                         I hope you realize that Haskell’s performance is much closer to C’s than to Python’s. Haskell usually ranks around the likes of Java in language benchmarks. Either way you look at it, it doesn’t deserve to be called “Hyper-inefficient”…

                                                                                                        1. 3

                                                                                                          I think it depends on how much energy is used developing and compiling code vs energy used during all times the program is run. I expect that equivalent C and Haskell programs take similar amounts of energy to run, and that the Haskell one takes a lot more energy to compile, but less time (and therefore less idle-time energy) to develop. This would make them similarly energy-expensive for most use-cases.

                                                                                                          Scripting languages may require less develop-time energy, but more run-time energy. If run only a few times, they’d use less energy than would be spent writing, compiling, debugging, and running a C program. Run many times, they would lose out to the finished C program.

                                                                                                          1. 3

                                                                                                            That’s actually a very relevant point. My first reaction to your comment was, “but who cares, you build only once”, but that’s not true. I have a beefy laptop that I bought specifically to support a comfortable Haskell development experience. The IDE tooling continuously compiles your code behind the scenes. My team also has a very beefy EC2 instance that serves as a CI environment; it builds all the branches, all the time. We also employ various ways of deploying the application, which means it gets built in several more ways per release image. All of that probably adds up to an energy consumption comparable to that of a significant number of users running the application.

                                                                                                            1. 2

                                                                                                              Then we should include maintenance cost as well. I believe that over the lifetime of a program, the energy put into the initial development is only a part, most probably the smaller part, of the energy needed to maintain it: bug fixing, updates, etc. In this case, theoretically, Haskell should have an advantage, because the language, due to its type-safety restrictions, will force you to make fewer mistakes, both in design and in terms of bugs. I don’t have any numbers to support these claims, it’s just a gut feeling, so don’t take it too seriously.

                                                                                                          2. 4

                                                                                                            There actually have been studies on that question, eg: https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sleFinal.pdf

                                                                                                            1. 1

                                                                                                              I love that paper. If you’re looking for a quick heuristic, energy efficiency strongly correlates with performance. Compare those numbers to these: https://benchmarksgame-team.pages.debian.net/benchmarksgame/which-programs-are-fastest.html

                                                                                                          1. 2

                                                                                                            Does anyone know if WSL 2 will work if Windows itself is already running inside a virtual machine (e.g., VirtualBox)?

                                                                                                            1. 2

                                                                                                              VirtualBox supports nested virtualisation so it should work, you’ll just need to Enable Nested VT-x/AMD-V in your VM settings.
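                                                                                                              For what it’s worth, I believe the same setting can also be toggled from the host’s command line on recent VirtualBox versions (6.0+ for AMD hosts, 6.1+ for Intel, if I recall correctly); “Your Windows VM” below is just a placeholder for whatever your VM is called:

                                                                                                                  VBoxManage modifyvm "Your Windows VM" --nested-hw-virt on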

                                                                                                              1. 1

                                                                                                                I’d be surprised if it did given that they explicitly say it won’t in their FAQ, but hey! Somebody should give it a shot and report back :)

                                                                                                                1. 2

                                                                                                                  Hrm, I think we’re reading the same FAQ differently :)

                                                                                                                  Can I run WSL 2 in a virtual machine?

                                                                                                                  Yes! You need to make sure that the virtual machine has nested virtualization enabled.

                                                                                                                  From the same FAQ that you linked to earlier.

                                                                                                              2. 2

                                                                                                                Most likely it won’t :(

                                                                                                                See the FAQ for details.

                                                                                                                You can run WSL 1 in a VM though. I run it regularly in my Amazon Windows Workspace.

                                                                                                                1. 2

                                                                                                                  Thank you for the link – it answered some additional questions I had as well.

                                                                                                              1. 12

                                                                                                                This is an insightful article, and its lessons can be applied to far more than just JavaScript…

                                                                                                                1. 4

                                                                                                                  I guess there will always be a group of people learning this. I wrote such an article myself years ago:

                                                                                                                  https://ecc-comp.blogspot.com/2016/10/a-few-thoughts-on-ben-northrops.html

                                                                                                                1. 33

                                                                                                                  I used to write Free software for commercial, proprietary operating systems. Then I realised: why on Earth would I donate my precious time to enriching Microsoft shareholders?

                                                                                                                  Not that I have anything against corporations, shareholders, or the profit motive in general - quite the contrary in fact. But if I’m going to be spending my time on someone else’s commercial ecosystem, I’d like to be paid for it, thanks.

                                                                                                                  1. 4

                                                                                                                    You make a very insightful point.

                                                                                                                    I can think of one potentially good reason to write Free Software for commercial, proprietary operating systems:

                                                                                                                    People on those commercial, proprietary operating systems might use that software… and then realize it’s also available on free operating systems, such as Linux, making it easier for them to make the switch.

                                                                                                                    1. 4

                                                                                                                      This only works if people understand what they’re getting (user freedom) and that the software they like is available on freedom-respecting platforms. If they see it as just another piece of $0 windows software, then all you do is help entrench current platform dominance.

                                                                                                                      1. 1

                                                                                                                        Only if their existing platform is free-as-in-beer. I switched from Windows 2000 to FreeBSD by first replacing all of the programs that I used on a daily basis with cross-platform ones that worked on both systems. Then by using a FreeBSD machine and remote X11 on the Windows system, with most things running on the FreeBSD box displayed on the Windows machine and a few things running locally. Then by switching the last things over to FreeBSD. These days, you could do the second step with a VM.

                                                                                                                        1. 3

                                                                                                                          I think the cost of the operating system is “$0” in the consumer’s mind. It’s already rolled into the cost of the hardware in most cases.

                                                                                                                        2. 1

                                                                                                                          Besides free (as in beer) and free (as in libre), another reason people switch to Linux is because Windows is a buggy mess and/or Mac is extremely expensive. I’m one of those people, having switched to Linux on the desktop 2-3 years ago for exactly those reasons.

                                                                                                                    1. 22

                                                                                                                      This is a classic Microsoft move. It’s been done exactly the same with other pieces of software in the .NET Core community recently.

                                                                                                                      Microsoft won’t embrace any outsider technology. Instead, they’ll build their copycat, and expect the rest of the world to embrace them.

                                                                                                                      1. 7

                                                                                                                        Microsoft won’t embrace any outsider technology. Instead, they’ll build their copycat, and expect the rest of the world to embrace them.

                                                                                                                        There was a long discussion about a similar situation when Autofac was strangled, I expect ImageSharp to get competition soon, and probably others as well. (Btw., ImageSharp has a pretty nice API; I really liked working with it.)

                                                                                                                        The really bad part is that the rest of the world automatically flocks around MS tech, even if it is technologically inferior (e.g. System.ServiceModel.SyndicationFeed, with its useless common abstraction for RSS/Atom feeds and a terrible API, vs. CodeHollow.FeedReader, which was a breeze; this is just one example off the top of my head).

                                                                                                                        I try to use 3rd-party tech on .NET, because I have had bad experiences with MS APIs. They are often badly designed and constantly in flux, needing constant rewrites, while I can usually get better APIs with a lower maintenance burden for the cost of some performance (or not), and usually with fewer (useless) features.

                                                                                                                        1. 4

                                                                                                                          Can relate to this. I worked 4 years with Microsoft technologies. Using Microsoft’s own libraries was always a painful experience, and it was better to use some other third-party library. Every single time.

                                                                                                                          In four years I had to learn how to start a new .NET Core project something like 5 times.

                                                                                                                          1. 1

                                                                                                                            If you don’t mind me asking, where did you go after MS? I moved to Ruby on Rails myself (nine years ago now!).

                                                                                                                            1. 1

                                                                                                                              After that, about a year ago, I moved to the Java world. The company I’m working for has a Vert.x+Groovy monolith, being split up into Spring Boot+Java microservices. Not my dream stack, but I can be productive on it, and I’m starting to understand the Spring mindset, so, happy with it :)

                                                                                                                        2. 5

                                                                                                                          And yet people constantly insist that Microsoft is different now… a good, benign Microsoft that embraces open source!

                                                                                                                          And they are still pulling these kinds of shenanigans all the time.

                                                                                                                          1. 2

                                                                                                                            It’s in their DNA, it seems. Little has changed since the days of The Halloween Documents.

                                                                                                                            http://www.catb.org/~esr/halloween/

                                                                                                                          2. 5

                                                                                                                            And to Stac Electronics with their Stacker compression software in 1993. MS was looking to acquire, then didn’t, and released MS-DOS 6.0 with DoubleSpace compression, developed in-house.

                                                                                                                            This leopard hasn’t really changed its spots.

                                                                                                                            1. 3

                                                                                                                              Just like when Apple stole Duet Display and F.lux.

                                                                                                                              1. 3

                                                                                                                                Duet feels very different to me; using iPad displays as secondary Mac displays had been a major feature request since literally the very first ones came out. I remember talking with people about this when the iPad was new, way back at Fog Creek at latest, which would put these discussions at least as far back as 2014. Duet didn’t even launch until 2015. I’m not saying they shouldn’t be upset, but that one felt a bit obvious to me. And at any rate, the way screen sharing works on recent iPadOS versions is honestly pretty different from what Duet does. There’s overlap, of course, but I don’t feel (for better or worse) that Duet got directly cloned, nor do I feel like it was such an innovative concept that the authors can say “no one coulda thought of this!”.

                                                                                                                                F.lux, and AppGet, and (just to throw in an oldie) Sherlock, feel very different to me. F.lux was, at least as far as I’m aware, a brand-new concept that didn’t have precedents and certainly didn’t feel “obvious” to me; Apple integrating it was a big deal, and felt like a rip-off. This is a case where an app did something people hadn’t been asking for, and Apple cloned the concept fairly directly.

                                                                                                                                AppGet and Sherlock are different from F.lux, but end up feeling stolen because they’re both examples where there’s a clear need for something in that space, but the relevant companies directly cloned the competitor. Apple had been working on better search since the abandoned Copland project, but the level to which Sherlock copied Watson in both appearance and name just felt gross to me at the time. Likewise, copying the entire way AppGet works, and calling the result WinGet, just feels…well, duplicitous, at best.

                                                                                                                                I’m not trying to be an Apple apologist here, but I think lumping Duet into this discussion starts to miss the point a bit.

                                                                                                                            1. 28

                                                                                                                              Stephen Wolfram is considered a crank by professional scientists; see for example Scott Aaronson’s review of Wolfram’s A new kind of science.

                                                                                                                              The “buyer beware” signal from this breathless celebration of Wolfram’s genius (by Wolfram) is strong.

                                                                                                                              1. 11

                                                                                                                                Also this one about him suing people for writing math proofs: http://bactra.org/reviews/wolfram/

                                                                                                                                1. 3

                                                                                                                                  Thanks so much for sharing that. It’s sent me down a rabbit hole for the past two hours, in the course of which I have discovered many fascinating things. Like Natalie Portman having an Erdös-Bacon-Sabbath number of 11.

                                                                                                                                2. 3

                                                                                                                                  I approach these kinds of things from a philosophical perspective, so for me reading Stephen Hawking and reading the Bhagavad Gita are both valid forms of investigation of the universe. The idea that ‘professional scientists don’t approve of your choice of reading material’ has always been something that I find very annoying and unhelpful. It is essentially none of their business what I read.

                                                                                                                                  Even if I were about to spend money or time building an experimental setup based on the information in that article, that would still be valid science if I did it rigorously and correctly. Any result achieved by the setup would be valid science and either falsify or fail to falsify the tested hypothesis.

                                                                                                                                  If thinkers always listened to what ‘professional scientists’ said we would have no plate tectonics, no theory of evolution, and no germ theory, to name just a few.

                                                                                                                                  I think I phrased this more harshly than I meant to; your comment is relevant and may be of use to some readers. But I can’t let it go unchallenged, as I find there are a lot of impressionable science enthusiasts that read something like that and take it as an invitation to start sending hate mail and flaming people on forums for discussing this kind of idea. I feel like blindly clinging to accepted dogma is extremely detrimental to the practice of good science, whereas wild imaginative ideas with little supporting evidence have proven to be quite beneficial to it in the past.

                                                                                                                                  I will also admit that the breathless celebration of one’s own genius, as shown by Wolfram and others, is also annoying and unhelpful. Nevertheless I found this article fascinating and I am glad it was shared here.

                                                                                                                                  1. 2

                                                                                                                                    I feel like blindly clinging to accepted dogma is extremely detrimental to the practice of good science, whereas wild imaginative ideas with little supporting evidence have proven to be quite beneficial to it in the past.

                                                                                                                                    I agree and I’m going to explain how at length.

                                                                                                                                    The scientific method is a way of establishing consensus. It is a tool made by humans for other humans. Over time a culture and a way of doing things has developed around it.

                                                                                                                                    The two main products of science are theories that explain observed phenomena and observations and experiments that can confirm or refute theories. Because of the values of the culture of science (squishy humans like the things they know), any new theory has to explain existing observations as well as or better than the incumbents. A more complex, technically difficult, or simply innovative theory can only replace existing theories if it explains more known phenomena, or predicts new phenomena that can be verified.

                                                                                                                                    This is how, for example, the Copernican view of the solar system won out, or how general relativity replaced Newtonian gravity, a vastly simpler theory. As far as I know, wild imaginative ideas (quantum mechanics, relativity, Newtonian physics, the Copernican view, …) have won out because they have been supported by evidence (the double-slit experiment, the Michelson-Morley experiment, elliptical orbits, the phases of Venus, …).

                                                                                                                                    When Einstein proposed general relativity, he was literally an Einstein in the current cultural meaning of the term. Nonetheless, he had to provide an extensive list of predictions (gravity lenses, the orbit of Mercury, gravitational redshift) that were not explained by Newton’s theory for his extremely technically complicated new theory to be accepted.

                                                                                                                                    This brings us to Wolfram. He has done none of those things. He has not shown how his framework explains existing phenomena. He has not made new predictions that can be verified experimentally. What he has done is make some models and proclaim himself a genius. He doesn’t want to do the work needed to be accepted within the scientific system, and new ideas alone are not enough to move the consensus.

                                                                                                                                    The thing is, his ideas are out there. If he explains them well enough, they can be picked up by people who are willing to work within the squishy system of science. If they have genuinely new things to offer, they will be accepted, because they’ll offer a competitive advantage to the people who accept them, who will be able to figure out new things faster.

                                                                                                                                    Wolfram has been proclaiming himself a genius since the 80’s, and no one has yet taken him up on his offerings.

                                                                                                                                    I approach these kinds of things from a philosophical perspective so for me reading Stephen Hawking and reading the Bhagavad Gita are both valid forms of investigation of the universe.

                                                                                                                                    Hey, whatever works for you. Historically, we’ve tried similar approaches to building knowledge consensus (shamanism, ancient Greek philosophy, religion, etc) and when they work they end up much more dogmatic and hostile to outside challenges than the scientific method. If you’re just doing things for yourself you don’t need any consensus.

                                                                                                                                    1. 1

                                                                                                                                      This brings us to Wolfram. He has done none of those things. He has not shown how his framework explains existing phenomena.

                                                                                                                                      Isn’t that what the Physics Project is about? Also, in this article he explains several existing phenomena, at least in broad outline. I am not in a position to verify his claims, but the article offers some explanations and links to details.

                                                                                                                                  2. 3

                                                                                                                                    I always read Wolfram with some kind of hope. I am kind of biased, in that I expect what he aims for will most likely fail, but on the other hand, I think there are sometimes interesting ideas in between. I sometimes look there for smaller ideas that I can comprehend.

                                                                                                                                    But thanks for the critique! I needed that

                                                                                                                                    1. 4

                                                                                                                                      He definitely has a habit of claiming original insights and not citing prior work. But I think “crank” means something different – it’s someone who doesn’t actually understand the field, has no background in it, but claims original insights (that usually fall down really quickly).

                                                                                                                                      I think Scott Aaronson has a good page about how to tell if someone’s a crank without doing a full reading.

                                                                                                                                      https://www.scottaaronson.com/blog/?p=304

                                                                                                                                      Wolfram goes beyond factual claims and “sells” his stuff pretty hard, and this post seems like a good example of that.

                                                                                                                                      I would put him in a different category than “crank” though. He’s more like a talented guy who became an “outsider scientist”, and I would say there’s a nonzero probability that he’s right about something interesting, even if he is going about it in a very ham-fisted way…

                                                                                                                                      It’s definitely impressive he managed to build a profitable company and work on improving the same software for decades… Cranks don’t do that. You can’t be totally “out of it” and accomplish that.

                                                                                                                                      1. 2

                                                                                                                                        If you read through the article, it covers many of Scott Aaronson’s concerns.

                                                                                                                                      1. 1

                                                                                                                                        I’m not a programming language expert, but I think the fact that Go and D have non-deterministic garbage collection means they can’t really fully “replace” C, C++, and Rust.

                                                                                                                                        I’m not a Rust expert either, but it seems to be following the same path as C++ into crippling complexity.

                                                                                                                                        These things suggest to me there’s still no real replacement for C, at least not one with any real “traction.”

                                                                                                                                        1. 2

                                                                                                                                          These things suggest to me there’s still no real replacement for C, at least not one with any real “traction.”

                                                                                                                                          Maybe Zig?

                                                                                                                                          1. 1

                                                                                                                                            The problem is that–and somebody who has more up-to-date information, by all means correct me!–even in unmanaged languages without GC, you can still totally have lag in freeing memory.

                                                                                                                                            Depending on the standard library and malloc implementation, you can’t even be guaranteed uniform behavior. Consider the case where you trigger a scan and coalesce of the underlying free block list for malloc or similar–I don’t think there are any APIs that conveniently say “okay, please fill this and do a bit of work coalescing blocks because I know I have more allocations of some size coming up.” or “okay, please immediately coalesce everything” or even “okay, fuck, just turn coalescing off because I Know What I’m Doing”.

                                                                                                                                            If people really need deterministic memory behavior, they need to put in extra work in every language.
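                                                                                                                                            As a minimal sketch of what that “extra work” can look like (my own toy example, not a real allocator API): a fixed-size bump arena where allocation and release are constant-time because it never scans or coalesces a free list, at the price of only being able to free everything at once.

                                                                                                                                                #include <cstddef>

                                                                                                                                                // Toy bump arena: deterministic, constant-time allocate/reset, no coalescing.
                                                                                                                                                class BumpArena {
                                                                                                                                                public:
                                                                                                                                                    explicit BumpArena(std::size_t capacity)
                                                                                                                                                        : buffer_(new std::byte[capacity]), capacity_(capacity), used_(0) {}
                                                                                                                                                    ~BumpArena() { delete[] buffer_; }

                                                                                                                                                    // Hand out 'size' bytes, rounded up to 'align'; returns nullptr when full.
                                                                                                                                                    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
                                                                                                                                                        std::size_t start = (used_ + align - 1) & ~(align - 1);
                                                                                                                                                        if (start + size > capacity_) return nullptr;
                                                                                                                                                        used_ = start + size;
                                                                                                                                                        return buffer_ + start;
                                                                                                                                                    }

                                                                                                                                                    // The only "free": drop everything the arena handed out, in O(1).
                                                                                                                                                    void reset() { used_ = 0; }

                                                                                                                                                private:
                                                                                                                                                    std::byte* buffer_;
                                                                                                                                                    std::size_t capacity_;
                                                                                                                                                    std::size_t used_;
                                                                                                                                                };

                                                                                                                                                int main() {
                                                                                                                                                    BumpArena arena(1 << 16);  // capacity chosen up front: 64 KiB
                                                                                                                                                    int* xs = static_cast<int*>(arena.allocate(128 * sizeof(int)));
                                                                                                                                                    if (xs) xs[0] = 42;
                                                                                                                                                    arena.reset();             // constant-time, no hidden block-list work
                                                                                                                                                    return 0;
                                                                                                                                                }

                                                                                                                                            Real-time and embedded code tends to use variations of this (pools, slabs, arenas) for exactly the reason described above.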

                                                                                                                                            1. 4
                                                                                                                                              1. Set a deadline of end of year 2020 to have the COBOL system totally replaced, and offer a $2 million dollar pooled bonus to any employees that can pull it off and that stay to pull it off. Those employees from #1 would still get their retirement, and sweet bonus.

                                                                                                                                              Yeah, a system hastily built in a matter of months to get a cash bonus will surely be well designed and stand the test of time 😒

                                                                                                                                              1. 2

                                                                                                                                                It’s not that much different from current religion.

                                                                                                                                                Gotta get your stories done by the end of the sprint!

                                                                                                                                                Hurry, hurry, hurry!

                                                                                                                                              2. 2

                                                                                                                                                Wow, Zed is still at it? Missed this guy’s crazy rants.

                                                                                                                                                1. 1

                                                                                                                                                  I wonder if he still uses Python 2