1. 4

    I love the Raspberry Pi, but it seems like this use case would be better served by Docker or Vagrant

    1. 5

      I agree, but I think there’s a lot of value in the isolation aspect. If you use multiple machines to develop around the house, you can push them all to this device. And if you leave the device configured the same as your production website, it’s isolated from any wacky changes you might make on your development machines.

      For enterprise work, I’d never consider this, but for a casual blogger or developer I think it’s a neat and simple solution.

      1. 2

        And if you leave the device configured the same as your production website, it’s isolated from any wacky changes you might make on your development machines.

        Pin your transitive dependencies and use sandboxed builds. Maybe it’s because I am using Rust, C++, et al., but doing builds on a Pi would just be painful, compared to a modern many-core development machine.

        1. 1

          Plus what smaddox said, dockerizing the whole thing practically guarantees the same setup as prod.

        2. 1

          The thing I always run into is the wacky stuff I do to the RPI that I forget how to redo when I need to upgrade, the disk fails, etc. I think “Ansible” is the solution, but I need to take the time to learn it.

      1. 3

        Interesting read. Not a particularly practical attack, since you need to be able to encrypt arbitrary messages (which for a pen-and-paper cypher implies you have the key), but they point out that the susceptibility to this slide attack suggests there are probably other more practical attacks without that requirement.

        1. 2

          I don’t know about that hello world… https://www.beeflang.org/docs/getting-start/

          1. 2

            As much as it is silly to need so many lines to write hello world, is it really a problem? You wouldn’t want to use that language to teach people programming, but in any program of sufficient size, this boilerplate is going to be dwarfed by the amount of actual code you’ll have.

            That being said, I still think needing that much code to say hello world is really silly.

            1. 2

              It’s less how much code is required, and more that you need to create a class and have multiple levels of nesting for the entrypoint. Very Java-like.

          1. 1

            You could probably beat both with a bloom filter based approach.

            1. 2

              I guess that could work: probe for an item; if its bits are not set, add the item to a list, otherwise skip it. This would however give inexact answers. There is a certain probability of collisions, which may give false positives (where the Bloom filter indicates that an item is present when it’s not). So, your final list of unique items may have items missing.

              If you want an approximate answer to the number of unique items (as opposed to getting the actual items), HyperLogLog would be very small and efficient.

              1. 2

                Generally you use a bloom filter in addition to some other structure, as an optimisation. So here you’d do something like

                1. Check bloom filter
                  a. it's there: actually check the underlying set
                  b. it's not: add it to the underlying set, then add it to the bloom filter
                

                Of course this only saves you if checking the bloom filter is much cheaper than checking the underlying set. This might be the case if you distribute the bloom filter to edge nodes but your set is in a database that needs to be queried (this is how Chrome’s Safe Browsing works: your browser has a copy of the bloom filter and you only query Google if there’s a hit). And of course now you have race conditions between them. It depends a lot on the cardinality of your data too, because if you’re mostly seeing the same few records all of the time then the bloom filter won’t really save you anything.

                For just finding the unique items when you’ve already got an array that fits in memory, it’s pretty unlikely that a bloom filter beats more naive approaches.
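
                For reference, the check-then-confirm pattern above can be sketched in Rust like this (a toy hand-rolled filter with two probes; the size and hash mixing are arbitrary choices for illustration, not tuned):

                ```rust
                use std::collections::HashSet;
                use std::collections::hash_map::DefaultHasher;
                use std::hash::{Hash, Hasher};

                /// Toy bloom filter: k = 2 probes into a fixed-size bit array.
                struct Bloom {
                    bits: Vec<bool>,
                }

                impl Bloom {
                    fn new(size: usize) -> Self {
                        Bloom { bits: vec![false; size] }
                    }

                    fn probes<T: Hash>(&self, item: &T) -> [usize; 2] {
                        let mut h = DefaultHasher::new();
                        item.hash(&mut h);
                        let a = h.finish();
                        let b = a.wrapping_mul(0x9e37_79b9_7f4a_7c15); // cheap second hash
                        [a as usize % self.bits.len(), b as usize % self.bits.len()]
                    }

                    fn maybe_contains<T: Hash>(&self, item: &T) -> bool {
                        self.probes(item).iter().all(|&i| self.bits[i])
                    }

                    fn insert<T: Hash>(&mut self, item: &T) {
                        for i in self.probes(item) {
                            self.bits[i] = true;
                        }
                    }
                }

                /// Exact dedup: the bloom filter only short-circuits the
                /// "definitely new" case; a bloom hit is confirmed in the real set.
                fn unique(items: &[&str]) -> Vec<String> {
                    let mut bloom = Bloom::new(1024);
                    let mut seen: HashSet<String> = HashSet::new();
                    let mut out = Vec::new();
                    for &item in items {
                        if bloom.maybe_contains(&item) && seen.contains(item) {
                            continue; // genuinely seen before
                        }
                        // Either a bloom miss (definitely new) or a false positive.
                        bloom.insert(&item);
                        seen.insert(item.to_string());
                        out.push(item.to_string());
                    }
                    out
                }

                fn main() {
                    println!("{:?}", unique(&["apple", "pear", "apple", "plum", "pear"]));
                    // → ["apple", "pear", "plum"]
                }
                ```

                Because every bloom hit is double-checked against the set, the result stays exact; the filter can only save (or waste) time, never change the answer.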

                1. 1

                  because if you’re mostly seeing the same few records all of the time then the bloom filter won’t really save you anything.

                  Yes, but this is the case where the hash is already good, so I’d expect bloom+hash to work well for low-uniqueness and high-uniqueness.

            1. 8

              I frowned when I saw that one character difference. I’m glad that’s now fixed. It’s really annoying when a standard library gives you a foot gun like that.

              1. 4

                Agreed. Rust’s stdlib containers do not allocate by default. This makes a lot more sense to me.

                1. 2

                  Assuming “Java 1.3” meant J2SE 1.3, it was released in 2000. I’m not sure that initializing a HashMap that way is really a footgun. We don’t know the context. It’s possible it was used in the application in a non-expected way, by someone who was not familiar with Java.

                1. 5

                  I was reminded of this short story while having a conversation with a close friend a couple days ago. I don’t remember where I first learned of it, but it’s a great and insightful read, and as far as I can tell it’s never been linked here before.

                  1. 2

                    I’ve been wanting to see more material like this, i.e. making Casey’s API talk more digestible to non game devs, and have considered taking a stab at it myself. Great to see some others had the same idea.

                    It’s good that they link Casey’s API talk. It would be better if they made it more clear that this is all just repeating and expanding on his talk.

                    1. 1

                      We need more things like this to show how absurd it is that encryption is considered munitions and export controlled.

                      I hope some (more?) crypto experts take a look at this and see if they can crack and/or improve it.

                      1. 1

                        I don’t think this would be considered particularly strong cryptography, although it’s possible that the use of such a large secret key would be sufficient to protect the case of a small number of messages (or more, but longer, messages).

                      1. 2

                        Great read.

                        We usually have a type called State defined at the top of each component. For example:

                        type State = {
                            state: "searching",
                        } | {
                            state: "failed",
                            error: string | null,
                        } | {
                            state: "results_found",
                            results: any[],
                        }
                        
                        1. 2

                          This is closer to how I do it. But I split out the definition of each variant, and define helper functions to construct the variant from the data. The result is close to algebraic data types, except for the lack of exhaustiveness checking. I’ve seen some articles that try to use the type system to get exhaustiveness checking, but I haven’t had any luck with it so far.
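
                          For comparison, here is a sketch of the same shape in a language that has sum types natively (Rust, with hypothetical variant and field names): the compiler rejects any `match` that misses a variant, which is exactly the checking being approximated above.

                          ```rust
                          enum State {
                              Searching,
                              Failed { error: Option<String> },
                              ResultsFound { results: Vec<String> },
                          }

                          fn describe(s: &State) -> String {
                              // Removing any arm below is a compile error: exhaustiveness
                              // checking comes for free with the enum.
                              match s {
                                  State::Searching => "searching".to_string(),
                                  State::Failed { error } => {
                                      format!("failed: {}", error.as_deref().unwrap_or("unknown"))
                                  }
                                  State::ResultsFound { results } => format!("{} results", results.len()),
                              }
                          }

                          fn main() {
                              let s = State::ResultsFound { results: vec!["a".to_string()] };
                              println!("{}", describe(&s)); // → 1 results
                              println!("{}", describe(&State::Failed { error: None })); // → failed: unknown
                          }
                          ```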

                          1. 1

                            We implement an “assert never” function to help with exhaustiveness checking with this type of state definition and it’s been working pretty well (catching missing cases, alerting when unknown values are used).

                            Found this similar project on GitHub:

                            https://github.com/aikoven/assert-never/blob/master/README.md

                            1. 2

                              Yeah, that’s what I tried, but I think it only works if you’re returning from all the branches, which did not apply for me.

                        1. 7

                          Spend a few minutes learning LaTeX.

                          Markdown is fine for READMEs and the like, but whenever I’m writing a design document, I write it in LaTeX. Most folks use Google Docs, but if I’m writing something substantial, I find I’m better off writing the first draft in LaTeX, then exporting some HTML and putting it into Google Docs for everyone else to edit/comment/etc.

                          There’s a /lot/ to learn about typesetting that’s largely ignored, but the fact that you can simply compile beautifully rendered documents makes it a useful tool.

                          1. 8

                            LaTeX is great but the language itself has a steep learning curve. I’ve written papers in it and to this day maintain my résumé using it.

                            I’ve spent the last few months designing a workflow around the pandoc ecosystem and really like what I’ve got out of it. My team has now authored a few internal papers using the workflow, which is composed of pandoc and a few filters, a Markdown file for each chapter, some graphics in various formats, and a Makefile with several relevant tasks that glue it all together. We’re producing professional-looking mathematical documentation in no time, with folks focusing on writing and not typesetting. I’ve got other teams lining up to get educated on how to use it. It’s overengineering at its finest, but the workflow has proven worth every minute spent.

                            1. 2

                              +1 for LaTeX resume/CV. It makes it SO much easier to customize for each application.

                              1. 1

                                Is one of these papers public? Or do you have a post on your process and outcome?

                                1. 3

                                  None are public. You may have inspired me to write a post about the process, though!

                              2. 1

                                I prefer markdown and a static site generator (Jekyll) for design docs, because I want others to contribute to, build on, and maintain them. It’s hard enough getting contribs from analysts and PMs and whatnot in markdown; I’m not brave enough to try LaTeX source.

                              1. 2

                                Well that’s unfortunate… It looks like it requires somewhat contrived code, though. So I guess it’s still safer than C/C++, where this kind of unsoundness is ubiquitous.

                                Hopefully the community can find a good solution that minimizes breaking changes.

                                1. 13

                                  Hopefully the community can find a good solution that minimizes breaking changes.

                                    I’d prefer having the issue fixed in the best, most principled way people can come up with. The feature was stabilized less than a year ago; even if it were necessary to unstabilize it and go back to the drawing board, I’d be in favor of that.

                                  Rust is way too young to be accreting technical debt at such a fundamental level. Putting band-aids around the issue will only compound further issues in the future.

                                  1. 4

                                    Soo… Where the hell do I go next, once Mojave is out of support? Linux laptops just don’t compare, and don’t get me started on Windows.

                                    1. 7

                                      Linux on a desktop is dirt cheap (something like 4x the performance per dollar).

                                        If you mostly use the laptop in two places (e.g. work desk and home desk), you can outfit both sites with top-quality desktop equipment for a fraction of the price of a new MacBook, and have cash left over for a Chromebook or similar (for portable browsing/email).

                                        I only have one work location (home office), so I’m working on the fastest machine I’ve ever had, running Ubuntu (24 cores, 64 GB RAM). Best computing environment I’ve had in years.

                                        I miss one or two things from OSX (LICEcap, and the Preview.app and Mail.app alternatives are not as good), but having a working package manager etc. is pretty great.

                                      1. 2

                                        I disagree with this. I switched to Mac originally because it was the cheapest way to get a decent spec laptop with a metal case and ssd. I don’t believe that has changed.

                                        1. 6

                                          Word 4 of my response: “Desktop”.

                                          I’m not aware of a good linux laptop either. My point is that for the price of a macbook pro, you can get 4 desktop machines, each of which is faster than the macbook.

                                          1. 1

                                            What about https://system76.com/ ? I’ve heard positive reviews, but am not a customer (still running an old 2013 macbook pro).

                                            1. 1

                                                I haven’t heard anything bad about them, but IME reviews for an expensive niche provider tend strongly positive, since the reviewers have sought out that particular niche, so it’s hard to know.

                                          2. 3

                                            If those are your only requirements, there are other metal laptops (Razer) and otherwise well built ones (Dell XPS) with good specs. Price should be around the same (but I haven’t actually done a good comparison).

                                            1. 1

                                                My fully specced Dell Precision (enterprise XPS with a Xeon instead of an i7) was almost a couple thousand euros cheaper than a fully decked MacBook Pro. With the addition of having 32 GB of RAM when Macs had only a 16 GB maximum.

                                              1. 1

                                                  How’s the battery life? IIRC power consumption was one of the key arguments in favor of choosing laptop hardware that could only run 16 GB.

                                                1. 2

                                                  Two part answer:

                                                    IIRC RAM has the least energy draw of all components in a computer, so I didn’t give any thought to that when picking the configuration. I think the “RAM drastically affects battery life” idea came from Apple marketing, as an answer to why they didn’t have more than 16 GB in their machines (and nowadays they also have 32 GB laptops).

                                                    I rarely use my work laptop (which that machine is) without peripherals, so I don’t have too clear an idea of battery life while working. But it does manage half a work day (so around 3.5 - 4 hours) without problems. OTOH the system I mostly develop is half a dozen “microservices” and I dev against integration tests which:

                                                    • run non-trivial test interactions in a browser (the average suite transfers 0.5 - 2.5 GB of data with caches enabled).
                                                    • also involve heavy calculations happening in Postgres and the backend servers

                                                    So the load during those battery runs hasn’t been exactly light.

                                                  1. 1

                                                      IIRC RAM has the least energy draw of all components in a computer, so I didn’t give any thought to that when picking the configuration. I think the “RAM drastically affects battery life” idea came from Apple marketing, as an answer to why they didn’t have more than 16 GB in their machines (and nowadays they also have 32 GB laptops).

                                                    By my understanding, RAM has almost no energy draw. Available motherboard chipsets capable of running more than 16gb (even if you only install 16gb in them), however…

                                                    Half a day is pretty good; it means you can sit in the park in good weather.

                                                    1. 1

                                                      You’re right and wrong. On a high end desktop computer a CPU can use 150+ watts. But a MacBook processor running at full power only uses ~35W, and only a couple watts idle. RAM on the other hand uses a constant amount of power, constantly refreshing memory cells as long as it’s powered on.

                                                      I don’t know about RAM in MacBooks, but IIRC full size DDR4 DIMMs use about 0.375W per GB. So 6W for 16GB, 12W for 32GB.

                                                      Assuming some casual use drawing 10W CPU power, 10+6W vs 10+12W makes a pretty big difference in battery life, that’s 37% more power. Assuming 3W / 16GB, that’s still a 23% increase in power consumption to power 32GB RAM.

                                                      These numbers are all approximate / from memory, but nevertheless you can see the huge difference between desktops and laptops when it comes to power economy.

                                          3. 1

                                            Exactly what I’m thinking :( Probably just install Linux

                                          1. 9

                                            Definitely fascinating but why do they say

                                            Machine learning algorithms read the data back by decoding images and patterns that are created as polarized light shines through the glass.

                                            ?

                                            Is this a best-effort storage medium where the data that’s read back is just the best guess as to what’s there? Or does it store a bit-for-bit copy? Was the optimal mechanism discovered by ML but now simply used directly? Or did the writers use the words “machine learning algorithms” as buzzwords?

                                            1. 5

                                              ALL storage is just the best guess as to what’s there. You need error correcting code since no underlying storage is perfect, but enough simultaneous errors can defeat any error correcting code.

                                              As for machine learning algorithms, modern error correcting codes use belief propagation, and belief propagation is usually described as a machine learning algorithm. It is not a buzzword.

                                              1. 4

                                                ALL storage is just the best guess as to what’s there.

                                                Of course, I guess I meant…more analog-y than digital.

                                                As for machine learning algorithms, modern error correcting codes use belief propagation, and belief propagation is usually described as a machine learning algorithm. It is not a buzzword.

                                                Interesting. I’m not up on the nomenclature. I guess I’d feel weird if someone said “this SATA drive has state-of-the-art machine learning algorithms to read your data.” Not saying you’re wrong, just that I’m apparently out of touch.

                                                1. 3

                                                  Not saying you’re wrong, just that I’m apparently out of touch.

                                                  Machine learning is the new buzzword, so it gets shoehorned in everywhere.

                                                  Just remember, linear regression is machine learning.

                                                  1. 3

                                                    I happened to study error correcting codes well before the current deep learning craze. It was still machine learning then, so it is not shoehorning.

                                                    1. 3

                                                      “I trained a neural network to solve this problem.”

                                                      “You mean you trained Joe the intern to do it?”

                                                      “….yes”

                                                    2. 2

                                                      As far as I can tell, Project Silica storage is a bit-for-bit digital storage.

                                                  2. 3

                                                    This is nothing more than a guess, but the algorithms required to determine a 3D structure from scattered light are extremely compute intensive. I’m guessing they use machine learning to approximate the algorithm with a much cheaper learned function.

                                                  1. 8

                                                    I honestly don’t expect to choose C++ over C for any project again. C99 has proven itself to be much more useful for my general projects.

                                                    I guess these projects don’t deal with strings a lot…

                                                    The defer keyword. […] That’s the premise behind smart pointers, RAII, and similar features. Zig’s method blows all of those out of the water.

                                                    How does manual scope based destruction “blow out of the water” the automatic one (RAII)?!?

                                                    In Rust, C++ and D (when you use std.typecons.scoped), you literally cannot forget the defer free(something), because it’s implicit.
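
                                                    As a minimal Rust sketch of what “implicit” means here (with a hypothetical `Buffer` type standing in for any resource), the cleanup in `Drop` runs on every exit path with no call-site code to forget:

                                                    ```rust
                                                    struct Buffer {
                                                        data: Vec<u8>,
                                                    }

                                                    impl Drop for Buffer {
                                                        fn drop(&mut self) {
                                                            // Runs automatically on every exit path: normal return,
                                                            // early return, even panic unwinding. No call site to forget.
                                                            println!("freed {} bytes", self.data.len());
                                                        }
                                                    }

                                                    fn use_buffer(fail: bool) -> Result<usize, String> {
                                                        let buf = Buffer { data: vec![0; 1024] };
                                                        if fail {
                                                            return Err("oops".into()); // buf is still dropped here
                                                        }
                                                        Ok(buf.data.len()) // ...and here
                                                    }

                                                    fn main() {
                                                        assert_eq!(use_buffer(false), Ok(1024));
                                                        assert!(use_buffer(true).is_err());
                                                    }
                                                    ```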

                                                    1. 4

                                                      i think there’s merit in having defer instead of RAII, but it only covers a third of RAII (the part where you don’t have to drop at every exit point)

                                                      if you had a second feature, where the compiler made sure every variable was moved before its scope ended, that’d get you the second part of RAII. the memory safety part. the most important part.

                                                      the third part is basically generics/traits. there’s a common “drop” interface. i think it’s reasonable to skip that part if you’re going for something small and c-like (and there’s some added flexibility with pool allocators and such, if you want a drop with params)

                                                      1. 3

                                                        I personally dislike having to define a class for everything that needs to be cleaned up. To be honest, I wonder if my ideal would perhaps be a hybrid of Zig and Rust, where it’s manual, but automatically checked.

                                                        1. 1

                                                          You can already do this in Rust, by using the #[must_use] attribute, and defining a free(self) method (or whatever) instead of implementing Drop. You do have to define a struct, though. But that’s necessary, anyways, in order to not expose memory unsafety to the caller.

                                                          1. 4

                                                            Or you can have a struct that holds a lambda and you literally have defer :) So defer is, in a way, a special case of RAII.

                                                            e.g. https://docs.rs/scopeguard/
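
                                                            A hand-rolled version of that idea is only a few lines (a simplified take on the pattern; the actual scopeguard crate additionally offers on-success/on-unwind variants and access to a guarded value):

                                                            ```rust
                                                            use std::cell::RefCell;

                                                            /// A guard that runs its closure when dropped: `defer`, built from RAII.
                                                            struct Defer<F: FnOnce()> {
                                                                f: Option<F>, // Option so Drop (which gets &mut self) can take ownership
                                                            }

                                                            impl<F: FnOnce()> Drop for Defer<F> {
                                                                fn drop(&mut self) {
                                                                    if let Some(f) = self.f.take() {
                                                                        f();
                                                                    }
                                                                }
                                                            }

                                                            fn defer<F: FnOnce()>(f: F) -> Defer<F> {
                                                                Defer { f: Some(f) }
                                                            }

                                                            fn main() {
                                                                let log = RefCell::new(Vec::new());
                                                                {
                                                                    let _guard = defer(|| log.borrow_mut().push("deferred"));
                                                                    log.borrow_mut().push("body");
                                                                } // _guard dropped here, running the closure
                                                                assert_eq!(*log.borrow(), ["body", "deferred"]);
                                                            }
                                                            ```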

                                                      1. 3

                                                        This can be considered a feature request to improve Rust’s support for LLVM’s Fast-Math Flags. Rust in fact provides access to this LLVM functionality, see https://doc.rust-lang.org/std/intrinsics/fn.fadd_fast.html (it enables all fast-math flags), but does not provide a command line flag to turn normal floating point operations into fast-math floating point operations. It is debatable whether such a flag is a good idea.

                                                        1. 3

                                                          Agreed. A more meaningful chart would be wall-clock time to a non-flawed result. fast-math has its place, but that doesn’t mean the non-fast math is slow math. It is fast for a reason.

                                                          1. 3

                                                            As pointed out in the article, though, fast math isn’t necessarily lower precision. Fused multiply-add will actually improve your precision. For scientific computing, which is what IEEE floating point was designed for, you need exact reproducibility, though. For other things, speed is more important. There needs to be a way to tell the compiler what you want, at the function or scope level.

                                                            Edit: I think I misread your comment at first. My response is not really related. But I’ll leave this because it’s still true, IMHO.

                                                            1. 2

                                                              There was a discussion about optimizing FMA in Rust, and an argument against it was that if x*x - y*y computes one side with higher precision, then the expression overall may end up with a noticeably worse result (e.g. the wrong sign in the x == y case).

                                                              Personally I’d love a “fast float without NaN” type.
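
                                                              That failure mode is easy to reproduce with `f64::mul_add` (Rust’s fused multiply-add): fusing one side of x*x - y*y makes the “obviously zero” x == y case come out nonzero, because only the unfused side gets rounded.

                                                              ```rust
                                                              fn main() {
                                                                  let x = 0.1_f64;
                                                                  let y = 0.1_f64;

                                                                  // Both products round the same way, so the difference is exactly 0.
                                                                  let plain = x * x - y * y;

                                                                  // Fusing one side: x*x stays exact inside the FMA while y*y is
                                                                  // rounded first, so the result is y*y's rounding error -- nonzero.
                                                                  let fused = x.mul_add(x, -(y * y));

                                                                  println!("plain = {plain}, fused = {fused:e}");
                                                                  assert_eq!(plain, 0.0);
                                                                  assert_ne!(fused, 0.0);
                                                              }
                                                              ```

                                                              (0.1 is chosen because its square is not exactly representable; any value whose product rounds would show the same effect.)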

                                                              1. 1

                                                                True, there are edge cases where accuracy/precision can be worse by introducing FMA. To be fair, though, if you care about accuracy/precision and you’re calculating x*x - y*y where x ~= y, then you’re doing it wrong.

                                                        1. 20

                                                          What are the alternatives? Any other CDN offering free services for open source projects?

                                                          1. 12

                                                            What exactly do you need?

                                                            1. 20

                                                              ziglang.org is a static site with no JavaScript and no server-side code. The home page is 22 KB data transferred with cold cache. The biggest service that CloudFlare offers is caching large files such as:

                                                              These are downloaded by the CI server for every master branch push and used to build & run tests. In addition to that, binaries are provided on the download page. These are considerably smaller, but as the number of users grows (and it is growing super-linearly), the cost of data transferred was increasing fast. My AWS bill was up to $20/month and doubling every month.

                                                              Now these assets are cached by CloudFlare and my AWS bill stays at ~$15/month. Given that I live on donations, this is a big deal for me.

                                                              1. 13

                                                                You might consider having a Cloudflare subdomain for these larger binaries so that connections to your main website are not MITM’d. Then you could host the main website wherever you please, and keep the two concerns separable, allowing you to change hosting for the binaries as necessary.

                                                                1. 4

                                                                  If I were in this situation I would be tempted to rent a number of cheap (~€2.99/month) instances from somewhere like Scaleway, each with Mbps bandwidth caps rather than x GB per billing period, and have a service on my main server that would redirect requests to mirror-1.domain or mirror-2.domain, etc., depending on how much bandwidth they had available that second.

                                                              2. 17

                                                                Fastly does: https://www.fastly.com/open-source

                                                                Amazon also has grant offerings for CloudFront.

                                                                1. 9

                                                                  Avoid bloating your project’s website, and use www. Put all services on separate subdomains so you can segregate things in case one of them gets attacked. If you must use large media, load them from a separate subdomain.

                                                                  edit: Based on the reply above, maybe IPFS and BitTorrent to help offload distributing binaries?

                                                                  1. 5

                                                                    I use Dreamhost for all of oilshell.org and it costs a few dollars a month, certainly less than a $15/month AWS bill.

                                                                    I don’t host any 300 MB binaries, but I’d be surprised if Dreamhost couldn’t handle them at the level of traffic of Zig (and 10x or 100x that).

                                                                    10 or 15 years ago shared hosting might not have been able to handle it, but computers and networks have gotten a lot faster. I don’t know the details, but they have caches in front of all their machines, etc. The sysadmins generally seem very competent.

                                                                    If I hosted large binaries that they couldn’t handle, I would either try Bittorrent distribution, or maybe create a subdomain so I could easily move only those binaries somewhere to another box.

                                                                    But I would bet their caches can handle the spikes upon release, etc. They have tons of customers, so I think by now the industry has learned to average out the traffic over all of them.

                                                                    BTW they advertise their bandwidth as unmetered / unlimited, and I don’t believe that’s a lie, as it was in the ’90s. I think they can basically handle all reasonable use cases and Zig certainly falls within that. The only thing you can’t do is start YouTube or YouPorn on top of Dreamhost, etc.

                                                                    FWIW I really like rsync’ing to a single, low-latency, bare-metal box rather than using whatever “cloud tools” are currently in fashion. A single box seems to have about the same uptime as the cloud too.

                                                                    1. 7

                                                                      A single box seems to have about the same uptime as the cloud too.

                                                                      That’s… unpleasantly true. Getting reliability out of ‘the cloud’ requires getting an awful lot of things exactly right, in ways that are easy to get wrong.

                                                                      1. 2

                                                                        yup

                                                                      2. 5

                                                                        I’ll add that I was a DreamHost customer because they fight for their users in court. The VPSes are relatively new. The customer service is hit and miss according to reviews.

                                                                        Prgmr.com I recommend for being honest, having great service, and hosting Lobsters.

                                                                        One can combine such hosts with other service providers. The important stuff remains on hosts dedicated to their users more than average.

                                                                      3. 4

                                                                        Free just means you aren’t paying for it. This means someone else is paying the cost for you. Chances are the money they spend is going to do something good for them, and not so good for you. Perhaps the trade-off is worth it, perhaps it isn’t.

                                                                        Assuming the poster is accurate, and Cloudflare is a front for US intelligence, does it matter for what you are using it for?

                                                                        Of course, should the US Government be able to spy on people through companies like this is an entirely different question, and one that should see the light of day and not hide in some backroom somewhere.

                                                                        1. 13

                                                                          Free just means you aren’t paying for it. This means someone else is paying the cost for you. Chances are their $$$‘s spent is going to do something good for them, and not so good for you. Perhaps the trade off is worth it, perhaps it isn’t.

                                                                          In Cloudflare’s case, one fairly well documented note is that free accounts are the crash test dummies:

                                                                          https://blog.cloudflare.com/details-of-the-cloudflare-outage-on-july-2-2019/

                                                                          The DOG PoP is a Cloudflare PoP (just like any of our cities worldwide) but it is used only by Cloudflare employees. This dogfooding PoP enables us to catch problems early before any customer traffic has touched the code. And it frequently does.

                                                                          If the DOG test passes successfully code goes to PIG (as in “Guinea Pig”). This is a Cloudflare PoP where a small subset of customer traffic from non-paying customers passes through the new code.

                                                                          I’d be curious if those customers are rebalanced and how often!

                                                                          1. 30

                                                                            Using free tier customers as limited guinea pigs is honestly a brilliant way to make them an asset without having to sell them to someone else. Whatever else cloudflare is doing with them, that one’s a really cool idea.

                                                                        2. 3

                                                                          Netlify is an option if your site is static, and it offers free pro accounts for open source projects (there’s also a tier that’s free for any project, open source or not, which has fewer features).

                                                                          Disclaimer: I work there.

                                                                          1. 1

                                                                            Not free, but Digital Ocean Spaces (like S3) is 5$/month for up to something like 5GB, and includes free CDN.

                                                                          1. 3

                                                                            Just released the front-end rewrite, now using Svelte, of my multiplayer, web-based, endless arcade game https://SneakySnake.io! Relatively minor changes to the actual functionality, but the code is much, much easier to maintain and modify, now!

                                                                            Next up is user-specified player names. I just need to decide how to handle blacklisting.

                                                                            1. 2

                                                                              That’s a really fun game! FYI: when you get eliminated, nothing happens for a few seconds (you can still move around and whatnot) and then you finally turn into food for the other snakes. That’s the only bug I could find!

                                                                              1. 2

                                                                                Thanks!

                                                                                Could be a connection/latency issue. Currently, the only server is in SFO, USA. Eliminations don’t get registered until the server confirms them, which can take up to 250ms if someone else has a bad connection, or double that if you have a bad connection. I eventually want to improve on that, but it’s a surprisingly hard problem. First step will probably be geo-grouping players. Next step will probably be latency-grouping players.

                                                                                1. 2

                                                                                  That’s probably it. I tried it on LTE first, now I’ve gotten home and it’s much better.

                                                                                  1. 1

                                                                                    Thanks for the feedback! Maybe I should add some kind of graphic depicting the connection quality.

                                                                                    1. 1

                                                                                      Another thing I discovered (not sure if you’re aware of): it doesn’t connect to the server on any device running iOS 13. Not sure what the exact issue is since I’m away from my computer, but I’ll try using the developer tools when I have some time to fire up the Hackintosh.

                                                                                      1. 2

                                                                                        Hmm… that’s concerning. I should probably start testing in the iOS simulator. Or buy an iOS device.

                                                                                        1. 1

                                                                                          Okay, I had to upgrade to Mojave in order to use the iOS 13 simulator. I’m able to reproduce, now. It looks like iOS 13 Safari does not have WebAssembly? ReferenceError: Can't find variable: WebAssembly

                                                                                          I’m very confused how this is possible.

                                                                                          There might also be an issue with the WebRTC offer process. I seem to be getting a 400 error when trying to POST to the endpoint. But only for iOS 13 Safari… More investigation needed.

                                                                                          1. 1

                                                                                            Apparently iOS simulator does not support WebAssembly… https://bugs.webkit.org/show_bug.cgi?id=191064

                                                                                            And the reason for the 400 error during the WebRTC offer process is a webkit bug in generating the SDP offer: https://bugs.webkit.org/show_bug.cgi?id=203190. I’m making my SDP parser less strict in response, so it should be working as soon as I release a new version in a few minutes.

                                                                                            Edit: If you could test it on your iOS 13 device and let me know if it’s working now, I would greatly appreciate it!

                                                                              1. 4

                                                                                Sure you went from 800 lines of C to 10 lines of Python, but how many lines of C is that Python running? And now you have an implicit dependency on the system Python.

                                                                                1. 10

                                                                                  You can amortize Python’s lines of code by reimplementing more commands with Python.
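                                                                                  A hypothetical illustration of the kind of compression being described here – a minimal `wc -l` in a few lines of Python (this is not the OP’s actual code, just a sketch):

```python
import sys


def count_lines(path):
    """Count newlines in a file, like `wc -l`, reading in chunks."""
    with open(path, "rb") as f:
        # Read fixed-size chunks so huge files don't blow up memory.
        return sum(chunk.count(b"\n") for chunk in iter(lambda: f.read(65536), b""))


if __name__ == "__main__":
    for p in sys.argv[1:]:
        print(count_lines(p), p)
```

The equivalent robust C would need manual buffering and error handling; the trade-off is the interpreter dependency discussed below.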

                                                                                  1. 3

                                                                                    This is getting off-topic, but I wish someone would actually do this rather than just say you could, switch one command over, and YOLO out stage right. Then someone else does the same thing with Ruby, and so on.

                                                                                    It takes a lot of commands to compensate for a Python dependency. Just because you could, theoretically, doesn’t make this a good idea.

                                                                                    I still like the OP, but in spite of it treating Python partly as a liability.

                                                                                    I’m also still curious to hear if anyone’s tried to amortize a HLL throughout a distribution.

                                                                                    1. 5

                                                                                      You might know this, but right before Oil I worked on Rob Landley’s toybox project, which is a fork of his busybox code.

                                                                                      He’s basically trying to compress coreutils + other Linux utils into C. (I guess even more so than busybox does, plus a license change.)

                                                                                      After working on it for a while I realized the code was pretty sloppy embedded C. Here are some bugs I fixed that fell out of ASAN trivially – there were others:

                                                                                      http://lists.landley.net/pipermail/toybox-landley.net/2016-March/004853.html

                                                                                      http://lists.landley.net/pipermail/toybox-landley.net/2016-March/004852.html

                                                                                      So I think there still needs to be a language that is better than C for writing this kind of stuff.

                                                                                      Interestingly, Python started out exactly in this space – it was for user space utilities for the Amoeba operating system, which Guido said were not particularly performance sensitive.

                                                                                      But after writing Python for many years, I think it’s somewhat suitable for this domain, but not clearly better than C. Startup time, the module system, and memory usage are some of the significant downsides. And perhaps the Python/C API could be nicer and not have 10 different wrappers.

                                                                                      I’m not sure if Oil will ever be better for this use case, but I will try to make it that way :) A shell is somewhat different than a language for writing sed, but there is some overlap. I believe it should translate to C or C++, not be interpreted.

                                                                                      Like Landley, I also care about build dependencies. He was known for removing Perl build dependencies from the Linux kernel, which I generally view as a good thing. However the replacement was C, and Perl rewritten in C is likely not very clean (e.g. C’s worst attribute is string processing).


                                                                                      Also, the funny thing is that Oil is shell + Python’s data structures (literally, as in using the same code now :-) ), so the algorithmic improvements in the OP would apply if it were written in Oil.

                                                                                      1. 2

                                                                                        This is a great place for Rust to shine. No runtime dependencies, other than libc. Memory safety by default. First-class utf-8 support, along with many string methods in the standard library. String processing in Rust is almost as easy as in Python, but it’s much easier to get high performance in Rust.

                                                                                        See ripgrep, for an example.

                                                                                        Edit: Ahh, just saw that someone else linked a project for rewriting coreutils in Rust: https://github.com/uutils/coreutils

                                                                                        1. 2

                                                                                          Why not write the core utilities in C++? C++’s standard library has a rich selection of data structures and algorithms, and aside from exceptions and RTTI (both of which Herb Sutter is working on fixing), it all follows the zero overhead principle.

                                                                                          1. 2

                                                                                            C++ would be a very logical choice now, maybe the best one from an engineering perspective (especially for embedded systems).

                                                                                            But I’m talking about future hypothetical systems, not real ones :) I’m writing C++ for Oil right now and can’t wait to not write it. The header files continue to annoy. Now that I know more about compilers, they annoy even more – i.e. it’s really easy to fix in a new language.

                                                                                            Metaprogramming is tortured. C++14 and 17 and 20 made me lose the plot … especially anything related to move semantics. Now that I know a bit more about Rust, C++’s resource management seems anemic by comparison. Resource management in C is dangerous, but at least simple. Resource management in C++ is probably just as complex as Rust, but less powerful and less safe.

                                                                                        2. 2

                                                                                          That’s what my work project is doing. Most of the runtime is written in itself, with less and less dependency on host language as time goes on. And no circularity yet!

                                                                                    1. 1

                                                                                      Great read!

                                                                                      Tip: using Cell on large(ish) arrays isn’t a very good idea. It’s going to trigger a lot of memcpys. I’d bet that’s your biggest bottleneck right now. You can trivially change those to RefCell and do the modifications in-place, though.