1. 3

    it’s unreadable, barely usable, and unmaintainable.

    The author should try harder to read it. I bet if they put their mind to it, they could make progress.

    Showing an example above the regex is the ultimate documentation.

    I prefer verbose syntax, but this doesn’t seem to be a great example of why it’s effective. (I like the explicit nature of verbose syntax, not the ability to add comments more easily.)

    1. 1

Confession: I can read it, and that line was for effect. However, I would not impose that expectation on others in code that I wrote.

    1. 3

      Panicking from coronavirus. (I’m in Seattle.)

      1. 3

        I’m in Bellevue (practically on the border with Redmond). I’m not really panicking, but trying to be prudent. I work for Microsoft, and AFAIK everyone on my team is working from home. I’m tempted to go out and do some karaoke this weekend, but the King County public health recommendations say to avoid gathering large numbers of people together if at all feasible. So I think I’ll be in my apartment for a while.

      1. 7

        MIT is still in a holding pattern. I suspect they won’t close offices (and campus in general?) until there’s community transmission in the Boston area… which is also what I’m waiting on personally.

        My current bet is roughly March 15 for the first report, although I don’t think I’d actually stake money on it. (I mean, that would feel immoral. On the other hand, I’m already staking my health on it, so whatever.)

        1. 10

          2D C99 game engine. 3800 LOC now. I’m finally through UTF-8 grapheme -> glyph mapping, now onto font drawing, so I’m writing texture atlas generation and rendering for bitmap fonts.
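Texture-atlas generation for bitmap fonts can start as simply as shelf packing; here's a rough Python sketch of the idea (glyph sizes are invented for illustration, nothing here is from the actual engine):

```python
# Shelf packing: place glyph rects left-to-right in rows ("shelves"),
# starting a new row when the current one is full. The simplest usable
# atlas scheme; real packers add padding, sorting, and better fitting.

def shelf_pack(glyphs, atlas_w):
    """Return (placements, total_height) for a list of (w, h) rects."""
    placements, x, y, shelf_h = [], 0, 0, 0
    for w, h in glyphs:
        if x + w > atlas_w:          # row full: start a new shelf
            x, y, shelf_h = 0, y + shelf_h, 0
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)    # shelf is as tall as its tallest glyph
    return placements, y + shelf_h

pos, height = shelf_pack([(8, 12), (10, 14), (9, 12)], atlas_w=16)
print(pos, height)  # [(0, 0), (0, 12), (0, 26)] 38
```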

          1. 4

Every week, I see your update. One day I hope to see the game engine in action.

            1. 4

              I’ve only missed commits on 3 days since January 4th. 42 commits and somewhere around 800 LOC last weekend. I’m… getting there. I’m going to have something to show at some point.

              1. 3

                Keep at it man, it’s just nice to see these updates.

            2. 3

              Happy to see at least one person is still doing this.

              Write a software rasterizer too, if you get the chance. Not only is it cool, but I learned an incredible amount from it. https://chrishecker.com/Miscellaneous_Technical_Articles#Perspective_Texture_Mapping

            1. 1

              This is way cool! I’m blown away that this is possible in C. Well done.

              1. 14

There are a couple of people on Lobsters who have used Self or are involved in it. Self is still a (slowly) ongoing project, available for macOS and Linux. I guess I’m the main Self guy at the moment, so I'm happy to answer any questions.

                1. 3

                  Whoaaaa. I just wanted to say: SUCH A COOL FUCKING PROJECT! THANK YOU for working on it.

                  I know I’m not supposed to yell like that, but… whatever, it’s deserved.

                  The reason I’m so happy that something like Self exists is that it serves as an example of “graphical programming”, which is a concept that somehow got lost between the 90’s and now. Project Oberon is another excellent example.

                  Basically, you’re able to interactively explore a system and link arrows together. It’s a flow chart, not a text file. (It can be a text file, of course, but a text file doesn’t help you examine runtime state.)

                  Not even React / Redux tooling is as advanced. You can inspect state, but you can’t really do anything using their tooling.

                  1. 4

                    I just help keep it ticking over, but thanks.

                    There is such a large amount of computing history which has been effectively forgotten. It’s amazing given how short the history of computing is! And every so often people reinvent something :)

                  2. 1

                    Do you have a text summary of what this even is, for those of us who don’t want to invest time in watching videos?

                    1. 4

Self is a research programming system first developed at Sun in the 80s, comprising a prototype-based language, GUI, and VM. The VM was groundbreaking for its time in its use of generational garbage collection and JIT compilation, and is an ancestor of the HotSpot JVM. The language is clean and simple, “like Smalltalk but more so” - everything is a message send, including local variable access, control structures, and arithmetic. The GUI focuses on immediacy and concreteness and allows multiple independent developers to collaboratively interact with objects on a shared canvas.
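A toy Python sketch of the prototype model described above - objects made of slots, cloning instead of class instantiation, and message lookup that delegates to a parent. This is an illustration only, not Self's actual semantics:

```python
# Prototype-based dispatch: every access is a "message send" that walks
# the object's slots, then its parent's, and so on (delegation).

class Obj:
    def __init__(self, parent=None, **slots):
        self.parent = parent
        self.slots = dict(slots)

    def send(self, message, *args):
        """Look up `message`, delegating to the parent when missing.
        `self` stays the original receiver, so inherited methods see
        the child's overriding slots."""
        obj = self
        while obj is not None:
            if message in obj.slots:
                slot = obj.slots[message]
                return slot(self, *args) if callable(slot) else slot
            obj = obj.parent
        raise AttributeError(message)

    def clone(self, **slots):
        """New objects come from cloning an existing one, not a class."""
        return Obj(parent=self, **slots)

point = Obj(x=1, y=2)
point.slots["sum"] = lambda self: self.send("x") + self.send("y")
p2 = point.clone(x=10)          # overrides x, delegates y and sum
print(p2.send("sum"))           # 10 + 2 -> 12
```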

                      1. 1

                        Thank you! I’ll make sure to add this to my list of things to watch!

                        1. 1

I’ve written about Self in this series of articles: http://blog.rfox.eu/en/Series_about_Self.html

                  1. 9

                    It’s submissions like this that make me love lobsters. I can’t think of anywhere else I’d find this, other than randomly stumbling on it via twitter or something (which is rare).

                    Related, and worth a read: Organizing Programs Without Classes http://bibliography.selflanguage.org/_static/organizing-programs.pdf

                    1. 2

                      Interesting read, thanks.

                      Do you think it would be accurate to say this is the “mix-in pattern”? I’m playing with implementing mixins in one of my languages and not sure who does it best. Honestly it seems like the simple way CSS/HTML can approximate mixins may be the best I’ve seen.

                      1. 2

                        It’s Complicated™


                        Chew on this over the course of several days. It’s worth it!

                    1. 5

                      I’m continuing to write a research paper on using hundreds of TPUs to accelerate ML training. https://www.docdroid.net/faDq8Bu/swarm-training-v01a.pdf

                      Also attempting to understand how Jax talks to TPUs at a low level.

                      1. 2

                        I see this is marked as a show. Does that mean you wrote magic wormhole?

                        If so, could you talk a little bit about what it was like to design and deploy the system? Not how it works, but more along the lines of how it came into being, when/why you decided to do it, and how you got the resources to deploy it.

                        1. 2

                          No. The description never said that it needed to be (that’s what the author bit is for). Did I misunderstand the tag?

                          1. 11

                            IIUC “author” typically means “I wrote this [article/blog/prose]” whereas “show” means “I created this [usually software]”. It’s generally understood that applying the “show” tag means you’re the creator.

                        1. 2

                          I simply have to plug this YouTube channel here: https://youtu.be/vcvU6UMYRHM John Michael Godier

                          Before this channel, I wasn’t aware of anyone who specialized in being speculative but who also took science seriously enough not to let their speculation come at the cost of what we know to be true. There is apparently a word for this type of person: a futurist.

                          It’s pop sci, but it’s highly entertaining for someone who usually shakes their head at the nonsense speculation typically found on YouTube.

                          All of his videos are great. Even if you’re not into this one in particular, it’s worth subscribing to him, because he often reports on scientific news of the day. For example, I remember someone on HN telling me that the Wow signal was probably a comet. Yet apparently the truth is the opposite: https://youtu.be/RAZaRYcDFEM

                          Anyway. Apologies for the shilling. I just love “speculative science” like the simulation argument.

                          1. 2

                            Attempting to achieve 100% utilization across 110 TPUs simultaneously.

                            An impossible goal, but I’ll get as close as I can.

                            1. 1

                              Recompilation for extra functionality? Where’s the bait on this one?

                              1. 11

                                Yeah. It sounds a bit strange, sure. But I added a lisp evaluator to my nano and it turned out to be quite convenient. I suspect that lisp might make a decent extension language for a text editor. The downside is that there’s no lexical scope yet and you have to use global variables everywhere, but that can be fixed later.
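The globals-only evaluator described above might look something like this minimal sketch (illustrative only, not the actual nano patch):

```python
# S-expressions as nested lists; one flat global environment, so every
# variable is global - exactly the "no lexical scope yet" limitation.

import operator

GLOBALS = {"+": operator.add, "*": operator.mul, "-": operator.sub}

def ev(expr):
    if isinstance(expr, str):           # symbol -> global lookup
        return GLOBALS[expr]
    if not isinstance(expr, list):      # number or other literal
        return expr
    if expr[0] == "set":                # (set name value) mutates globals
        GLOBALS[expr[1]] = ev(expr[2])
        return GLOBALS[expr[1]]
    fn = ev(expr[0])                    # (f a b ...) -> apply
    return fn(*[ev(a) for a in expr[1:]])

ev(["set", "margin", 4])
print(ev(["+", "margin", ["*", 2, 3]]))  # -> 10
```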

                                Also I changed it so that if you press a key, it doesn’t directly type anything. By default, pressing a key issues a command. You have to press a special key to start actually typing. It seems weird, but I feel like I can type a lot faster.

                              1. 4

                                This is clearly promotional – flagged as spam.

                                1. 1

                                  Why? It’s not spam. They’re a member of the community.

                                  I haven’t looked into whether their work is good or not, but it seems unfair to judge it solely on the basis that they’re asking for money. More devs should charge for their work.

                                  It seems like self promotion is ok as long as it serves a community interest.

                                  1. 3

                                    Six of his last ten submissions were for this specific book, with what looks to be the same page. It’s not flagged as “already posted” because he’s changing the url parameter.

                                1. 2

                                  bash. It’s productive.

                                  1. 2

I figured experienced Lispers like you would’ve replaced it, embedded it, auto-generated it, or done some other wild stuff by now.

                                    1. 2

                                      I’ve thought about it. Unfortunately, you’d end up writing code that compiles to bash. At that point you might as well write a python script.

                                      I do have a script to make it easy to generate new bash and python command line scripts, though. Whenever I run across a pattern I keep typing in terminal, I run “mkbin foo”, which spits out ~/bin/foo with a little boilerplate like “if no args are passed in, print usage info and exit.”

                                      mkpy is similar, except it generates boilerplate to parse arguments / stdin, and run main().
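A sketch of what an mkpy-style generator could look like (the template and names are guesses, not the actual script):

```python
# Write executable Python boilerplate to a bin directory: usage message,
# exit-if-no-args, and a main() stub, as described above.

import os
import stat
import tempfile

TEMPLATE = '''#!/usr/bin/env python3
"""Usage: {name} ARGS..."""
import sys

def main(args):
    if not args:
        print(__doc__)      # no args: print usage info and exit
        sys.exit(1)
    print(args)             # TODO: replace with actual logic

if __name__ == "__main__":
    main(sys.argv[1:])
'''

def mkpy(name, bindir):
    os.makedirs(bindir, exist_ok=True)
    path = os.path.join(bindir, name)
    with open(path, "w") as f:
        f.write(TEMPLATE.format(name=name))
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)  # chmod +x
    return path

demo_dir = tempfile.mkdtemp()        # demo only; the real thing uses ~/bin
print(mkpy("frobnicate", bindir=demo_dir))
```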


                                      1. 2

Yeah, I figured there are two approaches:

                                        1. Generate native executable using fast compile. Optionally, it’s optimized later.

                                        2. Generate bash with templates. Optionally, type/argument checks and privilege minimization.

                                        The first option is what I’d think someone would do with an optimizing Lisp. You code for the shell like you would anything else. The program can use any benefits of Lisp. It also runs on any supported platform. That might include Windows and Mac depending on what you’re doing.

                                        For the second, I figured I’d use a command that dropped me into an editor with boilerplate (esp includes) pre-appended. Then, type in whatever I wanted done. Then run and/or compile it.

                                  1. 2

                                    Why not both? I wrote a Lisp for Python: http://github.com/shawwn/pymen

                                    (It’s a fork of Lumen and I never got around to updating the README, which is why it seems like it’s only for Lua and JS. But run bin/pymen and it’ll drop you into a REPL.)

                                      1. 1

                                        Like with Nim, I think a Python-to-X converter could be a good idea to leverage its massive amount of libraries. CL is another option there.

                                    1. 8

                                      I appreciate the detail you’ve gone into to present the concepts. But, some honest feedback: I found it very confusing. I kept waiting for “why shouldn’t I bump from the bottom? I’ve been doing it for years just fine” and when I got to assembly code, I stopped reading:

                                      Can bumping downwards do better?

                                      Do better than what? I don’t understand what the problem is. A few conditional branches? Modern CPUs are incredibly efficient, with very deep pipelines that do speculative execution. A couple conditional branches is nothing.

                                      But again, maybe I’m misreading or misunderstanding. But in general, unless your conditional branches are in the inner loop of an N^2 operation that happens every frame of a simulation, it just isn’t worth worrying about. Even if you allocate millions of objects (tens of millions? hundreds?) you still probably couldn’t measure the difference.

                                      I got hopeful when I reached the benchmarks section. Aha, now you’ll discover that this doesn’t really affect the overall execution time! But… no, you’re measuring the improvement for each allocation. Sure, that’s nice, but allocations in general don’t happen enough to impact overall execution time, unless you’re talking about large-scale allocations like string allocations throughout an entire codebase. And in those cases, the performance gain from switching from malloc to a small block allocator is so great that the timing difference of a few conditionals might be a rounding error.

                                      Not trying to be negative here, fwiw.

                                      1. 5

An author of a bump allocation library found that bumping downwards improves allocation throughput by 19% compared to bumping upwards. Why is that not significant? A bump allocator may not be a bottleneck for you, but it is for some workloads.
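The up-vs-down difference can be modeled roughly like this (a toy Python illustration of the pointer arithmetic; the actual 19% comes from machine-level instruction and branch counts, which Python obviously doesn't capture):

```python
# Bumping up must round the pointer UP to alignment (an add that can
# overflow) before adding the size; bumping down subtracts the size and
# aligns with a single mask, leaving one bounds check.

def bump_up(ptr, end, size, align):
    aligned = (ptr + align - 1) & ~(align - 1)  # round current ptr up
    new = aligned + size
    if new > end:                               # out of space?
        return None, ptr
    return aligned, new                         # (block start, new ptr)

def bump_down(ptr, start, size, align):
    new = (ptr - size) & ~(align - 1)           # subtract, then mask down
    if new < start:                             # single bounds check
        return None, ptr
    return new, new                             # block start IS the new ptr

print([hex(v) for v in bump_up(0x1003, 0x2000, 16, 8)])
print([hex(v) for v in bump_down(0x2000, 0x1000, 16, 8)])
```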

                                        1. 2

                                          Of course it’s significant. I’m not trying to minimize the contribution. I am saying, as someone who has experience with a variety of large codebases, when you profile to find out where the time is spent, the allocator itself is almost never the issue except for strings. So a 19% throughput in the allocator would equate to almost no overall time savings.

                                          To put it differently, pick any random section of your codebase. You can improve that by 19%. But unless that section is actually the bottleneck at runtime, there will be no difference.

                                          That doesn’t mean it’s a waste of time. For example, I learned more about Rust from reading this.

                                          1. 2

Also, I didn’t realize that he was the author of a library. Yes, I agree that library authors should generally try to make their libraries more efficient when it’s easy to do. I was just surprised that this particular workload would make a difference at all. It sounded like the workload was picked out of thin air, but that was likely where I misread.

                                      1. 23

                                        For some spooky Halloween times, take a midnight stroll through Google’s graveyard!

                                        There’s a lot of hidden terrors in there that time has forgotten.

                                        1. 4

                                          This list is a really neat blast from the past. It’d be cool to see a category for companies that were literally killed by google (e.g. Kiko, a calendar app made just before Google Calendar came out, which Google squashed like a bug).

                                          1. 8

                                            I don’t think even Google can get away with literally killing competitors. Yet.

                                            1. 4

Depends on the country, and whether they use third parties that distance their brand from the act. See Bechtel Corp vs. the Bolivian citizens who wanted drinking water as an example. Or maybe Coca-Cola vs. union people in Colombia.

                                              If anything, I’m surprised at how civil things normally are with rich companies. Probably just because they can use lobbyists and lawyers to get away with most stuff. The failures are usually a drop in their bucket of profit.

                                              1. 4

                                                Perhaps not competitors, but certainly people who get in the way of profits get killed, eg see the case of Shell in Nigeria: http://news.bbc.co.uk/2/hi/africa/8090493.stm

                                                Hundreds of activists are killed every year, we just don’t hear about it much.

                                            2. 1

You joke, but I recall there was (is?) a “storage graveyard” in their Chicago office filled with CDs, cassette tapes, floppies, and other physical media.

                                            1. 2

                                              Maybe it’s just me, but TFA takes a weird angle on this. I’d think after the Python 2/3 debacle, most Python devs can figure out for themselves when they can upgrade to a new version.

And honestly, I’d expect something like linter support for new syntax isn’t going to be a very big concern - many projects haven’t even dropped 2.x support yet, so using incompatible syntax is a no-go for a while anyway.

                                              1. 9

                                                The other day in /r/python over on reddit, I had to deliver the “you probably shouldn’t be upgrading to 3.8 yet” advice to someone who had a 3.7 environment working just fine, and then ran into significantly worse performance by rushing to upgrade to 3.8. Turns out their work depends on lxml, which didn’t yet have a published compiled wheel package for Python 3.8, so they’d ended up with a far slower pure-Python version due to some misadventures trying to install from a source package.

                                                1. 2

                                                  Oof. What do you do in that situation? Try to downgrade to 3.7?

                                                  1. 2

                                                    Go back to 3.7 until lxml releases a 3.8 wheel, or set up a build environment that can compile a fast lxml for 3.8.

                                              1. 12

                                                We cloned Faceapp! https://twitter.com/theshawwn/status/1182208124117307392

                                                Our app is called Faceweave. It’s similar to Faceapp in that you can make yourself look older or younger, but the key distinction is that you can apply multiple effects simultaneously. You use sliders and machine learning to morph your face over time.

                                                Here’s an example of making James Bond from N64 Goldeneye: https://twitter.com/theshawwn/status/1185272469046939648

                                                You can see the whole editing process in that video. It’s very fast.

                                                It uses StyleGAN with latent vectors. That’s machine learning mumbo jumbo for “holy shit this looks cool.” There are some neat unexpected discoveries; for example, if you set “baldness” to -300%, it gives you a mullet! Which kinda makes sense. https://twitter.com/theshawwn/status/1178167072448233472
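The slider mechanism described above amounts to arithmetic on latent vectors: each effect is a direction in StyleGAN's latent space, and a slider scales it. A sketch with random stand-in vectors (the real directions are learned, not random):

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=512)      # a face's latent vector
age_dir = rng.normal(size=512)     # hypothetical "age" direction
bald_dir = rng.normal(size=512)    # hypothetical "baldness" direction

def apply_sliders(w, effects):
    """Sum scaled directions onto w. Effects stack, and sliders can go
    negative - e.g. baldness at -3.0 (i.e. -300%, the mullet setting)."""
    out = w.copy()
    for direction, slider in effects:
        out = out + slider * direction
    return out

edited = apply_sliders(latent, [(age_dir, 0.5), (bald_dir, -3.0)])
print(edited.shape)  # still a 512-dim latent, ready for the generator
```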

                                                I have also experimented with a skin tone classifier. I’m trying to be sensitive with this work for obvious reasons, but I’m hoping that if it looks good enough, people will be ok with it. Here’s an example of making someone look asian: https://twitter.com/theshawwn/status/1184074334186414080

                                                The skin tone classifier was my first experience with underrepresentation in machine learning. Black people are underrepresented in the FFHQ dataset (which is what our StyleGAN model was trained on), and hence it was much harder to come up with a good black skin tone modifier. Asian was comparatively easy. We’re lucky our product is visual, because we could see immediately that the black skin tone wasn’t working as well as the asian skin tone. Usually when you run into bias in machine learning, you only discover it after the fact. I’d like to go into more detail with our experiences correcting for this bias.

Our app is built using Expo, and most of the work was done by the illustrious Emily Kolar. We were surprised to find that ~90% of our users are on Android! For every 9 people who wanted to try the alpha, only 1 was on iOS. So React Native ended up being key for our product.

                                                The backend is worth doing a writeup about too. I wrote a custom Lisp implementation in Python, which I’ve been calling Pymen (it’s a fork of Lumen). This turned out to be very important for rapid prototyping. I got the initial app working on a simple webpage using canvas and basic javascript, and then the unmodified server was sufficient to power our react native app. We basically send raw code expressions to the server. This gives us the ability to do things like “add two slider effects together” by sending a form like (+ (* weight1 slider1) (* weight2 slider2)), which gets compiled to weight1*slider1 + weight2*slider2 and passed to python’s exec function. Since the images are based on PIL with numpy, the math ends up very natural.
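The expression compilation described above can be sketched in a few lines (illustrative only, not the actual Pymen compiler): an s-expression arrives as a nested list, gets rendered as an infix Python string, and is handed to eval/exec.

```python
# Compile (+ (* weight1 slider1) (* weight2 slider2)) to infix Python.

def compile_expr(form):
    if not isinstance(form, list):                 # symbol or literal
        return str(form)
    op, *args = form                               # (op a b ...) -> a op b op ...
    return "(" + f" {op} ".join(compile_expr(a) for a in args) + ")"

form = ["+", ["*", "weight1", "slider1"], ["*", "weight2", "slider2"]]
src = compile_expr(form)
print(src)  # ((weight1 * slider1) + (weight2 * slider2))

env = {"weight1": 0.7, "slider1": 2.0, "weight2": 0.3, "slider2": -1.0}
print(eval(src, env))  # roughly 1.1
```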

                                                Most of our alpha testers came from the dota 2 subreddit. They turned out to be very interested in transforming dota 2 heroes into “real life” people! https://www.reddit.com/r/DotA2/comments/dfv0z3/im_making_a_neural_network_photo_editor_i_used_it/

                                                We have a discord for FaceWeave too, if you want to come hang out or try out the alpha on Android: https://discord.gg/xsbxKPK

                                                This week I’ll be getting a TestFlight build up for our iOS users, and setting up another Google Compute Engine server.