1. 2
    1. 10

      You might also like alas if in squire. :)

      journey sq_dump(args) {
          arg = run(args.a1)
          if kindof(arg) == 'string' { proclaim('String(' + arg + ')') }
          alas if kindof(arg) == 'number' { proclaim('Number(' + string(arg) + ')') }
          alas if kindof(arg) == 'boolean' { proclaim('Boolean(' + string(arg) + ')') }
          alas if kindof(arg) == 'unbenknownst' { proclaim('Null()') }
          alas { dump(arg) }
          reward arg
      }
      1. 1

        Haha amazing. ‘kindof’ could also have been ‘natureof’

      2. 7

        Ada, Perl, Ruby and a couple of languages inspired by them.

        When I was way younger and jumping programming languages a lot that felt like the main thing I always got wrong, elif, elsif, elseif and else if.

        The last one despite being the most to type feels the most logical to me, being a combination of what’s already there with else and if, but is also the closest to a natural language/English.

        1. 1

          It should really be else, if or else; if to be even more like English and to really make it hard for parsers.

          1. 3

            x equals thirty-three or else... if your hat is green, of course. Good luck, parser! o7

          2. 1

            After discovering cond in Lisp, I wished every language had it instead of if, else and the various combinations of those two.

          3. 4


            1. 3

              Ada uses elsif. I wish all these elif, elsif, elseif and else if keywords were interchangeable.

            1. 10

              The past couple of months of posts here and on HN have prepared me for this - 50ms has become quite a popular inexplicable latency.

              1. 3

                An interesting solution is to consider “winning states” as strides over the 1d array.

                1. 2

                  Strides and starting points: [[0,1], [3,1], [6,1], [0,3], [1,3], [2,3], [0,4], [2,2]]
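                  Expanded, those [start, stride] pairs do cover exactly the eight winning lines of a 3x3 board stored as a flat array. A quick sketch in Python (helper names are mine, not from the parent comment):

```python
# Each winning line of a 3x3 board, flattened to indices 0..8, is an
# arithmetic progression: three cells beginning at `start`, `stride` apart.
WIN_STRIDES = [(0, 1), (3, 1), (6, 1), (0, 3), (1, 3), (2, 3), (0, 4), (2, 2)]

def winning_lines():
    """Expand each (start, stride) pair into its three board indices."""
    return [[start + i * stride for i in range(3)]
            for start, stride in WIN_STRIDES]

def is_winner(board, player):
    """board is a flat list of 9 cells; True if `player` holds a full line."""
    return any(all(board[i] == player for i in line) for line in winning_lines())
```

                  Strides 1 and 3 give the rows and columns; strides 4 and 2 give the two diagonals.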

                1. 2

                  Any examples of finished art?

                  I think the interesting thing about emoji pixel art is that you could represent the art simply as unicode strings, monospaced and preformatted. It could potentially take a lot less space than a traditional pixel art PNG, but then the art itself would be highly dependent on the rendering font. If you didn’t attach the text to a specific font, it would look totally different on varying platforms.
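                  As a minimal illustration of that string representation (a toy grid of my own; real tools would copy something like this to the clipboard):

```python
# A 2x3 emoji "image" stored as a plain unicode string: cells concatenated,
# rows joined by newlines. How it renders depends entirely on the viewer's
# font, which is exactly the portability caveat mentioned above.
rows = [
    ["🟥", "🟩", "🟦"],
    ["🟨", "🟪", "🟧"],
]
art = "\n".join("".join(row) for row in rows)
print(art)
```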

                  1. 3

                    Example art: https://twitter.com/s_han_non_lin/status/1372937642946535428

                    you could represent the art simply as unicode strings

                    The “copy” link in the tool copies the unicode string to your clipboard with new lines. It only works well if you’ve used emoji in every square, as that preserves the spacing.

                  1. 6

                    The problem with this argument is that it is entirely possible to do that with a non-proof-of-work system as well. In fact, a blockchain may not be necessary at all.

                    I don’t think anyone would deny that centralized databases are more performant than distributed ones, pretty much across the board. The key tradeoff Bitcoin makes here is trustless immutability. Gold was our previous trustless money, and over the past couple hundred years all credit monies and fiat currencies have massively depreciated against it.

                    Centralized ledgers do work, but they cannot provide an ironclad guarantee that the rules of the game will remain fixed into the future. We don’t even know what the supply of dollars will be six months from now. Bitcoin’s supply is predictable decades into the future.

                    The root problem with conventional currency is all the trust that’s required to make it work. The central bank must be trusted not to debase the currency, but the history of fiat currencies is full of breaches of that trust.

                    — Satoshi Nakamoto

                    1. 9

                      Gold was our previous trustless money, and over the past couple hundred years all credit monies and fiat currencies have massively depreciated against it.

                      I recently read Valerie Hansen’s The Silk Road and she makes the point that gold was not the trustless money. It was more accepted than coinage, but you need steady, reliable trading partners to make it useful as currency. The real universal currency in the oasis kingdoms was bolts of cloth.

                      1. 4

                        The real universal currency in the oasis kingdoms was bolts of cloth.

                        Bolts of what cloth, from which producer, what quality, at what time was it produced?

                        Meanwhile a chunk of gold is a chunk of gold.

                        1. 2

                          [Warning: speculation ahead]

                          I’m imagining the bolts to be silk.

                          Gold can be alloyed with base metals in ways that are hard to detect using technology known to the merchants of the Silk Road. Silk can be more easily assayed.

                          1. 1

                            I’m sure the oasis kingdoms would have been convinced by your brilliant analysis.

                          2. 2

                            Nice book, I will have to add that to my list.

                            Certainly, different commodities have served as trustless money at different times. Shells and pelts are two other examples. Gold eventually won out for global trade, but it wasn’t universal until fairly late in history.

                          3. 2

                            You’re falsely equating the space of “not proof-of-work” and “not blockchain” with “centralized.”

                            The claim is that one can achieve similar decentralized feats (depending on the goal) without requiring the planet killing compute power of a proof-of-work blockchain.

                            1. 3

                              Hmm, well it comes up twice. First they claim that FedWire can provide properties such as transaction finality and Sybil resistance. Which is true, it can!

                              This entire kludge is negated in FedWire because all participants are known: it is permissioned.

                              With 25 core nodes FedWire has a degree of replication, but it is definitely not permissionless. Most importantly, it can’t provide the guarantee I highlighted about the rules remaining fixed.

                              Second, near the end of the article they mention proof-of-stake, but it’s a bit of a throwaway line.

                              Through the usage of either permissioned systems (like an RTGS) or a proof-of-stake chain, the energy consumed by PoW chains did not need to take place at all. In fact, PoS chains can provide the same types of utility that PoW chains do, but without the negative environmental externalities.

                              They mention that the “transition to proof-of-stake is beyond the scope of this article” and don’t really dive into how PoS achieves any of these goals.

                              The fatal flaw with proof-of-stake is that 51% attacks are unrecoverable. If one entity ever manages to get more than half the PoS coins, they forever control the rules of the network. PoS networks can be decentralized, but they can never be permissionless. In order to get new coins, you have to buy them from someone who already owns them.

                              In contrast, PoW is both decentralized and permissionless. Anyone can participate in the mining process without a prior investment. 51% attacks can temporarily interrupt a PoW chain, but an attack is ultimately recoverable.

                              So to clarify my position I would add that it’s not just decentralization which is important, but permissionlessness.

                            2. 2

                              Centralized ledgers do work, but they cannot provide an ironclad guarantee that the rules of the game will remain fixed into the future. We don’t even know what the supply of dollars will be six months from now. Bitcoin’s supply is predictable decades into the future.

                              Being able to reinterpret or change the rules is a feature, since it makes it possible to fix mistakes that were made at the inception of the rules. Generally speaking, if you had a traditional contract where a random participant can just set every other participant’s stake on fire, you can probably convince a legal entity that wasn’t the intention and roll back the contract without affecting everyone else using that currency. If the use of the system goes from “currency you can use to buy pizza, drugs, fake IDs, or murder” to “thawing the tundra and flooding my neighborhood so a few really rich guys get even richer” maybe the rules should change, and in a way that doesn’t require the few really rich guys’ consent.

                            1. 1

                              I haven’t written much code, but I’ve been scaffolding out a browser-based decentralized data-stream idea. The concept would function like BitTorrent but for streaming data on top of WebRTC. There’d be no storage, so I believe the only thing required is fast routing and packet signing.

                              The idea is almost exclusively motivated by not wanting to spin up servers every time I build a silly little interactive app.
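                              The WebRTC routing half is beyond a short sketch, but the “packet signing” half can be illustrated with the stdlib alone. This is a hypothetical scheme of my own (a real system would use public-key signatures such as Ed25519, not a shared HMAC secret):

```python
import hashlib
import hmac
import json

# Toy sketch: each peer signs the packets it originates, so untrusted relays
# can forward them and receivers can verify integrity without any storage.
SECRET = b"per-stream shared secret"  # hypothetical; would be per-peer keys

def sign_packet(seq, payload):
    """Serialize a packet and attach an HMAC-SHA256 tag."""
    msg = json.dumps({"seq": seq, "payload": payload}).encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return {"msg": msg, "tag": tag}

def verify_packet(packet):
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(SECRET, packet["msg"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["tag"])
```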

                              1. 1

                                I’m not a web dev by profession, but I was under the impression that the issue with generating HTML was the cost on the server. I thought it was a lot easier to scale out by just using a CDN for static assets.

                                1. 7

                                  Generating HTML dynamically is of course more expensive than static HTML, but this is how every site in the world worked as of ~5-10 years ago: eBay, Amazon, Yahoo, Google, Wikipedia, slashdot, reddit, dating sites, etc.

                                  There is a newer crop of apps (that seem to have unified mobile apps) that generate more HTML on the client, and seem to be slow and want me to look at spinny things for long periods. I’m thinking of Doordash and Airbnb, but I haven’t looked in detail at how they work.

                                  But all the former sites still generate HTML on the server of course, and many new ones do too. This was done with 90’s hardware and 90’s languages. It’s essentially a “solved problem”.

                                  1. 3

                                    The venerable C2 Wiki (a.k.a. the original wiki) switched to being all client-side.💔

                                    1. 2

                                      And that’s terrible.

                                      1. 1

                                        Heartbreakingly so.

                                    2. 1

                                      Only some of the former still render on the server - for example, Google is a mix (Search being partially server-side, pretty much everything else is client-side), new Reddit is all client-side.

                                      Everything is slowly trending towards the newer, slower, client-side apps - I guess in an attempt to uphold Page’s Law :-)

                                    3. 3

                                      That’s sometimes true but usually isn’t. Web dev blogs, though, often focus on esoteric problems and give the illusion that they are more common than they really are.

                                      1. 3

                                        For Phoenix’s LiveView (and I suspect Hotwire), they don’t actually generate the HTML, but instead pass some minimal data to the frontend which generates the HTML. It acts a bit like a client side application, but the logic is being driven by the backend and the developer doesn’t need to write much Javascript. It’s primarily aimed at replacing client side rendering.

                                        You can read this blog post for some details on the underlying data transfer in LiveView.

                                        Caveat emptor: I haven’t worked with this tech. I’ve just read a bit about it.

                                        1. 2

                                          Hotwire explicitly says that it generates the HTML and the client-side logic just replaces nodes in the tree. This is why it can technically be used without any specialised backend support.

                                          1. 2

                                            Thank you for the correction.

                                      1. 2

                                        “Core i7 Crushes M1 in AI”

                                        What is “Topaz Lab’s Gigapixel AI and Denoise AI”? This is a strange thing to highlight instead of Apple’s own AI offerings (Accelerate/CreateML/CoreML). Apple has a bunch of undocumented ARM ISA extensions that enable accelerated matrix multiplications (https://gist.github.com/dougallj/7a75a3be1ec69ca550e7c36dc75e0d6f) – is it clear that Topaz is using these?

                                        1. 4

                                          I’m surprised this wasn’t implemented with the higher order function fold (non-associative reduction). The idea is that you’re accumulating into a list and either need to append a new element or increment the count of the last element.

                                          Below is an example using Haskell:

                                          rle s = reverse (foldl (\ a b -> case a of
                                              [] -> [(b, 1)]
                                              (z, k):zs -> if b == z then [(b, k+1)] ++ zs else [(b, 1)] ++ a
                                              ) [] s)
                                          main = putStrLn $ show (rle "aaaabbbcca")
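                                          For comparison, the same accumulate-or-increment fold sketched in Python (my version; it mutates a list accumulator, so no final reverse is needed):

```python
from functools import reduce

def rle(s):
    """Run-length encode a string by folding left over its characters."""
    def step(acc, ch):
        if acc and acc[-1][0] == ch:
            acc[-1] = (ch, acc[-1][1] + 1)   # same run: increment the count
        else:
            acc.append((ch, 1))              # new character: start a new run
        return acc
    return reduce(step, s, [])

print(rle("aaaabbbcca"))  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
```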
                                          1. 1

                                            Nice, I had basically the same logic but much less idiomatic Haskell (didn’t use the reverse and foldl primitives, so I had to do much more ‘plumbing’):

                                            f str = f' [] str
                                            f' tuples "" = tuples
                                            f' [] (l : ls) = f' [([l], 1)] ls
                                            f' tuples (l : ls)
                                                | [l] == fst (last tuples) = f' (init tuples ++ [(fst (last tuples), 1 + snd (last tuples))]) ls
                                                | otherwise = f' (tuples ++ [([l], 1)]) ls

                                            Probably also should have used ‘where’. But hey, I haven’t touched Haskell for years and I was just curious if I was able to do this.

                                          1. 2

                                            I’m continuing work on my online video editor. I’m quite happy with the stability but I have a single Chrome bug I cannot for the life of me figure out how to get around (exporting to webm has a broken codec, but it still plays fine).

                                            I’ve added the ability to share “templates” (stored metadata referencing the underlying assets by URL) to make it easy to share things, but this weekend I’ll need to update the logic to also support sharing more advanced editing (splitting clips/audio).

                                            1. 2

                                              In the example launchShip, the variable ship goes out of scope at the end of the function, which means armada will pretty much immediately have an empty reference. If the program were refcounted, the armada would still have a valid ship. The latter certainly seems more like expected behavior.

                                              1. 2

                                                You’re right, to a certain extent. We often have “deinitialize” functions (whether they be destructors like in C++, or regular Java methods like dispose, close, destroy, drop, etc), which perform some operation when an object reaches the end of its expected “logical” lifetime. If the object is kept alive, and accidentally dereferenced, it’s memory safe but still results in bugs. (See also side-note at https://vale.dev/blog/raii-next-steps#simplification) So it’s not quite as clear-cut as “keeping an object alive is the correct thing to do”, but sometimes it is.

                                                In the end, this approach is faster than ref-counting, and one should consciously decide whether they want the behavior you describe, or more speed.

                                              1. 2

                                                I don’t know anything about the problem domain so maybe this is a silly question, but is the Euclidean metric especially meaningful here? Since the space is finite-dimensional all norms on it are equivalent and at a glance the l_1 or l_\infty norms look like they’re easier to compute.

                                                1. 1

                                                  I wonder if the constant needed to “convert” to an L1/Linf norm would cause overflow in the uint16 implementation. I don’t think the engineer in this post had access to the underlying hash algorithm (which might be able to absorb that change).

                                                  Regardless, this is a good idea.
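                                                  Whether the conversion factor fits in uint16 headroom depends on the dimension: norm equivalence in n dimensions gives ‖x‖∞ ≤ ‖x‖₂ ≤ √n·‖x‖∞ and ‖x‖₂ ≤ ‖x‖₁ ≤ √n·‖x‖₂. A stdlib-only check of those bounds (toy 3-d vector, names mine):

```python
import math

def norms(v):
    """Return the (l1, l2, linf) norms of a plain list of numbers."""
    l1 = sum(abs(x) for x in v)
    l2 = math.sqrt(sum(x * x for x in v))
    linf = max(abs(x) for x in v)
    return l1, l2, linf

# All three norms agree up to a sqrt(n) factor, which is what "all norms are
# equivalent in finite dimensions" buys you -- but that sqrt(n) is exactly
# the constant that could blow past a uint16 range for large n.
v = [3.0, -4.0, 12.0]
l1, l2, linf = norms(v)
n = len(v)
assert linf <= l2 <= math.sqrt(n) * linf
assert l2 <= l1 <= math.sqrt(n) * l2
```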

                                                1. 3

                                                  I found this problem interesting and took a shot at simplifying the math. It turns out you can view this problem as an argmax after a matrix multiplication. The result is pretty easy to implement in floating point math with a library. A V100 could do 1500 queries in 0.070 seconds.

                                                  I’d imagine (after fiddling with the precision and overflows) it’d be a nice way of rephrasing the problem for optimization on a CPU.

                                                  writeup linked on this submission https://lobste.rs/s/pti6im/optimizing_euclidean_distance_queries
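                                                  A pure-Python sketch of that reformulation (my own toy version; the argmin over squared distance drops the query’s own norm, leaving a per-point score that is a matrix multiply plus a constant column):

```python
def nearest(queries, points):
    """For each query row, return the index of the nearest point row.

    ||q - p||^2 = ||q||^2 - 2 q.p + ||p||^2. Since ||q||^2 is constant for a
    fixed query, argmin reduces to argmin over (||p||^2 - 2 q.p): an argmin
    after a (here hand-rolled) matrix multiplication.
    """
    sq = [sum(x * x for x in p) for p in points]  # precomputed ||p||^2
    out = []
    for q in queries:
        scores = [s - 2 * sum(a * b for a, b in zip(q, p))
                  for s, p in zip(sq, points)]
        out.append(min(range(len(points)), key=scores.__getitem__))
    return out
```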

                                                  1. 1
                                                  1. 2

                                                    I’m playing with NVidia’s PTX ISA. I was hoping there’d be some utilities like xbyak/asmjit for it, but I’m not aware of any. I’m just using PyCuda for now but will need a C++ solution eventually

                                                    1. 2

                                                      You accidentally multiplied in a factor of 24. 72 years is only ~26k days.

                                                      1. 1

                                                        Wow, that’s a huge mistake. I’ll update it. Thanks!

                                                      1. 5

                                                        Just finished Capitalist Realism by Mark Fisher and am about halfway through Culture by Terry Eagleton.

                                                        1. 4

                                                          I’ve been working on a drop in plugin to compile PyTorch models with TVM into native CPU code. The results have been great with a trivial two line addition to a classic resnet model giving a 2-3x speed up over the regular backend (which suffers from thread contention)


                                                          1. 2

                                                            That is neat, bwasti… It’s MORE neat because I’ve seen it before! https://tilde.town/~login/writo/

                                                            That’s the web interface, read only AFAIK. The console interface is invoked by executing writo in the shell on tilde.town.

                                                            1. 1

                                                              This is a cool little site!

                                                            1. 6

                                                              I didn’t understand the point of the Rust borrow checker until I started using modern c++.

                                                              1. 1

                                                                Could you expound on that?

                                                                1. 3

                                                                  the Rust borrow checker is effectively a compile-time move-semantics check – something that C++ doesn’t have.


                                                                  MyClass a;
                                                                  func(std::move(a)); // a is moved from here
                                                                  func2(a); // this is undefined behavior, but the compiler does not catch it
                                                                  1. 4

                                                                    The standard only says that standard library objects (unless otherwise specified) are left in a “valid but unspecified state” after moving. You can still use the object after moving, e.g. query a size or reassign it. For custom objects, you could do whatever you want (or nothing), so it’s not strictly undefined behavior.

                                                                    Hence the compiler can’t really complain about it on a per-translation unit level, especially if functions/constructors/assignment operators are forward declared. A linter should definitely alert about that, though.

                                                                    Rust puts more constraints on what it means to move an object, so it can more effectively check if a name is bound to live object.

                                                                    1. 1

                                                                      Is there ever a case where you would want to write code like the above?

                                                                      I’ve run into use-after-move bugs in C++ code, and I think the compiler could have easily caught them and issued a warning (in my case, but not in every case). I expect this to become a commonly-enabled warning in the future.

                                                                      1. 1

                                                                        No! I’ve run into the same bugs :P. I guess func could fail in some way that puts a back in its previous state, and then you’d call some other function, but that sounds incredibly smelly. It’s probably just this way because lifetimes are handled by scope and they didn’t want to change that too dramatically.

                                                                        1. 1

                                                                          It is not possible to undo the move (unless the moved-to object is a temporary), because the state of the moved-to object was destroyed by the move.

                                                                          I think I see what you are getting at, though. Since a is a valid object after the move, it is legal in C++ to use a after the move (e.g. replace func2(a) with the invocation of ~MyClass() as a goes out of scope).

                                                                          So I know it’s impossible to catch every case of use-after-move without flagging some valid code, but I still feel that the compiler can and should warn about some cases (in my case the compiler could have easily proved that the moved-from object held a null pointer and that the member function that was later invoked resulted in UB).

                                                                          1. 1

                                                                            I was watching this Meeting C++ talk and couldn’t help but think of this exchange :P


                                                              1. 2

                                                                “Cosine Similarity tends to determine how similar two words or sentence are, It can be used for Sentiment Analysis, Text Comparison and being used by lot of popular packages out there like word2vec.”

                                                                Wouldn’t any distance metric do? As long as you choose the right vector space?

                                                                I was under the impression cosine similarity makes it easy to batch computation with highly tuned matrix multiplications

                                                                1. 4

                                                                  Cosine similarity is not actually a metric and I think that is why people use it. Showing it is not a metric is easy because for metric spaces the only points that are zero distance away from another point are the points themselves. Cosine similarity in that sense fails to be a metric because for any given vector there are infinitely many vectors orthogonal to it and hence “zero” distance away. (But I just realized it’s even simpler than that because cosine similarity also gives negative values so it fails the positivity test as well.)

                                                                  The relation to matrices that you mentioned are about positive definite bilinear forms and those give rise to dot products that are expressible as matrix multiplications and in vector spaces there is a way to define a metric based on dot products by defining the metric to be the dot product of the difference between two vectors with itself. Following through the logic the positive definite condition ends up being what is required to make this construction a metric.

                                                                  1. 3

                                                                    This is not really the problem. People convert cosine similarity into a pseudo-distance by taking 1 - cos(u,v), which solves the problems that you mention.

                                                                    The true problem is that the cosine is a non-linear function of the angle between two vectors, which violates the triangle inequality. Consider vectors a and c with an angle of 90 degrees: their cosine pseudo-distance is 1. Now add a vector b at an angle of 45 degrees to both a and c. The cosine pseudo-distances between a and b, and between b and c, are each about 0.29. So the distance from a to c via b is shorter than the distance from a to c directly.
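                                                                    That counterexample is easy to check numerically:

```python
import math

def cos_pseudo_dist(theta_deg):
    """1 - cos(angle) between two unit vectors theta_deg degrees apart."""
    return 1 - math.cos(math.radians(theta_deg))

d_ac = cos_pseudo_dist(90)          # a to c directly
d_ab_bc = 2 * cos_pseudo_dist(45)   # a to c via b, 45 degrees per leg
print(d_ac, d_ab_bc)                # ~1.0 vs ~0.586
assert d_ab_bc < d_ac               # the detour is "shorter": no triangle inequality
```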

                                                                    1. 2

                                                                      Even with the pseudo-distance you still have the same problem with zero distances, as 1 - cos(u,v) is zero whenever the two points are tau radians apart.

                                                                      1. 2

                                                                        In most vector space models, these would be considered to be vectors with the same direction, so 0 would be the correct distance. Put differently, the maximum angle between two vectors is 180 degrees.

                                                                      2. 1

                                                                        Good point. I didn’t think about the triangle inequality failure.

                                                                      3. 1

                                                                        Thanks! Why would people want that (not actually being a metric)?

                                                                        1. 1

                                                                          The Wikipedia article has a good explanation I think. When working with tf-idf weights the vector coefficients are all positive so cosine similarity ends up being a good enough approximation of what you’d want out of an honest metric that is also easy to compute because it only involves dot products. But I’m no expert so take this with a grain of salt.

                                                                          So I take back what I said about using it because it’s not a metric. I was thinking it has something to do with clustering by “direction” and there is some of that but it’s also pretty close to being a metric so it seems like a good compromise when trying to do matching and clustering types of work where the points can be embedded into some vector space.

                                                                      4. 3

                                                                        I was under the impression cosine similarity makes it easy to batch computation with highly tuned matrix multiplications

                                                                        The same applies to Euclidean distance when computed with the law of cosines. Since the dot product is the cosine of the angle between two vectors, scaled by the vector magnitudes, the squared Euclidean distance between two points is:

                                                                        |u|^2 + |v|^2 - 2u·v

                                                                        (|u|^2 is the squared L2 norm of u, equations are a bit hard to do here)

                                                                        Computing the euclidean distance in this way especially pays off when you have to compute the euclidean distance between a vector and a matrix or two matrices. Then the third term is a matrix multiplication UV, which as you say, is very well-optimized in BLAS libraries. The first two terms are negligible (lower order of complexity).
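                                                                        A quick numerical sanity check of that identity, in plain Python (toy vectors of my choosing):

```python
def sq_l2(v):
    """Squared L2 norm of a plain list of numbers."""
    return sum(x * x for x in v)

def dot(u, v):
    """Dot product of two equal-length lists."""
    return sum(a * b for a, b in zip(u, v))

u = [1.0, 2.0, 3.0]
v = [4.0, 0.0, -1.0]

direct = sq_l2([a - b for a, b in zip(u, v)])       # |u - v|^2 computed directly
via_dot = sq_l2(u) + sq_l2(v) - 2 * dot(u, v)       # law-of-cosines form
assert direct == via_dot  # |u - v|^2 == |u|^2 + |v|^2 - 2 u.v
```

                                                                        The point of the rewrite is that only the -2u·v term depends on both arguments, so for batched queries it becomes one matrix multiplication.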

                                                                        One of the reasons that cosine similarity is popular in information retrieval is that it is not sensitive to vector magnitudes. Vector magnitudes can vary quite a bit with most common metrics (term frequency, TF-IDF) because of varying document lengths.

                                                                        Consider e.g. two documents A and B, where document B is document A repeated ten times. We would consider these documents to have exactly the same topic. With a term frequency or TF-IDF vector, the vectors of A and B would have the same direction, but different lengths. Since cosine similarity measures just the angle between the vectors, it correctly tells us that the documents have the same topic, whereas a metric such as Euclidean distance would indicate a large difference due to the different vector magnitudes. Of course, Euclidean distance could be made to work by normalizing document (and query) vectors to unit vectors.

                                                                        I don’t want to bash this article too much, but it is not a very good or accurate description of the vector model of information retrieval. Interested readers are better served by e.g. the vector model chapter from Manning et al.:


                                                                        1. 1

                                                                          I must have been confusing the two