1. 2

    I’m playing with NVIDIA’s PTX ISA. I was hoping there’d be some utilities like xbyak/asmjit for it, but I’m not aware of any. I’m just using PyCUDA for now but will need a C++ solution eventually.
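
    A minimal sketch of the PyCUDA route, in case it’s useful to anyone else poking at PTX. The PTX file and kernel name here are hypothetical placeholders:

    import pycuda.autoinit                           # creates a context on the default device
    import pycuda.driver as cuda

    ptx_src = open("kernel.ptx").read()              # hypothetical hand-written or generated PTX
    mod = cuda.module_from_buffer(ptx_src.encode())  # load the PTX text as a module
    kernel = mod.get_function("my_kernel")           # hypothetical entry point defined in the PTX
    # kernel(..., block=(256, 1, 1), grid=(n_blocks, 1)) launches it as usual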

    1. 2

      You accidentally multiplied in a factor of 24. 72 years is only ~26k days.
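
      (For reference: 72 × 365 ≈ 26,280, i.e. ~26k days; multiplying by 24 again gives ~631k, which counts hours rather than days.)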

      1. 1

        Wow, that’s a huge mistake. I’ll update it. Thanks!

      1. 5

        Just finished Capitalist Realism by Mark Fisher and am about halfway through Culture by Terry Eagleton.

        1. 4

          I’ve been working on a drop-in plugin to compile PyTorch models with TVM into native CPU code. The results have been great: a trivial two-line addition to a classic ResNet model gives a 2-3x speedup over the regular backend (which suffers from thread contention).

          https://github.com/pytorch/tvm
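
          For the curious, the two-line addition looks roughly like this; the exact module and enable call are written from memory of the repo’s README, so treat them as assumptions and check the link above:

          import torch_tvm      # assumed package name from the linked repo
          torch_tvm.enable()    # assumed: routes eligible TorchScript subgraphs through TVM

          # after this, tracing/scripting the model as usual should pick up the TVM backend:
          # traced = torch.jit.trace(model, example_input)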

          1. 2

            That is neat, bwasti… It’s MORE neat because I’ve seen it before! https://tilde.town/~login/writo/

            That’s the web interface, read only AFAIK. The console interface is invoked by executing writo in the shell on tilde.town.

            1. 1

              This is a cool little site!

            1. 6

              I didn’t understand the point of the Rust borrow checker until I started using modern C++.

              1. 1

                Could you expound on that?

                1. 3

                  The Rust borrow checker is effectively a set of compile-time move-semantics checks – something that C++ doesn’t have.

                  e.g.

                  MyClass a;
                  func(std::move(a));
                  func2(a); // this is undefined behavior, but the compiler does not catch it
                  
                  1. 4

                    The standard only says that standard library objects (unless otherwise specified) are placed in a “valid but unspecified state” after moving. You can still use the object after moving, e.g. query a size or reassign it. For custom objects, you could do whatever you want (or nothing), so it’s not strictly undefined behavior.

                    Hence the compiler can’t really complain about it at the per-translation-unit level, especially if functions/constructors/assignment operators are forward declared. A linter should definitely alert about that, though.

                    Rust puts more constraints on what it means to move an object, so it can more effectively check whether a name is bound to a live object.

                    1. 1

                      Is there ever a case where you would want to write code like the above?

                      I’ve run into use-after-move bugs in C++ code, and I think the compiler could have easily caught them and issued a warning (in my case, but not in every case). I expect this to become a commonly-enabled warning in the future.

                      1. 1

                        No! I’ve run into the same bugs :P. I guess you might do it if func fails in some way, which then puts a back in its previous state, and you then call some other function – but that sounds incredibly smelly. It’s probably just this way because lifetimes are handled by scope and they didn’t want to change that too dramatically.

                        1. 1

                          It is not possible to undo the move (unless the moved-to object is a temporary), because the state of the moved-to object was destroyed by the move.

                          I think I see what you are getting at, though. Since a is a valid object after the move, it is legal in C++ to use a after the move (e.g. replace func2(a) with the invocation of ~MyClass() as a goes out of scope).

                          So I know it’s impossible to catch every case of use-after-move without flagging some valid code, but I still feel that the compiler can and should warn about some cases (in my case the compiler could have easily proved that the moved-from object held a null pointer and that the member function that was later invoked resulted in UB).

                          1. 1

                            I was watching this Meeting C++ talk and couldn’t help but think of this exchange :P

                            https://youtu.be/9-_TLTdLGtc?t=3736

              1. 2

                “Cosine Similarity tends to determine how similar two words or sentence are, It can be used for Sentiment Analysis, Text Comparison and being used by lot of popular packages out there like word2vec.”

                Wouldn’t any distance metric do? As long as you choose the right vector space?

                I was under the impression cosine similarity makes it easy to batch computation with highly tuned matrix multiplications

                1. 4

                  Cosine similarity is not actually a metric and I think that is why people use it. Showing it is not a metric is easy because for metric spaces the only points that are zero distance away from another point are the points themselves. Cosine similarity in that sense fails to be a metric because for any given vector there are infinitely many vectors orthogonal to it and hence “zero” distance away. (But I just realized it’s even simpler than that because cosine similarity also gives negative values so it fails the positivity test as well.)
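
                  A quick numpy sketch of those two failures (the vectors are just arbitrary concrete examples):

                  import numpy as np

                  def cos_sim(u, v):
                      return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

                  print(cos_sim(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # 0.0 for orthogonal vectors: "zero distance" apart, yet not the same point
                  print(cos_sim(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))  # -1.0: the similarity can go negative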

                  The relation to matrices that you mentioned is about positive-definite bilinear forms. Those give rise to dot products that are expressible as matrix multiplications, and in a vector space you can define a metric from a dot product by taking the dot product of the difference between two vectors with itself. Following through the logic, the positive-definite condition ends up being exactly what is required to make this construction a metric.

                  1. 3

                    This is not really the problem. People convert cosine similarity into a pseudo-distance by taking 1 - cos(u,v). This solves the problems that you mention.

                    The true problem is that the cosine is a non-linear function of the angle between two vectors, which violates the triangle inequality. Consider the vectors a and c with an angle of 90 degrees between them. Their cosine pseudo-distance is 1. Now add a vector b that has an angle of 45 degrees to both a and c. The cosine pseudo-distances between a and b and between b and c are both 0.29 rounded. So the distance from a to c via b is shorter than the distance from a to c directly.
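
                    A quick numeric check of that example (a small numpy sketch; the vectors are just one concrete choice of a, b and c at those angles):

                    import numpy as np

                    a = np.array([1.0, 0.0])                  # reference direction
                    b = np.array([1.0, 1.0]) / np.sqrt(2.0)   # 45 degrees from both a and c
                    c = np.array([0.0, 1.0])                  # 90 degrees from a

                    def cos_dist(u, v):
                        return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

                    print(cos_dist(a, c))                     # 1.0
                    print(cos_dist(a, b) + cos_dist(b, c))    # ~0.59 < 1.0, so the triangle inequality fails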

                    1. 2

                      Even with the pseudo-distance you still have the same problem with zero distances, as 1 - cos(u,v) is zero whenever the two points are tau radians apart.

                      1. 2

                        In most vector space models, these would be considered to be vectors with the same direction, so 0 would be the correct distance. Put differently, the maximum angle between two vectors is 180 degrees.

                      2. 1

                        Good point. I didn’t think about the triangle inequality failure.

                      3. 1

                        Thanks! Why would people want that (not actually being a metric)?

                        1. 1

                          The Wikipedia article has a good explanation, I think. When working with tf-idf weights, the vector coefficients are all positive, so cosine similarity ends up being a good-enough approximation of what you’d want out of an honest metric, and it’s easy to compute because it only involves dot products. But I’m no expert, so take this with a grain of salt.

                          So I take back what I said about using it because it’s not a metric. I was thinking it has something to do with clustering by “direction”, and there is some of that, but it’s also pretty close to being a metric, so it seems like a good compromise for matching and clustering types of work where the points can be embedded into some vector space.

                      4. 3

                        I was under the impression cosine similarity makes it easy to batch computation with highly tuned matrix multiplications

                        The same applies to Euclidean distance when computed with the law of cosines. Since the dot product is the cosine of the angle between two vectors, scaled by the vector magnitudes, the squared Euclidean distance between two points is:

                        |u|^2 + |v|^2 - 2u·v

                        (|u|^2 is the squared L2 norm of u, equations are a bit hard to do here)

                        Computing the Euclidean distance in this way especially pays off when you have to compute the distance between a vector and a matrix, or between two matrices. Then the third term is a matrix multiplication UV, which, as you say, is very well optimized in BLAS libraries. The first two terms are negligible (lower order of complexity).
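
                        A small numpy sketch of that trick (function and variable names are mine; U and V hold row vectors):

                        import numpy as np

                        def pairwise_sq_euclidean(U, V):
                            # ||u||^2 + ||v||^2 - 2 u·v for every row pair, with one big matrix multiply
                            sq_u = (U ** 2).sum(axis=1)[:, None]   # shape (n, 1)
                            sq_v = (V ** 2).sum(axis=1)[None, :]   # shape (1, m)
                            return sq_u + sq_v - 2.0 * (U @ V.T)   # shape (n, m)

                        U = np.random.rand(1000, 300)
                        V = np.random.rand(500, 300)
                        D = pairwise_sq_euclidean(U, V)            # the U @ V.T term dominates and hits BLAS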


                        One of the reasons that cosine similarity is popular in information retrieval is that it is not sensitive to vector magnitudes. Vector magnitudes can vary quite a bit with most common metrics (term frequency, TF-IDF) because of varying document lengths.

                        Consider e.g. two documents A and B, where document B is document A repeated ten times. We would consider these documents to have exactly the same topic. With a term frequency or TF-IDF vector, the vectors of A and B would have the same direction, but different lengths. Since cosine similarity measures just the angle between the vectors, it correctly tells us that the documents have the same topic, whereas a metric such as Euclidean distance would indicate a large difference due to the different vector magnitudes. Of course, Euclidean distance could be made to work by normalizing document (and query) vectors to unit vectors.

                        I don’t want to bash this article too much, but it is not a very good or accurate description of the vector model of information retrieval. Interested readers are better served by e.g. the vector model chapter from Manning et al.:

                        https://nlp.stanford.edu/IR-book/pdf/06vect.pdf

                        1. 1

                          I must have been confusing the two

                      1. 4

                        I’ve got a 9360 model with the QHD screen, after years of hating on laptops because of the sub-par characteristics. Very satisfied with it.

                        My criteria are pretty much just:

                        • A screen resolution that isn’t 1377x768 (It’s 2018! Why are these still shipping?)
                        • Screen must not have adaptive brightness, or at least have the ability to completely disable this
                        • WiFi/Bluetooth must not be Broadcom. Linux support is a crapshoot when it comes to these chips.
                        • User-upgradeable components (Thankfully, Dell provides a user manual describing how to replace the M.2 drive and wireless chip)
                        • Standard UEFI implementation (i.e. ability to disable Secure Boot)
                        • Standard keyboard layout (funky layouts are a pain)

                        Unfortunately, laptop vendors are only in the game to make a buck, and their offerings aren’t acceptable. At least Apple does this right.

                        1. 2

                          Screen must not have adaptive brightness

                          Isn’t that a completely software thing? Are there implementations of adaptive brightness directly in firmware now?

                            1. 1

                               As bwasti has commented, it is indeed implemented in firmware. However, you were unable to enable/disable the functionality until a recent firmware update made that possible. I find that crazy!

                            2. 2

                              Standard UEFI implementation (i.e. ability to disable Secure Boot)

                               Any laptop that ships with Windows (or can ship with Windows) and uses the x86 architecture must have a “disable Secure Boot” option:

                              (…) Intel-based systems certified for Windows 8 must allow secure boot to enter custom mode or be disabled

                              Source: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface#Secure_boot_criticism

                              1. 2

                                Or even better, add your own key to the chain of trust, remove the ones that aren’t yours, and sign your own kernel.

                                1. 2

                                   Exactly. That’s what I do (with sbupdate), additionally booting the kernel directly as an EFI application (so the kernel acts as the bootloader, no GRUB necessary!).

                              2. 2

                                A screen resolution that isn’t 1377x768 (It’s 2018! Why are these still shipping?)

                                I bought one of these recently for development. Since my environment is command line based, extra pixels don’t really help much; they can make everything smaller, but there are practical limits on what’s comfortable/what my eyes can do. Probably my “ideal” resolution would be a little higher, but 1080p on a 13” laptop really requires software scaling to be usable, which introduces its own share of bugs and quirks, so it’s not as simple as “more is better.”

                                The unexpected/unplanned bonus is that it can run VMs at 1024x768 which allows it to run legacy systems without scaling, making them much nicer. I don’t spend that much time emulating legacy systems, but I do use them to test my software on older environments, and it’s kind of a feature to have them work so well. It also needs less GPU/uses less power/has better battery life.

                                Not saying it’s the right thing for everyone, but maybe the answer to “why?” is because some of us actually want them.

                                1. 1

                                  Maybe they (1377x768) are shipping because people still like them? I’d prefer my x230 for work on the go over both of the T4x0p I’ve used any day. So yeah, it’s a bit apples to oranges, but afaik the XPS 13 has a similar resolution compared to what the T4x0p has on 14” - I find it cramped and small.

                                  1. 2

                                    The X200s isn’t really any bigger than the X230 but has a much better 1440x900 display.

                                    1. 2

                                       Not saying stuff can’t be improved, but this was re: QHD (2560×1440) – and I’m absolutely not a fan of huge resolutions. At least until HiDPI support in Linux (with external screens) is on the level of OS X. Not that I’m a huge fan of OS X, but I’ve never seen any problems with a Retina MBP + normal screen, unlike Linux on HiDPI + a normal screen.

                                    2. 1

                                      1377x76

                                       Sorry for being nitpicky here, but it’s actually 1366×768. It still exists because it’s much cheaper than an FHD (1920x1080) screen. In an extremely low-margin laptop business, leveling the parts cost is a major revenue-optimization strategy (even for Apple). Same reason why low-powered crappy netbooks still exist.

                                  1. 2

                                    The thesis of this article focuses quite narrowly on a very specific set of GUI related technology. Just looking at the literature today, it’s clear that much of the development in computing is no longer related to that. I believe this makes sense, as there is little technical background required to think of a lot of these ideas (not to discount the creativity and genius required). Nowadays, most research and development in technology is built on top of well developed theories that require a fair amount of training or experience. I’d fathom the deltas there are much larger than micro-optimizations to user interfaces.

                                    I’m also curious what makes the goals of these folks particularly important. Is it just a dogma thing?

                                    1. 5

                                      I’m also curious what makes the goals of these folks particularly important. Is it just a dogma thing?

                                      I’d personally much rather live in the world Alan Kay is trying to build than the one Steve Jobs built. Double that for Ted Nelson. (As a developer, I already get the experience Engelbart wanted everybody to have, so long as I avoid collaboration: developer tools that haven’t been productized tend to actually be very good, and Engelbart’s ideas about learning curves – that serious people don’t mind putting in some effort to learn to use a serious system – is in line with how many developers think today.)

                                      However, the main point of the essay is that, by misrepresenting their plans as beta versions of the present day, we deny people an opportunity to imagine alternatives they might prefer. (We also insult them, of course, by pretending that they had so little imagination that the world we live in was the best they could think of.)

                                      1. 4

                                        Swift for TensorFlow is one attempt to close the prototype/production gap. It’s a modified version of the Swift compiler. IMO, Swift is about as easy to write as Python, but catches shape mismatches and other things at compile-time, so maybe it’ll catch on.

                                        1. 2

                                           shape mismatches are caught on the first run in most frameworks – that certainly isn’t much of a value add.

                                        1. 1

                                          “Unfortunately, using FHE is currently very complicated, and a great deal of expertise is required to properly implement nontrivial homomorphic computations.”

                                          I was under the impression that typical FHE enables AND and OR operations, giving you the ability to bootstrap any language. What’s complicated about this? I’m guessing there are some subtle ways to misuse that

                                          1. 3

                                             Don’t they own the software? Multi-licensing is common practice. Copyright isn’t meant to protect the recipients of software, it’s meant to protect the creators. There are different regulatory means to protect consumers from potential issues involving malicious activity in Microsoft software.

                                            1. 7

                                               Copyright isn’t meant to protect the recipients of software, it’s meant to protect the creators.

                                              Open source, unlike copyright, was born out of a movement meant to protect the recipients of software too.

                                              Whilst (in the US) copyright is meant to promote the progress of science and useful arts, the article was discussing how it seems reasonable to infer the binaries you’re downloading are under an open source license. They aren’t.

                                              1. 13

                                                Open source, unlike copyright, was born out of a movement meant to protect the recipients of software too.

                                                The idea of protecting users freedom came from the Free software movement and copyleft licensing:

                                                https://en.wikipedia.org/wiki/Free_and_open-source_software

                                                1. 2

                                                  “Open source” was born from rebranding “Free software.”

                                                  https://en.wikipedia.org/wiki/History_of_free_and_open-source_software#The_launch_of_Open_Source

                                                  Regardless, the article focuses on free software.

                                                2. 1

                                                  AFAIK, even if the binaries were provided under the same license, with the MIT license it’s perfectly acceptable (if misleading) to provide a binary built from sources modified from the “original”. The license merely states you have to acknowledge the authors’ copyright and add the no warranty clause.

                                                  1. 4

                                                    AFAIK, you’re right and the point of the article was how Microsoft misleads people into unknowingly agreeing to conditions that are in direct conflict with free software.

                                                    The author raised compiling a binary yourself as one mechanism to ensure you’re running a free VSC binary.

                                              1. 1

                                                 I wonder how much space you have on your hosting :D It would be really nice if you could show the size of the SQLite db files on the site as well. I love SQLite due to its robust nature and solid code base!

                                                1. 1

                                                   not much space, which is why I have a small disclaimer about jott.live not being used for important things. I’m wondering if there are any security concerns associated with displaying the storage being used by SQLite, but I’ll work on adding that now.

                                                  1. 2

                                                    Just to help you out :)

                                                    pragma page_size;
                                                    pragma page_count;
                                                    

                                                     Multiply the first by the second and you will have a safe file size :D
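
                                                     If it helps, here is a tiny sketch of doing that from Python’s standard sqlite3 module (the database file name is hypothetical):

                                                     import sqlite3

                                                     con = sqlite3.connect("notes.db")   # hypothetical path to the jott.live database
                                                     page_size = con.execute("PRAGMA page_size").fetchone()[0]
                                                     page_count = con.execute("PRAGMA page_count").fetchone()[0]
                                                     print(page_size * page_count, "bytes on disk")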

                                                1. 6

                                                  In terms of minimalist pastebins, I’m fond of http://sprunge.us, which doesn’t need a dedicated client tool. This one is quite featureful by comparison, with editing of notes and TeXdown formatting!

                                                  1. 2

                                                     this is nice, I’ve gone ahead and partially copied the API (you still need to provide a name for the script):

                                                    echo "test" | curl -F 'note=<-' https://jott.live/save/raw/<name>/
                                                    
                                                  1. 8

                                                    Turn off JS then? Isn’t this what a modern browser is by definition? A tool that executes arbitrary code from URLs I throw at it?

                                                    1. 7

                                                      I am one of those developers who surfs the web with “javascript.options.wasm = false” and NoScript to block just about 99% of all websites from running any Javascript on my home machine unless I explicitly turn it on. I’ve also worked on various networks where Javascript is just plain turned off and can’t be turned on by regular users. I’ve heard some, sadly confidential, war stories that have led to these policies. They are similar in nature to what the author states in his Medium post.

                                                      If you want to run something, run it on your servers and get off my laptop, phone, TV or even production machines. Those are mine, and if your website can’t handle that, then your website is simply terrible from a user-experience viewpoint, dreadfully inefficient, and doomed to come back haunting you when you are already in a bind because of an entirely different customer or issue. As a consequence of this way of thinking, a few web-driven systems I wrote more than a decade ago are still live and going strong without a single security incident and without any performance issues, while at the same time reaping the benefits of the better hardware they’ve been migrated to over the years.

                                                      Therefore it is still my firm belief that a browser is primarily a tool to display content from random URLs I throw at it and not an application platform which executes code from the URLs thrown at it.

                                                      1. 3

                                                        That’s a fine and valid viewpoint to have, and you are more than welcome to disable JS. But as a person who wants to use the web as an application platform, are you suggesting that browsers should neglect people like myself? I don’t really understand what your complaint is.

                                                        1. 2

                                                          But as a person who wants to use the web as an application platform, are you suggesting that browsers should neglect people like myself?

                                                          I don’t think so. But using Web Applications should be opt-in, not opt-out.

                                                          1. 3

                                                            Exactly.

                                                            There are just too many issues with JavaScript-based web applications. For example: Performance (technical and non-technical). Accessibility (blind people perceive your site through a 1x40 or 2x80 Braille-character-display matrix, so essentially half a line or 2 lines of a terminal). Usability (see Gmail’s pop-out feature, which is missing from by far most modern web applications and which you get almost for free if you just see the web as a fancy document-delivery/viewing system). Our social status as developers as perceived by the masses: they think that everything is broken, slow and unstable, not because they can make a logical argument, but because they “feel” (in multiple ways) that it is so. And many more.

                                                            However, the author’s focus is on security. I totally get where the author is coming from with his “The web is still a weapon” posts. If I take off my developer goggles and look through a user’s eyes, it sure feels like it is all designed to be used as one. He can definitely state his case in a better way, although I think that showing that you can interact with an intranet through third-party JavaScript makes the underlying problems, and therefore the message too, very clear.

                                                            It also aligns with the CIA’s Timeless tips for sabotage which you can read on that link.

                                                            We should think about this very carefully, despite the emotionally inflammatory speech which often accompanies these types of discussions.

                                                            1. 1

                                                              He can definitely state his case in a better way

                                                              I sincerely welcome suggestions.

                                                        2. 1

                                                          by the same stretch of logic you could claim any limited subset of functionality is the only thing computers should do in the name of varying forms of “security.”

                                                          perhaps something like: “The computer is a tool for doing computation not displaying things to me and potentially warping my view of reality with incorrect information or emotionally inflammatory speech. This is why I have removed any form of internet connectivity.”

                                                        3. 7

                                                          This is not a bug and it’s not RCE. JavaScript and headers are red herrings here. If you request some URL from a server, you’re going to receive what that server chooses to send you, with or without a browser. There’s a risk in that to be sure, but it’s true by design.

                                                          1. 3

                                                            Turn off your network and you should eliminate the threat. Turn your computer off completely for a safer mitigation.

                                                          1. 1

                                                             There’s always going to be faulty argumentation, and statistical reasoning is the new trend. That doesn’t mean there’s something better about deductive reasoning, it’s just easier for most folks to validate. I’m guessing that statistical literacy will increase with time.

                                                            1. 1

                                                               good point, deductive reasoning is not per se better. But with statistical literacy, in many everyday situations where empirical arguments are made, the answer will more likely be: We need more experiments. And those are often expensive and time-consuming.

                                                            1. 2

                                                              I’d reply to this, but I’m really busy doing dot products on big piles of vectors… they’re really big vectors…

                                                              1. 2

                                                                Computationally, I don’t think they’re that big. Typical vision models have matrix multiplications that are quite small (on the order of 1000x1000 matrices for the huge ones) relative to the problems solved for computational chemistry (a couple of orders of magnitude bigger).

                                                                1. 2

                                                                  I’m doing it by hand, and I’m not good at math.

                                                                  1. 1

                                                                    I guess that’s why you are procrastinating on Lobste.rs then?

                                                              1. 4

                                                                 https://en.m.wikipedia.org/wiki/PPAD_(complexity)

                                                                 This was posted on Hacker News a while back, and someone mentioned that finding a Nash equilibrium is an equivalent problem and that it is known to be NP-hard.

                                                                1. 8

                                                                   A big part of your post is how these protocols don’t even run or survive network conditions. In the centralized model, one of my favorites was FoundationDB. The testing section shows their development process was so rigorous that the guy who runs Jepsen testing on databases didn’t bother testing it, since he’d be throwing less at it. In this recent presentation, one of our Lobsters applies similarly rigorous approaches in their work, with a memory-safe language on top of it.

                                                                  In comparison, these math coins often look like they’re not even trying. The activities I described found lots of failures in distributed software with benign components. Many more can happen when some parties are malicious intelligently crafting bad inputs. What’s their excuse for decentralized stuff not being as rigorous by default as centralized stuff like I cited? I doubt they’ll have a good one.

                                                                  1. 5

                                                                    When these huge and glaring problems are raised, most of the counterargument is “number go up”. Certainly that’s what the IOTA cult comes out with, in between the harassment and legal threats.

                                                                    The big promise is doing the Bitcoin magical flying unicorn pony tricks without wasting an Ireland of electricity. This feels to me like there’s gotta be some sort of no free lunch effect in play, though I wouldn’t claim to be able to prove it.

                                                                    At least a non-zero number of these guys are more or less sane. They might be wrong or do something dumb, but at least it’ll be an informed wrong.

                                                                    I was in polite mode for this post, but mathcoin white papers should mostly be read as mad scientist villain monologues. “They said I was mad - but I’LL SHOW THEM ALL!!” edit: actually, I’ll just go add that!

                                                                    1. 1

                                                                      RE the no free lunch:

                                                                      For RaiBlocks, the trade-off is that it’s so decentralized that you can’t have a full view of the state of the universe.

                                                                       Each user has their own blockchain, so you only need the blockchains related to transactions that interest you. (Transactions are movements between blockchains, starting from a genesis chain that everyone knows about.)

                                                                       Basically the system works with partial information, but that means that the system will only be partially visible depending on where you stand.

                                                                    2. 2

                                                                      Empirical fault tolerance testing (as used for simpler and totally solved problems like in your examples) is insufficient for analyzing malicious party actions. Unfortunately, though, no one has come up with a clean mathematical framework for capturing all the intricacies of the economics of the system, so the math is still pretty ugly.

                                                                      1. 4

                                                                        Empirical testing in that model is just about seeing if any inputs can throw off specified behavior. One tool among many but pretty good at finding problems.

                                                                        In the traditional models, the builders ensure actions are traceable and revocable with boundary conditions or sanity checks built into the system. Then, they monitor for abuse. They block or reverse what they detect. This system works pretty well in practice. Pieces of the underlying methods have been formally verified as well. The math might be ugly for blockchain-based methods but not for the core of traditional models with decentralized checking and incident response. It’s why I favor designing around the latter.

                                                                        1. 2

                                                                          No amount of empirical testing is convincing if the mathematical model is not sound. In central authority systems the math is quite straightforward and the testing reveals implementation inefficiencies and bugs. In new and complex trustless systems the math is still pretty suspect, so care and effort need to be spent there instead. This is why the two look very different at the moment.

                                                                          1. 4

                                                                             Again, I’m saying empirical testing can find problems in protocols or other software, not that it should replace math. Also, it finds problems the math might not find. It can also be used to test the math or its instantiation, though.