1. 3

    Ok, breathe. Let’s work the problem.

    At this point, not a moment later, I would have taken a low-level image of the device. Just ’cause.

    1. 3

      I want a physical button on my keyboard named “LOCAL”. It’s a modifier. And I want there to be no mechanism for sending any “local” codes to a remote machine. In fact, I want the specifications for any interstitial infrastructure to explicitly forbid the transmission of such codes. They’d be of extremely limited use, because of that. Perfect.

      1. 3

        do elaborate, why would you want this?

        1. 9

          I want a key that is just mine. I don’t want to have to type RET ~ . to terminate an ssh session. That strange invocation is required because all the other keys are being sent to the remote. I want LOCAL+c (like ctrl+c, but guaranteed to be received and handled locally). I bet I could find more uses for it, too.

          edit: fix typo: terminal –> terminate

          1. 2

            ah I see, I thought it was something on the more eccentric side, where similar phrasing has been used, but there the meaning was about input modes used to taint-tag input to prevent it from crossing certain boundaries (oops, I pasted the password thing into the wrong window, which happened to be networked).

            1. 1

              That’s in the ballpark, I guess. (I have a passing familiarity with ‘taint mode’ from perl?)

              The first example I was thinking of was something like “7-bit ascii” vs “8-bit ascii”.. (See https://en.wikipedia.org/wiki/8-bit_clean I guess.) But the control codes are all represented within the first five bits, so it’s not the example I was thinking it was!

              The second example is Private Address Spaces. 192.168.*.* and friends don’t get passed between routers. Sure, some manufacturer or crazy admin could break that rule, but they’d be “wrong, wrong, wrong!”, and everybody would know it.

              I want a button reserved for private use. Now that I thought it through, my idea is clearly not about terminals. Sorry OhhhhYeaaaaah. :)

      1. 2

        Yanking both lower control arms off the front of my Nissan and replacing them. If you’re near Bellevue, Nebraska, check out DIY Garage. In short, you rent their hydraulic lift and tools for a flat hourly rate. It’s kinda awesome.

        1. 1

          I think there’s a bug in the javascript for Catholic salvation. Obviously, part of the implementation has been left out (our booleans get modified in another thread?). Assuming that the missing code is merely straightforward calls to change the value of those booleans during the course of life events, rather than some complex logic… then it appears that you only have to get Reconciled once. The rest of your mortal sins are “free”, I guess.

          (:

          1. 0

            There is one major benefit of a Spiking Neural Networks is the power consumption. A ‘normal’ neural network uses big GPUs or CPUs that draw hundreds of Watts of power. SNN only uses for the same network size just a few nano Watts.

            I stopped reading here. The author has posited that SNN are 12 orders of magnitude more power efficient than typical neural networks.

            A line of zeros that long requires either a citation or a correction.

            1. 4

              Not a problem at all, thanks for your response! Here’s the citation:

              Rozenberg, M. J., O. Schneegans, and P. Stoliar. “An ultra-compact leaky-integrate-and-fire model for building spiking neural networks.” Scientific reports 9.1 (2019): 1-7.

              1. 2

                Ok, this is a hardware implementation wherein each “neuron” is made out of two transistors and a thyristor. That is dramatically different than the software implementation I thought we were talking about! Carry on, forget my comment! :)

                1. 1

                  An ultra-compact leaky-integrate-and-fire model for building spiking neural networks

                  I don’t know AI or EE at all, so this might be a dumb question, but: I read the paper and it looks like they’re manually encoding the weights and connections as circuit components. Is that correct? If so, wouldn’t most of the energy savings be from using an analog computer?

                  1. 3

                    That is correct. With ‘normal’ computation everything is expected to be exact and as precise as possible, but an ANN might only reach, say, 98% accuracy anyway, so accuracy and precision in the individual calculations matter less. That is why they can calculate with analog signals. Normal CPUs or GPUs are general purpose, so to calculate an integration the CPU needs many cycles (many transistors need to do stuff). In an analog circuit the same thing is ‘easily’ done with a few passive components.
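
                    Here’s a minimal software leaky-integrate-and-fire neuron, just to show the kind of integration the analog circuit gets almost for free from its parts. This is only a sketch, and the parameter values are my own illustrative choices, not numbers from the paper:

                    import numpy as np

                    # Minimal leaky-integrate-and-fire (LIF) neuron. Parameters are illustrative only.
                    def lif(input_current, dt=1e-4, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
                        v = v_rest
                        spike_times = []
                        for step, i in enumerate(input_current):
                            # leaky integration: dv/dt = (-(v - v_rest) + i) / tau
                            v += dt * (-(v - v_rest) + i) / tau
                            if v >= v_thresh:                # threshold crossing -> spike
                                spike_times.append(step * dt)
                                v = v_reset                  # reset the membrane potential
                        return spike_times

                    # a constant drive above threshold produces a regular spike train
                    print(lif(np.full(2000, 1.5))[:5])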

                    1. 2

                      The submission probably needs a hardware tag.

              1. 2

                The author (on hack-a-day) basically just links to a repo on github.

                That repo doesn’t contain much… Surprisingly little. I skimmed both source files (different versions of the same thing) and the readme and I still couldn’t figure out how it was actually reading the keypad matrix with GPIO…

                Oh, there it is: https://github.com/witnessmenow/arduino-switcheroonie/blob/master/switcheroonie/switcheroonie.ino#L30-L35

                #include <Keypad.h>
                // This library is for interfacing with the 4x4 Matrix
                // 
                // Can be installed from the library manager, search for "keypad"
                // and install the one by Mark Stanley and Alexander Brevig
                // https://playground.arduino.cc/Code/Keypad/
                

                …Ok, that link in the comment is pretty old, and it leads here: https://github.com/Chris--A/Keypad

                And here we go, this contains the magic I was looking for: https://github.com/Chris--A/Keypad/blob/master/src/Keypad.cpp
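
                For the record, the “magic” appears to be the usual row/column matrix scan. A rough Python sketch of the idea (set_pin/read_pin are hypothetical stand-ins for whatever GPIO layer you have; the pin numbers and layout are illustrative only):

                ROW_PINS = [2, 3, 4, 5]          # illustrative pin numbers
                COL_PINS = [6, 7, 8, 9]
                KEYS = "123A456B789C*0#D"        # 4x4 layout, row-major

                def scan(set_pin, read_pin):
                    """Drive one row low at a time; a low column read means that key is closed."""
                    pressed = []
                    for r, row_pin in enumerate(ROW_PINS):
                        for other in ROW_PINS:             # active row low, the rest high
                            set_pin(other, other != row_pin)
                        for c, col_pin in enumerate(COL_PINS):
                            if not read_pin(col_pin):      # columns are pulled up
                                pressed.append(KEYS[r * 4 + c])
                    return pressed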

                That said… The witnessmenow repo DOES contain very useful information. I am glad it is here.

                Back to hack-a-day, well, they did provide that cool video, by Brian Lough. …Who turns out to be witnessmenow on GitHub. Cool.

                Summary: this content is really cool, but I’m not sure the hack-a-day post was the right introduction to it. Still, better that than just a link to the youtube video. I always skip SLYT posts.

                  1. 4

                    How about oss or foss as a shorter alternative?

                    1. 2

                      No thank you. See milesrout’s comment. Anyway, the last s is ‘software’. But there’s libre hardware, too.

                      1. 1

                        fossh is apparently “Free and/or Open Source Software and Hardware”.

                    1. 2

                      I agree with milesrout and favor libre, specifically.

                      1. 1

                        Agreed, libre would be better.

                      1. 3

                        Well, you can literally pipe an image to rsync.net. See https://www.rsync.net/resources/howto/remote_commands.html

                        Maybe they’re kinda expensive though? I pay them a ton of money once a year and completely underutilize my account. (This is personal admin debt.)

                        1. 6
                          1. 1

                            This did not happen. He was short on patience and curiosity and quick to give up. He was ill earlier in the week.. Also apparently the recommended minimum age for this kit is nearly twice his age. Whoops.. I mean, I’ve worked with him on other projects and I’m confident that he can carry out the physical assembly of this thing and operate it with the default firmware. I overestimated his ability to interpret the schematics, which are in the style of LEGO instructions, but perhaps of lower quality (which isn’t saying anything too bad; it’s hard to beat LEGO). But really, the problem was that he quit immediately instead of saying “WTF am I even looking at?” :( We’ll try again.

                          1. 3

                            I wasn’t able to/would not pull up Facebook at work. So, I found a pastebin of the source code (via that other news aggregator site). (I haven’t confirmed that this contains the same contents as the original post.)

                            Per somebody’s suggestion, I tried loading it up in Vintage BASIC (I chose the “Generic Linux (Intel, 64-bit)” version).

                            After altering the first few REM lines (no multi-line REMs allowed in Vintage Basic?), the interpreter made it all the way to line 4500 (of the BASIC program, not the file) before crapping out with this error:

                            !SYNTAX ERROR IN LINE 4500, COLUMN 54
                             UNEXPECTED '#'
                             EXPECTING LEGAL BASIC CHARACTER
                            

                            Ok… The issue seems to be PRINT#-1, which must be some idiomatic TRS-80 basic. Sure enough, in this old TRS-80 manual (wrong model?) we see the PRINT# command, where # is followed by the number of a file that we previously OPEN‘d. It doesn’t say anything about negative numbers, though. And this program does not CONTAIN any OPEN statement.

                            My path ends here. Note that there’s probably some TRS-80 BASIC interpreter out there and I coulda started with that instead..

                            1. 3

                              PRINT#-1 may be to a printer. On the TRS-80 Color Computer, PRINT#-2 would output to the printer and PRINT#-1 would output to the cassette.

                            1. 5

                              Two 320 byte files each with hash f92d74e3874587aaf443d1db961d4e26dde13e9c

                              I was intrigued by the statement:

                              Processing power used: none. No brute force processing was necessary to generate these two files based on the other files above.

                              Perhaps someone experienced in hashing algorithms can elaborate. Thanks!

                              1. 10

                                This is a very clickbait title, so flagged as spam. It’s a chosen prefix collision - it’s no more a “new” SHA1 collision than this polyglot PDF/HTML example I made a few years ago. (enable JS if you want to see the polyglot).

                                Or, if you just want to see the SHA1 collision without enabling JS (which is completely reasonable):

                                curl https://apocrypha.numin.it/sha1/a.pdf.html | sha1sum
                                curl https://apocrypha.numin.it/sha1/a.pdf.html | sha256sum
                                curl https://apocrypha.numin.it/sha1/b.pdf.html | sha1sum
                                curl https://apocrypha.numin.it/sha1/b.pdf.html | sha256sum
                                

                                @kghose - the elaboration you were looking for is below, let me know if you have any questions. This is a pretty dishonest blogpost that I still think folks can learn from.

                                1. 7

                                  To expand on this: the shattered.io team computed two different prefixes that look like a PDF header but hash to the same SHA1. So, if you append the same rich content to the prefixes (like, say, HTML with Javascript, or a PDF document), and that content can tell the prefixes apart, you can get two HTML documents or PDFs that behave differently but still have the same SHA1. This is because the two computed prefixes (which are actually different) end up putting the SHA1 algorithm in the exact same state, so the hashes will be the same as long as you append the same thing to the prefixes.
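
                                  To make that concrete, here’s a quick Python sketch using the original colliding PDFs (assuming they’re still hosted at those shattered.io paths; the suffix is arbitrary). Because the two same-length colliding files leave SHA1 in the same internal state, any common suffix still collides:

                                  import hashlib
                                  import urllib.request

                                  def sha1(data: bytes) -> str:
                                      return hashlib.sha1(data).hexdigest()

                                  # the two colliding PDFs published by the shattered.io team
                                  a = urllib.request.urlopen("https://shattered.io/static/shattered-1.pdf").read()
                                  b = urllib.request.urlopen("https://shattered.io/static/shattered-2.pdf").read()
                                  suffix = b"any common suffix you like"

                                  print(a == b)                                # False: the files differ
                                  print(sha1(a) == sha1(b))                    # True: the known collision
                                  print(sha1(a + suffix) == sha1(b + suffix))  # True: still a collision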

                                2. 2

                                  I think that “based on the other files above” means that these were constructed based on a previous PDF collision. Here’s a hex dump diff of the two files: https://i.imgur.com/6N3PnvA.png (and I’ve verified that the SHA-1 hashes are the same.)

                                  They start out with PDF syntax, but don’t actually render as PDFs; a PDF viewer on my computer declares that they’re broken. You can see the text “SHA-1 is dead !!!!!” in the dump, though. The end of the file is then striped with a pattern of 2 bytes matching, 2 bytes mismatching.

                                  It’s very curious to me that they would have started from a base of a PDF file, given that the base is identical between the files. The Shattered attack uses different bases.

                                  1. 3

                                    Ah. It’s a bit of a deceit.

                                    It looks like they maybe just lifted this straight out of pages 3 and 4 of https://shattered.io/static/shattered.pdf without explaining that? Or more simply, shat-a.bin and shat-b.bin are just the first 320 bytes of shattered-1.pdf and shattered-2.pdf from that website, so they probably just truncated those.
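
                                    That theory is easy to check; a quick sketch, again assuming the files are still at those shattered.io paths:

                                    import hashlib, urllib.request

                                    # first 320 bytes of each shattered PDF
                                    a = urllib.request.urlopen("https://shattered.io/static/shattered-1.pdf").read()[:320]
                                    b = urllib.request.urlopen("https://shattered.io/static/shattered-2.pdf").read()[:320]

                                    # if the truncation theory holds, these two digests are equal, match the
                                    # f92d74e3... value quoted above, and the bytes match shat-a.bin / shat-b.bin
                                    print(hashlib.sha1(a).hexdigest())
                                    print(hashlib.sha1(b).hexdigest())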

                                    I’m flagging this as “spam”, although maybe “off-topic” is more accurate.

                                    1. 1

                                      They can’t create some large number of such collisions now?

                                      1. 2

                                        Yes, and then what? They then have a large number of pairs of files that all start with a PDF header and the text “SHA-1 is dead” and have the same rest of the file as each other and also the same hash. But that doesn’t really buy you anything.

                                        There’s nothing here that wasn’t described in shattered.io.

                                        1. 2

                                          You can use this tool https://github.com/nneonneo/sha1collider/blob/master/README.md to generate similar specific-shared-prefix collisions.

                                          This post is spam due to being misleading/incorrect/not meaningful.

                                          1. 1

                                            Ah, yes. I think you’re correct.

                                  1. 3

                                    Jeff, I like where you’re going with this. But, I don’t think you’re done yet. If you’d like, I’ll tell you everything I know, freely. I don’t have the same talent for speaking to an unknown audience that you do. But, I can speak to you, if you’d like.

                                    May I ask you a personal question? The oldest machine I ever spent a year with was a Commodore 64. It was a hand-me-down by then. My next machine was a Pentium (classic). When my family got dial-up at home, it was a connection to AOL at 28,800 bps. By this time, most modems would NOT disconnect when the user viewed a web page that described certain AT commands… That personal question.. How OLD are you? And I don’t mean in terms of revolutions around the sun. I mean, I can see that you are like me–you’ve used computers non-stop since you first started. And, in what generation of hardware and networks was that, when you started?

                                    My next question is, what does line 6 of the html source of your post do?

                                    <link rel="preload" href="fonts/open-sans-v17-latin-regular.woff2" as="font">
                                    

                                    What is a woff2 file? What is inside it? Can it be extracted? What is the license? How many implementations of software exist to interpret that file?

                                    May I proceed in this manner down the file, line by line? (Including the lines of prose.)

                                    Thank you, cheers, –David

                                    1. 2

                                      WOFF2 is a compressed font format. It’s supported by all modern browsers and its reference implementation is available under the MIT license.

                                      The Open Sans font is available under the Apache license.
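
                                      As for extracting it: WOFF2 is essentially a Brotli-compressed wrapper around an ordinary font, and (if I remember the fontTools API right) it can be unwrapped back to a plain TTF. A sketch, assuming the woff2 file has been downloaded locally:

                                      from fontTools.ttLib import TTFont  # pip install fonttools brotli

                                      # load the woff2 and strip the web-font wrapper
                                      font = TTFont("open-sans-v17-latin-regular.woff2")
                                      font.flavor = None                  # None means a plain sfnt (TTF/OTF)
                                      font.save("open-sans-v17-latin-regular.ttf")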

                                      Now Jeff only needs to answer the first question. ;)

                                    1. 13

                                      In programming we don’t have [real numbers]. We have integers and floating point.

                                      The CPU has floating point and limited precision integers. There is no reason a programming language can’t abstract that away (and IMHO, any high level language really ought to).

                                      For instance, almost all Lisp implementations have exact rational numbers and arbitrary precision integers built in. This frees up the programmer to not have to worry about overflow, floating point precision and other weird behaviour.

                                      Python and Ruby also have arbitrary precision integers built in. There’s a JavaScript proposal to add arbitrary precision integers, but it’s a separate type from regular numbers, so you need to declare in advance that you want to be working with large numbers, which is painful and error-prone.
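
                                      For example, in Python this “just works” with no declarations, and the fractions module gives exact rationals much like the Lisps do:

                                      from fractions import Fraction

                                      # integers silently grow past machine word size
                                      print(2 ** 100)                                 # 1267650600228229401496703205376

                                      # exact rational arithmetic, no floating point surprises
                                      print(Fraction(1, 3) + Fraction(1, 6))          # 1/2
                                      print(Fraction(1, 10) * 3 == Fraction(3, 10))   # True, whereas 0.1 * 3 == 0.3 is False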

                                      This industry really has a hard time learning from the past. There’s a lot of research and effort that’s gone into fast calculations which overflow into arbitrary precision integers, for example in the Lisp community (in the early eighties!), but somehow this never really got into the mainstream.

                                      1. 3

                                        There is no reason

                                        The article gives the reason: performance. Working with the CPU representation makes it fast by default.

                                        They can do what you suggested when performance isn’t top priority. That’s most apps.

                                        1. 3

                                          I agree, and rational numbers get really undersold for some reason, but I don’t know of any programming languages that have arbitrary precision reals by default. (Probably because most of them are infinitely long, upon consideration.) So if you’re dealing with any of the irrational numbers, to multiply by pi or divide by sqrt(2), you still have to care about floating point or some other limited-precision format.
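
                                          A quick Python illustration of that: you can crank the precision up as far as you like, but you still have to pick one (decimal here is just a stand-in for “some other limited-precision format”):

                                          from decimal import Decimal, getcontext

                                          getcontext().prec = 50        # pick a precision; there is always a cutoff
                                          print(Decimal(2).sqrt())      # sqrt(2) to 50 significant digits, then it stops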

                                          1. 2

                                            Now that’s a more fundamental limitation which would be a lot more interesting to discuss :)

                                            1. 1

                                              Probably because most of them are infinitely long, upon consideration.

                                              So, that’s not a problem. There is a problem, but it isn’t that. For numbers such as pi, which are computable, you can make some representation of the number in memory which contains the “true” number. Perhaps that representation is simply a bit of executable code that calculates pi! If you ask the computer to give you all of it, it will do so, though it will block forever in the effort.
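
                                              A sketch of what such a representation could look like in Python: a generator that keeps emitting digits of pi for as long as you keep asking. (This uses Gibbons’ unbounded spigot, which is my choice of algorithm, not anything from the article.)

                                              def pi_digits():
                                                  """Yield decimal digits of pi forever (Gibbons' unbounded spigot)."""
                                                  q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
                                                  while True:
                                                      if 4 * q + r - t < n * t:
                                                          yield n
                                                          q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
                                                      else:
                                                          q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                                                              (q * (7 * k + 2) + r * l) // (t * l), l + 2)

                                              # the "true" number lives in this object; asking for all of it blocks forever
                                              digits = pi_digits()
                                              print([next(digits) for _ in range(10)])   # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]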

                                              But, the main thrust of this article is that there are numbers (most of the Reals) for which there is no such representation available and indeed it has been proven that there cannot be.

                                              So, let yesterday be remembered as the day that a Wikipedia rabbit hole started by this article led me to discover Constructivism and decide that I’m a supporter. Tentatively. (Don’t actually know math over here..)

                                              1. 1

                                                It should be possible, by representing such numbers as a sort of stream and making them “primitive” units, basically. The more challenging part would be displaying these numbers: 2*pi*sqrt(2) or something should be represented as such when printed, if you don’t want to lose precision.

                                                I know Kawa Scheme has a way of representing numbers with units (like mm and cm) and has some sort of way of reading and writing those. Possibly you could define pi as a unit. You’d have to teach various primitives like the trigonometric functions about this so they can return the exact value.

                                                Could be an interesting experiment. I’m not a mathematician either, so I’m probably oversimplifying things (as usual).

                                                1. 1

                                                  A couple of things. I linked to the wrong wikipedia page. Sorry, sorry.. Here is the one I found yesterday: https://en.wikipedia.org/wiki/Definable_real_number#Computable_real_numbers

                                                  And second, I guess I was wrong when I described “the main thrust of [qznc’s] article”.. Actually, this “computable real numbers” thing came from wikipedia. I just associated the idea with the first car in my train of links??

                                                  1. 1

                                                    FYI, after some more research I have determined that the “non-computable real numbers” aren’t what I thought they were. They are less “important” or “mind blowing” than I thought. I think? I don’t know the math, I think I went down an invalid path.

                                                  2. 1

                                                    I like the idea of an “algebraic” numeric type, where values like (* pi (sqrt 2)) are represented basically verbatim and can interact with other numeric types as you’d expect (to form bigger algebraic expressions). Then you could have algebraic->inexact and algebraic->exact (which would fail if the value was irrational).

                                                    You’d probably have to teach each of the trigonometric functions about each other so they can simplify efficiently.

                                                    1. 1

                                                      Sympy can sort of do this.
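
                                                      Roughly along these lines, for the curious (exact printed forms may vary between sympy versions):

                                                      import sympy

                                                      e = sympy.pi * sympy.sqrt(2)    # kept symbolic, nothing is rounded
                                                      print(e)                        # sqrt(2)*pi
                                                      print(sympy.sin(sympy.pi / 4))  # sqrt(2)/2 -- trig simplifies exactly
                                                      print(e.evalf(30))              # the "algebraic->inexact" step, 30 digits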

                                            2. 2

                                              A while back I discovered that the Clisp Common Lisp compiler supports arbitrary precision floating point with the long-float type. Most implementations make double-float and long-float equivalent, and that’s the default on Clisp, but it allows the user to setf (long-float-digits):

                                              CL-USER> (long-float-digits)
                                              64
                                              CL-USER> pi
                                              3.1415926535897932385L0
                                              CL-USER> (setf (long-float-digits) 512)
                                              512
                                              CL-USER> pi
                                              3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811L0
                                              CL-USER> (sin (* pi 1.25L0))
                                              -0.7071067811865475244008443621048490392848359376884740365883398689953662392310535194251937671638207863675069231154561485124624180279253686063220607485499681L0
                                              
                                              1. 2

                                                Cool!

                                            1. 2

                                              Sometimes components are of a lower spec, but labeled as higher. This is particularly common in FLASH memory, batteries, or any product that features multiple grades in identical casing (e.g. phones with different internal storage capacities). I also suspect this happens in capacitor and resistor tolerances and tempcos, but I haven’t done a rigorous study to confirm the suspicion.

                                              What is “tempco”?

                                              I found this jargon-infused article, but I still don’t know what tempco is. Clearly it’s a numeric range..

                                              1. 2

                                                According to Google, it’s “temperature coefficient”, which is a measure of how a voltage measurement circuit’s output varies with temperature. For a digital system like flash memory, I suspect this maps to how sensitive its error rate is to different temperatures.

                                                1. 3

                                                  That’s pretty much it.

                                                  Digital-only parts usually only have a temperature rating, because a “coefficient” by which a digital part gradually goes out of spec is not usually realistic or desirable. For example, a part is specified to operate in the industrial range of -20 to +85C - probably it fails “more” at 200C than 150C, but that’s not a distinction most flash memory customers or users care about.

                                                  Capacitor & resistor tolerances and ratings are subtly different because these things are more “analog” and do incrementally go out of spec. For example ceramic capacitors have a 3 letter code which indicates their “class”, including temperature coefficient properties. This lets the designer know the temperature range across which they deviate at most x% from their nominal capacitance: https://www.raviyp.com/embedded/217-difference-between-x7r-x5r-x8r-z5u-y5v-x7s-c0g-capacitor-dielectrics

                                                  1. 1

                                                    Sounds legit! Thank you.

                                                1. 4

                                                  I see lots of comments along the lines of “those developers are bad because they trusted localStorage”. I think that people who reduce this bug to that are missing the point completely. LocalStorage is used by many apps as a temporary location to keep stuff while they are offline, for future synchronization with the server. This is a best practice: keep things in localStorage, sync when you can. That in-transit data is now lost. There are cases of veterinary hospitals losing the information on which animals have been vaccinated. They simply can’t do the vaccinations again. Field workers losing their work.

                                                  This affects hybrid apps the most. There are a ton of specialized android applications built with phonegap/cordova/ionic, in which localStorage is the primary storage. All that data is lost now. This also affects webSQL which was heavily used by those apps in the past.

                                                  In many cases, end users, feeling there was something wrong with the app, reinstalled it, which made the problem even worse.

                                                  What people should be rallying around is: “How was this bug not caught by an automated test by the Chrome team?” It is all cool that developers can pick up betas and test stuff out, but the responsibility for shipping a working product is Google’s, and they are not, in my opinion, taking the correct posture on this.

                                                  1. 1

                                                    I read approximately 1/3 of the bug thread word for word. I too thought that the application devs screwed up here. But, you’re missing something, I think… I don’t think most of these apps have any online storage at all. Let’s see, here’s one… https://play.google.com/store/apps/details?id=com.biblioeteca.apps.NoMorePass&hl=en_US It is a password manager.

                                                    So, the crime that these devs committed against their users wasn’t failure to use online storage–that was by design. The crime was failing to explain to the users that their data was being stored in what is tantamount to ephemeral storage–the kind of storage that should be BACKED UP.

                                                    1. 2

                                                      Oh, I am not missing that. In my longer commentary on that bug, I mention:

                                                      “LocalStorage, webSQL, IndexedDB have been rock solid solutions for hybrid apps for probably close to a decade. Data eviction seldom happens for hybrid apps. These are stable APIs and have been working reliably for many years.”

                                                      People have been using these storage mechanisms (localStorage, webSQL) for probably close to a decade in client-only apps. They have been very reliable. After spending many years using them and reading the specs, which don’t mention that they should be used only for caches or ephemeral data IIRC, it is understandable that those mobile devs became confident in them. I’ve built apps that used those storages as the main storage for many years, until the app started to need complex queries and I moved to SQLite. I didn’t move because localStorage was not safe at the time, I moved because I had other needs. Still, my apps always had a way to back up your data to a file, which I bet many of those apps also have, but if there is no data to back up, then it becomes quite tricky.

                                                      1. 1

                                                        soapdog, ah, I did not know that. Thanks for exposing me to this.

                                                        Do many of your users know how to reach into their browser’s profile directory?

                                                        Are these local storage mechanisms synced across devices by some organization (Google, Microsoft, Mozilla)’s sync service? Or backed up “to the cloud”?

                                                        If not.. Then what does a user do when they upgrade to a new device?

                                                  1. 1

                                                    I have thought about this problem for a few years. I considered all these ’gotcha!’s described in the article and the comments here.

                                                    I decided that I want my OS to detect when a process is waiting for input (blocking) and play a sound–like, a tone. Globally.

                                                    Will this cause my PC to emit the tone immediately and continuously as soon as I turn it on? Perhaps yes, if I’m using Ubuntu or something. Maybe not on the machines I’ve been building lately, that don’t really do anything unless I’ve just asked them to.

                                                    Anyway, I imagine I’d want a set of rules. Maybe I’d even exclude from monitoring those processes that aren’t “near” an interactive terminal in the process tree. Or leave it global, but whitelist some processes that I learn block on stdin all the time as part of their normal operation.

                                                    I think I’d learn a lot about my systems! And certainly reduce the specific “🤦” situation the article describes.

                                                    1. 2

                                                      I decided that I want my OS to detect when a process is waiting for input (blocking) and play a sound–like, a tone. Globally

                                                      Well that’s not how I’d do it but I was thinking the terminal emulator could visually indicate a waiting state fairly easily if the OS told it. Like a blinking cursor vs a steady one, or a color or whatever.

                                                      I just don’t think the OS informs it right now…

                                                      1. 1

                                                        top and htop know, right? They both get their information solely from /proc, I heard.

                                                        I think lsof would be able to tell if a process was blocking on stdin, too, right? Not sure where it gets its information.
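
                                                        On Linux you can get surprisingly far just by poking /proc yourself. A very rough heuristic sketch (this is an approximation of mine, not how top/htop or lsof actually decide anything):

                                                        import os
                                                        from pathlib import Path

                                                        def looks_blocked_on_stdin(pid: int) -> bool:
                                                            """Heuristic: interruptible sleep plus stdin connected to a tty."""
                                                            proc = Path(f"/proc/{pid}")
                                                            try:
                                                                sleeping = "State:\tS (sleeping)" in (proc / "status").read_text()
                                                                stdin_target = str((proc / "fd" / "0").resolve())
                                                                return sleeping and ("/dev/pts/" in stdin_target or "/dev/tty" in stdin_target)
                                                            except OSError:
                                                                return False

                                                        print(looks_blocked_on_stdin(os.getpid()))   # False: this process is running, not blocked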

                                                    1. 2

                                                      I wonder how much code would break if the behavior were changed so that * expanded to the same strings as ./* instead.

                                                      “Never write *, always write ./*” seems like a good lint rule for shell.

                                                      1. 2

                                                        I learned that GNU tar will behave subtly differently with ./*. The ShellCheck page mentions this. Oil has a different solution:

                                                        http://www.oilshell.org/blog/2020/02/dashglob.html

                                                        http://www.oilshell.org/blog/2020/02/dashglob.html#appendix-b

                                                        1. 1

                                                          Oh lovely, that GNU tar behaviour difference does not at all spark joy.

                                                          I’m not 100% convinced about dashglob as a solution, because you can now be in the situation where you ran “rm *”, expected it to remove all the files in the current directory, but “./-rf” has not been removed and subsequent steps which assume the directory was emptied will fail.

                                                          Shellcheck (or just avoiding shell) for the win, I guess. <3

                                                          1. 1

                                                            That’s also true for dotfiles though. If you do rm * you will be left with .foo, and you cannot rmdir after that.

                                                            dashglob was partly inspired by bash’s dotglob.

                                                            1. 1

                                                              Point

                                                        2. 1

                                                          “Never write *, always write ./*”

                                                          I prefixed each * with a \.

                                                          1. 1

                                                            Oh! Thanks. I didn’t notice the formatting was broken. I do know how to use markdown, I am just less likely to check the result when I’m on my phone and typing is already hard. :)

                                                        1. 4

                                                          I had to resort to wikipedia to get valuable information, as this website assumes that you’re already familiar with the subject. You land on a page asking for a pledge about “removing an image” that I’ve never seen in my whole life.

                                                          Perhaps there could be a proposal to replace Lena with PJW’s face?

                                                          1. 3

                                                            Thanks. I too was wondering what they meant. I don’t watch videos in my news feed so I was kind of curious to follow up and watch later.

                                                            I don’t think it matters much, but I will now never use it. I haven’t used it in 25 years, so it’s an easy change, if only a mildly positive one.

                                                            I think the claimed link that this image is somehow indicative or causative is false. I think the boys liked Playboy and used a common photo of the time. If we fix the proportion of genders, then the next selection will simply reflect whichever developer makes the arbitrary decision to include an image.

                                                            1. 2

                                                              We’d need an image of PJW that has certain color properties. Lena’s photo was not selected only because she is a model. The original image file was scanned on what was at the time a state of the art scanner.. I’m sure the wikipedia page goes over all this.

                                                              1. 1

                                                                Yeah of course, I was being ironic about the fact that PJW’s face was also used all over the place, and we never saw a “losingpjw.com” website. If you look at it this way, Lena was a model and her face was used as such, while PJW never asked for it and got the same fate. I do understand that there might be more implications, regarding gender parity in the tech industry, its imagery and such, so I hope people will get the joke.

                                                                1. 1

                                                                  ahh.. PJW was in on the joke from the first instance. Lena was not. Her likeness had been used for … about 25 years before she found out.

                                                                  Back to this:

                                                                  an image of PJW that has certain color properties

                                                                  …This 2017 paper offers some alternatives that fit the bill according to some technical analysis.