1. 3

    This is a pretty nice thing, but I can’t help but feel a little frustrated. All my monitors have orientation detection built in, and it works on Windows, but I can’t find any way for them to report it on Linux, so I have to set the orientation by hand.

    Similarly, I can’t seem to get laptop-driven brightness setting on my monitors. The local screen works fine, but DP-attached monitors don’t seem to do the right thing, and I can’t figure out why.

    1. 1

      Hmm that is really frustrating. My monitor is from the stone age, so it definitely doesn’t have a feature like that. Are the monitors connected to your machine only by HDMI? I wonder if it’s partially the kernel’s drivers, partially your windowing system.

      1. 4

        No, they’re connected by DisplayPort. It’s definitely a software issue; moving from Debian Jessie to Stretch was a pretty big step down. MST used to work and now it doesn’t, and DPMS has regressed as well.

        Remote monitor brightness control and the like has worked over I2C lines since before VGA connections went out of fashion, but I’m not sure how well that has carried over to newer connectors.

        Brightness: never seen it work on DisplayPort on Linux.
        Orientation: never seen it work on DisplayPort on Linux.
        MST: Stretch broke it here - I can no longer address chained monitors.
        DPMS: Stretch broke it here - if a screen sleeps, it can’t be woken up without undocking my laptop, turning the monitor off and on, and redocking.

        I use my dock a lot less, and for less important things, these days, so it’s not a big deal. It would just be cool if some developer time were poured into these little quality-of-life areas, is all.

        1. 4

          Brightness can be controlled over DDC/CI these days, which is still I2C…

          And for some really weird unknown reason, even Windows doesn’t do DDC/CI brightness control out of the box. I had to download ClickMonitorDDC to do it.

          Here’s something for Linux that should do it.
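
          To make the DDC/CI route concrete, here’s a rough, untested C sketch (not the tool linked above) of the “set brightness” write over Linux’s /dev/i2c interface. The bus number is an assumption and differs per output, and tools like ddcutil wrap this kind of exchange so you don’t normally have to do it by hand:

              /* Minimal DDC/CI "set VCP feature" sketch: brightness is VCP code 0x10.
                 Assumes the monitor's DDC bus shows up as /dev/i2c-4 (check i2cdetect -l)
                 and that you have permission to open it.
                 Compile: gcc -o ddcbright ddcbright.c */
              #include <fcntl.h>
              #include <stdint.h>
              #include <stdio.h>
              #include <unistd.h>
              #include <sys/ioctl.h>
              #include <linux/i2c-dev.h>

              int main(void)
              {
                  int fd = open("/dev/i2c-4", O_RDWR);            /* assumed bus number */
                  if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x37) < 0) { /* 0x37 = DDC/CI slave */
                      perror("i2c");
                      return 1;
                  }

                  uint8_t value = 70;  /* target brightness, 0..100 on most monitors */
                  uint8_t msg[] = { 0x51, 0x84, 0x03, 0x10, 0x00, value, 0x00 };
                  /* checksum: XOR of every byte including the 0x6E destination address */
                  msg[6] = 0x6E ^ 0x51 ^ 0x84 ^ 0x03 ^ 0x10 ^ 0x00 ^ value;

                  if (write(fd, msg, sizeof msg) != (ssize_t)sizeof msg) {
                      perror("write");
                      return 1;
                  }
                  close(fd);
                  return 0;
              }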

    1. 1

      Really solid article. It should be noted, though, that most C compilers will also generate SIMD instructions if the right attributes are used and the data is laid out properly in memory.
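
      For example, a plain loop over restrict-qualified, contiguous arrays (a generic saxpy kernel, not something from the article) is typically auto-vectorized by GCC and Clang at -O3 with -march=native:

          #include <stddef.h>

          /* gcc -O3 -march=native -fopt-info-vec saxpy.c reports whether this loop
             was vectorized; clang offers -Rpass=loop-vectorize for the same check. */
          void saxpy(size_t n, float a,
                     const float *restrict x,  /* restrict promises no aliasing,  */
                     float *restrict y)        /* so SIMD loads/stores are legal  */
          {
              for (size_t i = 0; i < n; i++)
                  y[i] = a * x[i] + y[i];
          }

      Alignment attributes on the arrays and a structure-of-arrays layout tend to help the compiler pick the wider vector instructions.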

      1. 2

        Our team of five neural networks, OpenAI Five, has started to defeat amateur human teams at Dota 2.

        OpenAI Five plays 180 years worth of games against itself every day, learning via self-play. It trains using a scaled-up version of Proximal Policy Optimization running on 256 GPUs and 128,000 CPU cores

        If that doesn’t ease your fears about an impending AI apocalypse, I don’t know what will.

        1. 5

          That actually makes my AI fears worse. But that’s because mine are not the stereotypical AI fears.

          What the article says is: if you can afford ten times more computing resources, you have a better chance of reaching superhuman performance than by using novel approaches. Train once, run cheaply. So capital matters, labor qualification doesn’t: economies of scale with huge up-front costs and small recurring costs. That’s how you get badly broken monopoly markets, no? And of course it then breaks, because someone spends on servers and not on the human time needed to make sure nothing stupid happens.

          Yes, OpenAI on its own will probably try to release something already trained and with predictable failure risks, and this is likely to improve the situation overall. I like what they want to do and what they do; I am just afraid of what they find (and thanks to them for disclosing it).

          1. 4

            That’s a very good, and frightening, point. My comment was mainly about the side of it that smells like mind-boggling computational waste, in addition to the staggering amount of experience such models require to achieve sub-human (less-than-expert) performance. Personally, I think ANNs are very cool, and quite elegant. But I feel like they are treated as the ultimate hammer, and every domain of learning is a nail. Examples such as this post show that they scale less than optimally. And from an algorithmic perspective there has to be a more concise, more computationally efficient way to go about learning to perform a task.

            1. 2
              1. Yes, it takes a lot to get anything out of it, but then, if you just continue doing the same thing, you get better results than you expected, and the road from median player to expert player is not always that large a percentage increase in machine-training expenses.

              2. Comparing effort-to-mastery between humans and machines is hard, because the expert play is a product of many millions of hours of human play (and discussion on forums and in chats).

              2a. Yes, it would be interesting if centaur chess or centaur Go took off… A 1 kW desktop and a human against a 10 kW server. Especially interesting given that AlphaGo lost game 4 to a human strategy that is normally too tiring to keep track of (so humans never play it against each other), but it is unclear whether a weak computer could help a human use such strategies.

              1. One of the useful properties of cryptocurrencies is to show that humans care little about computational waste per se… I mean, some people understood it after looking at smartphones and comparing them to pre-iPhone smartphones and PDAs, but some people needed a clearer explanation.

              2. Analysing the algorithmic perspective requires qualified humans, and good tools for those humans. And some of our tools seem to have degraded (as tools for information processing) over time… (I mean, it’s almost painful to read Engelbart’s book on Intelligence Augmentation and see that some modern tools are worse than the prototypes his team had).

            2. 1

              and thanks to them for disclosing it

              I’m afraid they will just disclose peanuts, to engage people and spread trust in AI.

              It’s basically marketing.

              1. 1

                Well, they already tell us what they tried and what didn’t work, which is unfortunately already above the median…

                I meant that I am already afraid of the things they find and disclose now.

            3. 2

              There’s no evidence they can learn in an environment where they start with nothing, the rules change a lot, they have to pick up new concepts quickly based on old ones, and malicious people try to destroy them, nor any evidence of them possessing common sense. What the machines are used for or doing vs. what we do on a regular basis are really, really different.

              1. 1

                It might not even matter if OpenAI can eventually make machine learning work in an adversarial environment, though (I think this post might even be weak evidence that they will be able to do it at some point, given enough computational resources), as someone will still cut corners and give a vulnerable agent direct control over something where failure has significant negative externalities.

            1. 5

              This is incredible, man… I always thought IC manufacturing was out of reach of us mere mortals. Though I guess I’m not positive you’re a mere mortal ;)

              1. 2

                Though I guess I’m not positive you’re a mere mortal ;)

                Thank you for the kind words, but this was not my project–I just thought it was very inspirational for all of us. :)

              1. 6

                Two’s complement integers in JS are a bit of a sham. All numbers in JS are actually represented as floating-point values. I remember a while ago this fact bit me in the ass while I was trying to work with large integers. Here’s more info if anyone’s interested: http://2ality.com/2012/04/number-encoding.html
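
                The underlying limit is that an IEEE 754 double (which is what a JS number is) can only represent every integer exactly up to 2^53. The same behavior shows up with a plain double in C, for instance:

                    #include <stdio.h>

                    int main(void)
                    {
                        double big = 9007199254740992.0;  /* 2^53 */
                        printf("%.0f\n", big);            /* 9007199254740992 */
                        printf("%.0f\n", big + 1.0);      /* also 9007199254740992:
                                                             2^53 + 1 is not representable
                                                             and rounds back down */
                        return 0;
                    }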

                1. 1

                  Read the intro and bookmarked for later. I am excited to see more crustacean SF! 🤖📚

                  1. 2

                    I think you’ll dig it! Just to be transparent though, this isn’t my work. Just something I came across that I thought others here would enjoy. 🤙🏼

                  1. 8

                    We call artificial neural networks a class of deterministic algorithms that can statistically approximate any function

                    they are just applied statistics, not an inscrutable computer brain

                    The counterpoint to this is that we don’t actually know yet whether our brain isn’t just a mechanism that can statistically approximate any function. The difference, of course, is that even if brains were analogous to neural nets, which we currently do not know enough to say either way, the complexity is just not there. These AIs are like a guppy or a tadpole: very, very good at some specific task like swimming, but they aren’t doing any “thinking” as we do, because they simply are nowhere near complex enough.

                    I’m not saying our brains are analogous to neural nets; I am saying we don’t actually know enough to pinpoint the importance of the structural covariance of human brains. The structure could be entirely where the intelligence comes from, or it could contribute very little. The important thing, instead of saying it’s not a computer brain, is to say that it’s more like reflexes. Completely unconscious, but potentially very skilled. This will help prevent people from doubting your “not an inscrutable computer brain” claim, because when something does a task better than them they’re going to think it’s smarter than them, when really, for that AI, it’s more of a reflex.

                    1. 2

                      it’s more like reflexes. Completely unconscious but potentially very skilled

                      You’re making a good point but I stumbled on this bit. Intelligence and consciousness are potentially two very different things, so that’s an entirely different line of enquiry. It might be worth framing this as “very skilled, but in a very small set of tasks” instead.

                      1. 1

                        It could be very skilled in a very large set of tasks and still be totally incapable of metacognition.

                        1. 2

                          Yes, we are in agreement. I was just trying to say that it’s better to talk about (current) AI purely in terms of skill and intelligence, as bringing consciousness into it complicates things and is an entirely separate discussion.

                          1. 1

                            It’s what the original author was doing, whether they were trying to or not, and what I was responding to.

                      2. 1

                        Thanks for your advice!
                        I get your point, but I do not think that AI is like a reflex, since a reflex needs way less data to train.

                        While it’s true we know next to nothing about how our brain actually works, it does not seem to be a statistical tool, given how few attempts we need to learn something.

                        However, we are making some progress in our understanding.
                        Here is an interesting article about the topic. I strongly suggest you follow the links there: the article is nice, but the linked sources are great!

                        1. 6

                          I would not be so confident in saying the brain takes little data to learn. Take human development for example. Babies take well over a year consuming a constant stream of experience (unlabeled training data) to become competent enough to even perform simple actions.

                          In my opinion, learning probably seems to occur quickly once the brain has matured a bit and has built a sufficiently large set of lower-level concepts, such that new high-level concepts can be reasonably represented by a subset of the previously understood lower-level concepts working in tandem.

                          1. 2

                            Some years of human experience is little compared to the huge amount of data necessary to build a competent AI. To make the comparison you have to decide how to measure the data of human experience, but the amount that enters your perception is much smaller than, e.g., recorded HD footage.

                            1. 5

                              That’s simply untrue. One eye alone has roughly a resolution of 576 megapixels; 4K is 8.3 megapixels. One inch of skin has on average 19,000 sensory cells. Also keep in mind that the human brain has orders and orders and orders and orders of magnitude more complexity. It can afford to relate each thing to everything else, instead of having the kind of amnesia that even our most advanced neural networks have. That allows much more complex patterns to be formed much faster.

                              1. 3

                                Yes, but the brain discards most of this information, and apparently the visual cortex works as a sort of lossy compression filter.

                                1. 2

                                  This is some significant hand-waving. Does the brain filter? No doubt; it would not be able to pay attention to specific things if it didn’t. Does it also process the entire visual field? How else could it find some specific feature? Keep in mind the brain can identify when around 9 photons hit the eye within less than 100 ms. One study recently claims to confirm with significance that humans can see a single photon, but the study was small, so maybe just those people can. Either way, I’m going to call bullshit on that; it is not smaller than recorded HD footage.

                                  1. 3

                                    Does it also process the entire visual field? How else could it find some specific feature?

                                    You’re falling for your brain’s convincing suggestion that you have full HD in your entire visual field. Really you don’t, and your brain just fills most of it in. Your visual system finds specific features by quickly shifting the eye from place to place, until it finds something worth looking at. That’s how it picks out features without processing the entire visual field.

                                    Your peripheral vision has very poor color and shape detection. Mostly it has special cells designed to detect motion, and when it detects motion you often shift your focus to it, thus picking up the color and shape.

                                    The fact that you can perceive a few photons hitting the eye within 100 ms is a matter of sensitivity and latency; it has no bearing on the data bandwidth of your visual system.

                                    This video might help:

                                    https://www.youtube.com/watch?v=fjbWr3ODbAo&t=8m38s

                                    1. 1

                                      Even 1% of my vision has more complexity than HD.

                                      1. 3

                                        Based on what? Your fovea covers only about 2 degrees of the visual field. The visual field is about 75 degrees in either direction, so the width of the foveal part is roughly 1/75 of the width of your entire visual field, far less than 1/10, and by area the foveal part is much less than 1%. If you view an HD screen from far enough away that it appears the same size as your thumbnail at arm’s length, can you distinguish every pixel?

                                        https://en.wikipedia.org/wiki/Fovea_centralis#Function https://en.wikipedia.org/wiki/Visual_field#Normal_limits https://en.wikipedia.org/wiki/Fovea_centralis#Angular_size_of_foveal_cones

                                        1. 1

                                          I’m realizing now that we’re talking about two entirely different things. I’m talking about the complexity of the input; you’re talking about the complexity of perception. The former is measurable, and the latter is very nebulous at best.

                                          1. 3

                                            What’s the difference between input and perception? If you don’t perceive something why would it be considered input?

                                    2. 2

                                      You’re entirely right; his statements are totally wrong. Here is a recent-ish article that does a decent job of describing at least part of the story: https://www.sciencedirect.com/science/article/pii/S089662731200092X

                                      1. 1

                                        Can you explain the relevance of that article? It didn’t seem to say much about the amount of sensory data that enters our perception, based on the abstract.

                                        1. 1

                                          It’s because the original poster said

                                          Yes, but the brain discards most of this information, and apparently the visual cortex works as a sort of lossy compression filter.

                                          That’s just not how human vision works.

                                          You are right, though: you only have high-acuity vision in the fovea. But notions of resolution don’t map well to human vision, and it’s also just not a thing that’s worth debating. It is much better to discuss questions like “how much information do you need to recognize X” (generally very little; human vision works well with very small images) than “how many bits per second are coming in”. The second is ill-defined in any case. If I have 20/20 vision, does it really mean it’s useful to think of that as me seeing HD video while someone else sees SD video? Not really. It just doesn’t answer any useful questions about human vision.

                                          1. 2

                                            The right questions to ask depend on what you’re interested in. Asking how many bits are coming in isn’t useful for the study of human vision, but it is useful if you’re trying to relate AI to the human mind. Humans “learn” things with much less data than machines, because they have innate capacities built in, which (so far) are too complex to build into a machine learning algorithm. This has been well established since psychology’s departure from behaviorism, but tech people tend to forget it when comparing the brain to computers. Granted there’s no way to determine how many bits are entering your mind, but with enough understanding of visual perception I think we can make the judgement call that you get less data than full-color HD video. Understanding that highlights how little data humans require to learn things about the world.

                                            I’m also not convinced that lossy compression is not a good metaphor for human vision. Clearly we’re not using mp4 or mkv, but if you take a wider view of the concept of lossy compression, it makes sense.

                                            1. 1

                                              It’s actually not a good proxy for comparing humans to machines either. Far more important than # of bits in is what kind of data you’re getting. For example, data where you have some control (like you get to manipulate relevant objects) seems to be far more important for humans. The famous sticky mittens experiments show this very nicely.

                                              In any case, HD video is mostly irrelevant for actual AI. Most vision algorithms use fairly small images because it’s better to have lots of processing over smaller images than less processing over bigger ones.

                                              I think it’s worth separating “tech people” from people that actually do AI / CV / ML. People that work on these topics aren’t being confused by this. There’s a big push in CV and NLP to try to include semantically relevant features.

                                              It’s worth reading the article I linked to. Human vision is not lossy compression and this model doesn’t fit the data that we have from either human behavior or from neuroscience. Once upon a time people thought this but those days are long long over.

                            2. 5

                              A reflex takes millions of years to train. You aren’t taking into account the entire lifespan of the human. They are using other patterns to infer the present context.

                              I’m not saying we are a statistical tool. I’m saying we can’t actually tell that we aren’t with confidence just like we can’t tell that we are with confidence. I’ll read the links.

                              1. 2

                                It seems like additional complexity implies that each neuron itself approximates a smallish neural network. This doesn’t really change much outside of the obvious complexity growth and design considerations.

                                1. 2

                                  Disclaimer: I’m a programmer, not a biologist.

                                  The fact that biological neurons exchange RNA (genetic code) looks like something that no artificial neural network can do. If I understand this correctly, it means that each neuron can slowly program the others.

                                  Still, you can see in the slides that this is not something I base my reasoning on.
                                  I just thought the article could be interesting to you, given your reasoning about reflexes. :-)

                                  1. 2

                                    I’m also a programmer and not a biologist :V. However, complexity theory hints at the possibility that simple setups can lead to emergent complexity approaching that of a system with more complex agents. Basically, the complexity of the system as it grows exceeds the additive complexity of the individual agents.

                                    I think it’s totally reasonable to say that a cluster of nodes and edges does not come anywhere near the complexity of an individual neuron. You can’t, however, use that to then say that a neural network isn’t able to achieve the same level of complexity or cognition. Programming also isn’t different from any function that takes arbitrarily many inputs, which we already know NNs can approximate.

                                    That is of course not to say that it CAN do all of the above, merely that we should be cautious of any claims that are conclusive either way about the future.

                                    1. 2

                                      Nice! You are approaching one of the core arguments in my talk! :-D

                                      Programming also isn’t different from any function that takes arbitrarily many inputs, which we already know NNs can approximate.

                                      No AI technique that I know of, neither supervised nor unsupervised nor based on reinforcement learning, can remotely approach a function that produces functions as output.

                                      For sure, no technique based on artificial neural networks: there is no isomorphism between the set of outputs of the continuous functions they can approximate (i.e., ℝ) and the set of functions. So whatever the size of your Deep Learning ANN, no current technique can produce an intelligence, simply because an ANN cannot express a function through its output.

                                      It’s quite possible that, a couple of centuries from now, we will be able to build an artificial general intelligence, but the techniques we will use will be completely different from the ones we use now.
                                      Moreover, I guess that the role played by ANNs will be peripheral, if not marginal.

                                      That’s the worst threat that the current marketing hype poses to AI.
                                      It’s the most dangerous one.

                                      Eager to attract funds, most of the research community is looking in the wrong direction.

                                      1. 1

                                        There’s a difference between “no current technique can ever produce” and “no technique has produced”. It’s not impossible that a dumb technique could have complex consequences as the complexity increases. Sure, we may not see it in our lifetime, but I think it’s very premature to call it a dead end for intelligence. We should definitely also travel down other paths, but to call it a dead end, I think, is severely jumping the gun.

                                        1. 3

                                          Just to clear something up, since this person is still spreading FUD: there’s plenty of AI that deals with program induction, i.e., ML that learns functions. There’s a lot of NN work on this now. Just search for “program induction neural networks” on Google Scholar.

                                          The world is full of these people who don’t know anything about a topic but think they’re the next messiah and the only ones who see the truth. My physicist and historian friends always complain about the crackpots they have to fight. Guess it’s the AI folks’ turn!

                                          1. 1

                                            ROTFL! :-D

                                            Thanks for the suggestion.

                                            I know nothing about program induction and will surely study the papers I find on Google Scholar.

                                            I suggest you open your mind too.

                                            I do not pretend to know something I don’t.

                                            But if someone tells me that current computers are not deterministic, I can’t help but doubt their understanding of them.

                                            I guess you have never debugged a concurrent multithreaded program.

                                            I did.
                                            And I’ve also debugged multi-processor concurrent kernel schedulers.

                                            Trust me: they are buggy but still deterministic.

                                            They just look crazy and non-deterministic if you do not understand the whole input of your program, which includes time, for example.

                                            The input of a program is everything that affects its computation.

                                            Also, assuming that I am spreading FUD makes it impossible to address the real issues in my slides.

                                            I might suspect that you are spreading hype. ;-)

                                            But I’m still eager for links and serious objections.

                                            Because I want to know. I want to learn.
                                            This requires accepting one’s own ignorance.

                                            Do you want to learn too? ;-)

                                            1. 1

                                              RE determinism: I’ve heard (from a reputable source) that you can get pretty good entropy by turning up the gain on an unplugged microphone (electrons tunnel about, causing just enough voltage fluctuation to hiss).

                                              Would love to have a reason to need it…

                                              1. 1

                                                What about GPG key generation?

                                                1. 2

                                                  I can get enough entropy for that by mashing my keyboard and waving the mouse about. I’d combine the soundcard approach with an HSM if I needed a lot of entropy on a headless box and didn’t want to trust the HSM vendor.

                                            2. 1

                                              Thanks for being a voice of reason about all this. I honestly don’t have enough domain experience to really hold my ground on “let’s not jump to conclusions”.