1. 2

    Yes. The only thing we need to unlock the lock is its BLE MAC address, the very address the lock itself broadcasts.

    Wow, that’s awful! I wonder if anyone can recommend some good locks that have passed testing with good marks?
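
    To make the parent comment’s point concrete: the MAC address arrives for free in every BLE advertisement, so any unlock key computed only from the MAC is effectively public. A minimal C sketch; derive_key is a made-up stand-in for whatever deterministic transform the lock actually applies, not the real Tapplock algorithm.

        /* Sketch: if key = f(broadcast MAC), then the key is public knowledge. */
        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical derivation: any deterministic function of the MAC alone.
         * (FNV-1a used here purely as a placeholder hash.) */
        static uint32_t derive_key(const uint8_t mac[6]) {
            uint32_t k = 2166136261u;
            for (int i = 0; i < 6; i++)
                k = (k ^ mac[i]) * 16777619u;
            return k;
        }

        int main(void) {
            /* Made-up MAC; in practice the lock broadcasts its own for free. */
            const uint8_t mac[6] = {0xAA, 0xBB, 0xCC, 0x11, 0x22, 0x33};
            printf("unlock key: %08" PRIx32 "\n", derive_key(mac));
            return 0;
        }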

    1. 4

      You should see the mechanical lock they have that flings extra keys at anyone who rings the doorbell.

      1. 1

        Well compared to this, you could always buy basically anything else, including the cheapest normal lock they have at the corner drugstore. It might not be too hard to cut, but at least it has an actual key and won’t open right up for any cellphone ever made.

        1. 2

          Also, according to the author the Tapplock was easier to cut than a normal hardware store padlock: https://twitter.com/cybergibbons/status/1007144017149063168

          1. 2

            the cheapest normal lock they have at the corner drugstore

            …can be opened with a shim fashioned from a soda can.

        1. 3

          …rather not have their digital privacy unknowingly violated

          The internet is a dangerous place to attempt that! I’m not sure self-reported tags would put a significant dent in it.

          …I’d like to know the author’s real intentions before clicking on one of these said links

          That’s unlikely to ever be fully revealed and will most certainly become something we bikeshed to death because there is so much gray area with figuring out someone else’s intent.

          1. 2

            And there’s a lot of gray area even when intentions are clear. Some of the best articles submitted (and, yes, the worst) are content marketing. Pretty much everything with a newsletter signup form or on a company domain.

            1. 1

              I guess the way I view it is that authors can self-report, but just as with any other tag or curation, the burden is on the community to best annotate these things.

              1. 1

                I would benefit from an ad tag.

            1. 4

              https://github.com/yt-project/unyt/search?q=smoot

              We couldn’t find any code matching ‘smoot’ in yt-project/unyt

              rip

              1. 3

                Open an issue? ;)

                1. 3

                  This unit was new to me so I googled it

                  https://en.m.wikipedia.org/wiki/Smoot

                  It tickles me that Smoot later worked for ANSI and ISO.

                  1. 7

                    I always laugh when people come up with convoluted defenses for C, and at the effort that goes into them (even writing papers). Their attachment to this language has caused billions if not trillions worth of damages to society.

                    All of the defenses that I’ve seen, including this one, boil down to nonsense. Like others, the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift, and, for the things C is not needed for (i.e., outside systems programming), yes, even JavaScript is better than C.

                    1. 31

                      Their attachment to this language has caused billions if not trillions worth of damages to society.

                      Their attachment to a language with known but manageable defects has created trillions if not more in value for society. Don’t be absurd.

                      1. 4

                        [citation needed] on the defects of memory unsafety being manageable. To a first approximation every large C/C++ codebase overfloweth with exploitable vulnerabilities, even after decades of attempting to resolve them (Windows, Linux, Firefox, Chrome, Edge, to take a few examples.)

                        1. 2

                          Compared to which widely used large codebase, in which language, for which application, that accepts and parses external data and yet has no exploitable vulnerabilities? BTW: http://cr.yp.to/qmail/guarantee.html

                          1. 6

                            Your counterexample is a smaller, low-featured mail server written by a math and coding genius. I could cite Dean Karnazes doing ultramarathons as evidence of how far people can run. That doesn’t change that almost all runners would drop before 50 miles, especially before 300. Likewise with C code: citing the best of the secure coders doesn’t change what most will do or have done. I took the author’s statement “to first approximation every” to mean “almost all” but not “every one.” It’s still true.

                            Whereas, Ada and Rust code have done a lot better on memory-safety even when non-experts are using them. Might be something to that.

                            1. 2

                              I’m still asking for the non-C, widely used, large-scale system with significant parsing that has no errors.

                              1. 3

                                That’s cheating, saying “non-C” and “widely used.” Most of the no-error parsing systems I’ve seen use a formal grammar with autogeneration. They usually extract to OCaml. Some also generate C just to plug into the ecosystem, since it’s a C/C++-based ecosystem; that’s incidental in those cases and could be any language, since the real programming is in the grammar and the generator. An example is the parser in the Mongrel server, which was doing a solid job when I was following it. I’m not sure if they found vulnerabilities in it later.

                            2. 5

                              At the bottom of the page you linked:

                              I’ve mostly given up on the standard C library. Many of its facilities, particularly stdio, seem designed to encourage bugs.

                              Not great support for your claim.

                              1. 2

                                There was an integer overflow reported in qmail in 2005. Bernstein does not consider this a vulnerability.

                            3. 3

                              That’s not what I meant by attachment. Their interest in C certainly created much value.

                            4. 9

                              Their attachment to this language has caused billions if not trillions worth of damages to society.

                              Inflammatory much? I’m highly skeptical that the damages have reached trillions, especially when you consider what wouldn’t have been built without C.

                              1. 12

                                Tony Hoare, null’s creator, regrets its invention and says that inserting that one idea has cost billions. He mentions it in talks. It’s interesting that language creators themselves reckon their mistakes have caused billions in damages.

                                “I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.”

                                If the billion dollar mistake was the null pointer, the C gets function is a multi-billion dollar mistake that created the opportunity for malware and viruses to thrive.
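
                                For anyone who hasn’t met gets: it cannot be told the size of its destination buffer, which is why it powered classics like the Morris worm’s fingerd exploit and was finally removed in C11. A minimal sketch of the problem and the standard fgets fix:

                                    #include <stdio.h>

                                    int main(void) {
                                        char buf[16];

                                        /* gets(buf);  <- no length argument: any input longer than 15
                                         * characters writes past buf and smashes the stack.
                                         * Removed from the language in C11. */

                                        /* fgets takes the buffer size and truncates instead of overflowing. */
                                        if (fgets(buf, sizeof buf, stdin) != NULL)
                                            printf("read: %s", buf);
                                        return 0;
                                    }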

                                1. 2

                                  He’s deluded. You want a billion dollar mistake: try CSP/Occam plus Hoare Logic. Null is a necessary byproduct of implementing total functions that approximate partial ones. See, for example, McCarthy in 1958 defining a LISP search function with a null return on failure. http://www.softwarepreservation.org/projects/LISP/MIT/AIM-001.pdf

                                  1. 3

                                    “try CSP/Occam plus Hoare Logic”

                                    I think you meant formal verification, which is arguable. They could’ve wasted a hundred million easily on the useless stuff. Two out of three are bad examples, though.

                                    Spin has had a ton of industrial success, easily knocking out problems in protocols and hardware that are hard to find via other methods. With hardware, the defects could’ve caused recalls like the Pentium bug. Likewise, Hoare-style logic has been doing its job in Design-by-Contract, which knocks time off the debugging and maintenance phases, the most expensive ones. If anything, not using tech like this can add up to a billion-dollar mistake over time.

                                    Occam looks like it was a large waste of money, esp in the Transputer.
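
                                    For readers who haven’t used Design-by-Contract: even plain C asserts give a crude approximation of Hoare-style pre/postconditions. A minimal sketch, not any particular DbC framework (Eiffel and Ada 2012 support these as language features):

                                        #include <assert.h>
                                        #include <stdio.h>

                                        /* Hoare triple, informally:
                                         * {len > 0}  m = max_of(a, len)  {m >= a[i] for all i} */
                                        static int max_of(const int *a, int len) {
                                            assert(a != NULL && len > 0);      /* precondition */
                                            int m = a[0];
                                            for (int i = 1; i < len; i++)
                                                if (a[i] > m)
                                                    m = a[i];
                                            for (int i = 0; i < len; i++)      /* postcondition, checked at runtime */
                                                assert(m >= a[i]);
                                            return m;
                                        }

                                        int main(void) {
                                            int xs[] = {3, 9, 2};
                                            printf("%d\n", max_of(xs, 3));     /* prints 9 */
                                            return 0;
                                        }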

                                    1. 1

                                      No. I meant what I wrote. I like Spin.

                                  2. 1

                                    Note what he does not claim is that the net result of C’s continued existence is negative. Something can have massive defects and still be an improvement over the alternatives.

                                  3. 7

                                    “especially when you consider what wouldn’t have been built without C.”

                                    I just countered that. The language didn’t have to be built the way it was, or persist that way. We could be building new stuff in a C-compatible language with many benefits of HLLs like Smalltalk, LISP, Ada, or Rust, with the legacy C getting gradually rewritten over time. If that had started in the 90s, we could have the equivalent of a LISP machine for C code, OS, and browser by now.

                                    1. 1

                                      It didn’t have to, but it was, and it was then used to create tremendous value. Although I concur about C’s numerous shortcomings, and it’s past time to move on, I also prefer the concrete over the hypothetical.

                                      The world is a messy place, and what actually happens is more interesting (and more realistic, obviously) than what people think could have happened. There are plenty of examples of this inside and outside of engineering.

                                      1. 3

                                        The major problem I see with this “concrete” winners-take-all mindset is that it encourages whig history which can’t distinguish the merely victorious from the inevitable. In order to learn from the past, we need to understand what alternatives were present before we can hope to discern what may have caused some to succeed and others to fail.

                                        1. 2

                                          Imagine if someone created Car2 which crashed 10% of the time that Car did, but Car just happened to win. Sure, Car created tremendous value. Do you really think people you’re arguing with think that most systems software, which is written in C, is not extremely valuable?

                                          It would be valuable even if C was twice as bad. Because no one is arguing about absolute value, that’s a silly thing to impute. This is about opportunity cost.

                                          Now we can debate whether this opportunity cost is an issue. Whether C is really comparatively bad. But that’s a different discussion, one where it doesn’t matter that C created value absolutely.

                                    2. 8

                                      C is still much more widely used than those safer alternatives; I don’t see how laughing off a fact is better than researching its causes.

                                      1. 10

                                        Billions of lines of COBOL run mission-critical services at the top 500 companies in America. Better to research the causes of this than to laugh it off. Are you ready to give up C for COBOL on mainframes, or do you think the popularity of both was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.

                                        1. 7

                                          Are you ready to give up C for COBOL on mainframes, or do you think the popularity of both was caused by historical events/contexts, with inertia taking over? I’m in the latter camp.

                                          Researching the causes of something doesn’t imply taking a stance on it; if anything, taking a stance on something should imply you’ve researched it. Even with your comment, I still don’t see how laughing off a fact is better than researching its causes.

                                          You might be interested in laughing about all the cobol still in use, or in research that looks into the causes of that. I’m in the latter camp.

                                          1. 5

                                            I think you might be confused at what I’m laughing at. If someone wrote up a paper about how we should continue to use COBOL for reasons X, Y, Z, I would laugh at that too.

                                            1. 3

                                              COBOL has some interesting features(!) that make it very “safe”. Referring to the 85 standard:

                                                X. No runtime stack, so no stack-overflow vulnerabilities
                                                Y. No dynamic memory allocation, so the heap can’t be exhausted
                                                Z. All memory statically allocated (see Y); no buffer overflows
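
                                              For contrast, a rough C imitation of that fully static discipline (illustrative only): everything sized at compile time, no malloc, no recursion. Unlike COBOL’s fixed-width fields, though, C makes you write the bounds checks by hand.

                                                  #include <stdio.h>

                                                  #define MAX_RECORDS 100
                                                  #define NAME_LEN    30           /* like PIC X(30) */

                                                  /* all storage statically allocated, as in WORKING-STORAGE */
                                                  static char names[MAX_RECORDS][NAME_LEN + 1];
                                                  static int used = 0;

                                                  static int add_name(const char *s) {
                                                      if (used >= MAX_RECORDS)     /* fixed capacity replaces heap growth */
                                                          return -1;
                                                      /* truncate rather than overflow, as a fixed-width field would */
                                                      snprintf(names[used], sizeof names[used], "%s", s);
                                                      return used++;
                                                  }

                                                  int main(void) {
                                                      add_name("ALICE");
                                                      add_name("BOB");
                                                      printf("%d records, first: %s\n", used, names[0]);
                                                      return 0;
                                                  }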
                                              
                                              1. 3

                                                We should use COBOL with contracts for transactions on the blockchains. The reasons are:

                                                X. It’s already got compilers big businesses are willing to bet their future on.

                                                Y. It supports decimal math instead of floating point. No real-world to fake, computer-math conversions needed.

                                                Z. It’s been used in transaction-processing systems that have run for decades with no major downtime or financial losses disclosed to investors.

                                                λ. It can be mathematically verified by some people who understand the letter on the left.

                                                You can laugh. You’d still be missing out on a potentially $25+ million opportunity for IBM. Your call.

                                                1. 1

                                                  Your call.

                                                  I believe you just made it your call, Nick. $25+ million opportunity, according to you. What are you waiting for?

                                                  1. 4

                                                    You’re right! I’ll pitch IBM’s senior executives on it the first chance I get. I’ll even put on a $600 suit so they know I have more business acumen than most coin pitchers. I’ll use phrases like vertical integration of the coin stack. Haha.

                                              2. 4

                                                That makes sense. I did do the C research. I’ll be posting about that in a reply later tonight.

                                                1. 10

                                                  I’ll be posting about that in a reply later tonight.

                                                  Good god man, get a blog already.

                                                  Like, seriously, do we need to pass a hat around or something? :P

                                                  1. 5

                                                    Haha. Someone actually built me a prototype a while back. Makes me feel guilty that I don’t have one, instead of the usual lazy or overloaded.

                                                      1. 2

                                                        That’s cool. Setting one up isn’t the hard part. The hard part is doing a presentable design, organizing the complex activities I do, moving my write-ups into it, adding metadata, and so on. I’m still not sure how much I should worry about the design. One’s site can be considered a marketing tool for people who might offer jobs and such. I’d go into more detail but you’d tell me “that might be a better fit for Barnacles.” :P

                                                        1. 3

                                                          Skip the presentable design. Dan Luu’s blog does pretty well and it’s not working hard to be easy on the eyes. The rest of that stuff you can add as you go - remember, perfect is the enemy of good.

                                                          1. 0

                                                            This.

                                                            Hell, Charles Bloom’s blog is basically an append-only textfile.

                                                          2. 1

                                                            ugh okay next Christmas I’ll add all the metadata, how does that sound

                                                            1. 1

                                                              Making me feel guilty again. Nah, I’ll build it myself, likely on a VPS.

                                                              And damn, time has been flying. Doesn’t feel like several months have passed on my end.

                                                    1. 1

                                                      Looking forward to reading it :)

                                              3. 4

                                                Well, we have those already, and they’re called Rust, Swift, ….

                                                And D maybe too. D’s “better-c” is pretty interesting, in my mind.

                                                1. 3

                                                  Last I checked, D’s “better-c” was a prototype.

                                                2. 5

                                                  If you had actually made a serious effort at understanding the article, you might have come away with an understanding of what Rust, Swift, etc. are lacking to be a better C. By laughing at it, you learned nothing.

                                                  1. 2

                                                    the author calls for “improved C implementations”. Well, we have those already, and they’re called Rust, Swift

                                                    Those (and Ada, and others) don’t translate to assembly well. And they’re harder to implement than, say, C90.

                                                    1. 3

                                                      Is there a reason why you believe that other languages don’t translate to assembly well?

                                                      It’s true those other languages are harder to implement, but it seems to be a moot point to me when compilers for them already exist.

                                                      1. 1

                                                        Some users of C need an assembly-level understanding of what their code does. With most other languages that isn’t really achievable. It is also increasingly less possible with modern C compilers, and said users aren’t very happy about it (see various rants by Torvalds about braindamaged compilers etc.)

                                                        1. 4

                                                          “Some users of C need an assembly-level understanding of what their code does.”

                                                          Which C doesn’t give them, due to compiler differences and the effects of optimization. Aside from spotting errors, that’s why folks in safety-critical are required to check the assembly against the code. The C language is certainly closer to assembly behavior, but it doesn’t by itself give assembly-level understanding.
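
                                                          A concrete instance of that gap, as a hedged sketch: signed overflow is undefined behavior, so an optimizer may delete a check that looks perfectly meaningful at the source level. Only the emitted assembly tells you what actually runs.

                                                              #include <limits.h>
                                                              #include <stdio.h>

                                                              /* Intended as an overflow check, but x + 1 is undefined
                                                               * behavior when x == INT_MAX, so the compiler may assume it
                                                               * never wraps and fold this function to "return 0". */
                                                              static int wraps_after(int x) {
                                                                  return x + 1 < x;
                                                              }

                                                              int main(void) {
                                                                  /* Typically prints 1 at -O0 and 0 at -O2 with gcc or
                                                                   * clang; the C source alone can't tell you which. */
                                                                  printf("%d\n", wraps_after(INT_MAX));
                                                                  return 0;
                                                              }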

                                                    2. 2

                                                      So true. Every time I use the internet, the solid engineering of the Java/Jscript components just blows me away.

                                                      1. 1

                                                        Everyone prefers the smell of their own … software stack. I can only judge by what I can use now based on the merits I can measure. I don’t write new services in C, but the best operating systems are still written in it.

                                                        1. 5

                                                          “but the best operating systems are still written in it.”

                                                          That’s an incidental part of history, though. People who are writing, say, a new x86 OS with a language balancing safety, maintenance, performance, and so on might not choose C. At least three chose Rust, one Ada, one SPARK, several Java, several C#, one LISP, one Haskell, one Go, and many C++. Plenty of choices are being explored, including languages C coders might say aren’t good for OSes.

                                                          Additionally, many choosing C or C++ say it’s for existing tooling, tutorials, talent, or libraries. Those are also incidental to its history rather than advantages of its language design. They’re definitely worthwhile reasons to choose a language for a project, but they shift the argument away from the language itself, implying those people had better things in mind that weren’t usable yet for that project.

                                                          1. 4

                                                            I think you misinterpreted what I meant. I don’t think the best operating systems are written in C because of C. I am just stating that the best current operating system I can run a website from is written in C; I’ll switch as soon as it is practical and beneficial to switch.

                                                            1. 2

                                                              Oh OK. My bad. That’s a reasonable position.

                                                              1. 3

                                                                I worded it poorly; I won’t edit it, though, for context.

                                                      1. 3

                                                        For a good laugh, look here at this PR.

                                                        1. 17

                                                          It’s both easier and more polite to ignore someone you think is being weird in a harmless way. Pointing and laughing at a person/community is the start of brigading. Lobsters isn’t big enough to be competent at this kind of evil, but it’s still a bad thing to try.

                                                          1. 6

                                                            https://github.com/tootsuite/mastodon/pull/7391#issuecomment-389261480

                                                            What other project has its lead calmly explaining the difference between horse_ebooks and actual horses to clarify a pull request?

                                                            1. 3

                                                              And yet, he manages to offend someone.

                                                              1. 4

                                                                Can someone explain the controversy here? I legitimately do not understand. Is the individual claiming to be a computer and a person? Or do they just believe that some people will someday be computers, and desire to future-proof the messages (as alluded to in another comment)?

                                                                1. 7

                                                                  This person is claiming they think of themselves as a robot, and is insulted at the insinuation that robots are not people.

                                                                  Posts like this remind me of just how strange things can get when you connect most of the people on the planet.

                                                                  1. 6

                                                                    So, I tried contacting the author:

                                                                    http://mynameiser.in/post/174391127526/hi-my-name-is-jordi-im-also

                                                                    Looks like she believes she’s a robot in the transhumanist sense. I thought transhumanists thought they would be robots some day, not that they already are robots now.

                                                                    I tried reading through her toots as she suggested, but it was making me feel unhappy, because she herself seems very unhappy. She seems to be going through personal stuff, like getting out of a bad relationship or something.

                                                                    I still don’t understand what is going on and what exactly does she mean by saying she’s a robot. Whatever the reason, though, mocking her is counterproductive and all around a dick thing to do. Her request in the PR was denied, which I think is reasonable. So “no” was said to something, contrary to what zpojqwfejwfhiunz said elsewhere.

                                                                    1. 6

                                                                      As someone who’s loosely in touch with some of the transhumanist scene, her answer makes no sense and was honestly kind of flippant and rude to you.

                                                                      That said, it sounds like she’s been dealing with a lot of abuse lately from the fact that this Github thread went viral. I’m not surprised, because there are certain people who will jump on any opportunity to mock someone like her in an attempt to score points with people who share their politics. In this case she’s being used as a proxy to discredit the social justice movement, because that’s what she uses to justify her identity.

                                                                      Abuse is never okay and cases like this require some pretty heavy moderation so that they don’t spiral out of control. But they also require a pretty firm hand so that you don’t end up getting pulled into every crazy ideascape that the internet comes up with. If I was the moderator of this GitHub thread, I would have told her, “Whatever it is you’re trying to express when you say ‘I am a robot,’ the Mastodon [BOT] flag is not the right way to do it.” End of discussion, and if anyone comes around to try to harass her, use the moderator powers liberally so as not to veer off-topic.

                                                                      Then you could get into the actual meat of the discussion at hand, which was things like “If I have a bot that reposts my Twitter onto Mastodon, could that really be said to ‘not represent a person’? Maybe another wording would be better.”

                                                                      In the end she’s just a girl who likes to say she’s a robot on the internet. If that bugs you or confuses you, the nicest thing you can do is take it at face value and just ignore her.

                                                                      1. 8

                                                                        I don’t think she was rude to me. She’s just busy with other things and has no obligation to respond to every rando who asks her stuff. I’m thankful she answered me at all. It’s a bit of effort, however slight, to formulate a response for anyone.

                                                                        1. 3

                                                                          I mean, I can kind of see where you’re coming from, but I’d still argue that starting with “You should develop your software in accordance to my unusual worldview”, followed by flippantly refusing to actually explain that worldview when politely asked, is at least not nice.

                                                                          Regardless, that might justify a firm hand, but not harassment, because nothing justifies harassment.

                                                                          1. 2

                                                                            I see this point of view too. But I’m also just some rando on the internet. She doesn’t owe me anything. If someone needed to hear her reasons, that would have been the Mastodon devs. They handled it in a different way, and I think they handled it well, overall.

                                                                            1. 1

                                                                              I’m inclined to agree on that last point, though it’s hard to say for sure given all the deleted comments.

                                                                              And I do hope she can work through whatever she’s going through.

                                                                      2. 4

                                                                        I don’t know, personally, anyone who identifies as a robot, but I do know a bunch of people who identify as cyborgs. Some of it’s transhumanist stuff – embedding sensors under the skin, that sort of thing. But much of it is reframing of stuff we don’t think of that way: artificial limbs, pacemakers, etc, but also reliance on smartphones, google glass or similar, and other devices.

                                                                        From that standpoint, robot doesn’t seem a stretch at all.

                                                                        That said, I agree that the feature wasn’t intended to be (and shouldn’t be) a badge. But someone did submit a PR to make the wording more neutral and inclusive, and that was accepted (#7507), and I think that’s a positive thing.

                                                                        1. 2

                                                                          Actually, that rewording even seems clearer to me regardless of whether someone calls themself a robot or not. “Not a person” sounds a bit ambiguous, because you can totally mechanically turk any bot account at any time, or the account could be a mirror of a real person’s tweets or something.

                                                                        2. 1

                                                                          That’s unfortunate. It’s always difficult to deal with these things. I, too, understood transhumanism to be more of a future thing, but apparently at least some people interpret it differently. Thanks for following up where I was too lazy!

                                                                        3. -6

                                                                          American ‘snowflake’ phenomenon. The offendee believes that the rest of the world must fully and immediately capitulate to whatever pronoun they decided to apply to themselves that week, and anything other than complete and unquestioning deference is blatant whatever-ism.

                                                                          1. 16

                                                                            Person in question is Brazilian, but don’t let easily checked facts get in the way of your narrative.

                                                                            1. -5

                                                                              Thanks for the clarification. Ugh, the phenomenon is spreading. I hope it’s not contagious. Should we shut down Madagascar? :-D

                                                                              1. 3

                                                                                TBH I think it’s just what happens when you connect a lot of people who speak your language to the internet, and the USA had more people connected than elsewhere.

                                                                                1. 0

                                                                                  It definitely takes a lot of people to make a world. To paraphrase Garcia, “what a long strange trip it will be”.

                                                                            2. 3

                                                                              She says “she” is a fine pronoun for her.

                                                                        4. 1

                                                                          It’s wonderful. :)

                                                                        5. 3

                                                                          What is happening there? I can’t tell if this is satire or reality

                                                                          1. 2

                                                                            That’s pretty common with Mastodon; there’s an acrid effluence that tinges the air for hours after it leaves the room. That smell’s name? Never saying no to anyone.

                                                                            1. 12

                                                                              Seems “never saying no to anyone” has also been happening to lobsters’ invite system :(

                                                                              People here on lobsters used to post links to content they endorse and learn something from and want to share in a positive way. Whatever your motivation was to submit this story, it apparently wasn’t that…

                                                                              1. 4

                                                                                The person who shared the “good laugh” has been here twice as long as you have.

                                                                                1. 1

                                                                                  I’m absolutely not saying you’re wrong, but I’m pretty confident there’s something to be learned here. I may not necessarily know what the lesson is yet, but this is not the first or the last situation of this kind to present itself in software development writ large.

                                                                          1. 7

                                                                            Mostly I just jot down ideas in my current notebook (I have scores of notebooks full of things) and that allows me to stop thinking about that particular thing because I’ll get around to organizing it into my todo list sometime very soon. Then, months later but also seemingly in the blink of an eye, I’ll remember that I wanted to do it and feel an oppressive guilt wash over me for never even starting it. The feelings of shame and regret swirling around all the tasks become denser and more opaque until they dwarf me and I live in their shadow every waking minute. There is no light here, only tasks. Melville knew my plight: “they heap me; I see them in outrageous strength, with an inscrutable malice sinewing them.”

                                                                            1. 2

                                                                              I follow this exact workflow pretty much, but I skimp on the notebooks as an unneeded I/O step.

                                                                              The savings in wasted paper I pass on to my therapist.

                                                                              1. 1

                                                                                I mostly use notecards for that.

                                                                              1. 2
                                                                                • Updates move records across partitions
                                                                                • A default/catch all partition
                                                                                • Automatic index creation
                                                                                • Foreign keys for partitions
                                                                                • Unique indexes
                                                                                • Hash partitioning (whereas 10 was just time/range based)
                                                                                1. 3

                                                                                    My intuition for this stuff (also upsampling, style transfer, and many other image-to-image applications) is that the AI is basically a domain-specific decompression program. With any decompression program, we feed in a small amount of data (e.g. a run-length encoded bitmap image; a tiny decoder sketch follows this comment) and get out more data (e.g. a normal bitmap image), but crucially the amount of information doesn’t increase: everything we see in the output was either already present in the input (perhaps in some encoded form), or is an artefact (e.g. the “blockiness” seen in JPEGs). The analogy to decompression isn’t so apparent when we’re turning x-by-y pixels into x-by-y pixels, since the amount of data stays the same, but the idea that we’re “decoding” the input, and that we can’t gain any information (since there’s nowhere for it to come from), is what I’m getting at.

                                                                                  What worries me with these learned systems is that they’re so specific to the domain that their artefacts are indistinguishable from real content. This is especially true for generative adversarial networks, where the generated data is “100% artefact”, and trained specifically to be indistinguishable from real inputs.

                                                                                  The failure modes of these systems won’t be things that we’re used to with generic image processing, like “this bush looks blurry” or “the street sign has incorrect colours”, etc. Instead we’ll get very plausible looking images, which turn out to have quite important problems like “image includes a human figure, but it should actually be a trash can”, “streets signs are missing several intersections”, etc. This is very important to keep in mind when thinking of applications for this technology: when the information is sparse or ambiguous, the system will just make something up, and that will be indistinguishable from a real input. One obvious application of this “night vision” in particular is on attack drones, but that may be a very bad idea if it “hallucinates” targets.

                                                                                  An example of this which comes to mind is the “character substitution” problem on some scanners/faxes/copiers. The idea is to perform OCR on scanned documents, so they can be compressed more easily (e.g. storing “123” instead of all the scanned pixel values of those digits). However, when there’s ambiguity, like a “7” which looks like a “1”, the OCR will pick whichever it thinks is correct (say “1”) and store it just like any other character; losing the information that it could have been something else. When the document gets printed out, a perfect, crisp “1” will appear, which is indistinguishable from all of the correct characters.
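
                                                                                    Here’s the run-length decoder sketch promised above: the output is twice the size of the input, yet every byte of it is fully determined by the input. Nothing new can appear except artefacts of the scheme itself.

                                                                                        #include <stdio.h>

                                                                                        int main(void) {
                                                                                            /* input: (count, value) pairs; 6 bytes in */
                                                                                            const unsigned char rle[] = {4, 'a', 3, 'b', 5, 'c'};

                                                                                            /* output: 12 bytes out, but zero new information */
                                                                                            for (size_t i = 0; i + 1 < sizeof rle; i += 2)
                                                                                                for (unsigned char n = 0; n < rle[i]; n++)
                                                                                                    putchar(rle[i + 1]);
                                                                                            putchar('\n');   /* prints: aaaabbbccccc */
                                                                                            return 0;
                                                                                        }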

                                                                                  1. 3

                                                                                    As a general rule of thumb, I try to always store the confidence level or accuracy estimate of anything processed by machine. For example, working in this domain of computer vision, I might process images to denoise and find contours then use SVM to classify what’s in the image and only store tags that have, say, 0.9 confidence (out of 1.0). The important step is to store metadata, including the list of tags, with their confidence score and anything else that pertains to accuracy, such as the exact kernel or model that was used. This doesn’t solve the problem, but provides insight as to what happened, how to recreate it, and what other output is now suspect.
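
                                                                                      A sketch of the kind of record that workflow produces (the field names and the 0.9 cutoff are illustrative, not any particular system): the score and provenance travel with every machine-produced tag, so suspect output stays auditable.

                                                                                          #include <stdio.h>

                                                                                          struct tag {
                                                                                              const char *label;   /* classifier output */
                                                                                              double score;        /* model confidence, 0.0 - 1.0 */
                                                                                              const char *model;   /* exact model used, for re-running */
                                                                                          };

                                                                                          int main(void) {
                                                                                              const struct tag tags[] = {
                                                                                                  {"street_sign",  0.97, "svm-contours-v3"},
                                                                                                  {"human_figure", 0.62, "svm-contours-v3"},
                                                                                              };
                                                                                              const double threshold = 0.9;

                                                                                              for (size_t i = 0; i < sizeof tags / sizeof tags[0]; i++)
                                                                                                  printf("%-12s score=%.2f model=%s%s\n",
                                                                                                         tags[i].label, tags[i].score, tags[i].model,
                                                                                                         tags[i].score >= threshold ? "" : "  (below threshold, suspect)");
                                                                                              return 0;
                                                                                          }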

                                                                                  1. 9

                                                                                    The real answer is elsewhere in the thread: OpenBSD does not allow userspace to access the hardware debug registers.

                                                                                    1. 2

                                                                                      I’ll say this: it’s telling that matters of transparency, disclosure, and trust weren’t considered important for the initial release.

                                                                                      It’s not released.

                                                                                      1. 3

                                                                                        Along the same vein, here’s parsing a CSV using SIMD lookahead and similarly on a GPU.

                                                                                          1. 1

                                                                                            If you’re a Polish guy who looks like this, why would you lead with a stock photo of an Asian guy in your article about yourself?

                                                                                            1. 3

                                                                                              What does the ethnic background matter when the photo is clearly intended to represent the abstract concept of programming?

                                                                                            1. 9

                                                                                              This isn’t different enough stack-wise from the Slack electron app to make me want to install it.

                                                                                              1. 7

                                                                                                Replacing a bloated Electron app… with another Electron app.

                                                                                                I don’t think they got the memo?

                                                                                                1. 10

                                                                                                  The author, Cheng Zhao, works for GitHub on Electron and before that worked on node-webkit, which is now known as NW.js. The stack is Yue (cross-platform native UI library) and Yode (Node.js fork with GUI message loop), all of which he also wrote based on lessons learned from Electron.

                                                                                                  I think that lumping it all into one bucket doesn’t acknowledge the progression they’ve been making with each new library and/or approach.

                                                                                                  1. 10

                                                                                                    Needing NodeJS installed for a desktop application is an immediate non-starter.

                                                                                                    If they worked out how to use the OS provided JavaScriptCore and bind that to a native UI library, maybe I’d try it.

                                                                                                    But I have very little faith in the Javascript community as a whole.

                                                                                                    1. 2

                                                                                                      I’m curious what the tradeoffs of Node vs JavaScriptCore for a JS engine in this kind of application are.

                                                                                                      For platform-specific stuff like UI widgets, certainly agree that using anything that platform ships with is a plus. But from the point of view of developing a cross-platform framework, having a common lower layer like node.js means less platform- or language-environment-specific quirks to work around. What’s the case for not using it?

                                                                                                      1. 0

                                                                                                        End users not having to install it, obviously?

                                                                                                        1. 3

                                                                                                          End users don’t need to manually install node.js to use Wey, they get a packaged .app which bundles it. https://github.com/yue/wey/releases/tag/v0.1.0

                                                                                                          The packaged app would be a bit smaller (node.js installer by itself is 15MB for macOS), I guess…

                                                                                                          1. 1

                                                                                                            OK, that’s marginally better, but it’s still the same issue as Electron: the whole runtime is distributed (and updated, or not) with the app.

                                                                                                1. 6

                                                                                                  Besides the negative points discussed above, Atom is effectively the same tool as Sublime Text except it runs slower.

                                                                                                  I disagree with that statement. Sublime Text is great, I love its speed, but it has a bunch of tiny awkward details that Atom doesn’t have, and Atom has some cool features that Sublime Text doesn’t.

                                                                                                  From ST, one of the things that bothers me the most is that it assumes I want to drag text that I’ve selected, which is false; I basically never want to drag text. This assumption means that I can’t select something and then re-select something inside that selection, because it assumes a drag is a text drag, not a selection.

                                                                                                  Another bit I find Atom does great is the splits, I love its approach. My favorite of any editor.

                                                                                                  Not that I use it a lot, but the Git support from Atom is great.

                                                                                                  I can’t figure out how to make ST’s subl command behave like I want. I want it to behave exactly like Atom’s:

                                                                                                  • subl . opens current dir in a window and nothing more
                                                                                                  • subl opens new window and nothing more
                                                                                                  • If a window is opened without a file, it just opens an empty window with no working dir

                                                                                                  Right now it also opens whatever old window I had open when I last closed ST, and I can’t find how to disable that.

                                                                                                  Also, to be fair, Atom has Teletype now. I haven’t used it, but it looks cool.

                                                                                                  I probably missed something, but I think I’ve done enough to show it’s not “the same”.

                                                                                                  1. 2

                                                                                                    The ‘drag selected text’ behavior continually confounds me. I can’t imagine anyone finding it useful. The other thing is Eclipse and other IDEs dragging/dropping arbitrary objects in project/navigator views: “oops, where’d that folder go?” It’s maddening.

                                                                                                    1. 3

                                                                                                      One always cuts and pastes, right? Who drags around a block of text…

                                                                                                      1. 1

                                                                                                        Have you tried going to Preferences -> Settings and adding/changing "drag_text" to false?

                                                                                                      2. 2

                                                                                                        The dragging thing is probably OS-specific. I don’t see it on my Ubuntu.

                                                                                                        1. 1

                                                                                                          It looks like there’s an undocumented option remember_open_files in ST. That combined with alias subl="subl -n" in your shell should get pretty close to the behavior you’re looking for.
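
                                                                                                          Spelled out (untested; remember_open_files is the undocumented setting mentioned above, and -n is subl’s new-window flag):

                                                                                                              // Preferences -> Settings
                                                                                                              { "remember_open_files": false }

                                                                                                              # in your shell profile
                                                                                                              alias subl="subl -n"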

                                                                                                        1. 2

                                                                                                          It’s reassuring that they have identified their strengths and areas that could use improvement in opening up positions such as PostgreSQL Expert. Although I’m sure they would have a better chance of hiring a top-notch person if it were open to remote and not just local to Amsterdam, once they have some additional infrastructure/scaling/database folks around, scaling issues like this will become routine (i.e. not blog-worthy).

                                                                                                          1. 1

                                                                                             I wonder if there’s a roadmap for this. There are some bits that still need work sprinkled throughout the source, and I’m not seeing a good description of the in-memory and on-disk data structures, to the point where one could do capacity planning (is it just protobuf dumped to disk?). I see benchmarks in the tests as well, but nothing that puts the times and MB/sec in enough context to do capacity planning for a target environment.

                                                                                                            1. 1

                                                                                               Thanks for the comments. I don’t really have a roadmap for it, but I think the most work remaining is fleshing out the UI and allowing for more kinds of queries (zooming in on specific quantiles or histograms, etc). The specific TODO you linked there is only an issue for the shutdown code: how long do you wait when doing a best-effort dump to disk on exit? In general, I also tend to use the TODO comment more loosely, to remind me about design tradeoffs in specific locations in case those decisions aren’t optimal.

                                                                                                              Capacity planning is discussed here, but I agree that documentation around the disk layout would be good, as well as making that more prominent. The database tends to be very disk light in terms of I/O due to it only having to write on 10 minute (configurable) intervals, and only writing a small amount of data (around ~300 bytes per metric, regardless of number of observations). I added an issue to keep track of what you’ve identified.