1. 1

    Thanks for sharing your creation with the world! I get a little sad whenever I see someone write a language interpreter/compiler in $NOT_HASKELL. Don’t get me wrong, I’m not implying your decision wasn’t the best one under the circumstances; I’m just curious whether you’ve considered doing that.

    1. 1

      The grammar of the language is defined in EBNF / Tatsu in Python (https://github.com/endgameinc/eql/blob/900a25e7e8721292be61e11352efb5329d399b53/eql/etc/eql.ebnf), and beyond what has been released we’ve also implemented it in a couple of other languages internally. Neither of them is Haskell, as we don’t use that at all internally, but I think we’ve talked about doing so in OCaml.

      The language is relatively simple, and even the extensions w/ functions etc. don’t lock it to any particular PL stack, nor would they prevent you from compiling EQL statements into little programs “straight”. The heavy lifting around EQL has to do with making it compatible with data formats and schemas from other security tools, i.e. how security events from Windows/Linux/Mac compare, etc.

      Ideally this query language will have other implementations. At its heart it’s just a way of ingesting events and selecting those that match patterns, either within single events or in chains of interrelated events. None of that is wedded to Python or anything else.
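
      To make “at its heart” concrete, here’s a toy sketch in Python of selecting matching events and chaining them by a shared field. The event shapes and the join key are invented for illustration; this is not the real implementation:

          # Toy events (fields invented for illustration).
          events = [
              {"type": "process", "pid": 42, "name": "winword.exe"},
              {"type": "network", "pid": 42, "dest": "203.0.113.9"},
              {"type": "process", "pid": 7, "name": "calc.exe"},
          ]

          def select(events, pred):
              """Matches within single events."""
              return [e for e in events if pred(e)]

          def sequence(events, first, then, join):
              """Chains of interrelated events: a `first` match later
              followed by a `then` match agreeing on the join field."""
              out = []
              for i, a in enumerate(events):
                  if first(a):
                      for b in events[i + 1:]:
                          if then(b) and a[join] == b[join]:
                              out.append((a, b))
              return out

          spawns = lambda e: e["type"] == "process" and e["name"] == "winword.exe"
          connects = lambda e: e["type"] == "network"
          print(select(events, connects))
          print(sequence(events, spawns, connects, join="pid"))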

    1. 2

      I think this language my coworkers created is a unique contender for use beyond the security world, anywhere that defining how events correlate with other events in a given timespan has meaning. The Read the Docs documentation has information on the how and what: https://eql.readthedocs.io/en/latest/ Anyway, have at it. :)

      1. 1

        This is a pretty interesting area. I think there are a few cool ways of doing it. The most brute-force way would be to create audio edits en masse and then approximate a function to classify those edits against unedited audio. If you had a particular type of audio you wanted to detect on, you would want to gather a test set regardless of method.

        For a classification function you could try messing with sklearn or tensorflow. Getting the labelled data might be the hardest part.
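
        To sketch that in Python with sklearn (everything here is invented for illustration: synthetic ‘audio’, a crude splice ‘edit’, toy features; real work would use actual recordings and better features, e.g. spectrograms):

            import numpy as np
            from sklearn.ensemble import RandomForestClassifier
            from sklearn.model_selection import train_test_split

            rng = np.random.default_rng(0)

            def fake_clip(n=16000):
                # Stand-in for a second of real audio: noise under a slow envelope.
                t = np.linspace(0, 1, n)
                envelope = 0.5 + 0.5 * np.sin(2 * np.pi * rng.uniform(1, 4) * t)
                return envelope * rng.normal(0, 0.1, n)

            def splice_edit(clip):
                # Crude "edit": cut a chunk out and butt-join the remainder.
                i = int(rng.integers(1000, len(clip) - 1000))
                return np.concatenate([clip[:i], clip[i + 500:]])

            def features(clip):
                # Toy features: frame energies and their jumps; splices tend to spike them.
                frames = clip[: len(clip) // 160 * 160].reshape(-1, 160)
                energy = (frames ** 2).mean(axis=1)
                jumps = np.abs(np.diff(energy))
                return [jumps.max(), jumps.mean(), energy.std()]

            clips = [fake_clip() for _ in range(400)]
            X = [features(c) for c in clips[:200]] + [features(splice_edit(c)) for c in clips[200:]]
            y = [0] * 200 + [1] * 200  # 0 = untouched, 1 = edited

            X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
            clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
            print("held-out accuracy:", clf.score(X_test, y_test))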

        1. 2

          We’re doing lots of cool things at endgame.com; most positions are listed as DC (Arlington), SF, or Remote, and we have software engineering and SRE roles. We do Go and Python on the backend, relatively up-to-date React on the frontend, but also have some OCaml, C/C++ and Rust in places. If you’re interested in what we do, feel free to message me; I work in research but I try to annoy (er, work with) our engineering teams quite a bit.

          1. 3

            “Don’t make me think” is almost exactly what I want when using a tool, whether it’s a guitar, a synth, a car or a computer. Show me a tool that can make me think less about it and more about what I want to achieve and I’ll give it a try.

            1. 4

              The user should only need to think about the things they came to the interface to think about. In the case of a musician, they’re focusing on melody, or improvisation, or some higher level structure & don’t want to be distracted by the keys sticking.

              A musician does years of rote practice to get enough experience with an inherently awkward interface to be able to put the interface out of their mind. That’s exactly the kind of thing we don’t want to require of computers, since computers are general-purpose machines: they can do anything & look like anything, so being awkward enough to require years of training and limited enough to only do a handful of things is stupid.

              In practice, “don’t make me think” doesn’t actually allow users to get into the groove in computer work. Instead, the developer’s imagined version of the ideal way through the user’s task is neither natural nor obvious to the user, and too limited to work for the entire domain of the user’s tool use. The user therefore needs to think like a hacker by default and create a set of awkward workarounds for performing important tasks in applications that aren’t intended to perform them (or to duplicate functionality that’s buried in an even more awkward set of metaphorical leaps & is therefore totally undiscoverable).

              Formal training & instruction manuals can solve the problem of discoverability (but, of course, requiring or even providing instruction manuals violates “don’t make me think”). Nevertheless, the problem of flexibility remains.

              1. 1

                I think the fact that a lot of paradigms in OSX haven’t changed since I first touched the Aqua beta in 2000 has given me 18 years of not really worrying too much about where things have moved. Unlike Win10, which, in spite of my using Windows since 3.1, is a constant mixture of searching for a setting and searching for where that setting used to be, over and over. Usually without the benefit of a search bar that works (i.e. one that doesn’t search the internet instead of my computer).

                I think flexibility in interfaces shows up at the power-user level, but I’m not looking to recreate my computing environment generally. If I’m coding it’s for very specific purposes, not composing things in a graphical way. Nevertheless, the ability of windowed computing environments to host windowed computing environments within them means I can run Linux, Windows, etc. inside my Mac and not really suffer too badly from lack of flexibility.

                I don’t mean to say that I hope all innovation stops with the WIMP interfaces we have, but I don’t think that being locked into a system that has worked consistently for almost two decades is a problem for me. Jumping into the latest KDE, GNOME or whatever is always the same story as Windows above: some number of things are out of place and broken when it comes to muscle memory.

                As much as I love new controllers for music, I’ve usually gravitated towards ones that mimic the controls I’m used to. Take for instance the LinnStrument, which is ‘tuned’ like a bass guitar and really easy to understand intuitively. Besides the command line, I don’t have a computer user interface that I’ve used for over a decade, other than, oddly, the UI this post is angry at. I like being comfortable w/ what I know being where I left it.

                1. 1

                  I’m not in favor of systems being experimentally changed by third parties & imposed upon users. User interface design for personal computing (as opposed to institutional use) should be under the control of single end users. This means keeping interfaces the same when they work properly, but providing the tools necessary to fix poor problem-fit when desirable (and making those tools accessible & discoverable).

                  Composition is an example of a mechanism that can be added to a WIMP system without changing behaviors users are familiar with (although by definition it requires a full rewrite of the applications & underlying GUI system) while solving a number of common problems that are normally solved, without the aid of automation, through repetitive & error-prone work. Users frequently use sequences of programs in tandem to perform stages of transformation when one program isn’t capable of performing a transformation but another is – using Photoshop to crop an image before inserting it into a Word document, for instance. Composition makes it possible to transclude the working copy of something in one program into another program without the knowledge or permission of the authors of the original program – for instance, linking the image editor to the embedded image such that modifications appear immediately. Because it’s fairly concrete & not too dissimilar to existing mechanisms, it’s worth using as an example: it’s low-hanging fruit.
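
                  As a toy sketch of that live-link idea (an observer-style mechanism; all names invented, and a real system would do this at the GUI/OS level rather than inside one process):

                      # Toy sketch: an "embedded image" that reflects the editor's
                      # working copy, instead of being a pasted-in snapshot.
                      class SharedImage:
                          def __init__(self, pixels):
                              self.pixels = pixels
                              self._watchers = []

                          def watch(self, callback):
                              self._watchers.append(callback)

                          def edit(self, pixels):
                              self.pixels = pixels
                              for cb in self._watchers:
                                  cb(self)  # every embedding updates immediately

                      class Document:
                          def __init__(self):
                              self.parts = []

                          def transclude(self, image):
                              self.parts.append(image)
                              image.watch(lambda img: print("document re-renders:", img.pixels))

                      img = SharedImage("original")
                      doc = Document()
                      doc.transclude(img)
                      img.edit("cropped")  # the "editor" saves; the document sees it at once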

                  In situations where an improved UI is obvious and the domain has heavy intersection with developers, we actually do see novel UI designs integrated. Notebook interfaces are pretty common in systems intended for use by mathematicians, for instance. But, I think it’s important that everybody be able to scratch their own itches.

                  There are a couple of interacting factors I’m criticizing, & it’s hard to cover them all in one essay while doing them justice.

                  One factor is a developer and designer monoculture: few developers have enough UI ideas in their toolkits to easily imagine appropriate UI designs for particular problems, particularly when they are also trying to imagine the problems from an outsider perspective. As a result, they fall back on familiar patterns even when those patterns produce pathological results. This is the problem I have with WIMP – not that it is flawed (it is, but so are all the alternatives) but that because so few designers are aware of or can imagine alternatives to it, its flaws become totally inescapable, even when escaping them should be trivial.

                  Another problem is an artificial & politically-enforced developer-user divide: users are not expected to modify the applications they run to their own liking, and the tools we use don’t make it straightforward for non-technical users to gradually ease into modifying their own applications to their liking. Instead, users are expected to “become developers” if they want to modify the applications they use, read lots of documentation, maybe take a class, and eventually work up to being able to control simple graphical primitives in awkward ways. Then they are expected to spend lots of time in deep study of the existing implementation, propose & test a change, and offer it up to the maintainers. If the maintainers don’t like the change, the user is going to have a hard time continuing to use it, because software is expected to scale & therefore even applications run on personal computers are mass-produced, with periodic new versions not designed to support heavy user-side modifications.

                  The developer-user divide has economic causes & economic effects, but the clearest argument that it can be eliminated is that we, as developers, live our entire time as users mostly on the other side of it: we know how to modify our own applications, and we do it; we maintain our own versions when upstream doesn’t like them, and we merge changes as we like. Our ability to live in a computing environment that suits us comes out of having already passed the handful of trials that gave us the knowledge to do this: we learned how to use autotools, and we learned C and C++, and we learned the GTK and Qt and Tk APIs enough to be dangerous. We still sometimes decide that making a change to an application is more trouble than it’s worth. The difference between us and a non-technical end user is only that fixing something, for them, is always too much trouble, because they haven’t cleared the prerequisites. Lowering unnecessary barriers helps us just as much as it helps them.

              2. 4

                A musician friend of mine had a good argument for the Kaossilator; there’s no real reason to use a centuries-old UI for synthesizing music.

                Maybe you can’t construct a grand piano with that UI, but it is easier to make something Just Sound Good with a Kaossilator than a trad. keyboard synth.

                1. 1

                  Oddly, the theremin is about a century old and in some ways (if you stretch it) is more like a Kaossilator (x/y for pitch and volume, but done instead as distance ‘r’ from the antennas). But yeah, I certainly don’t want innovation to end; I really enjoy the comfort of things I know working the way they always have, then customizing as needed.

                2. 2

                  But the problem is training. I don’t notice the editor I use now—it’s completely invisible to me for the most part. But when I was first learning it? Oh I hated it. It worked differently from the editor I was used to [1] and it took a few years for it to become invisible to me (like the previous editor I used). And that’s the point—if it’s easy to use from the outset, it’s not powerful enough to do everything I want.

                  A car is conceptually easy to operate, but we still have to be trained to drive. A program is a tool—you have to learn how to use it before it becomes second nature.

                  [1] Which only ran under MS-DOS. It’s limited to 8.3 filenames. It doesn’t understand directories. And lines are limited to 255 characters. It’s still my favorite editor. Sigh.

                1. 4

                  I laughed in pain knowing where this was going. Glad you found a fix.

                  1. 1

                    As someone who’s forgotten a lot of Go, I see the first two and am left wondering: how do I properly pass by reference into a Go function? It would be nice to have the ‘solutions’ for some of these explicitly coded up as well.
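
                    For anyone else who’s rusty, the standard idiom (not taken from the article, just a minimal refresher): Go is strictly pass-by-value, so ‘passing by reference’ means explicitly passing a pointer.

                        package main

                        import "fmt"

                        // increment gets a copy of n; the caller's variable is untouched.
                        func increment(n int) { n++ }

                        // incrementViaPointer gets the address of n and can modify the original.
                        func incrementViaPointer(n *int) { *n++ }

                        func main() {
                            x := 1
                            increment(x)
                            fmt.Println(x) // 1: only the copy changed
                            incrementViaPointer(&x)
                            fmt.Println(x) // 2: the pointer let us reach the caller's x
                        }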

                    1. 5

                      Before reading the article I was expecting a mention of MML, but no dice.

                      1. 2

                        Woah, I just wrote a big ranty comment then clicked on MML. I don’t think I’d seen that in 25 years… and I didn’t really know what it was at the time. wow. thanks.

                      1. 3

                        Can I jump in with a pontification on the term Domain Specific Language? For instance, TidalCycles is a Haskell domain-specific language for musical patterns, originally just sample-based but now able to talk to SuperCollider (a la Overtone/Sonic Pi), yet in the end it still has plenty of Haskellisms.

                        And this gets to my main point about DSLs: more and more of the time, DSL seems to mean “some function names that can maybe be chained together in neat ways with relevant data types”, but you still end up with mucho extraneous syntax hiding in plain sight in the domain-specific language. So the term has been co-opted from Language to Domain Specific Model in a new Language.

                        Compare pure music DSLs like Csound, RTcmix, SuperCollider itself (Smalltalk-y), LilyPond (notation only, TeX-like), or, perhaps most adventurous syntax-wise, ChucK from Princeton: you don’t have to learn another language’s ins and outs, you merely have to learn the terms and how they combine. Anyway, it’s just a distinction I wish were more clear: if you’re using a general-purpose language’s parser as-is, you’re just writing in that language. If you write a new parser, now you’re in pure DSL land.
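
                        To make the distinction concrete, a toy sketch (all names invented): the first half is an embedded ‘DSL’ that is still just the host language; the second is a new syntax with its own tiny parser.

                            # 1. "Embedded DSL": domain-flavored names, but the host
                            #    language's parser and syntax still apply. Just Python.
                            class Pattern:
                                def __init__(self, notes):
                                    self.notes = list(notes)

                                def then(self, other):
                                    return Pattern(self.notes + other.notes)

                                def repeat(self, n):
                                    return Pattern(self.notes * n)

                            riff = Pattern(["c4", "e4"]).then(Pattern(["g4"])).repeat(2)

                            # 2. "External DSL": "c4 e4 g4 * 2" is not Python;
                            #    we decide what it means with our own (tiny) parser.
                            def parse(src):
                                notes, n = src.split("*") if "*" in src else (src, "1")
                                return Pattern(notes.split()).repeat(int(n))

                            assert parse("c4 e4 g4 * 2").notes == riff.notes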

                        Just a rant I’ve had in my mind that I wanted to type out. Keep rockin.

                        1. 3

                          This is really cool. I’m thinking this could be a really neat way (and reason) to introduce seL4 at work, running custom Rust code, assuming all of the hard bits (file I/O, etc.) can be worked out.

                          1. 1

                            A lot of this and the ensuing arguments sound like a No True Scotsman argument: Scrum is good but everyone does it wrong; Scrum is bad unless you do it right.

                            I’m not an expert in or fan of Scrum, but it seems to work as an outline for talking about the hard parts of programming and project development. If you can’t work with your coworkers, managers, devs, whatever to make the nouns and verbs of Scrum flex to your needs, look elsewhere. I think a good portion of the explosion over the past 20 years of these techniques (and of things like GTD and other self-help tools / manufacturing methodologies over the past 100) is that no one knows how to do this ‘right’ for all the possible things that could be done, but will pay for any insight that might get them closer to an effective strategy.

                            The idea that the inventors of manufacturing methods like the assembly line, TQM and TPM, or of programming methods like XP, Agile, Kanban (stolen a bit from the TQM folks) and Scrum, have all-encompassing insight is dangerous, but it isn’t new or all bad. At best the devs have the power to take the frameworks presented, find hypothetical fixes for their flaws, and implement them. At worst these plans come from above and end up damaging morale and momentum.

                            Either way, “Scrum is the wrong way” is an absolutist statement that doesn’t provide much help in exposing a right way (other than the article stating it would be good Scrum instead of bad).

                            1. 8

                              For folks just discovering jq: you might like my blog post on it as well, which doesn’t go much further depth-wise, but does show some more features: https://zxvf.org/post/jq-as-grep/

                              1. 3

                                This is good because the folks I know working on security for ‘mobile’ chips at Apple are now working on security for chips in Apples in general.

                                1. 10

                                  I’ve been a member of SO since public beta, and have just under 30K rep.

                                  My experience is considerably different. Looking through my deleted items, the ones that weren’t deleted by myself were deleted because the enclosing Q was deleted, and I agreed with every deletion I looked at.

                                  1. 3

                                    Depends on how you use it, and whether you are lucky enough to always take the discussions that are too complex or novel for StackOverflow off the platform right away, or not to start them there in the first place. (Effectively, it depends on how much trust you put in the platform.)

                                    As I mentioned on reddit, most of the stuff that got deleted for me is actually my own questions, quite disproportionately: ones where I’ve spent considerable time doing the research, and where the answer is non-obvious.

                                    If your question doesn’t meet metrics, the StackOverflow company will automatically remove it without any human intervention whatsoever, and block your own access to it, until/unless you have 10k rep. Is that really fair, after you’ve spent several hours doing the research and formulating a clear-enough question, one so clear that no one has even bothered to provide an incomplete and misunderstood answer for it? There’s really no reason for this.

                                    The toxic part is that when you bring up these kinds of things on meta, they school you into not posting questions that “don’t belong” in the first place; your meta questions themselves quickly gain 15 downvotes (not just -15 rep, but 15 actual downvotes, within a day or two) and get automatically deleted promptly, so the next person won’t even have anything to refer to (and neither will you have the text, in case you wanted to repost elsewhere).

                                    1. 1

                                      If your question doesn’t meet metrics, the StackOverflow company will automatically remove it without any human intervention whatsoever, and block your own access from it, until/unless you have 10k.

                                      I have no idea what you are talking about. Can you elaborate on this?

                                      1. 1

                                        Go to /help/privileges; the 10k link on StackOverflow, to /help/privileges/moderator-tools, has the following text:

                                        SO: You also have a new search operator available to find your own deleted posts: deleted:1.

                                        The reddit discussion has a link to the criteria for automatic deletion; in my case, the following seems to have been triggered a number of times:

                                        SO: The system will automatically delete unlocked, unanswered questions on main (non-meta) sites with score of zero (or one, if the owner is deleted), fewer than 1.5 views per day on average, and fewer than two comments that are at least 365 days old. (RemoveAbandonedQuestions)

                                        Basically, when you make that comment saying the question is useless, you’re making sure it won’t actually be deleted, unlike a question that’s simply ahead of its time. Duh!

                                    2. 2

                                      I still don’t understand how I’m supposed to get ‘rep’ to upvote something, and I’ve never had the time to understand their internet-points system to do so. I’ve been ‘using’ StackOverflow since it came out to beat ExpertsExchange and Usenet, etc., but yeah, I probably have like 1 rep. I understand why they gate voting, but it always makes me sad when I want to upvote a good answer or downvote a terribly wrong one and I can’t. No idea what the route is from user to community member, and no desire to read up on it… which maybe makes me not a community member. :)

                                      1. 8

                                        It’s as simple as just asking and answering questions. I think just asking a single question and accepting an answer gets you enough rep to vote.

                                        1. 6

                                          I also have had semi-decent (if small) success editing questions for clarity. It got me far enough to get upvote/downvote privileges.

                                          1. 3

                                            They require a minimum of 15 rep to upvote, and 125 to downvote, see /help/privileges on each of their sites.

                                            Getting 15 rep is, like, really easy — you get 5 rep when your question gets +1, and 10 rep when your answer gets +1. Basically, all it takes is a single question and answer that someone does a +1 for, and you can upvote. (And you can even ask and answer your own question.)

                                            1. 1

                                              Interesting about that last one; that might’ve added 10-100 questions to Stack Overflow, if I’d taken the time to do it. Good to know. I think I have a complex about asking questions online in asynchronous forums. Chances are, if I don’t know the answer, I’d rather keep looking than take the time to write it down somewhere and then wait. I’ll usually jump on IRC (or Slack or Discord these days) if the question is that pressing. I’d have to be in really dire straits to post and wait; it would feel almost like praying for an answer. :) (Even though 9 times out of 10, once I’ve worded the question I’m closer to a solution anyway… like I said, I have a complex.)

                                              1. 3

                                                You assume that it takes time to get an answer on StackOverflow. IME, for the more popular topics the answer often appears right away, within a couple of minutes. Folks race each other to answer first. :-)

                                                (Of course, it highly depends on the tag.)

                                                1. 2

                                                  the answer often appears right away within a couple of minutes.

                                                  Only if your question is something every mildly experienced programmer would know. As soon as you start asking things a little harder, you are often left without an answer.

                                                  1. 1

                                                    Yeah, I think I was molded in the era of web 1.0 responsiveness (think Perl Monks) and it’s probably cost me a bit. Not to mention whatever false bravado / fear of showing ignorance leads me to not ask enough questions in general.

                                                    Duly noted though, thanks!

                                          1. 14

                                            I believe that OO affords building applications of anthropomorphic, polymorphic, loosely-coupled, role-playing, factory-created objects which communicate by sending messages.

                                            It seems to me that we should just stop trying to model data structures and algorithms as real-world things. Like hammering a square peg into a round hole.

                                            1. 3

                                              Why does it seem that way to you?

                                              1. 5

                                                Most professional code bases I’ve come across are objects all the way down. I blame universities for teaching OO as the one true way. C# and Java code bases are naturally the worst offenders.

                                                1. 5

                                                  I mostly agree, but feel part of the trouble is that we have to work against language, to fight past the baggage inherent in the word “object”. Even Alan Kay regrets having chosen “object” and wishes he could have emphasized “messaging” instead. The phrase object-oriented leads people, as you point out, to first model physical things, as that is a natural linguistic analog to “object”.

                                                  In my undergraduate days, I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts, or by just directly naming classes after interactions.

                                                  I suppose I have taken that “Aha!” moment for granted and can see how, in the absence of such an explicit lesson, it might be hard to discover the notion on your own. It is definitely a problem if OO concepts are presented as universally good or without pitfalls.

                                                  1. 4

                                                    I encountered a required class with a project specifically intended to disabuse students of that notion. The project deliberately tempted you to model the world and go overboard with a needlessly deep inheritance hierarchy, whereas the problem was easily modeled with objects representing more intangible concepts, or by just directly naming classes after interactions.

                                                    Can you remember some of the specifics of this? Sounds fascinating.

                                                    1. 3

                                                      My memory is a bit fuzzy on it, but the project was about simulating a bank. Your bank program would be initialized with N walk-in windows, M drive-through windows and T tellers working that day. There might’ve been a second type of employee? The bank would be subjected to a stream of customers wanting to do a heterogeneous variety of transactions, taking differing amounts of time.

                                                      There did not need to be a teller at the drive-through window at all times if there was not a customer there, and there were some precedence rules: if a customer was at the drive-through and no teller was at the window, the next available teller had to go there.

                                                      The goal was to produce a correct order of customers served, and order of transactions made, across a day.

                                                      The neat part (pedagogically speaking) was the project description/spec. It went through so much effort to slowly describe and model the situation for you, full of distracting details (though very real-world ones), that it all but asked you to subclass things needlessly, much to your detriment. Are the multiple types of employees completely separate classes, or both subclasses of an Employee? Should Customer and Employee both be subclasses of a Person class? After all, they share the property of having a name to output later. What about DriveThroughWindow vs WalkInWindow? They share some behaviors, but aren’t quite the same.

                                                      Most people here would realize those are the wrong questions to ask. Even for a new programmer, the true challenge was gaining your first understanding of concurrency and following the spec’s rules for resource allocation. But said new programmer had just gone through a week or two on interfaces, inheritance and composition, and oh look, now there’s this project spec begging you to use them!

                                                  2. 2

                                                    Java and C# are the worst offenders and, for the most part, are not object-oriented in the way you would infer that concept from, for example, the Xerox or ParcPlace use of the term. They are C in which you can call your C functions “methods”.

                                                    1. 4

                                                      At some point you have to just let go and accept the fact that the term has evolved into something different from the way it was originally intended. Language changes with time, and even Kay himself has said “message-oriented” is a better word for what he meant.

                                                      1. 2

                                                        Yeah, I’ve seen that argument used over the years; I might as well call it the No True Scotsman argument. Yes, they are multi-paradigm languages, and I think that’s what made them more useful (my whole argument was that OOP isn’t for everything). Funnily enough, I’ve seen a lot of modern C# and Java that decided message passing is the only way to do things and that multi-thread/process/service is the way to go for even simple problems.

                                                        1. 4

                                                          The opposite of No True Scotsman is Humpty-Dumptyism; you can always find a logical fallacy to discount an argument you want to ignore :)

                                                  3. 2
                                                    Square peg;  
                                                    Round hole;  
                                                    Hammer hammer;  
                                                    hammer.Hit(peg, hole);
                                                    
                                                    1. 4

                                                      A common mistake.

                                                      In object-orientation, an object knows how to do things itself. A peg knows how to be hit, i.e. peg.hit(…). In your example, you’re setting up your hammer to be constantly changed and modified as it needs to be extended to handle different ways to hit new and different things. In other words, you’re breaking encapsulation by requiring your hammer to know about other objects’ internals.
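
                                                      A minimal sketch of the peg-centric version, in Python (names and the fit rule invented for illustration):

                                                          class RoundHole:
                                                              def __init__(self, radius):
                                                                  self.radius = radius

                                                          class SquarePeg:
                                                              def __init__(self, width):
                                                                  self.width = width

                                                              def hit(self, hole):
                                                                  # The peg knows whether it fits: its half-diagonal
                                                                  # must not exceed the hole's radius.
                                                                  return self.width / 2 ** 0.5 <= hole.radius

                                                          print(SquarePeg(width=2).hit(RoundHole(radius=2)))  # True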

                                                    2. 2

                                                      Your use of a real-world simile is hopefully intentionally funny. :)

                                                      1. 2

                                                        That sounds great, as an AbstractSingletonProxyFactoryBean is not a real-world thing, though if I can come up with a powerful and useful metaphor, like the “button” metaphor in UIs, then it may still be valuable to model the code-only abstraction on its metaphorical partner.

                                                        We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                                        1. 2

                                                          Factory

                                                          A factory is a real-world thing. The rest of that nonsense is just abstraction disease, which is either used to work around language-expressiveness problems or comes from people adding an abstraction for the sake of making patterns.

                                                          We need to be cautious that we don’t throw away the baby of modelling real world things as real world things at the same time that we throw away the bathwater.

                                                          I think OOP has its place in the world, but it is not for every problem (or even the majority?).

                                                          1. 3

                                                            A factory in this context is a metaphor, not a real world thing. I haven’t actually represented a real factory in my code.

                                                            1. 2

                                                              I know of one computer in a museum that, if you boot it up, complains about “Critical Error: Factory missing”.

                                                              (It’s a control computer for a factory, it’s still working, and I found it the most charming thing that someone modeled that case and shows an appropriate error.)

                                                              1. 2

                                                                But they didn’t handle the “I’m in a museum” case. Amateurs.

                                                        2. 1

                                                          You need to write, say, a new air traffic control system, or a complex hotel reservation system, using just the concepts of data structures and algorithms? Are you serious?

                                                        1. 1

                                                          Glad to be watching this. Thinking about applying for a week at Recurse, as I like their mission.

                                                          1. 3

                                                            I work in the computer security industry, so I can see up close the fear about what Meltdown/Spectre could do. Still, changing the fundamental operation of every single branch prediction on your CPU is far more wide-ranging and troublesome. I can’t understand how anyone’s threat model for “bad JS on a web page” is different with M/S unless you have your nuke codes on your computer. It all feels like an overreaction to privileged memory reads and possible privileged execution. I think detection of attempted exploitation makes a lot more sense than wholesale attempting to stop it using bolted-on, performance-devastating fixes. (I don’t speak for my employer, or for folks with “serious” security concerns; I just think folks with “serious” concerns already had this in their threat model.)

                                                            As is, the cure seems worse than the disease (modulo script-kiddie mass exploitation), and yeah, the entire processor was designed to run code you sort of trust, which was broken by the open trust model of the internet. Regardless, even if this were mass-exploited I’d rather have my cake (fast speculative execution) and eat it (detect exploitation before loss) than just throw my cake in the gutter to keep anyone else from eating it. Linus seems pretty correct here, stuck between a haphazard-patch rock and a designed-in-bug hard place.

                                                            PS this was a rant not a well thought out argument, but I mostly agree with it ;)

                                                            1. 12

                                                              At least on Linux you can disable KPTI with a boot parameter, if you don’t feel the default is a good performance/security tradeoff for a particular case.

                                                              Cases where I could imagine it being reasonable: 1) scientific computing clusters, which tend to use an everyone-trusts-everyone security model anyway, 2) on-premises virtualization where you’re using virtualization just as a deployment/management mechanism, not for security boundaries, and 3) certain kinds of (hopefully well firewalled) database servers, where the performance impact seems to be particularly severe, and where most of the sensitive data is in userland anyway (the database), so the threat model of local privilege escalation isn’t your biggest worry.

                                                              But admins of those kinds of setups know what they’re doing enough to change the default. I think most people who don’t know what they want are better served by a more-secure default, even with a performance hit. There is so much code, from networking to browsers, that relies on these security boundaries not being easily bypassed, that I think a malware-detection approach to mitigating it is likely to be too much of a whack-a-mole game. Plus your average random server on the internet, or home desktop, isn’t even using its compute capacity most of the time anyway, so unsafe performance tuning is hardly necessary.
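
                                                                (For the curious: on x86 Linux 4.15+ the relevant kernel command-line parameter is, I believe, pti=off, or the equivalent nopti; it’s a boot-time switch, not something you can toggle at runtime.)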

                                                            1. 3

                                                              I ran 3.0.2 with a wide-open JSON-RPC host and an unpassworded wallet for a month or so. Turns out no one was exploiting it in the wild on any sites I visited.

                                                              What’s a bitcoin worth these days anyways?

                                                              1. 3

                                                                Around $16,500, give or take a few %

                                                                Today’s high was $17,224, low of $16,187.

                                                                1. 2

                                                                  This is good, but what’s weird is that I read that article, agreed with it… looked at the charts… saw they were just… bogus-ish, and still accepted the author’s arguments, which weren’t founded in the data. But it’s still an interesting deep dive into the data.

                                                                  The ability of folks with high social reach to post stuff with less consideration than I (or this wesleyac.com author) give it on a casual reading is odd, isn’t it?