Does anyone know if it is possible to permanently disable hyperthreading on recent MacBook Pros without using Xcode? (I think some automated software hack at startup would be good enough for me.)
CPUSetter looks like it should be able to do it.
The send button in the conversation should be unclickable if a message has already been sent but has not yet received a response.
Wouldn’t this mean that if you never get a response, you can’t use the app anymore?
Yes, but moreover, it does not make the client request idempotent. The definition given for idempotence is “the property of certain operations whereby they can be applied multiple times without changing the result beyond the initial application”. Making the button unclickable doesn’t give the operation that property; it merely prevents the operation from being applied more than once via the UI.
To make chat message sending idempotent, one could add a unique id or sequence number to each message so that it’s possible to detect and drop the duplicate message.
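A minimal sketch of that scheme in Python (all names here are hypothetical, not from any real chat API): the client generates a unique id once and reuses it on every retry, and the server drops any id it has already seen.

```python
import uuid

class ChatServer:
    """Toy in-memory server that deduplicates messages by client-supplied id."""

    def __init__(self):
        self.messages = {}  # message_id -> text

    def send(self, message_id, text):
        # If we've already seen this id, drop the duplicate so a retried
        # request cannot create a second copy of the message.
        if message_id in self.messages:
            return "duplicate-dropped"
        self.messages[message_id] = text
        return "stored"

# The client generates the id once and reuses it on every retry, so
# retrying after a lost response is safe: same id, same result.
msg_id = str(uuid.uuid4())
server = ChatServer()
server.send(msg_id, "hello")   # first attempt
server.send(msg_id, "hello")   # retry after a lost response: dropped
```

With this in place the UI can leave the button clickable; repeated sends of the same logical message converge to one stored copy, which is exactly the idempotence property quoted above.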
Good point. Even nowadays, the web still relies on the “refresh” button, and system issues are still magically fixed by a reboot :)
Why not use transactions and (compound?) unique indexes to make the whole operation atomic?
It’s a trade-off; there’s a cost to using transactions and you can avoid that cost if your operations are idempotent. As you say though, if it’s infeasible to make an op idempotent then transactions are an excellent fallback :)
But… is this a computer? This article is rather vague, and if it just prints out the numbers of the pulled strings… The Wikipedia article about computers says the first electromechanical computers were built at the end of the 1930s, yet this device is from 1913: more than 20 years earlier! And the Wikipedia article about Grand Central mentions the hidden basement and the German sabotage, but there is not a single word about any so-called computer.
Does anyone here have more information about it?
I didn’t know this was an issue. :-)
With tail -f -, hitting Ctrl+D does not quit the program, but with cat - it does.
I think there is a DNS issue:
$ nslookup lazymastodon.com
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
*** Can't find lazymastodon.com: No answer
I have this issue with two different ISPs so…
Hacker News killed it; I had to make a quick backup on S3:
http://lazymastodon.com.s3-website-eu-west-1.amazonaws.com/
If you clear your DNS cache, it should point to that now.
As I read this I thought about my experiences with Diaspora and Mastodon. Pages like this one or this one (click “Get Started”, I couldn’t do a deep link because JavaScript) are, IMHO, a big part of the reason these services don’t take off. How can an average user be expected to choose from a basically random list of nodes? How can I, a reasonably “technical” person, even be expected to do so?
So then why not host my own node? First, I don’t have time and most people I know don’t either. If I was 15 again I totally would because I had nothing better to do. I also don’t want to play tech support for a good chunk of my social network, and providing a service to someone has a tendency to make them view you as the tech support.
Second, if I do that I’m now in charge of security for my data. As terrible as Twitter and Facebook are, they’re probably still a lot better at securing my data than I am (at the very least they probably patch their systems more often than I would). Even worse, if some non-technical person decides to bite the bullet and create a node for his/her friends, how secure do you think that’s going to be?
Further, what are the odds that I, or whoever is maintaining the node, basically gets bored of it one day and kills the whole thing? Pretty damn high (maybe I and all my friends are assholes, though, so whatever).
Anyway, this post really spoke to me because I’ve been trying to escape Evil companies for a while now and “federated” just doesn’t seem to be the answer. I now believe that centralized is here to stay, but that we should start looking at the organizations that control the data instead of the technology. For example, if Facebook were an open non-profit with a charter that legally prevented certain kinds of data “sharing” and “harvesting”, maybe I wouldn’t have any problem with it.
How can an average user be expected to choose from a basically random list of nodes?
How did they choose their email provider? Not by carefully weighing the technical options, surely. They chose whatever their friends or parents used, because with working federation it doesn’t matter.
what are the odds that I, or whoever is maintaining the node, basically gets bored of it one day and kills the whole thing?
Same as what happened with many early email providers: when they died, people switched to different ones and told their friends their new addresses.
Really, all this argument of “what if federation isn’t a holy grail” is pointless because we all already use a federated system — email — and we know for a fact that it works for humans, despite all its flaws.
How did they choose their email provider? Not by carefully weighing the technical options, surely. They chose whatever their friends or parents used, because with working federation it doesn’t matter.
In contrast to Mastodon instances - which are very much alike - email providers have differentiated themselves on the interfaces and guarantees they provide, and they market that. People react to that.
In contrast to Mastodon instances
While this was largely true in the beginning, many Fediverse nodes now do market themselves based on default interface, additional features (e.g. running the GlitchSoc fork or something like it), or even using non-Mastodon software like Pleroma. I suspect this will only increase as additional implementations (Rustodon) and forks (#ForkTogether) take off and proliferate.
How did they choose their email provider?
I think federated apps like Mastodon are fundamentally different from email providers. Most email providers are sustainable businesses; they earn money with ads or paid plans or whatever, and have their own email servers and clients with specific features. Self-hosted email servers are a minority. Please tell me if I’m wrong, but I don’t think one can easily earn money with a Mastodon instance.
However I agree that both are federated.
You’re certainly not wrong, though I would argue that email, particularly as it was 20+ years ago when it went “mainstream”, is much simpler (for instance, it doesn’t require any long-term persistence or complicated access control) and therefore easier to federate successfully (in a way that humans can handle) than social networking.
ActivityPub-style social network federation also doesn’t require long-term persistence or complicated access control.
Yeah, I listed them in my comment… “long-term persistence or complicated access control”. Admittedly I didn’t go into much detail. Email is a very simple social network, there isn’t much “meat” to it, particularly as it existed when it became popular.
Email has very long-term persistence, much longer than something like Facebook, because it’s much easier to make backups of your emails than of your Facebook interactions.
I guess I don’t know what you mean by “complicated access control.”
Email is basically fire and forget. You download it to your computer and then you’ve got it forever (modern email does more, but also includes more of the privacy / data issues that come with other social networks). But most users can’t easily give other people on-demand access to their emails, which is the case with Facebook, Twitter, etc. Email is really meant for private communication (possibly with a large group, but still private), Facebook and company are for private, semi-private, and even public communication, and they require a user to be able to easily retroactively grant or retract permissions. Email doesn’t handle these other use-cases (this isn’t a fault of email, it doesn’t try to).
The ability for interested parties to interact without reply all. I can post a picture of a beautiful burrito, and people can comment or ignore at their leisure, and then reply to each other. I guess there’s some preposterous email solution where I mail out a link to an ad hoc mailing list with every update and various parties subscribe, but… meh.
Something that handles a feature like that need not be email per se, but it could have a very similar design, or be built on top of email. Something like what you suggested wouldn’t seem preposterous if the clients were set up to facilitate that kind of use.
In the case of Mastodon, which instance you pick does matter. Users can make posts that are only visible to others in the same instance. If you pick the “wrong” home instance, you’ll have to make another account in another instance to see the instance-private posts there. If you’re a new Mastodon user, you might not know that one instance is good for artists and another good for musicians, etc. In any case, this is an easily solvable problem: add descriptions and user-provided reviews to each instance.
Second, if I do that I’m now in charge of security for my data. As terrible as Twitter and Facebook are, they’re probably still a lot better at securing my data than I am
Setting a price tag on your data doesn’t secure it. There are enough scams and hoaxes on Facebook, sharing your information with other companies, that I have to disagree with you. And since those social networks are collecting more data than necessary, it is easier to lose data.
Facebook and Twitter also present single valuable targets and are thus more likely to be targeted. A hundred mastodon instances may be individually less secure due to the operators having fewer resources or less experience, but compromising a single server won’t get you as much.
That’s a good point, although Wordpress vulnerabilities are still a big deal even though there are tons of small servers. The server might not be a monolith, but if the software is then it’s only slightly more work to attack N instances.
True, although it depends whether the vulnerabilities are in the application being served or in the web server or OS serving it.
Prelude: I worked myself into a state where I needed a two-hour break after about twenty minutes of keyboarding. Two keyboards with different layouts on different desks, changing between them every few minutes, a terrible project, and some ungood stress out of the office. But as luck would have it, that office was in a… well, not in a hospital, but on campus, so I got to see a real specialist quickly. A friend in his department just brought me along. He spoke cluefully and I’ve followed his advice in the decades since. In brief: “Pay attention to your body. Learn what hurts and stop doing that. Don’t let anyone at you with a knife.” It’s served me well. I admit to a certain arrogance about it.
A colleague and I bought and tested about 10-15 keyboards, including some very expensive specials, and the ones we ended up using weren’t the most expensive ones, which probably would get you into trouble with the bookkeepers if you were to try it. Arrogance helps.
I currently use an 88-key unlabelled WASD with dampening o-rings. Because it keeps me from moving my hands much while I type, and keeps me from looking at the keyboard, and I like the feeling of the keys. You should ignore the first sentence in this paragraph and focus on the second, because the brand name isn’t important, how your body reacts is vastly important. Does it feel okay? Then okay. Not? Then change.
The same applies to your chair and desk, because your body is one. The shoulders are tightly connected to your hands. (The chair comes first, btw. You get a chair for sitting on, then a desk that suits your body on that chair. http://rant.gulbrandsen.priv.no/arnt/ideal-office has more speechifying about chairs and stuff. I speechify too much.)
This reminds me that when I was younger, I spent a lot of time behind my computer sitting on a plain old stool and I remember it was much more comfortable than the more traditional big and heavy armchair I currently use at my daily job. I think not having a backrest forces me to keep my back in a straight and comfortable position.
I use synergy to share a single keyboard and mouse across computer systems. It works fine, and it can deal with Macs, Linux and Windows. You can even cut-n-paste across the systems as well (text only though).
Have they fixed the bugs I ran across when I tried that?
I haven’t encountered those issues, but the Mac is the server, and the Linux system is the client. I don’t know if that makes any difference.
Also, I think I started using it post 2011, so maybe those bugs don’t exist in the version I’m using.
A stressful job plays a larger role than you might think. I speak from experience. About 10 years ago, I had really bad pain typing - for months. I quit my job, moved somewhere else, got a new job and the pain went away. It still flares up from time to time if I overdo it, but goes away quickly.
I love how this thing is written: only plain old Lisp that directly translates to plain old HTML and plain old SQL. No complex template engines, ORMs, multiple inheritance, events, callbacks, or other modern complex machinery I’ve struggled with in the past.
Nowadays I tend to follow a similar simplistic approach to web programming, and it’s so much better. I don’t think I’m the only one, and I think many of us try to “rediscover” this simplicity through recent projects like HyperScript or Ecto.
I think we could learn a lot from the past.
I have taken to writing web “apps” in Ruby using only the standard library. It’s way more than enough. And extremely educational to use: you’ve got to put all the pieces of a web stack together yourself. After building a few personal apps this way I have a few utility classes that cover the abstractions I care about.
The biggest is a 50 line SimpleController base class that extends WEBrick::Servlet, pre-processes request data, wraps responses with some default headers, and renders ERB templates with a render method like Rails does.
And I set up WEBrick to authenticate my client TLS certificate. I love having all the security and convenience of ssh public keys for my personal web apps too. Although I wouldn’t bother with that in a million years if MacOS Keychain Access didn’t make it trivial to generate client certificates.
- WEBrick - built in HTTP server
- ERB - built in HTML template engine
  - .rhtml files as ERB
  - erb cli tool ships with Ruby, great for debugging templates
- YAML::DBM - built in database
  - db['key'] = obj
  - DB_LOCK = Mutex.new
  - def transaction() DB_LOCK.synchronize { yield DB } end
  - transaction { |db| v = db[k]; v.a = b; db[k] = v }
- WEBrick::UserDB interface in a 10 line class
- Minitest - built in unit test framework
- Kernel.open - default open call is special magic
  - require 'open-uri' makes regular open work on http[s]:// URIs
  - content = open('https://google.com').read (Python equivalent: content = requests.get('https://google.com').text)
  - adds an .open method to URI objects
  - p = open('|cat'); p.write('neat'); p.read() == 'neat'
- Thread / Queue / Mutex / ConditionVariable / Monitor
I’d argue it’s not about rediscovering something we forgot. Doing things simply is actually really, really hard. And everyone has to learn it by themselves, I don’t believe it’s a teachable technique. And it just takes time. And by the time you’re there you couldn’t care less about hyping up your acquired knowledge and putting it on display in the form of a framework :-) That’s why trending hot stuff is invariably over-complicated.
I wrote a web app in C++ (long story) and tried to do this. For HTML I made a DSL using variadic templates and user-defined operators so I could write div("class-name"_class, p("Hello there")) etc, found a nice SQL DSL to write queries in a similar fashion, and so on. It was simpler to me than something like Django, but of course I would never use C++ for a public web app.
By the way, it seemed to me that Google uses planes to collect high resolution 3D imagery in some places.
Is it trustworthy? Or is it just another kind of clickbait? (I know nothing about networking and these claims look incredibly significant, so… I would be pleased if someone can confirm this)
so, it’s of the style of various IoT botnet scanners/hackers we’ve seen in the skiddie space, so even if from a strange source, it’s definitely fitting of the style of tools you’ll see, usually prefaced with PRIV8PRIV8PRIV8PRIV8PRIV8 or gr33tz 2 mah krew sirPWN, leetjar, ....
Furthermore, as someone who works in the penetration testing & adversarial simulation (aka “red team”) space, nothing of the document is terribly surprising: many places rely on terribly-configured infrastructure, there’s a lot of garbage floating around in networks, and teams very often take a “we don’t have money to fix that” approach to security. For example, I’ve had more clients than I care to count receive report after report detailing high or critical findings ala NIST 800-30, and yet claim to not have money for the same. I mean simple things like “sshv1 running on all internal routers” or “world-readable anonymous FTP server contains sensitive client information.”
I’ve discussed this with colleagues in the space and the general consensus is one of malaise; everyone knows this to be the case, but no one really cares. What impact did Equifax have? None, no one even thinks about these things anymore. Businesses write off these risks via Risk Acceptance, and move on. The government is more concerned about critical infrastructure, but that is a double-edged sword (and I say that as someone who used to work in gov).
tl;dr: even if not credible, the source is relatively spot on with similar “posts from the underground,” and no one really cares, because so much is broken, but businesses often can just accept the risk and move on.
I worked at a small business oriented ISP/web hosting company from around 2003 to 2010. What I remember was getting a “security audit” once that was 500 pages of crap like “OH MY GOD YOU HAVE PING ENABLED! DO YOU KNOW PEOPLE CAN FIND THOSE COMPUTERS?” and “OH FOR XXXX SAKE YOU’RE RUNNING DNS DO YOU KNOW HOW HORRIBLE THAT IS THAT PEOPLE CAN FIND YOUR COMPUTERS?” to even “XXXXXXXXXXXX YOU’RE RUNNING A WEB SERVER! ANONYMOUS PEOPLE CAN ACCESS THIS COMPUTER YOU XXXXXXX XXXXNUT!” Yeah, hard to take seriously page after page of “just cut the network cables if you want to be safe” crap.
So here’s how I would respond to the “OH XXXX YOU HAVE SSHv1 ON INTERNAL ROUTERS!” claim—“Hey boss, we need to upgrade all our Cisco routers.”
“Do you have $NNNNNN to upgrade the infrastructure?”
“You’re the one with the money.”
“Do the best that you can. I’m dealing with customers that are late with their payments.”
We were buying equipment on the second hand market because we couldn’t afford to deal directly with Cisco. So, for the sake of the Internet, we’re supposed to shut down and go quietly into the night? In the meantime, I just restricted SSH (when we got SSH on the routers—early on we were stuck with TELNET) to only accept connections from known hosts.
Oh ja, I’m not surprised at all by this either. For every good pentester I know, there are dozens or more of ZOMG LE TOOL SAYS YOU HAS 0DAY. Honestly, the infosec industry is one of shills, and the infosec community is one of hero worshipping cliques. It’s pretty rough at times to be a simple professional.
Wrt your example of SSHv1, the overall risk for me would depend on what other environmental controls are in place. For example, I worked at an ISP that had all management interfaces exposed only to a special administration VLAN for routers. So, the likelihood in that case would be very low; an attacker would either have to transit multiple security boundaries and launch a fairly noisy attack, or it would have to be a malicious internal attacker who would likely already have legitimate access to those same devices. The impact is high regardless because this could impact core business functionality. Very low x High = low, please fix it during your next upgrade cycle.
And that’s my problem with the “hah! they should have just patched everything!” mentality: people don’t have the money or time to take infrastructure down. I mean, good heavens, Equifax blamed one person… clearly that’s a sign of a dysfunctional org if there ever was one.
Yet another approach is not to use setters that actually modify the object, but return a new modified object:
public class Person {
    private String name;

    public String getName() {
        return name;
    }

    // Returns a modified copy instead of mutating this object.
    public Person setName(String newName) {
        Person p = new Person();
        p.name = newName;
        return p;
    }
}
(It’s non-idiomatic, terribly verbose and there is probably a much better way to implement this, but you get the idea: immutability.)
While parser combinators tend not to have an explicit tokenizer step, I find it useful to still maintain the distinction between tokenization and AST building when using them.
It’s much easier to write them without getting lost if you separate the types of parsing they each represent. It also forces you to focus on the primitives of your grammar separately from the way they combine to build the ASTs.
I also find that it makes adding error handling and reporting slightly easier when using parser combinators, an area where they have historically given a lot of people trouble.
I agree with you, and you can easily write a tokenizer and an AST builder using parser combinators: http://parsy.readthedocs.io/en/latest/howto/lexing.html
In fact, this is possible because Parsy works on any iterable: strings, lists, sets, etc. Parsy handles token lists exactly like strings.
Parsy looks cool. I haven’t used it but it looks like something I could see myself using. You are right that good parser combinator libraries will work with any iterable so separating the two is usually not that much effort and the gains are well worth it.
It should also be faster to do separate tokenization if you have any backtracking.
I was just discussing this on a subreddit:
My argument was that you should do the easy thing with the fast algorithm (lexing in linear time with regexes), and the hard thing with the powerful but slow algorithm (backtracking, whatever flavor of CFG, etc.).
I did the same thing with PEGs. PEGs are usually described as “scannerless”, but there’s no reason you can’t lex first and operate on tokens rather than characters.
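The two-stage approach discussed above can be sketched in a few lines of plain Python, without Parsy or any other library (the grammar here is a made-up toy, just left-associative addition): a linear-time regex pass produces tokens, and a separate parser over the token list builds the AST.

```python
import re

# Stage 1: tokenization with a fast linear-time regex scan.
# Each match is either a number or a single operator character.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(text):
    tokens = []
    for number, op in TOKEN_RE.findall(text):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    return tokens

# Stage 2: a tiny parser over the token list (standing in for the
# combinator layer) that builds an AST for expressions like "1 + 2 + 3".
def parse_expr(tokens, pos=0):
    kind, value = tokens[pos]
    assert kind == "NUM", "expression must start with a number"
    node, pos = value, pos + 1
    while pos < len(tokens) and tokens[pos] == ("OP", "+"):
        kind, rhs = tokens[pos + 1]
        assert kind == "NUM", "operator must be followed by a number"
        node = ("+", node, rhs)  # left-associative fold
        pos += 2
    return node, pos

ast, _ = parse_expr(tokenize("1 + 2 + 3"))
# ast == ("+", ("+", 1, 2), 3)
```

Because stage 2 only ever sees a flat token list, any backtracking it does re-examines tokens rather than re-running the character-level regexes, which is where the speed advantage mentioned above comes from.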
This will be my very last weekend working on my manuscript; I am expected to send it to my reviewers on September 1st.
Good luck!