Has anyone informed them that using the term “sexy” in a technological context is, to use the proper term of art, “highly problematic”?
It certainly is “unprofessional”. Do I live under a rock, or is it an American-centric view that this is a “highly problematic” term? It may be too playful in a professional context, but it does not strike me as sexist. Men are also described as sexy. What am I missing?
It’s sad, however, that there are only two comments in this thread about Geany itself. That alone suggests it’s enough of an issue.
You don’t live under a rock. Had I read this a few years back I’d have reacted much the same way.
However, look at it from a different perspective. You’re a woman. You’re hoping to break into this industry and maybe make some contributions to an open source project.
I am NOT saying that simply using the word ‘sexy’ guarantees that you’ll also engage in other locker room talk that could make people uncomfortable, but it’s a big red flag that you might, and that’s often enough to keep people away.
So, from my perspective as an ally, I did the right thing. I filed an issue, and the project can either ignore it or take action on it as they see fit. I’ve done my bit.
The authors do seem to be mainly non-native English speakers. AFAICT they’re mostly(?) German, and I may be wrong (my German is poor), but I think “sexy” as a loanword in German is often used as a synonym for “chic” or “trendy” or “cool”, e.g. you can apply it to a product like a bicycle or coffee maker without implying anything sexual about them. That used to also be true in English, but it’s pretty old-fashioned/obsolete usage nowadays. Maybe they don’t notice that the connotations in modern English make that usage not really work? Anyway, since @feoh filed an issue, we can see what they think once someone’s pointed it out.
I wouldn’t doubt it! I’m not trying to fling poo here. I’m trying to sensitize people who are probably NOT aware of the implications of the words they’ve chosen.
All this implies women are so sensitive, and weak for their sensitivities, that they’d join “tech” but they saw the word sexy being used in admiration, got triggered and retreated to their safe space instead.
Good job on belittling women by trying to conjure a shitstorm over something this trivial.
I think “sexy” as a loanword in German is often used as a synonym for “chic” or “trendy” or “cool”, e.g. you can apply it to a product like a bicycle or coffee maker without implying anything sexual about them.
Someone should let them know that if they write “sexy”—or any other word, for that matter—over and over again in big bold headlines all over a webpage, people (read: Americans) will read into it, regardless of their intention.
Sexy is not a bad word. Also, I am from over there, and the world is not only about America; in my opinion Americans should respect our culture as much as they want theirs to be respected.
Honestly guys, you are overshooting. Not a single woman I know would be offended or kept from contributing to Geany just because the project refers to itself as being sexy. And this includes one woman from Poland. I haven’t seen a single woman on lobste.rs complaining here either. So maybe being overprotective, as a man, about how women might feel could itself be recognized as chauvinism.
My point being: there will always be somebody who feels offended because of their gender, nationality, taste, race, or whatever other reason. People need to relax.
Honestly, I feel more and more that lobste.rs is about patronizing people and the technical discussions are becoming background noise 🙁
My impression was that it’s not common anymore in American English at least; something like “machine learning is becoming a sexy field” (vs. “trendy field”) feels like quite dated slang to me. But I could be overgeneralizing something that’s only true in certain regions or sociolects.
I just did. Created an issue: https://github.com/geany/geany/issues/1672 Please consider +1 if you agree.
So, there’s a lot to be said about scale when talking about network diagrams. It might be relatively trivial to generate something that can be mapped out with dot/graphviz if your scale is small (like a local network) or if your detail granularity is low (like only the actual lines between elements, the logical links between routers).
However, if your purpose is proper documentation, it becomes non-trivial, and I have yet to see auto-generated diagrams that read intuitively enough to serve as proper documentation.
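For the small-scale case, a minimal sketch of what dot/graphviz can map out (all hostnames here are made up for illustration):

```dot
// Hypothetical small office network; render with: dot -Tpng net.dot -o net.png
graph net {
    rankdir=LR;
    router  [shape=box];
    switch1 [shape=box];
    router -- switch1;
    switch1 -- "ws-01";
    switch1 -- "ws-02";
    router -- "vpn-remote" [style=dashed, label="VPN"];
}
```

Anything much beyond this scale quickly needs manual layout work to stay readable.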
I prefer OmniGraffle on Mac to draw my diagrams, but for me it has become a lot less about the tools and a lot more about how you draw. Good network diagramming is a skill that has to be learned, I believe. It has a lot to do with learning how to draw complex topologies in a readable fashion, how to order elements on the canvas, and all that.
I would advise you to find a tool that you believe has the features you require, and then train yourself by focusing only on the things that you draw on the diagram and how they are presented.
I would advise you to find a tool that you believe has the features you require, and then train yourself by focusing only on the things that you draw on the diagram and how they are presented.
Which is very good advice. The only problem I have with this is that there are multiple hands touching the graphs where I work, and they have different opinions about the right way to draw a diagram, so over time the diagrams get polluted by different styles and mindsets, making them virtually unmaintainable. I know this is a social problem for which I am searching for a technical solution, which is … not optimal. The other part is my hope that an automated process could help against documentation rotting away.
But I totally agree that it is a skill to be learned, and once you’ve mastered it, the tooling becomes irrelevant.
Yeah, that is a problem, and with no simple solution.
I worked at a place once that solved at least some of these problems by having formalised guidelines for how to draw diagrams, including stencils for everything, all lines and shapes.
That at least helps to keep diagrams stylistically similar, but I totally get why you want the automated approach.
How large are the networks you guys graph with graphviz? I am looking for something capable of handling small networks as well as larger ones (multinational company with ~12 VPN sites).
A feature of our VSAT terminals allows us to quickly reset the password and regain control of the terminal in the instance of passwords being compromised
Does this mean there is a set of hardcoded super-admin credentials?
At the very least it’s backdoored in some form. It gives me the chills that they are selling it as a feature. Also this
They also note that it’s standard practice in the industry to deliver systems with default, hard-coded credentials, based on the assumption they’ll be reset with something stronger later.
shows how little they care about their customers, and it’s one of the reasons I don’t have a good opinion of standard industry practices or best practices. Look Mum, everybody is doing it, so how am I to blame?
I’ve converted my mail flow to use mblaze. Along with fsf and a few helper scripts, it’s perfect for my use. Used in conjunction with offlineimap + msmtp, and $EDITOR.
Never looking back.
I think mblaze is this and I think fsf would be some sort of fuzzyfinder?
Maybe fzf? I could see that being a nice workflow.
Correct. I just hooked it up to my script and forgot about it. It does nicely with the --preview option. There’s probably more I could do with that, but for now it solves pretty much all of my mail-consumption-related issues.
Aside from a regular offlineimap setup and an also-regular msmtp setup, I have two scripts to help me with mblaze: one called “mymail” and one called “mshowwithawk” (which is a mouthful, but I never invoke it by hand so I don’t care).
mshowwithawk is:
#!/bin/bash
# print the second field of the line fzf hands over (the message number)
mshow $(echo "$1" | awk '{print $2}')
and mymail is:
#!/bin/bash
mlist -s ~/Mail/$1/$2 | msort -d | mseq -S | mscan 2> /dev/null | fzf --preview="mshowwithawk {}"
Usage is: mymail <any ~/Mail/ subdir that contains maildirs> . Reason for this script is I have two accounts, and I often switch between my work and personal email, so I often call it like “mymail otremblay INBOX” or “mymail work INBOX”. Next improvements are gonna be defaulting to INBOX and allowing for the -s flag to be passed or not from the mymail script (because sometimes, I need to see old mail too.)
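The “default to INBOX” improvement could be as small as a POSIX default-value expansion; a sketch (mymail_path is a made-up helper name, and the mblaze pipeline itself would stay as in the script above):

```shell
#!/bin/sh
# Sketch: compute the maildir path, falling back to INBOX when no
# second argument is given.
mymail_path() {
    account=$1
    maildir=${2:-INBOX}   # POSIX ${var:-default} expansion
    printf '%s\n' "$HOME/Mail/$account/$maildir"
}

mymail_path work            # -> $HOME/Mail/work/INBOX
mymail_path otremblay Sent  # -> $HOME/Mail/otremblay/Sent
```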
The output is a list of selectable items, with a preview of the currently selected item on the right. Yes, right in the terminal. The list is populated by mail prefixed with an ID I can then use with mshow if I need better output (say, in case the email provided a worthless (but still present) text/plain part). I use elinks to dump out text/html messages (configurable in mblaze).
I use mblaze’s “minc” to pass messages from the maildir’s new to cur, and mflag -S to flag everything as read once I’m done.
I like the workflow because it is just a construction of a collection of small specialized programs working together. I mean, if needed, I can still just invoke mlist by itself and grep through email headers, if I so desire. Or pump the whole output elsewhere to any other unix-standard utility if I want to. Heck, it would be trivial to include spamassassin header parsing, or any other kind of header parsing. I’m also a sucker for CLI interfaces, mostly on account of it being the easiest way I know to compose software with one another out of small blocks. I feel like I should probably start a blog about my crap, but I’m afraid that said crap would be too trivial for people to enjoy.
mblaze is indeed pretty nice (similar to mu). I use it to automate some tasks in my email workflow (archiving, marking as done, digging up the full thread when a mail arrives, …) and it helps me a ton. But when it comes to actually reading and replying to mails, it doesn’t cut it, so I use mutt for that.
Hm, an annoyingly large portion of the text goes off the screen on my phone with no way to scroll over there, even in landscape mode and with “Request desktop site” enabled in chrome. Makes it rather annoying to read.
I even have to go to landscape mode on my iPad otherwise I face the same issue, just to have lots of margins on both left and right side of the article. Not cool.
… which makes the following rather funny:
Dozuki makes documentation software for everything — from visual work instructions for manufacturing to product manuals that will make your customers love you.
One of the complaints linked herein is one of my biggest complaints about all of the free operating systems: man pages aren’t kept in sync with what is actually there.
It’s not enough to make the man pages reflect things that are added; you need to update the pages that reference those pages too.
Major pet peeves.
I really like how OpenBSD handles man page issues - missing, outdated, unclear, or incorrect documentation is considered a bug. Of course, what they have to back that up is a culture that considers those bugs serious and actually fixes them. For a user, it means you can probably use apropos instead of Google and still get good and relevant information.
Yeah, I actually managed to configure the wifi on my laptop with just apropos and man pages on OpenBSD. The culture is wrong with Linux.
For a user, it means you can probably use apropos instead of Google and still get good and relevant information.
Not only still but often even better results than by searching the web. At least that’s my experience when it comes to OpenBSD man pages vs Internet tutorials…
Thank you for using (and explaining) the name constraint extension. It’s a really useful feature for cases like this one. You point it out as odd that the certificate itself mandates the constraints (instead of the user) and I agree that user control would be an interesting (advanced!) feature here. But with a parent node in a chain of trust, I’d say it makes sense again. Especially given that every certificate you can get these days is not generated by you but for you, with all data and attributes created and modified on your behalf.
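For reference, the name constraint extension being discussed looks roughly like this in an OpenSSL x509v3 config section (the section name and domain are examples, not taken from the article):

```
[ v3_constrained_ca ]
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:example.com
```

A CA certificate carrying this extension can only sign leaf certificates for example.com and its subdomains; clients that honor the extension reject anything else it signs.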
Having said that and understanding your reluctance to rely on a third-party (i.e., a real CA) for your availability, I personally can’t find a way to accept this as a valid concern. Isn’t it very hypothetical?
I like to compare to the ssh model, which isn’t perfect either, but is often simpler and fails in more predictable ways. When I add a key to known_hosts, I specify the hostname and the key. But doing so doesn’t automatically mean I trust that key for all the other hostnames embedded within it (of which there aren’t any, but you see the point.) In my opinion, asking a user to inspect a cert and make sure it only does what it says it does is high risk and prone to failure. If you inspect the cert I’ve provided with the right tools, you can assess what it does, though of course I could also misspell (perhaps with Unicode) some fields, or toss an inconspicuous but quite powerful dot in somewhere. Making the user enter the name of the site they trust it for would be much safer. We’ve tried to make things “easy”, but the end result is a system that’s actually incredibly difficult to use safely.
I don’t think my concerns are hypothetical. Not too long ago: https://blog.hboeck.de/archives/886-The-Problem-with-OCSP-Stapling-and-Must-Staple-and-why-Certificate-Revocation-is-still-broken.html
We can point our fingers at OCSP in this case, but I think that’s in sufficiently close proximity to justify concerns about systemic fragility.
I did try LE back when they started, but was rejected because my email was “malformed” which isn’t the problem you think. It wasn’t because I had a plus in it. It’s because I don’t have an A record for my domain. I only need an MX record. So that’s two problems. One observed by lots of people, and a second (quite minor) one that I personally experienced. When people tell me to try it because “it just works” I’m skeptical because I’ve seen it not work. I’m picking on LE, but I have little reason to believe they are outliers in this regard.
I like to compare to the ssh model, which isn’t perfect either, but is often simpler and fails in more predictable ways. When I add a key to known_hosts
I think the problem starts here. Most people seem to be interested in transport encryption and not authenticity, i.e. they care more about not being spied upon than about whether Bob really is Bob. I’d argue that’s why everybody just says yes to “add key to known_hosts” or “do you want to trust this cert”. But this is my opinion as a layman.
What I like about the SSH model is that it comes with cert pinning built in, but then again, I normally have control over the boxes I ssh into, so I know when host keys changed; I will never be in a position to know if it’s ok that the cert for e.g. Amazon changed. So what we would need is maybe something like OpenBSD key rotation, where my current cert also knows about the next cert and my browser can check if a new cert is actually ok. The question remains how to build trust on your first visit, and what about homoglyphs…
I sometimes feel like checking the authenticity of a given 3rd party entity on the internet is a lost cause.
I’d love it if the general public could be relied on to know the difference between transport encryption and authenticity.
For use-cases like “is this the real amazon.com, is this really my credit union”, authenticity continues to be important. I agree that it’s far less important for blogs, or for my own favorite transport-encryption example - not leaking your webmd history to MITMs.
Talking to a fake webmd seems like it could be pretty bad tbh. It might tell you your cancer symptoms are nothing to worry about, or something.
Or give your insurance company evidence to deny a claim for preexisting conditions (fairly or not).
Absolutely, but I think most people are more concerned about somebody spying on them than about running into an imposter / MITM, thus they click away the error so they can get to the content. Funnily enough, I think this is a statement about how nice humanity actually is, because we do not expect a stranger to rip us off at first sight.
I think that’s the hacker bubble; most non techies I know are much more frightened of having their credit cards stolen.
I don’t want to leave the impression that I think authenticity is unimportant. But I have grown the impression that our subconscious wants to believe imposters are nothing but a product of our fantasies, and for good reason: imagine a world in which we would constantly question the authenticity of the information provided. I doubt it would be a nice place to live in.
Thus I think a solution that involves user interaction is destined to fail. But I am starting to be way off-topic.
Making the user enter the name of the site they trust it for would be much safer. We’ve tried to make things “easy”, but the end result is a system that’s actually incredibly difficult to use safely.
I’m also not sure why user-specified and cert-specified would be mutually exclusive. Using the intersection of them would make perfect sense. This way, a root cert can claim it’s valid for anything, but I might want to trust it only for *.blah.com.
It is somewhat annoying that this sort of scoping is available in my adblocker, but not in my TLS trust model.
From what I understand, most CAs today won’t just cross-sign a customer CA though. Doing so would in fact likely get them marked as untrusted in most browsers, I imagine. Combined with (based on my reading) somewhat spotty support for name constraints, the best you can hope for today seems to be either flashing lights and klaxons (self-signed cert warnings), or hoping for the best and installing/trusting the signing private CA (many corporations do this for internal uses).
Congratulations on the formal proof and otherwise solid work. You’re almost at EAL7, with I think only one other VPN designed to that strength. The Navy cancelled it, though, right before evaluation. Just add covert-channel analysis and source-to-object-code verification to top it off. Then you’ll be number 1 as far as the implementation goes.
EDIT: Also, we have a formal methods tag now to highlight these posts.
I guess getting the big vendors like e.g. Cisco, Check Point and Fortinet to support it will be the hardest part.
It would be ideal, but don’t get your hopes up: big vendors usually want to use stuff like this for free or create crappy knock-offs. High-assurance security gets little love from either the proprietary or FOSS sectors. It’s almost always CompSci people/companies, defense contractors, smartcard companies, or an occasional private company/person trying to create something better with security as a differentiator. When they do, it’s uncommon for it to be FOSS (outside CompSci), since high-assurance software takes a lot of labor. They usually want something in return.
This problem led in the 1990s to the concept of “incremental” assurance. You create a solution for the clients the fast, market-grabbing, money-making way. Make sure it’s decomposable with a decent API or whatever to facilitate rewrites. Part of the money you make on your main product/service can be put into redoing critical parts of the software with high-assurance security. Although there’s little data to test that, a co-founder of INFOSEC, Paul Karger, used it in his last project at IBM to fund a high-assurance smartcard OS (Caernarvon). He broke up the long-term project into intermediate deliverables IBM could sell independently, or at least see something produced to justify funding.
I don’t fully agree with either of the articles, but I am more in favour of this one. Yes, he has some silly points, but the original article also has some fairly tinfoil-hatty stuff like ‘do you trust your network equipment?’.
How about: do you trust your computer’s hardware vendor, or the server’s? Or compromised curves or bad DH params? I am also 100% sure that my website does not need HTTPS. I am likewise aware that some third party might tinker with the last postcard on its way. I am aware of the pitfalls of both use cases and I made an informed decision about them. But, as n-gate also admits, there are situations where HTTPS should be set up, like forms.
mysql -uroot -p -e "GRANT ALL ON smtpd.* TO 'opensmtpd'@'127.0.0.1' IDENTIFIED BY 'opensmtpdpass'"
Hm. Can’t see why opensmtpd needs GRANT ALL on smtpd.* Am I overlooking something, or could OpenSMTPD and Dovecot live with read-only access to the database?
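If read-only access does turn out to be sufficient, the grant could be narrowed to something like this (user/host/password taken from the quoted command; untested against that tutorial’s setup):

```sql
GRANT SELECT ON smtpd.* TO 'opensmtpd'@'127.0.0.1' IDENTIFIED BY 'opensmtpdpass';
```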
Thanks for confirming. I would leave a comment over at cagedmonster but can’t see where. Maybe he’ll see it here.
Feels like a commercial as the site itself doesn’t give me any useful information. Would you mind giving some insight into ClearPath?
ClearPath
OS 2200 is the operating system for the ClearPath Dorado mainframe systems. The OS 2200 operating system is directly descended from EXEC-8, and the Unisys ClearPath mainframes in this series are the descendants of the UNIVAC. See https://en.wikipedia.org/wiki/OS_2200 for info.
MCP is the operating system for the ClearPath Forward mainframe systems. The MCP operating system is directly descended from the original MCP and the Unisys ClearPath mainframes in that series are descendants of Burroughs Large system mainframes. https://en.wikipedia.org/wiki/Burroughs_MCP for info.
These are essentially the only two widely deployed mainframe systems that remain in constant mission-critical production outside of IBM systems and I think it is quite the treat that Unisys, the current owner of these platforms, has made free-of-cost virtualization solutions for these platforms available to mere hobbyists and enthusiasts.
Not at all. It’s sometimes a challenge to balance editorializing vs. providing some additional context when you are familiar with the products but others are not. IBM offers a similar emulation/virtualization environment for developers but not for hobbyists and they enforce a developer training and certification requirement and charge a $900 fee for the ADCD - Application Development Controlled Distribution, which has very specific licensing restrictions disallowing any use beyond development. http://dtsc.dfw.ibm.com/adcd.html has info.
In this context the Unisys offer, while not a “free” nor open-source distribution that the Linux or BSD community might want to see, is quite generous when you look at the industry norm.
Very similar to this story: https://lobste.rs/s/yooxnl/running_openbsd_on_azure but I was unsure if I should flag it…
[Comment removed by author]
I could’ve sworn there’s been quite a bit of work gone into improving and streamlining the installer in the past ten years.
You don’t need to worry about MBR and CHS vs LBA as much for sure. And there’s a slightly different set of questions, but the principle is very much the same. It would be hard for me to identify what release of OpenBSD was being installed merely from a list of questions asked without consulting a reference sheet. Maybe it asks about ntpd or maybe not, but you’ve really got to be paying attention to notice.
This is something I often get when I show others how to install OpenBSD. But then they are stunned by how fast the process is and that the defaults are sane; most of the time I just hit enter. Also, I find it sexy that I can drop into a shell for setting up complicated stuff the installer doesn’t handle, and pick up the installation more or less where I left it.
I think the installer doesn’t get enough credit just because it’s not using X11 or curses.
~C is nice if you realize that you need a forward when you already have a connection open, or if you’re using some tool that makes an SSH connection for you, but doesn’t give you easy access to the commandline.
~C is one of the most used escape sequences for me as it is not always clear from the beginning that I need a forwarding. I also have the feeling that many people don’t know about it.
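For anyone who hasn’t seen it: typing ~C at the start of a line in an active OpenSSH session opens a command prompt where forwards can be added after the fact (hostname and ports here are examples):

```
$ ssh example.org
...
~C
ssh> -L 8080:localhost:80
Forwarding port.
```

The ~ only registers directly after a newline, which is probably part of why the feature stays undiscovered.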
Nice read, even though he wasn’t right about “hacking is cool” being dead in 10 years. But I can understand the attraction: breaking is easier than fixing, let alone designing, so a lot of people choose the path of earliest satisfaction: pentesting. I’d love to see something like DEF CON for the blue team emerge.
Defensive Security. The episodes last around an hour and are – for me at least – fun to listen to.
That was the final push I needed, and I am actually declaring Wednesday my ed day: I am going to use ed exclusively as my editor on Wednesdays wherever possible. And I actually enjoyed it today :-)
And being an OpenBSD user I already had some moments where knowledge of ed would have been very handy.
Finally there is some movement on the SSL front. I wonder how long OpenSSL will last.
Here is Theo’s response to BoringSSL: http://marc.info/?l=openbsd-tech&m=140332790726752&w=2
“there will be less comment deletion when users become inactive by deactivating or being banned”
This is a great improvement, since we’ve had some great comments deleted as a side effect of deactivation. Since you said you didn’t undelete, does that mean the comments “removed by author” (or whatever it says) are still there, where they could be restored if the account owner chose? Or am I misunderstanding?
Yes, users can undelete comments unless they were deleted by a moderator. Lobsters does “soft” comment deletion via a boolean
is_deleted on the comments table.
Would it be possible to make the “removed by author” delete hide the username as well?
Yes, but overwriting the data prevents a lot of very easy-to-make future bugs that would present it.
Doesn’t have to be overwritten in the database. It just has to show [deleted] or something on the frontend.
At least for the being-banned part, I dislike the decision. A ban expresses the wish that someone should no longer be part of the community, and this should include all of their statements. For one, most bans will most likely happen because of comments, and secondly, I think it good style not to use the work of someone with whom I don’t want any interaction. At the very least, non-deletion should be an opt-in feature so the banned user can decide the fate of his contributions.
The rationale here (I guess) is that when comments are deleted (even bad ones), context for the comments around them is lost. Some users (like me) read lobsters comments weeks after the last comment has been posted; by removing some comments, the threads start to become a big mess and you have to become an expert at guessing what’s missing.
I totally agree that if someone has been banned, it’s probably because of his comments, but as you’ll note, when such bad comments are written, there are often very good replies that are well-informed and really nice for the thread.
There are some pros and cons about it and, to me, when there is a doubt, better to keep more information than to trash it.
Ha! Thank you for better expressing my thoughts than I was able to do myself 😀
That sounds like an all-or-nothing decision. Nuance would require considering that a person might do good or bad things (in the eyes of the community) on a site, with moderation concerned about getting rid of the bad things. The good things they do, especially insightful comments, can still benefit people at the time and future readers. @pushcx previously said something along the lines of this site becoming a treasure trove of discussions worth archiving, sharing, or Googling. That’s what I hope as well. His policy seems to support that goal by only throwing out the trash (negative scores) of banned users while keeping the treasure (neutral or good scores).
On top of this, you might also not want to use those comments in light of your preference to avoid further interactions with such people. I can see the sense in it. Them just being here without the person being here is probably not going to hurt us in general case, though. Far as opt-in to non-deletion, two things come to mind:
(a) That’s giving extra privileges and developer time to someone we’ve already decided to punish severely. Are they even worth the time? And do we want to let them take back any good stuff they gave after they’ve annoyed us enough for a ban? As in, we’d get nothing versus something out of whatever time we put into them.
(b) Copyright Law. Depending on country (esp in US though), whoever publishes something has rights to how their content is shown and distributed. There may be some copyright consideration here. A preemptive solution would be modifying the site’s terms so that all comments are Creative Commons by default. Whatever is most permissive. It would be on invite page with a notice to current Lobsters that it applies from now on and retroactively to all comments. People leaving for legal reasons could fully delete their comments.
All or nothing it is but I am in favor of keeping them all if the banned user is ok with it because:
o I think it is fair, after being banned, to have a say in this. I don’t think this gives somebody power over anything besides his own contributions. Also it’s easy to do: all you need is a single SQL statement to flip the is_deleted flag mentioned by @pushcx above, or am I misunderstanding something? Also there is zero dev time; an e-mail from the banned user to the sysop, if he feels it’s ok to keep his comments, should suffice
o Only deleting negative comments is also harmful, as it removes context and might lead to stories being told only by the winners. Especially as I didn’t see a threshold being mentioned. So -1 is enough to be eradicated…
o I don’t think a single ban happened because the majority of the community requested it. I also don’t think a single comment got the majority of votes from the users following the chosen tags. Most people are quiet. So I find it hard to justify the decision with “it’s better for the community”
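The single SQL statement mentioned in the first point could look like this (assuming a user_id column next to the is_deleted flag; the column name is a guess, only is_deleted is confirmed above):

```sql
-- honor a banned user's request to remove their comments
UPDATE comments SET is_deleted = TRUE WHERE user_id = 123;
```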
If I have to choose between keeping or deleting, my actual preference would be to keep all unless the law forbids it or the banned user asks for deletion of all of his comments. This would mean that all voices are heard; we are adults and should be able to deal with unpleasant stuff.
I hope banning is something that happens very seldom, and I’d rather deal with the blanks than start making arbitrary decisions about good and bad.
If this site is going to become a treasure trove of good conversations worth archiving, I’d like to see what some of the good threads we’ve had so far are.
I consider those anything where I see something really interesting or well-explained here that I usually don’t see elsewhere and that’s worth passing on. I’ve seen quite a few over the year you’ve been here. You really haven’t learned anything or seen a unique comment you enjoyed here?
I probably phrased that poorly.
I’ve seen and enjoyed a lot of conversations here. What I’m wondering about is being able to discover said conversations as a new user, or after the fact. A “Best of Lobsters”, if you will. They don’t happen quite every day, and I’m not aware of any lists of those. I’ll probably start tracking them in my wiki, but yeah.
The fact that your ability to dig into the history of your threads only goes back a couple of months, for example, keeps me from being able to reminisce without basically writing a program to scrape the whole site.
I do think this site can be a treasure, but I’d kinda like a map. :)
“What I’m wondering about is being able to discover said conversations as a new user, or after the fact. A “Best of Lobsters”, if you will.”
I see what you mean now. I’ve been wondering same thing recently. It would probably have to be us curating the best that we post somewhere over time. HN added a feature for it at the site level called “favorites” where you just click that button on a story or comment to add it to your favorites page. For our site, lists might be made for each user, integrated from many users, or even done on a per tag basis. I don’t have a specific recommendation for now past bookmarking/saving good stuff to hand to people later when appropriate.
“The fact that your ability to dig into the history of your threads only goes back a couple of months, for example, keeps me from being able to reminisce without basically writing a program to scrape the whole site.”
(@pushcx might find this interesting, esp second paragraph)
Oh yeah, that’s a great point that reminds me of something. It is hard versus some other sites to find an older conversation due to limits of how far I can go back. On HN, I’ve been able to help people who replied late (i.e. work/family reasons) by opening a chain of older posts in new tabs that I quickly use “Find” on. I found our conversation in just a few minutes that way. There probably should be no limit past a rate limit either in general or when the site has significant activity. Basically, one user not DDOSing another.
Now, one might say just search for the info. That doesn’t work. The site search is very unreliable for me. That would be OK given that I usually do a site-specific search on DuckDuckGo or Google, which for most sites gets close to the conversation I need. Even worse, those either find no threads or only a few, since they don’t seem to index Lobsters at all. I mean, even Google doesn’t know much of this site exists. I’m not sure why, since web tech and SEO aren’t my thing. If this is to be an archive, then it’s imperative that be fixed, because something invisible to Google is asking to be overlooked or forgotten over time.
In my case, there were actually people asking for my comments or mini-essays on certain topics. I also wanted to revisit some from others. I couldn’t find them in Lobsters search or Google. One gave me threads with little relevance. One gave me “no results.” So, I just gave up on those prior writings until Google indexes the site or some other solution is found.