As someone who literally learned about Gopher first, and the World Wide Web second (in an information retrieval module in university), this article has a great perspective - historical!
It’s important, I think, to note that in the pre-Cambrian explosion of internet use in the mid-90s, Gopher was a centralized, locked-down service. It was something used by university librarians to organize access to documents. HTTP was the Wild West - students, mostly young people, were given free rein within their quotas of ~home to express themselves how they wished. The university library didn’t have a Star Trek section - the students’ homepages most definitely did.
That’s definitely true. I actually talked to the university when I was a student about putting content on the gopher server, and they weren’t sure administratively how that could work (though they already knew about my .plan “service”).
This is just the kind of content I joined lobsters to be introduced to. Well written, interesting, and authored by someone who has both experience in and admiration of the subject. Thank you for sharing.
This is a nice read, well written. I enjoyed it.
Thanks! I wanted to give a little different perspective, since I think it’s usually approached from a Web point of view.
I came to Gemini first – I didn’t really get Gopher, but Gemini made just enough sense to me that I grokked it, then I was able to come to Gopher that way.
I really appreciate this article coming from the other direction. It gives more of a context to the whole thing, which is nice. The Gemini ML is having a lot of discussion right now about adding things to make it more HTTP-ish, which I think is misguided – Gemini is inherently an outgrowth of Gopher (solderpunk started thinking of it as a more secure Gopher, and the other (imo) QOL improvements came from there), so there’s a lot of HTTP stuff that isn’t wanted.
Anyway, I agree that Gemini is a fun “middle-child” of Gopher and HTTP, and that it’s not going to supplant either – it’s just a cool hobbyist space for me.
That’s an interesting lateral motion. But again, I think it proves that Gopher and Gemini really aren’t splitting the user base. They have different appeals to different people.
I agree 100%. Sometimes I go over to Gopher to read some stuff there, and it’s good too! What I like about both protocols is their focus on content, rather than presentation. And ASCII art, of course :)
Idle thought: I would like a “web reboot” that is data-driven … Basically the client could measure how well the page is performing, and the servers would get feedback on it, and (in some fantasy world) the page authors would adjust to that.
For some color on that, I actually like images and videos on the web. And mathematical formulas, and SVG diagrams like Richard Hipp’s recent pikchr:
https://pikchr.org/home/doc/trunk/doc/examples.md
But I think it would be cool if there was a somewhat compatible but stripped-down browser that enforces a network transfer time limit and a rendering time limit of, say, one second.
And then it would send back a “reverse error” like an “HTTP 600” if that time is exceeded. It would just stop downloading or stop rendering.
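To make that concrete, here’s a rough sketch of what the budget-enforcing fetch could look like. It’s purely illustrative: fetch_with_budget is a made-up name, 600 is the imaginary “reverse error” from above (not a real HTTP status), and I’m assuming the Python requests library for the transfer.

```python
# Sketch of a client-side transfer budget.  Illustrative only:
# 600 is the hypothetical "reverse error" code, not a real status,
# and the reporting path back to the server is left out.
import time
import requests

TIME_BUDGET = 1.0         # seconds allowed for the whole transfer
HYPOTHETICAL_ABORT = 600  # imaginary "data transfer aborted" code

def fetch_with_budget(url):
    start = time.monotonic()
    body = bytearray()
    # `timeout` bounds connect/read waits; the loop below bounds
    # the total transfer time.
    with requests.get(url, stream=True, timeout=TIME_BUDGET) as resp:
        for chunk in resp.iter_content(chunk_size=8192):
            body.extend(chunk)
            if time.monotonic() - start > TIME_BUDGET:
                # Stop downloading; a cooperating client would report
                # the abort back to the server here (not shown).
                return None, HYPOTHETICAL_ABORT
        return bytes(body), resp.status_code

page, status = fetch_with_budget("https://www.oilshell.org/")
print(status)
```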
The obvious problem is the incentives… Most people would just wait longer than 1 second. They want the free content, and get addicted to the free content, hence suffering through all the ads.
But I guess the end result is that you could browse https://www.oilshell.org/ with such a browser and I wouldn’t have to change my content :)
And for sure I would look in the logs to see how many people were sending back the “data transfer aborted” and “rendering aborted” codes …
I guess a different variation on this idea is if the client sends the codes to some kind of reputation service. (There is an obvious privacy problem there, but probably some mitigations too.)
And you could have a search engine that uses that as a signal … serving up pages with better latency.
In fact I thought Google at one point used latency as a signal for ranking, but I find that hard to square with the state of the web right now … it must be a very weak signal. I guess the problem is that sometimes high-quality content is on a terrible page. That seems to be how the economics have worked out.
So again this is basically a “soft migration” rather than a “web reboot”.
> Basically the client could measure how well the page is performing, and the servers would get feedback on it
That’s what Google is claiming to do when ranking websites. But it would downrank websites that typically bring in a lot of ad revenue, so they don’t actually do it :(
In a fantasy world where we had competition in the search engine space, website authors would probably adjust their pages, just like they do currently by creating lots of spam filler content for SEO.
What are people using for Gopher server and client software these days?
My own. Floodgap’s backend is Bucktooth (gopher://gopher.floodgap.com/1/buck) and I usually use Firefox with OverbiteNX (https://addons.mozilla.org/en-US/firefox/addon/overbitenx/) or Overbite Android on my phone (https://gopher.floodgap.com/overbite/).
However, Gophernicus is the server I see deployed most frequently nowadays (https://github.com/gophernicus/gophernicus).
Are you the guy behind floodgap?
Yes.
I really enjoy floodgap, thank you for maintaining it!
*tips hat* My pleasure!
There’s a lot of documentation on gopher://gopher.floodgap.com/1/buck, thank you for this.
Plain old pygopherd, even got it working on Ubuntu 20.04: https://raymii.org/s/tutorials/Installing_PyGopherd_on_Ubuntu_20.04.html - client-wise OverbiteNX or Lynx, and on iOS a client named Gopher.
To add more links to the list:
ffplay gopher://bitreich.org/9/radio/listen
I wrote my own gopher server. I also have a gopher client I wrote, but I haven’t published that one yet.
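If anyone’s curious how small that can be: here’s a toy sketch of the basic RFC 1436 exchange, not my actual server. The client sends a selector terminated by CRLF, and the server replies with tab-separated menu lines ending with a lone dot. The hostname and selector values below are placeholders, and binding to port 70 usually needs elevated privileges.

```python
# Toy Gopher server sketch (RFC 1436): the client sends a selector
# terminated by CRLF; the server answers with tab-separated menu
# lines and a line containing only "." to end the listing.
# Illustrative only; a real server maps selectors to files.
import socket

HOST, PORT = "0.0.0.0", 70  # 70 is Gopher's well-known port

MENU = (
    # type char + display string, then selector, host, port
    "iWelcome to a toy Gopher server\tfake\t(NULL)\t0\r\n"
    "0About this server\t/about.txt\tlocalhost\t70\r\n"
    ".\r\n"
)

with socket.create_server((HOST, PORT)) as srv:
    while True:
        conn, _addr = srv.accept()
        with conn:
            selector = conn.recv(1024).decode("ascii", "replace").strip()
            # Every request gets the same menu in this sketch,
            # regardless of the selector.
            conn.sendall(MENU.encode("ascii"))
```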