Tip: don’t implement it; just make one big page and the content will load on its own. 0% code, 0% bugs, 0% calculation overhead, 0% interaction (web crawlers browse it just fine, and I guess users do as well) :-)
Once you go past a few tens of thousands of lines, most renderers get into a fair bit of trouble. You can test this with a simple script generating a single, long file.
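Such a generator can be sketched in a few lines; the line count, filler text, and output file name below are arbitrary choices for illustration:

```typescript
// Generate a single long HTML page to stress-test a renderer.
// The line count and output path are arbitrary choices.
import { writeFileSync } from "node:fs";

function longPage(lines: number): string {
  const body = Array.from(
    { length: lines },
    (_, i) => `<p>Line ${i + 1}: some filler text to give the renderer work.</p>`
  ).join("\n");
  return `<!doctype html><html><body>\n${body}\n</body></html>`;
}

// Open the resulting file in a browser and watch it struggle.
writeFileSync("stress.html", longPage(100_000));
```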
Then we need pagination every ten thousand lines.
Maybe back in the day it was only 1,000 lines, and pagination was cumbersome.
Also, we may not want to load 10,000,000 lines with images every time we go to twitter.com.
Twitter is a good example of terrible infinite scrolling: each ‘page’ is quite short, it’s noticeably slow to load the next page, and it doesn’t start loading the next page until you’re at the very end of the current one.
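That last problem has a simple fix: start fetching the next page while there is still unread content below the fold, so the load latency is hidden. A minimal sketch of the trigger condition (the 1.5-viewport threshold is an arbitrary illustration, not anything Twitter actually uses):

```typescript
// Decide whether to start fetching the next page, given the current
// scroll position. Firing while `thresholdViewports` worth of content
// remains below the fold hides the network latency from the user.
function shouldPrefetch(
  scrollTop: number,      // pixels scrolled from the top
  viewportHeight: number, // visible height of the window
  contentHeight: number,  // total height of content loaded so far
  thresholdViewports = 1.5 // arbitrary: fire 1.5 screens before the end
): boolean {
  const remaining = contentHeight - (scrollTop + viewportHeight);
  return remaining < thresholdViewports * viewportHeight;
}
```

In a real page this check would typically be driven by a scroll listener, or replaced entirely by an `IntersectionObserver` watching a sentinel element placed near the end of the list.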
However, it’s really not an option to load every tweet; you have to cut the page at some point.
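Cutting the page can also mean dropping content you have already loaded, keeping only a sliding window of recent pages in memory. A hedged sketch of that bookkeeping (the page limit is an arbitrary choice):

```typescript
// Append a newly fetched page and keep at most `maxPages` pages
// in memory, discarding the oldest ones. The limit of 5 is arbitrary.
function appendPage<T>(pages: T[][], next: T[], maxPages = 5): T[][] {
  const all = [...pages, next];
  return all.length > maxPages ? all.slice(all.length - maxPages) : all;
}
```

Virtualized-list libraries take this idea further by also removing the corresponding DOM nodes, so the document never grows without bound.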