People who do experiments with the visual system are (supposed to be) particularly careful about visual display. People (are supposed to) use particular software that is cognizant of refresh rates and so on. This was especially critical for CRT screens and is still vital to understand and control for LCD displays.
For psychophysics (effects on behavior) it's not clear whether things like the scan time matter (though I found this paper), but for neurophysiology of the early visual system, neurons can be modulated by the screen refresh rate (e.g. this), so you absolutely have to worry about what exactly is being displayed on screen, when, and where.
For some of my experiments I used a light sensor taped to a corner of the screen. A square flashed in that corner on each frame, and I recorded the sensor output as one of my data tracks, alongside the pulses my video software emitted to mark salient visual events such as stimulus appearance and disappearance.
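For illustration only (my rigs used the software mentioned below, not this), here is a minimal PsychoPy-style sketch of the sync-square idea; the window, stimulus, and frame counts are arbitrary assumptions. The square in the photodiode's corner alternates black/white on every refresh while the stimulus is on screen, so the sensor trace records exactly which frames carried it.

```python
# Minimal sketch of a photodiode sync square (illustrative only; window size,
# stimulus, and frame counts are arbitrary assumptions, not my actual setup).
from psychopy import visual, core

win = visual.Window(fullscr=True, units='pix', color='grey')

stimulus = visual.GratingStim(win, tex='sin', mask='gauss', size=256)

# Small square in the corner where the light sensor is taped.
half_w, half_h = win.size[0] / 2, win.size[1] / 2
sync_square = visual.Rect(win, width=60, height=60,
                          pos=(half_w - 30, -half_h + 30))

n_frames = 60  # nominal 1 s at 60 Hz; true timing comes from the sensor trace
for frame in range(n_frames):
    stimulus.draw()
    # Alternate black/white each frame so every refresh shows up in the trace.
    sync_square.fillColor = 'white' if frame % 2 == 0 else 'black'
    sync_square.draw()
    win.flip()  # blocks until the vertical refresh

win.flip()  # one blank flip: the dark corner marks stimulus offset
core.quit()
```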
None of the experiments I did used a browser as the display software, though I guess more and more of this is happening now.
The software I've used has been MonkeyLogic and lablib, and I've played with PsychoPy. Note that all of them take pains to get video timing right.
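As one concrete example of those pains: PsychoPy can log the interval between successive screen flips, so you can check afterwards whether any frames were dropped. A small sketch, assuming a 500-frame test loop and a 1.5× threshold of my own choosing:

```python
# Sketch of checking for dropped frames via PsychoPy's frame-interval logging
# (the 500-frame loop and the 1.5x threshold are arbitrary choices of mine).
from psychopy import visual

win = visual.Window(fullscr=True)
nominal = 1.0 / (win.getActualFrameRate() or 60.0)  # fall back to 60 Hz if unmeasurable

win.recordFrameIntervals = True          # log the time between successive flips
for _ in range(500):                     # stand-in for the real display loop
    win.flip()

dropped = sum(1 for dt in win.frameIntervals if dt > 1.5 * nominal)
print(f"{dropped} of {len(win.frameIntervals)} frames looked dropped")
win.saveFrameIntervals(fileName='frame_intervals.log')
win.close()
```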
My cognitive neuropsychologist friend (University of Kent, Vrije Universiteit Amsterdam, Otto-von-Guericke-Universität Magdeburg) reports:
The assumption there in my experience is totally untrue — it’s more that every psychologist knows that you can’t display things for exactly the time we say, so we get it as close as we can and know that everyone knows what we are talking about in the paper. My department also specifically uses horrible experiment presentation software that everyone hates because they’ve spent ages looking at timing options and it provides a few milliseconds more in terms of accuracy than other software. Psychology forums are full of discussions related to tiny timing inconsistencies.