I think more undergrad programming courses should involve robotics. It gives you a certain sense of self-awareness when you need to make sure everyone nearby knows where the Big Red Power Shutoff Button is before you run your program.
This is why I cannot endorse certain languages for teledildonics work.
I’m confident in my C (or, god help me, my JavaScript), but not that confident.
(Answering the question: during an undergrad course I had a 90-pound robot become very upset when its serial comms bugged out, and it decided to go on walkabout. We almost put a hole in the wall with it, and after that the robot was always up on blocks. The final solution ended up being in hardware: the brains shared a power bus with the motor controller, so when the motors changed speed (say, while starting up) the brains would sometimes freak out and do stupid things. We moved the brains to their own dedicated, isolated power bus, and things improved.)
But this? If I were writing this program in Python (which I probably would; 10 lines of Python is nicer to write than 100 lines of C), I would’ve probably made the exact same mistake of accidentally including the last byte. Nothing about Python’s safety would’ve prevented this, and nothing about C’s unsafety caused it. It was simply a logic error.
I agree with you, but I want to dig into that word “simply” in “simply a logic error”. Could nothing have been done?
Suppose that our language of choice has iterators and sized unsigned integers. Then, there is an obvious programming pattern which would have prevented this mistake: Operate on samples by iteration. I hear that this was one of the motivating patterns in C++’s design, so this is not a new idea.
Suppose that our file storage of choice has structured data, and stores WAV files in a rich typed structure. Then, iteration on samples is mandatory and the bug cannot occur, regardless of choice of programming language. I have heard of systems like this, and I’ve used programming environments which emulate this behavior, automatically parsing file headers and presenting rich data layouts.
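For what it’s worth, the iteration pattern described above is cheap to get even in plain Python: let the `wave` module parse the header and iterate over a typed array of samples, so there is no hand-written byte arithmetic to get off by one. A minimal sketch (the helper names are mine, not from the original program):

```python
# Sketch of the "operate on samples by iteration" pattern, using only the
# Python standard library. Names and values are illustrative.
import array
import io
import wave

def samples(wav_bytes: bytes):
    """Yield 16-bit PCM samples from an in-memory WAV file.

    The wave module parses the header and we iterate over a typed array,
    so there is no manual byte arithmetic to get off by one.
    """
    with wave.open(io.BytesIO(wav_bytes), "rb") as w:
        assert w.getsampwidth() == 2  # 16-bit samples only, for brevity
        raw = w.readframes(w.getnframes())
    yield from array.array("h", raw)

def halve_volume(wav_bytes: bytes) -> list[int]:
    # A per-sample transform: no indices, no lengths, no trailing byte.
    return [s // 2 for s in samples(wav_bytes)]

# Build a tiny mono 16-bit WAV in memory to demonstrate.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(array.array("h", [1000, -1000, 500, -500]).tobytes())

print(halve_volume(buf.getvalue()))  # [500, -500, 250, -250]
```

The point is not the halving; it’s that the loop body never sees raw bytes, so the class of mistake under discussion has nowhere to live.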
I have been physically hurt by bad audio code twice:
Once, after PulseAudio decided to max out the volume. Nowadays most distros disable this “feature”, but always double-check your config.
The other time, much worse, was joining a Discord call on my phone with IEMs in. Android helpfully switched to “phone mode”, which not only blasts your volume to 100% but also pushes it even further so you can really hear what the other person is saying. That feature is supposed to turn on when no headphones are plugged in, but the opposite happens on my phone.
Put your OS volume to 100% and use hardware for volume control. The same should apply to anything that could hurt you physically (modern cars are scary).
In 2017 I had a Bluetooth speaker paired to a Dell XPS where I had to set the volume to 200% to hear decent audio from the speaker. When it unpaired, PulseAudio pushed 200% volume through the laptop speakers, which blew them out. I’m more pissed that Dell didn’t have any hardware regulation to prevent this than I am with PulseAudio.
Put your OS volume to 100% and use hardware for volume control.
Indeed. Ideally, while testing audio stuff, people should go a step further and set the hardware volume as low as possible, so as not to suffer too much if the software accidentally outputs a square wave at maximum amplitude. But this pretty much requires using “loud” test files; in other words, the loudness war and compression actually help in this case. Another, more complicated solution is to put a limiter or compressor between the sound card and the speakers.
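A software stand-in for that hardware limiter can be as crude as clamping every sample to a ceiling well below full scale. A toy sketch (the ceiling value and names are illustrative, not from any real audio stack):

```python
# A crude software limiter sketch: clamp samples to a ceiling well below
# full scale before they reach the output device.

FULL_SCALE = 32767          # 16-bit PCM maximum
CEILING = FULL_SCALE // 8   # roughly -18 dBFS; pick something comfortable

def limit(sample: int) -> int:
    """Hard-clamp one 16-bit sample to [-CEILING, CEILING]."""
    return max(-CEILING, min(CEILING, sample))

# A runaway square wave at maximum amplitude...
square = [FULL_SCALE, -FULL_SCALE] * 4
# ...comes out at a survivable level instead.
print([limit(s) for s in square])
```

A real limiter would ramp gain smoothly rather than hard-clip (hard clipping distorts audibly), but as a last-resort safety net before the speakers, even this is a big improvement over nothing.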
I’ve code-hurt both myself and other devices many, many times – to the point that my favorite pet nowadays is the watchdog. Let’s take three examples:
Gutting a phone like a fish:
Many years ago I did a lot of system tracing to hunt down touch-to-display latency and power consumption bugs in Android. This is mostly the evidence that remains of the rig itself - https://www.youtube.com/watch?v=bW07-iPqaEk - but basically a high-precision/high-accuracy clock outputting to a seven-segment display that was OCRed through high-speed cameras, plus a lot of filters and tuning, reflective tape on a “finger”, directed lights, and some other things that I forgot / omitted.
The robot itself (mostly out of frame) was one of those pneumatic industrial monsters. It was configured to do all sorts of fancy motions, and by tapping the display scan-out buffers there was also automated navigation through UIs and scenarios that put some D&D dungeon masters to shame. You picked a preset, it ran the calibration routine, and then you got nice streamed output of all the major culprits in the hardware and software stack for whatever case was being studied. In the early development days, though, the calibration was sometimes re-used, and >names redacted< had left the calibration profile forced in a state where the jig holding the phone in place was not there, shifting the “ground level”. Thus, the robot happily cut into the unsuspecting phone, to the point of going through the display and PCB and puncturing the battery. The “finger” also broke and needed costly repairs.
Early days of coding VR were not much fun (it still isn’t). Headset on, fire things up, get blasted by something looking awful, close eyes in panic, headset off, patch, rebuild, retry. When things started to stabilize a bit, confidence grew to the point of watching movies and coding inside the thing: great stuff. What was not so great were precision errors from sensor filters that snuck in, causing the left and right eye matrices to accumulate “drift” in rotation and translation, so that when you thought you were looking straight ahead, your eyes eventually needed to look to the extreme sides to accomplish that feat. On top of the nasty strain this caused, when the headset came off the real world did not respect the alignment and rotation of my eyeballs; my head felt like someone was jumping on it, and it took wearing a blindfold for a full day to get back on track.
One of my favourite projects to date, Senseye, is about finding non-native representations for visualizing large quantities of data for debugging/reverse engineering. One fun experiment in that vein was taking a large chunk of data, say a few TB of disk, picking some data-independent 3D projection, putting on the VR HMD again, and lying in bed watching the pretty shapes and colours for a few hours. I don’t have a recording of that particular experiment, but this YT clip shows a little of the concept. Like other sensory substitution experiments, there were expected things and unexpected things. The expected effect appeared: patterns abounded, and it was rather easy to get a rough “oh, I’m in this kind of cluster of data types; yeah, that’s a blonde versus brunette kind of thing.” What I did not expect was that upon getting back into the real world, I had substantial difficulty telling things apart. It is a strange sensation to describe, but what I knew was a doorway ‘felt’ like a solid wall, and walking through it felt just as blocking as trying to walk into concrete. Pens, phones and soldering irons on the table felt ‘part of it’, and picking them up or touching them felt uneasy and dreamlike. Things reverted after some 15-30 minutes, and I haven’t been able to repeat it.
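The watchdog mentioned at the top of this comment is easy to sketch in software; in a real rig it would be a hardware timer that cuts motor power, but the shape is the same (all names here are illustrative):

```python
# A minimal software watchdog sketch: if the main loop stops petting it
# within the timeout, a safety callback fires. In real hardware this would
# be a timer chip that kills power instead of calling a function.
import threading

class Watchdog:
    def __init__(self, timeout: float, on_expire):
        self._timeout = timeout
        self._on_expire = on_expire
        self._timer = None

    def pet(self):
        """Reset the countdown; call this from the healthy main loop."""
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self._timeout, self._on_expire)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()

tripped = threading.Event()
dog = Watchdog(timeout=0.1, on_expire=tripped.set)
dog.pet()
# Simulate a hung control loop by never petting again.
tripped.wait(1.0)
print("watchdog fired:", tripped.is_set())  # watchdog fired: True
dog.stop()
```

The design choice worth copying is that the dangerous state (timer expiry) is the default and safety requires continuous positive action, not the other way around.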
I worked on implementing some circuitry and the microcontroller firmware for a muscle stimulator (used in medical science) as part of my undergrad EE degree. What’s a muscle stimulator? Well … two electrodes are placed on the skin near the muscle to be stimulated, and electric charges are pulsed through … :)
Let’s just say that the testing phase was quite eye opening for all parties involved. I was the only tester, but once my classmates heard about what I was doing, they all volunteered!
Note: All testing was done under supervision, which was a bit unnecessary considering the device was powered by a 9V supply (a 9V battery in the production version) with the current significantly limited.
Additionally, I have bilateral cochlear implants, and the program mapping can be quite painful.
A disposable camera flash is powered off a single AA battery at 1.5 volts. If you short the flash bulb with your skin, you will have electrical burns that take months to heal (not to mention it’ll knock you on your ass).
Capacitors can do some very unintuitive things to a mild power supply.
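They can. The scary part of a disposable flash is not the 1.5 V battery but the photoflash capacitor its boost converter charges to a few hundred volts. A back-of-the-envelope E = ½CV² with typical ballpark values (not measurements of any particular camera) shows why:

```python
# Back-of-the-envelope: energy stored in a disposable-camera flash capacitor.
# These component values are typical ballpark figures, not measurements.
C = 120e-6   # farads: photoflash capacitors are commonly ~80-160 uF
V = 330.0    # volts: the boost converter steps 1.5 V up to ~300-330 V

energy_joules = 0.5 * C * V**2   # E = 1/2 * C * V^2
print(f"{energy_joules:.1f} J")  # several joules, dumped in milliseconds
```

Several joules delivered through skin in a few milliseconds is a very different proposition from anything the bare AA cell could do on its own.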
This both made me chuckle and appreciate the self awareness of the author. Nice touch.
There are many good reasons not to choose C.
I wonder how much of the value of university is to allow people to learn from mistakes like this outside industry. No snark intended.
Sometimes a particular comment reminds me about why I love lobste.rs, and this was one :)
PulseAudio flat-volumes. Still on by default in Debian 10; it got me too.
Should be disabled in Debian 11: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=674936
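For anyone bitten by this in the meantime: flat volumes are controlled by a single setting in PulseAudio’s `daemon.conf`, and setting it to `no` makes per-application volumes relative to the master volume instead of letting one app drive the hardware volume to maximum:

```ini
; /etc/pulse/daemon.conf (or ~/.config/pulse/daemon.conf)
; Disable flat volumes, then restart the daemon (e.g. `pulseaudio -k`).
flat-volumes = no
```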
Capacitors are amazing little components! AEDs are basically giant capacitors, given the amount of energy they deliver in a single discharge.
Yeah, but it was deliberate ;)
https://gitlab.com/duncan-bayne/arghspec … plus a Phidget I/O board, a relay, the transformer from a joke shocking pen, and an adhesive ECG patch on my arm.
I found that if I placed the patch just right, I could make my arm spasm every time a test failed.
Not great for encouraging red -> green -> refactor though :)