I understand the drive to get as much capacity as possible for the least money, but that should not be how anyone operates a system they expect to be reliable. If you don’t need it to be reliable, you can just make every disk its own vdev and detect errors without any possibility of correction.
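For illustration, that checksums-only layout would look something like this; the pool name and device names are hypothetical:

```shell
# No redundancy: each disk becomes its own top-level vdev.
# ZFS checksums will detect corruption on read or scrub,
# but with no mirror or parity there is nothing to repair from.
zpool create tank da0 da1 da2 da3

# Shows four standalone disks striped together; any single
# disk failure loses the whole pool.
zpool status tank
```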
Given the constraints imposed, I would have asked ZFS to stripe over mirrored-pair vdevs, and replaced one of the small drives with a 6 or 8 TB drive, so I could have 2x(6 or 8 TB), 2x6 TB, and whatever 2x2 TB vdevs were available. I would also strongly consider moving root off the USB stick – in my experience those fail much faster than any other kind of media.
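A rough sketch of that striped-mirrors layout, with hypothetical device names (pair each disk with one of matching size):

```shell
# Stripe across mirrored pairs: each pair tolerates one disk
# failure, and ZFS can repair detected corruption from the
# surviving half of the mirror.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5
```

A nice property of this shape is that you can grow one pair at a time later: `zpool replace` each disk in a pair with a larger one, let it resilver, and the vdev expands without touching the other pairs.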
My inclination would be to try to solve this problem using geom on FreeBSD and not use ZFS at all, though I’m pretty ignorant about ZFS generally. I just feel it would be easier to understand what is going on when the inevitable disk failures start rolling in.
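For comparison, the geom route would be roughly one gmirror per disk pair with an ordinary filesystem on top – a sketch, assuming FreeBSD `ada*` device names and a mount point of my own choosing:

```shell
# Load the mirror class and label a two-disk gmirror
gmirror load
gmirror label -v gm0 ada1 ada2

# Put UFS on the mirror device and mount it
newfs -U /dev/mirror/gm0
mount /dev/mirror/gm0 /data

# Reports each mirror's state (COMPLETE, DEGRADED, rebuilding)
# in plain terms when a disk drops out
gmirror status
```

The trade-off versus ZFS mirrors is that gmirror has no block checksums, so it can survive a dead disk but cannot tell which copy is correct when the two halves silently disagree.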