On Linux you don’t need to write any code to measure memory usage; it’s built into the time command (which is different from the time shell keyword!).
Creating 1 MB, 10 MB, and 100 MB strings in Python (%M prints the maximum resident set size, in kilobytes):
$ /usr/bin/time --format '%M' -- python -c '"x"* 10**6'
$ /usr/bin/time --format '%M' -- python -c '"x"* 10**7'
$ /usr/bin/time --format '%M' -- python -c '"x"* 10**8'
I use this for Oil’s benchmarks, which I just updated: https://www.oilshell.org/release/0.12.9/quality.html#benchmarks
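As a cross-check (my own sketch, not from the post above), the same counter can be read from inside the process with Python’s resource module; ru_maxrss is the metric GNU time reports as %M:

```python
import resource
import sys

def max_rss_kb():
    # ru_maxrss is in kilobytes on Linux, but in bytes on macOS
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss // 1024 if sys.platform == "darwin" else rss

before_kb = max_rss_kb()
s = "x" * 10**8  # ~100 MB string; every byte is written
after_kb = max_rss_kb()
print("peak RSS grew by roughly %d KB" % (after_kb - before_kb))
```

This measures the same peak-RSS number, just without spawning a child process.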
This is a C++ program and it’s using iostream to output text, which surely allocates some memory up front. In my experience iostream is pretty heavyweight. The program would be better off waiting until the end to write any output.
Also, on both Darwin and Linux a large malloc will typically just reserve address space; no pages are actually committed until they’re written to. Depending on how “memory usage” is measured, these untouched pages may or may not count.
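To see that demand paging in action, here’s a small sketch of mine (not from the post) using an anonymous mmap: creating the mapping barely moves RSS, and the pages only become resident once they’re written. Assumes Linux-style accounting:

```python
import mmap
import resource
import sys

def max_rss_kb():
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss // 1024 if sys.platform == "darwin" else rss

SIZE = 100 * 1024 * 1024  # reserve 100 MB of address space

base_kb = max_rss_kb()
m = mmap.mmap(-1, SIZE)   # anonymous mapping: address space only
mapped_kb = max_rss_kb()

chunk = b"x" * (1024 * 1024)
for _ in range(SIZE // len(chunk)):
    m.write(chunk)        # writing faults each page in
touched_kb = max_rss_kb()
m.close()

print("after mmap: +%d KB, after writing: +%d KB" %
      (mapped_kb - base_kb, touched_kb - base_kb))
```

The first delta should be tiny; the second should be roughly the full 100 MB.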
I’m not sure exactly which metric “RSS” corresponds to on macOS; it comes from a POSIX API using POSIX terminology, but the native Darwin terminology isn’t quite the same. The one I’ve used most over the years is RPRVT, the resident private size: the amount of paged-in address space private to the process. It’s also possible to measure just the size of the malloc heap, which is useful too, but that might not count “huge” heap blocks allocated with vm_allocate.
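On the heap-measurement point: I don’t have a Darwin malloc-zone example handy, but as a rough Python analogue, tracemalloc reports allocator-level usage (bytes handed out) rather than resident pages, which illustrates how a heap-level metric differs from RSS. A sketch:

```python
import tracemalloc

# Heap-level accounting: counts bytes the allocator handed out,
# not which pages happen to be resident.
tracemalloc.start()
s = "x" * 10**7  # ~10 MB string
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print("allocated: %d bytes (peak %d)" % (current, peak))
```

Here the reported number tracks allocations made while tracing, so it excludes the interpreter’s startup footprint that inflates RSS.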