This might be a viable approach if you’re mainly doing trigonometry functions. But if you’re mixing it with calculus, I think the costs will outweigh the benefit.

Reminds me a bit of Norman Wildberger’s trigonometry.

I think if you are strong enough at math to handle the wide-ranging consequences of reimagining the foundations, you probably don’t need the benefit, and if you aren’t, you probably don’t want to put yourself in a position where there are only one or two books you can use. These iconoclasts are right that you can reconceptualize everything if you want to put in the work. But I question the proselytizing of it.

you’re just using a function mysin instead of sin, where mysin(t) = sin(2 pi t). There won’t be any problems with it.

The derivative of sin(t) is cos(t). The derivative of mysin(t) is not mycos(t), it’s 2 pi mycos(t).

Do such derivatives actually come up in game engine code? How often?

Remember that Casey Muratori was talking specifically about code, most notably game engines. It’s not about reimagining all of maths. It’s about reimagining a little part of game engine code. Something that some libraries have already done, since apparently half turns are already a thing in some trigonometric libraries.

That’s why fernplus specifically said if you’re mixing it with calculus.

I guess one place where it might come up is small-angle approximations: when measured in radians, sin(x) is about x, and cos(x) is about 1-x^2/2.
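A quick way to see that difference, sketched in Python (mysin here is the thread’s hypothetical turn-based sine, defined as sin(2πt)):

```python
import math

def mysin(t):
    # hypothetical turn-based sine from the thread: mysin(t) = sin(2*pi*t)
    return math.sin(math.tau * t)

x = 0.01              # a small angle in radians
t = x / math.tau      # the same angle expressed in turns

# In radians, the small-angle approximation is just the identity: sin(x) ~ x.
assert abs(math.sin(x) - x) < 1e-6

# In turns, the factor 2*pi reappears: mysin(t) ~ 2*pi*t, not t.
assert abs(mysin(t) - math.tau * t) < 1e-6
assert abs(mysin(t) - t) > 0.005
```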

Ah, my bad.

I’ve seen a comment on HN explaining that using radians is really helpful for symbolic manipulation, most notably because it makes sin() and cos() derivatives of each other (just like the exponential is its own derivative). That same comment noted, however, that it didn’t help one bit with most numerical applications implemented in computer programs.

It comes up anywhere that dot products are used, because dot products can be interpreted as cosines. First and second derivatives are used when tracing/casting rays, in particular.
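For concreteness, here’s a minimal sketch of recovering an angle in turns from a dot product (the name angle_in_turns is made up for illustration):

```python
import math

def angle_in_turns(ax, ay, bx, by):
    # cos(theta) = (a . b) / (|a| |b|); acos returns radians,
    # so divide by tau to express the result in turns.
    dot = ax * bx + ay * by
    norms = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.acos(dot / norms) / math.tau

# Perpendicular vectors are a quarter turn apart.
print(angle_in_turns(1.0, 0.0, 0.0, 1.0))  # 0.25
```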

Ah, and when you differentiate radian-based cosines you avoid multiplying by a constant there. Makes sense.

For anyone else who struggled to confirm that in their heads, Wolfram Alpha agrees that d/dt sin(2πt) = 2π cos(2πt), which is equivalent to 2π mycos(t).

The chain rule:

(f∘g)′(x) = g′(x) f′(g(x))

Substituting:

f(x) = sin(x), g(x) = 2πx, mysin = f∘g

Gives:

mysin′(x) = 2π cos(2πx)

Hope this helps.
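A finite-difference sanity check of the same result, using the thread’s hypothetical mysin/mycos definitions:

```python
import math

def mysin(t):
    return math.sin(math.tau * t)

def mycos(t):
    return math.cos(math.tau * t)

# Central-difference approximation of mysin'(t) at an arbitrary point.
t, h = 0.1, 1e-6
numeric = (mysin(t + h) - mysin(t - h)) / (2 * h)

# The chain rule's extra factor: mysin'(t) = 2*pi*mycos(t).
assert abs(numeric - math.tau * mycos(t)) < 1e-6
assert abs(numeric - mycos(t)) > 1.0  # plain mycos(t) is off by a factor of 2*pi
```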

And if you’re a game designer, trig functions are almost certainly most of what you’re actually doing, because that’s what you need to calculate which polygons appear where.

Modern CPUs and GPUs have hardware support for sin, cos and tan calculations. I wonder if trig functions that operate on turns rather than radians can really be faster if they aren’t using this hardware support. I guess it depends on the specific processor, and maybe also on which register you are operating on (since Intel has different instruction sets for different-size vector registers).

If you are programming a GPU, you are generally using a high level API like Vulkan. Then you have the choice of using the SIN instruction that’s built into the SPIR-V instruction set, vs writing a turn-based sin library function using SPIR-V opcodes. I wouldn’t expect the latter to be faster. Maybe you could get some speed using a lookup table, but then you are creating memory traffic to access the table, which could slow down a shader that is already memory bound.

A lot of code avoids using hardware sin and cos because they are notoriously slow and inaccurate. As such, it ends up using software-emulated sin, cos, etc. So turns definitely shouldn’t be worse.
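One reason turns shouldn’t be worse: the range-reduction step, which is where radian implementations lose accuracy for large arguments, becomes an exact floating-point operation when the period is 1. A sketch (the final evaluation just delegates to math.sin for illustration; a real library would evaluate its own polynomial instead of converting back to radians):

```python
import math

def sin_turns(t):
    # Exact range reduction: the period is 1, so subtracting the
    # nearest integer loses no precision at all.
    t = t - round(t)  # t is now in [-0.5, 0.5]
    # Converting back to radians here is only for illustration.
    return math.sin(math.tau * t)

# A whole number of turns reduces to exactly zero, even for huge inputs,
# whereas reducing 1e15 radians mod tau is precision-limited.
print(sin_turns(1e15))  # 0.0
```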

Maybe this is changing, but historically on CPUs, using the sin instruction is not a great idea.

CUDA has sincospif and sincosf; only the latter has its precision affected by --use_fast_math, so maybe all “builtin” functions still do the conversion to turns in code before accessing hardware?

https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__SINGLE.html#group__CUDA__MATH__SINGLE_1g9456ff9df91a3874180d89a94b36fd46

seems like a good idea

i’d definitely want to name it “sintau” or something instead of “sin”, or have a “turns” newtype to make it obvious that it’s different than every other sin function in every other library
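A sketch of what that newtype could look like (the names Turns and sintau are hypothetical):

```python
import math
from typing import NamedTuple

class Turns(NamedTuple):
    # Hypothetical newtype: wraps the float so a turn-valued angle
    # can't be passed to a radian-based API by accident.
    value: float

def sintau(angle: Turns) -> float:
    return math.sin(math.tau * angle.value)

quarter = Turns(0.25)
print(sintau(quarter))  # 1.0
# sintau(0.25) without the wrapper would be flagged by a type checker.
```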

Right; the problem here is you don’t have to convince the application developers; application developers are stuck with whatever the standard library uses. Changing the standard library functions is much, much more difficult.

Using tau is basically the same as using turns in practice. tau/2 = half turn, tau/4 = quarter turn, etc.

it’s not though - as per the article the problem is with the `pi`, not with the `2`, so switching to `tau` provides no improvement. you’re still working with an irrational multiplier for no good reason if your only goal is to do trigonometric calculations.

For most purposes, reparameterizing the circle doesn’t yield enough practical benefits to matter.
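The irrational-multiplier point is easy to demonstrate: even tau/2, an exact half turn, doesn’t give sin() an exact zero, whereas a turn-based function can hit the turn grid exactly (a hypothetical sketch):

```python
import math

# tau/2 is a half turn, but as a float it's still an irrational
# number rounded to double precision, so sin() only gets close to zero.
print(math.sin(math.tau / 2))  # about 1.22e-16, not 0.0

def sin_turns(t):
    # With turns, the zeros sit on an exactly representable grid.
    t = t - round(t)  # exact reduction to [-0.5, 0.5]
    if t in (-0.5, 0.0, 0.5):
        return 0.0
    return math.sin(math.tau * t)

print(sin_turns(0.5))  # 0.0 exactly
```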

For one really important (to me and many people) case, radians are better. If I write my Fourier transform in terms of exp(2 pi i x), I don’t have to remember which direction of the transform I defined as having a factor of 1/sqrt(2 pi) out front.