If you were writing assembly by hand, you could mix and match pointer types to do whatever you wanted, since these concepts had no special meaning to the OS.
You don’t need assembly for that: the compiler is designed to do it, and the documentation encourages you to. The four core memory models just represent very simple starting points.
First, the compiler allows __near and __far to decorate any pointer, including a function declaration. Typically this is used to let a section of code make near function calls and operate on near data by default, while still crossing segments via a far function call or a far data access when needed. The programmer is expected to organize code so that “related” code - functions calling into each other a lot - shares a code segment, and data frequently accessed together shares a data segment.
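As a minimal sketch of what that decoration looks like - the identifiers here are invented, and earlier compilers spelled the keywords near and far, without the underscores:

```c
char __near NearBuffer[64];     /* lives in the default data segment       */
char __far  *FarData;           /* full 32-bit segment:offset pointer      */

void __near LocalHelper(void);  /* near call: 16-bit offset, same segment  */
void __far  OtherModule(void);  /* far call: reloads CS, can live anywhere */

void Example(void)
{
    NearBuffer[0] = 'x';        /* 16-bit offset access through DS         */
    if (FarData != 0) {
        *FarData = 'x';         /* loads a segment register first          */
    }
    LocalHelper();              /* stays within this code segment          */
    OtherModule();              /* crosses to another code segment         */
}
```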
For Microsoft C, this was originally done with the /NM, /NT and /ND compiler switches, allowing each module to refer to a specific named code and/or data segment, so near accesses referred to that named segment, which can span compilation units. By default, a large memory model program did the same thing, except every source file ended up with its own segment (which is many more segments than necessary.) Ideally, each segment would be as close as possible to 64KB in size, to maximize the number of near accesses.
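From memory, the invocation looked something like the following - the module and segment names are invented, and the exact switch spelling varied across compiler versions:

```
rem Two modules placed into the same named code and data segments, so
rem calls and data accesses between them can stay near.
cl /c /NM PARSER /NT PARSE_TEXT /ND PARSE_DATA parser.c
cl /c /NM LEXER  /NT PARSE_TEXT /ND PARSE_DATA lexer.c
```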
Further, the compiler added a __based keyword to express that one pointer shares a segment with a different pointer. This is useful if a piece of code is accessing two far pointers that have a relationship - e.g. manipulating a data structure outside of its default segment. Rather than have two far pointers - which take four 16-bit registers on a register-constrained architecture - the compiler can use three registers: one segment and two offsets.
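A sketch of that in Microsoft C 6.0 syntax - the names are invented, and __based had several forms; this is the based-on-segment-variable form:

```c
/* Two offset-only pointers sharing one segment value. Each based
   pointer is 16 bits; Seg supplies the segment for both. */
__segment Seg;
char __based(Seg) *Src;
char __based(Seg) *Dst;

void CopyBytes(unsigned int Count)
{
    while (Count--) {
        *Dst++ = *Src++;    /* both dereferences use the same segment */
    }
}
```

Compare with declaring char __far *Src, *Dst;, where each pointer carries its own 16-bit segment on top of its 16-bit offset.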
In hindsight, I think these memory model labels obscured the real guidance. If you’re a hobbyist writing a small program, use the small memory model. If you’re a development team writing a large program, you need to manage segments. The “large” model should essentially never be used, because it results in more segments than necessary, with more far pointers than necessary; the documentation describes how to improve it, but doesn’t actively label it as a wasteful idea.
Pins and traces were expensive in 1978.