COBOL has a niche no other language occupies: a compiled language for record-oriented I/O.
That might sound strangely specialized, but it’s not. Record-oriented I/O describes, I would argue, nearly *all* applications. Yet, since the advent of C, nearly all applications have relegated I/O to an external library, and adopted the Unix byte-stream definition of a “file”.
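To make the contrast concrete, here's a minimal sketch in Python: the byte-stream view hands you raw bytes, while the record-oriented view reads the same file as fixed-size records with named fields, the way a COBOL 01-level record description would. The 34-byte layout (id, name, amount in cents) is invented for illustration.

```python
import io
import struct

# Hypothetical fixed layout: 6-byte id, 20-byte name, 8-byte signed amount (cents).
# '<' means no padding, so every record is exactly 34 bytes.
REC = struct.Struct("<6s20sq")

def write_record(f, cust_id, name, cents):
    # Space-pad the text fields to their fixed widths.
    f.write(REC.pack(cust_id.encode().ljust(6), name.encode().ljust(20), cents))

def read_records(f):
    # Record-oriented read: consume the stream one fixed-size record at a time.
    while chunk := f.read(REC.size):
        cust_id, name, cents = REC.unpack(chunk)
        yield cust_id.decode().strip(), name.decode().strip(), cents

buf = io.BytesIO()
write_record(buf, "000042", "Ada Lovelace", 12995)
buf.seek(0)
print(list(read_records(buf)))  # [('000042', 'Ada Lovelace', 12995)]
```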
Sounds like a database. In particular, a database which is closely integrated with and expressed in terms of the data structures of a general-purpose programming language; see kdb/jd, where a column is ‘just’ an array. See also experiments in transparent persistence.
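For a stdlib-flavored taste of the "transparent persistence" idea (purely illustrative, nothing like kdb performance-wise), Python's `shelve` gives you something that looks like an ordinary dict but whose contents outlive the process:

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ledger")

# First "process": mutate what looks like a plain in-memory dict.
with shelve.open(path) as db:
    db["balance"] = 100
    db["history"] = [100]

# A later "process": the data is simply still there, no explicit save/load step.
with shelve.open(path) as db:
    print(db["balance"], db["history"])  # 100 [100]
```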
I think this is a sliiiightly narrow view of databases-and-languages. In the DEC world, record management/ISAM was a library available to all languages on the system. On i, there is the notion of record-based files and things in RPG to support it more easily, but there’s also just SQL/RDBMSes available system-wide too. Arguably in the Smalltalk (and similar) world, you can just deal with persistent objects instead of a traditional database.
MUMPS was very interesting to develop in. It required creative use of data structures to solve problems when accessing data from a global store within a lightweight process.
However, combining data and application code so tightly led to an incredible amount of spaghetti in practice, and the consequence was that manual QA was the only option for gaining confidence that what you delivered was safe and effective.
Afaik, old mainframe-y code often dealt almost exclusively with record-oriented databases, and these are the systems where COBOL was born. As pointed out, IBM i still works this way; I have friends who work with it (and MUMPS, now that I think of it), and it sounds like a heckin’ wild ride. And we have latter-day formats such as CSV and SQLite that we use for basically the same purposes, when you have giant wodges of real-world data such as accounting, genetics, event logs, and so on. Record-oriented databases sound great for many problems, just also pretty restrictive for many others.
Update and insert and delete make a lot of things harder….
If you can assume those operations are entirely separate and don’t happen while you’re processing (i.e., you’re doing batch processing), there are huge simplifications you can make: most things become a linear walk through all the records, and are a lot faster and easier.
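The classic shape of that simplification is the sequential master-file update: sort the master file and the transaction file by key, then handle updates and inserts in one linear merge pass. A toy sketch (the record layout and one-transaction-per-key assumption are mine):

```python
# Lists stand in for sorted sequential files, keyed by id.
master = [(1, 100), (2, 250), (4, 75)]   # (id, balance)
txns   = [(1, -30), (3, 500), (4, 25)]   # (id, delta); id 3 has no master record

def batch_update(master, txns):
    """One linear pass over both sorted files: copy, insert, or apply a delta."""
    out, i, j = [], 0, 0
    while i < len(master) or j < len(txns):
        if j == len(txns) or (i < len(master) and master[i][0] < txns[j][0]):
            out.append(master[i]); i += 1                     # unchanged, copy through
        elif i == len(master) or txns[j][0] < master[i][0]:
            out.append(txns[j]); j += 1                       # no master record: insert
        else:
            out.append((master[i][0], master[i][1] + txns[j][1]))  # apply delta
            i += 1; j += 1
    return out

print(batch_update(master, txns))  # [(1, 70), (2, 250), (3, 500), (4, 100)]
```

No index lookups, no locking, no random seeks: every record is touched exactly once, in order.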
These days the likes of sqlite and pgsql do near magic (compared to what DBs did in the old days), so it isn’t as compelling as it used to be.
Yeah, I’m curious what advantages they expect in comparison to adding bdb to a non-cobol app. Or sqlite if you want a separate query language. Or some odbms of your local language’s choice if you want better data structure integration.
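For comparison, embedding sqlite in a non-COBOL app really is only a few lines; Python ships the binding in the stdlib. The schema here is invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # or a file path for persistence
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100), (2, 250)])

# Update-in-place and ad-hoc queries come for free, concurrency handled by the library.
con.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
print(con.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall())
# [(1, 70), (2, 250)]
```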
Or MUMPS, for that matter.