The name is very close to the ‘Vala’ project.
At first I thought that’s what this was. But this is actually quite a bit neater.
Yes, I had to re-read the title three times to convince myself it didn’t say Vala :)
Once we move beyond one-liners, a natural question is why. As in ‘Why not use Python? Isn’t it good at this type of thing?’
The reasons provided are fine, but for me the main reason is speed. AWK is much, much faster than Python for “line at a time” processing. When you have large files, the difference becomes clear. (perl -p can be a reasonable substitute.)
Once you are writing long AWK programs, though, it’s time to consider Python or something else. AWK isn’t very fun once data manipulation gets complicated.
+1. In my eyes, it’s Awk and then Perl. Perl turns out to be much better for these purposes than other scripting languages. The difference in startup time between Perl and Python is very significant. If you don’t use (m)any modules, Perl scripts usually start just as quickly as Awk scripts.
I’m sure that’s true for some kinds of scripts, but that doesn’t match my experience/benchmarks here (Python is somewhat faster than AWK for this case of counting unique words). For what programs did you find AWK “much, much faster”? I can imagine very small datasets being faster in AWK, because its startup time is about 3ms compared to Python’s 20ms.
Any time the input file is big. As in hundreds of MBs big.
I used to have to process 2GB+ of CSV on a regular basis and the AWK version was easily 5x faster than the Python version.
Was the Python version streaming, or did it read the whole file in at once?
Streaming.
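For context, a streaming word count in Python looks roughly like this - a minimal sketch of the kind of program being benchmarked, not the exact code, with a rough awk equivalent in the comment:

    import sys
    from collections import Counter

    # Rough awk equivalent of this whole program:
    #   awk '{ for (i = 1; i <= NF; i++) c[tolower($i)]++ }
    #        END { for (w in c) print w, c[w] }'
    counts = Counter()
    for line in sys.stdin:  # stream line by line; memory scales with
        counts.update(line.lower().split())  # distinct words, not file size

    for word, n in counts.most_common():
        print(word, n)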
Regarding your results: is the 3.55 under awk with or without -b?
I get 1.774s (simple) and 1.136s (optimized) for Python. For simple awk, I get 2.552s (without -b) and 1.537s (with -b). For optimized, I get 2.091s and 1.435s respectively. I’m using gawk here; mawk is of course faster.
Also, I’ve noticed that awk does poorly when there is a large number of dictionary keys. If you are doing field-based decisions, awk is likely to be much faster. I tried printing the first field of each line (I removed empty lines from your test file, since line.split()[0] gives an error for empty lines). I got 0.583s for Python compared to 0.176s (without -b) and 0.158s (with -b).
Same here. If you are making extensive use of arrays, then AWK may not be the best tool.
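For reference, the Python side of that first-field test needs an explicit guard for empty lines; a sketch of the idea:

    import sys

    # The awk side is just: awk '{ print $1 }'
    # In Python, line.split()[0] raises IndexError on empty lines,
    # hence the explicit guard below.
    for line in sys.stdin:
        fields = line.split()
        if fields:
            print(fields[0])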
I dunno, I think it’s pretty fun.
I am consistently surprised that there aren’t more tools that support AWK-style “record oriented programming” (since a record need not be a line, if you change the record separator). I found this for Go, but that’s about it. This style of data interpretation comes up pretty often in my experience. I feel like as great as AWK is, we could do better - for example, what about something like AWK that can read directly from CSV (with proper support for quoting), assigning each row to a record, and perhaps with more natural support for headers.
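Python’s standard csv module can approximate that “CSV rows as records, with header-named fields” idea, though without AWK’s terseness. A rough sketch - the status/amount column names are invented for illustration:

    import csv
    import sys

    # Each CSV row becomes a record; DictReader handles quoting and
    # exposes fields by header name instead of $1, $2, ...
    total = 0.0
    for rec in csv.DictReader(sys.stdin):
        if rec["status"] == "ok":          # the "pattern" half
            total += float(rec["amount"])  # the "action" half
    print(total)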
You are right. Recently I was mixing AWK and Python in such a way that AWK produced key,value output that was easy to read and process later with a Python script. Nice, simple, and quick to develop.
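That division of labor might look something like the following hypothetical pipeline (not the original scripts; the awk field numbers and file name are made up):

    import sys
    from collections import defaultdict

    # Reads "key,value" lines, e.g. from a pipeline like:
    #   awk -F'\t' '{ print $2 "," $5 }' access.log | python3 aggregate.py
    sums = defaultdict(float)
    for line in sys.stdin:
        if not line.strip():
            continue  # skip blank lines
        key, _, value = line.rstrip("\n").partition(",")
        sums[key] += float(value)

    for key in sorted(sums):
        print(f"{key},{sums[key]}")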
AWK is one of the most important languages in my toolbox. When I have to process a lot of log files, there is no better tool. Unfortunately, younger SDEs are not very familiar with this language.
Fantastic! I wish I had known about it before starting work on a UWP app project last year where we used a lot of async - it resulted in mysterious crashes :)
As an Amiga owner and user, I can only call this ST nonsense masochism… joking.
It does its job as well as it did back in the day, so why change it?
Worst case (the floppy or the machine dies): he probably has a decent backup policy for these floppies, and the machine can be emulated well on inferior, unreliable, but commonly available newer hardware until it gets fixed.
I think, near the end of the video, the owner said he had replaced the original floppy drive with a USB one?
Amazing.
Yep. My hardware retrocomputing days are behind me (emulators do 99% of what I want, and with kids and pets, large amounts of rare hardware is asking for trouble), but the amount of enthusiast hardware out there to bring these machines into the 21st century is amazing.
There’s been a big boom lately in building not just “connect to the modern world” stuff but actual “upgrades” that theoretically would’ve been at home thirty years ago. The Vampire upgrades for the Amiga 600, for example, are really nifty.
Vampire: I love the idea (an FPGA-based accelerator board), but I’m not a fan of the implementation (proprietary, closed, and inflexible).
These days there’s the PiStorm… I like that one because it’s open hardware, but I’m not a fan of the raspberry-pi-running-m68k-emu-as-a-linux-process approach. It can provide nowhere near a real m68k’s interrupt response time, and thus it is an aberration.
An open FPGA-based solution would be neat, but remains undone.
There is the Gotek floppy drive emulator, commonly used in old machines. He appears to be very familiar with computers in general (he mentions the Steem emulator, for example), so I wouldn’t be surprised if he had installed one in his machine.
Gotek is just some hardware for FlashFloppy, the open source floppy emulation software.
I like to put emphasis on FlashFloppy, rather than the hardware.
Note that Gotek does indeed ship some firmware that works at some level. I wouldn’t want to use that.
Replacing the floppy drive with a FlashFloppy device basically neuters a machine, making it unable to access actual floppies. But there are uses for it. I use mine as DF1: (external drive) on my Amiga computers. On the A500 I can conveniently switch whether the FlashFloppy is DF0: or DF1: at will, thanks to a small board that mounts under the relevant CIA chip.
I guess it’s a matter of a proper cable and a drive that properly recognizes its drive order (doesn’t ignore the select wires), not FlashFloppy/Gotek itself?
The Amiga can select a floppy out of 4 drives. AIUI it’s a simple enable on 4 different wires.
SEL0/1/2/3 are GPIOs on one of the CIAs. Getting between the socket and the CIA is ideal for messing with these.
https://www.engadget.com/2015-06-14-amiga-controls-school-district-hvac.html
It does its job better than present-day hardware. He mentioned emulators and how he doesn’t bother transferring the application to a modern machine, because it would just add a bunch of wait time for system startup and such things.
It is a popular scam in Poland too. We have the OLX marketplace with built-in messaging, but these scammers always use WhatsApp to communicate with the seller. One more argument not to use it :) WhatsApp, of course.
DJGPP was my entrance to the world beyond 64 KB in DOS. What wonderful times :)
Switch/case pattern matching would solve this issue in a more elegant way, as was done in e.g. C#. The learning curve of modern C++ gets steeper with every new version released, which is quite sad.
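For readers who haven’t seen the style, this is roughly what is meant - sketched here in Python 3.10+ match/case syntax rather than C#, since the shape of the feature is similar:

    # A destructuring switch/case: each case tests the shape of the value
    # and binds its parts at the same time, replacing if/else chains.
    def area(shape):
        match shape:
            case ("circle", r):
                return 3.14159 * r * r
            case ("rect", w, h):
                return w * h
            case _:
                raise ValueError(f"unknown shape: {shape!r}")

    print(area(("circle", 2.0)))  # 12.56636
    print(area(("rect", 3, 4)))   # 12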
Perhaps RISC-V will gain more momentum if NVIDIA buys ARM?
<conspiracy_theory_mode=on>Who was insane enough to put JS and a browser stack in a spaceship? That has no right to work correctly! Everything must have been recorded beforehand!<conspiracy_theory_mode=off> ;)
Any non-maintained system will fall apart anyway, so COBOL is not the issue here. Rewriting it in Java, .NET, or any fancy new environment will cause the same problem in the future - maybe even sooner, as these languages are evolving much faster than COBOL.
Honestly, after over 13 years of programming in C++, I’m fed up. A year or two ago I began to feel no joy in programming in this language. All these changes are fine and needed, but due to compatibility with older versions, the syntax and rules are vague. I just need to find more motivation to learn something else. I’m tempted by F# and Go. Rust, with all these poor evangelists and a still-unstable API, is out of the question.
Did you mean to say “stable ABI”? The Rust language and standard library API have been fully backwards compatible since 1.0.
I mean that there was a story @lobsters quite recently about some tool where they had to dig into ~100 crates to find the most recent version of Rust able to compile everything. So by stable ABI and API I mean just that: when I want to write some simple tool that has dependencies on other libraries, I don’t want to spend time looking for the correct Rust version to satisfy all the dependencies. I will try to find that story.
[edit] Found: https://blog.debiania.in.ua/posts/2020-03-01-rust-dependencies-scare.html
Thanks for finding the link, I think it points to some terminological difference here!
“Latest stable” is always going to work as a correct Rust version, as Rust is backwards compatible. It’s always possible to compile older code with a newer compiler. The API is stable.
What is challenging is compiling newer code with an older compiler. This is a problem in any language, but it is more pronounced in Rust for two reasons. First, Rust has a short six-week release cycle. Second, there’s currently no tooling for specifying the minimal required Rust version. This is indeed perceived by many as a big problem (although others argue that an “evergreen compiler” is actually an advantage), but I think “lack of a stable API” is the wrong name for the problem.
You named the issue better than me - thank you!
I hate to admit it, but one of the most stable GUI APIs is provided by MS Windows. It is still possible today to run applications written 25+ years ago. It’s definitely not the best GUI library in the world :)
BeOS was my primary operating system for a couple of years (I even bought the Professional Edition…I might still have the box somewhere). Did my research and built a box that only had supported hardware - dual Celeron processors, 17” monitor at 1024x768, and some relatively large disk for the time.
It was great.
It was - very fast, very fun.
Out of interest, what did you use it for?
I remember downloading it and playing around with it (maybe it was small enough to boot from a floppy?), but I couldn’t do anything useful with it. I was a bit too young as well; I guess today I could do better with unfamiliar stuff.
It was my daily driver. 99% of my work at the time involved being connected to a remote device (routers and firewalls mostly), and BeOS could do that just fine.
It was a great system. There hasn’t been a better one since.
I had a triple-boot machine - Windows/Linux/BeOS - at the time. I used BeOS mainly to learn C++ programming. Their GUI toolkit was quite nice for its day - much nicer than MFC :)
Was it the Abit BP-6? I had two of those as well, for BeOS. Loved them almost as much as I loved a real BeBox. Way faster, too :-)
Nah, all self-built, back when building your own machine could actually be significantly cheaper than buying a prebuilt one.
the bp-6 is a motherboard. I hope that counts as self-built :-)
Ah, my bad. I don’t remember the motherboard; this was 20 years ago. Sadly, I haven’t built my own since…probably 2002? I’m so out of the loop it’s not even funny.
(Unless you count putting a Raspberry Pi in a little plastic case as “building your own machine”. If so, then…it’s still been a few years.)
oh that’s quite OK. The BP-6 was quite famous in that era for allowing SMP with celerons that were built to disallow it. It was quite a popular choice for x86 BeOS at the time.
Always a self-assembled desktop PC. Currently a 2-year-old AMD Ryzen 1600 / 16GB RAM / 240GB SSD + 480GB M.2 SSD / AMD Radeon 550.
what mobo are you using?
At the time I found one modest board that was missing all the ‘gaming’ blinky lights - the MSI B350 PC Mate. After two years I’m quite satisfied - the system is rock solid with a bit of RAM/CPU overclocking.
(Seriously asking - I’m a C++ newbie.)
What’s not to like about Qt?
I think the fact it is popular works against it slightly, as it’s old enough and established enough not to have a cool factor.
It’s built in C++, which, as another commenter pointed out, makes it trickier to write language bindings. They do exist, however - the Python bindings being decent, I hear.
I think it’s hard to find reasons not to use it. It has been used for PDAs (remember those?), phones, and car dashboard applications - and an entire desktop environment with all its applications - so it scales up and down a long way.
I see the complaint appearing that it makes a large binary. That’s pretty much a C++ issue - and its effects (runtime linking speed, size on disk) don’t matter for most applications. When you’re at KDE scale those issues become important, but for most use cases they are a non-issue.
Qt is not a lightweight dependency. It is massive.
It’s a huge amount of code, and in some contexts, like compiling to WebAssembly or AppImage, it will result in a huge executable, which might cause problems, like long download times in a web application.
There is a high learning curve due to the size and complexity of Qt. Qt has reinvented all of the wheels. Like, it replaces the C++ standard library with idiosyncratic Qt replacements that you need to learn. Eg, std::string is replaced by QString. See also, code bloat.
Many C++ developers prefer to use a collection of small libraries with minimal dependencies. Each library solves one problem, and for each library, you get to pick the best of breed, the best tool for your job. Qt seemingly attempts to replace the entire C++ library ecosystem.
Qt began development in 1991 (according to Wikipedia), when the STL was not yet available (1994, according to Wikipedia). That may answer the question of why it is so huge and bloated. I guess if it were started today, with C++11/14/17 available, it would look quite different.
Again, please forgive my newbie question - doesn’t everything do this? When I worked with MFC in the ’90s it did, and I know Boost does, and as you say so does Qt…
No, not all C++ libraries are as huge and multi-faceted as Qt, Boost, and MFC. The vast majority of the C++ libraries that you will find on GitHub are in fact much smaller, much simpler, and much more specialized. This is natural, because most C++ libraries are created and maintained by a single author to solve a specific problem.
In a previous response, @drs said: “My preference is smaller and lighter toolkits that have a minimal set of dependencies or can be statically linked into a small (less than 2 megabyte) executable”. I have the same preference. Static linking and executable size become important considerations if you need to ship your application in a self-contained format that contains all of your dependencies. It’s particularly important if you are compiling your code into WebAssembly.
Since I’m writing a GPU-centric application, I chose GLFW to create windows and manage keyboard/mouse input, OpenGL for rendering 2D and 3D graphics, and ImGui to create GUI widgets. Note that these are three separate libraries, which are developed independently of each other. GLFW and ImGui allow you to replace OpenGL with other graphics APIs, and ImGui allows you to replace GLFW with other window/io APIs. This is quite different from how giant monolithic GUI libraries work.
Thanks very much for explaining your choices and preferences, super useful background! I hadn’t heard of GLFW. Interesting that it’s a part of the OpenGL family.
With 4GB RAM and a SATA port, these little beasts could handle casual daily work without any issues - web browsing, LibreOffice/Google Docs, some games (via Android).
The NanoPC-T4 has exactly this, although no SATA - it is replaced with an M.2 slot.
So even better: with an M.2 to PCIe 4x riser you could get an RTX 2080 on board :)
Haha, you could! I wonder if there are ARM drivers; this could be an interesting project!
Adding a new theme to a well-known system doesn’t make it a new OS. Are there any specific goals or features that will distinguish OS108 from other Linux/*BSD distributions? All of them are secure and open source. After looking at the website, I wasn’t even interested in trying the ISO image in a VM. This is quite sad.
So, after many years, a web browser from Microsoft may be available on platforms other than MS Windows.
I wasn’t exactly holding my breath.
Having recently imported a Trabant, I’m fascinated by the people of the DDR’s need to do more with less.
One of my curiosities, being also interested in early computers, is what computing was like in the DDR.
EDIT: Actually, I should correct myself. Not so much “do more with less,” as “getting by with less.”
If it was like communist Poland, then home computing was mostly done on computers smuggled in from the UK and US.
That’s interesting. I can look up systems created in the DDR and USSR, but what did people actually have access to? That’s a different question. I hadn’t even thought of smuggling in western computers.
In Poland there were special shops (called Pewex and Baltona) where you could buy Western goods for hard currency. Computers like the Atari XL/XE or the Commodore 64 were offered there. Additionally, there were flea markets and specialized computer markets, organized once a week (usually in schools), where regular people sold and bought computers, equipment, and (non-genuine) software for local currency (złoty). This hardware and software was usually smuggled in with the help of family in the West, or by sailors. Sometimes people simply got permission and a passport to travel to Western countries and bought computers with the money they had saved. I read somewhere that someone smuggled a computer in a freezer. Selling such a computer could bring in enough money to live on for half a year or a whole year. There were official (state) companies that built and sold computers, but it was very hard for regular people to buy them. Their products were usually exported to other countries, and only the items that didn’t meet quality standards were sold on the local market. Sometimes a regular person could buy a computer or some electronic equipment in the special shops for scouts (Składnica Harcerska).
I remember that my father obtained a second-hand Timex 1000 (a clone of the Sinclair ZX81) with a 16 KB expansion pack in the mid-’80s - I guess it cost a lot of money at the time. It was surely the US version, as there was a special transformer attached to its power supply. Later (around 1989-1990) they bought my brother and me an Atari 65XE. A colleague of mine had family in West Germany and thus got an Amstrad/Schneider CPC 464 - actually not a very popular machine in Poland at the time.