CGI scripts have fallen out of favor because they are insecure by default.
FTFY ;)
Seriously though, CGI scripts even fell out of favor for tasks where performance wasn’t a significant constraint. That should tell us that performance shouldn’t be listed first in any summary of their downfall.
There are some inherent security flaws in the HTTP abstraction provided by CGI.
The most recent example I remember is how HTTP request headers are normalized and presented to the CGI binary using underscores instead of dashes. This allows a cross-origin attacker to supply otherwise forbidden headers by sending them with underscores instead of dashes in the header names, bypassing CORS restrictions altogether.
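For illustration, here is a minimal Python sketch of the header-to-environment-variable mapping CGI uses (roughly what RFC 3875 prescribes); the header names are made up:

    # CGI meta-variable mapping: upper-case the header name, turn dashes into
    # underscores, and prefix with HTTP_.
    def cgi_env_name(header_name: str) -> str:
        return "HTTP_" + header_name.upper().replace("-", "_")

    # Two different wire headers collapse to the same environment variable, so a
    # CGI script reading os.environ cannot tell which one the client actually sent.
    print(cgi_env_name("X-Custom-Auth"))  # HTTP_X_CUSTOM_AUTH
    print(cgi_env_name("X_Custom_Auth"))  # HTTP_X_CUSTOM_AUTH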
Best stay away from CGI.
Now that does speak for (against) CGI itself. Thank you for the insight. Likely I am inadvertently making the mistake of lumping CGI and FastCGI together and thinking of the latter as just a sped-up version of the former.
Do you know if FastCGI has flaws similar to the ones you’ve mentioned about CGI?
Yes, all CGI variants do the same kind of header normalization. I never considered this specific thing before; I think it actually is a legitimate worry! Perhaps CGI should be amended to … at least reject headers with underscores.
I don’t know how recent that was discovered, but I just now tried it with Apache (latest version) and it failed. Normally, Accept-Encoding will be turned into HTTP_ACCEPT_ENCODING, so I tried Accept_Encoding and HTTP_Accept_Encoding. Both failed, so that vector might not be an issue. I also sent a completely bogus Accept-Encoding header and yes, that got through intact, so how you parse the headers might still be an issue.
CGI is no worse than PHP for security.
I think CGI lost mindshare because of its close association with Perl. For non-CGI web stuff, PHP was in a better position to work with non-Apache web servers than mod_perl. Python and Ruby were able to learn from Perl’s mistakes, so they both avoided the horrible choice between tight coupling to a particular web server, or slow process startup for every request.
I understand you are joking when saying CGI scripts are insecure by default, but I don’t understand: where does this sentiment come from? In one way, they are intrinsically safer, because each request is handled in the privacy of its own process and memory space.
Matt’s Script Archive
CGI’s use of environment variables encourages a half-arsed approach to parsing untrusted input. Naïve programmers are less likely to step on every rake in the grass if they are using a proper HTTP parser library, which is more likely in a non-CGI server.
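As a small, hypothetical illustration of that temptation, in Python (standard library only): hand-rolling the QUERY_STRING parse versus handing it to a real parser.

    import os
    from urllib.parse import parse_qs

    query = os.environ.get("QUERY_STRING", "")

    # The tempting hand-rolled version: ignores percent-encoding, repeated keys,
    # missing values, and '+' versus space.
    naive = dict(pair.split("=", 1) for pair in query.split("&") if "=" in pair)

    # A proper parser handles decoding and repeated parameters for you.
    parsed = parse_qs(query, keep_blank_values=True)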
I see it rather as a popularity issue that attracts the aforementioned naïve programmers. Aren’t we seeing a similar pattern with PHP, then Python, now Node.js?
Naïve programmers can run into less trouble if the popular tools guide them along a safer path. I prefer to think of their mistakes as symptoms rather than causes.
PHP was also a security trash fire, but they changed the defaults to be less unsafe, and it’s much better now. (Although Wordpress is its own unique collection of vulnerabilities.)
I don’t think Python ever had a reputation for being as dangerous.
Node and npm, dunno, their security problems seem to be more to do with an ecosystem that’s many orders of magnitude bigger and busier than Perl and PHP were 20 or 25 years ago. Weird that I hear more about npm supply chain attacks than node.js RCE vulnerabilities.
PHP did some spectacularly stupid things back in the day. The two that I remember most are:
There was a magic globals option that turned every parameter in the request into a global. This was great for simple scripts because you just referred to things as globals, but it also let an attacker clobber other things.
It encouraged creating SQL via string concatenation and had a magic quotes option that would try to automatically escape things in the right way; this worked well for non-malicious inputs but was very easy for an attacker to bypass (the point where the quotes were inserted didn’t really know which inputs came from where).
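The second point is easier to see next to the fix. A minimal sketch, in Python rather than PHP, with sqlite3 standing in for the database and a made-up table: escaping bolted on at the input boundary has to guess the query context, whereas a parameterized query never mixes data into the SQL at all.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

    name = "x' OR '1'='1"  # attacker-controlled request parameter

    # String concatenation, the style the comment describes: one missed or
    # mis-applied escape and the input becomes part of the SQL.
    leaked = conn.execute(
        "SELECT secret FROM users WHERE name = '" + name + "'").fetchall()
    print(leaked)   # [('hunter2',)] -- every secret comes back

    # Parameterized query: the driver keeps data and SQL separate.
    safe = conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
    print(safe)     # []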
By default, in the early 2000s, the blast radius of a compromised CGI program was anything the web process could touch. There were a lot of ways to limit the blast radius but you had to opt into each one and hope you didn’t make any configuration mistakes. Those were easy to make, particularly because the tools of the day encouraged implicit type coercion from the env var strings. Recipes for dropping privileges were hard to come by and blunt. It usually consisted of dropping into a user with nearly no privileges but then later someone would fix a bug by modifying the user in an insecure fashion. e.g. I once found a compromised system where one mitigation was to set the CGI user’s shell to something like
/bin/falsebut someone had been frustrated with an error message and gave the user a shell in a desperate moment of debugging. Bad practices all around, but those were common.A lot of people here are comparing CGI scripts to PHP. IME CGI scripts were replaced by mod_perl, mod_python, wsgi, etc. PHP felt comfortable with it’s circa-v4 insecure defaults because CGI scripts and everything else was insecure by default.
I think a secure-by-default CGI web server could do quite well today. Particularly if it launched everything in bwrap and introduced something like JSON-CGI (or similar) so not everything was string-typed.
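Purely to sketch what “not everything string-typed” could mean (this JSON-CGI format is hypothetical, not an existing spec), the server could hand the child one parsed, typed request document on stdin instead of a pile of environment variables:

    import json
    import sys

    # Hypothetical "JSON-CGI" request envelope; every field name here is made up.
    request = {
        "method": "GET",
        "path": "/search",
        "query": {"q": ["cgi"], "page": [2]},            # already parsed, real types
        "headers": {"accept-encoding": ["gzip", "br"]},  # lists, so duplicates survive
        "remote_addr": "192.0.2.10",
        "content_length": 0,
    }
    json.dump(request, sys.stdout, indent=2)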
https://www.w3.org/Security/Faq/wwwsf4.html Q1: What’s the problem with CGI scripts?
“The problem with CGI scripts is that each one presents yet another opportunity for exploitable bugs.”
Sounds scary at first but to be fair that’s true for most public facing software.
So IMO mostly FUD.
It’s a problem compared to what, though? To no CGI? To a custom server extension? To a custom server?
Just to put it in perspective. I don’t know when that particular Q&A was written, but the linked document goes back to 1997. PHP 3 came out in 1998. J2EE in 1999. ASP.NET in 2002. I actually don’t know, and am quite curious: how did people do web programming besides CGI?
Custom server extensions, like this one, which I originally wrote in 1999 and which is still available today.
I thought of Apache httpd mod-extensions, but I wasn’t sure how common it was to write a bespoke mod for a very specific need. Nice!
I don’t know the exact timeline, but the AOL webserver ran an embedded TCL interpreter. And Paul Graham famously wrote Viaweb in Common Lisp (probably with a custom server in that language).
And there was also ColdFusion around in 1995.
Still is, in the form of Lucee
Ah, I remember hearing about ColdFusion around that time, but I have never touched it nor seen it in action. I didn’t know about the others, thank you.
That also reminds me about MS IIS and FrontPage. Was IIS extensible? Was FrontPage merely a WYSIWYG editor akin to Dreamweaver?
ColdFusion, for some incredibly bizarre reasons, ended up in the control systems for some missile platforms. I have no idea who thought that was a good idea but the EOL caused a lot of problems for the US and a few allied militaries.
IIRC FrontPage included some server components but it could be used as just an authoring tool.
AOLServer / NaviServer is really old, possibly the oldest web server that is still maintained. (Depending on whether you count Apache’s NCSA HTTPD prehistory, I guess.)
https://en.m.wikipedia.org/wiki/AOLserver
https://en.m.wikipedia.org/wiki/NaviServer
I switched away from CGI scripts entirely for performance reasons: PHP (and then later Python via things like mod_python) avoided the overhead of starting a new process on every incoming request.
Maybe 20 years ago I wasn’t thinking so hard about security, but the impression I had back then was that CGI scripts that had security flaws in them were insecure - same as for PHP.
Regarding performance, removing the overhead of starting a new process was one of the reasons FastCGI was created.
I believe CGI and FastCGI still have their place to this day. You can do dynamic web logic from virtually any programming language without the hassle of doing networking and HTTP handling (to an extent). It’s a nice glue gateway for pasting existing web-unaware software onto the Internet. Take a text-based programme, sprinkle it with “Content-Type: text/plain\n\n”, wrap it in slowcgi(8), and plug it into one of the numerous FastCGI-enabled web servers out there. Done!*
* YMMV
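A minimal sketch of that glue in Python (the wrapped command, fortune, is just an example; any program that writes text to stdout would do):

    #!/usr/bin/env python3
    # Minimal CGI wrapper around an existing, web-unaware text program.
    import subprocess
    import sys

    print("Content-Type: text/plain")
    print()               # blank line terminates the CGI response headers
    sys.stdout.flush()    # make sure headers reach the server before the child's output
    subprocess.run(["fortune"])   # child inherits stdout; its text becomes the body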
I have the same recollection of those years; I can’t remember anyone arguing for FastCGI or reverse proxying to replace CGI for non-performance reasons. The performance argument was mostly about application boot time.
Only to return as AWS Lambda.
Rottnest Island, and then flying back to the USA.
Is that the island where the Quokkas live?
I see the dependency on Linux (even the name!) as unfortunate.
What are your main concerns? What would you do differently?
I’d care about supporting non-Linux OSs, particularly the OSS ones, and avoid including Linux in the project name.
Patch or gtfo then :)
Cross-OS for its own sake is fair enough I suppose, though I’d still be interested to know what about Linux you think is unfortunate as the OS of choice for this project, or why you think another would be better.
My take on this is: it’s a piece of software designed for industrial control with hard real-time requirements. As one of my mentors liked to say, a mill or a lathe isn’t the kind of equipment that just hurts you, it rips your arm off and beats you to death with it. I’m glad that they’re limiting their scope to a single kernel. The last article I read about achieving hard real-time on Linux wasn’t exactly lacking in nuance or pitfalls. Add more supported kernels and you multiply the chances that you introduce a bug, miss timing, and destroy the work/machine/operator.
I’d also like to point out that they don’t owe anyone cross-OS support. Including Linux in the name actually emphasizes their right to build what they want. The creators set out to create a CNC machine controller for Linux. If you want support for another OS, the source is GPLv2 :)
I’ll say as a start I don’t think there’s anything terribly wrong with doing CNC with Linux.
Yet my suspicion is that, despite the name, there’s technically not much tying this project to Linux specifically.
Which makes the name truly unfortunate.
A brief look at the project reveals that they ship Linux kernel modules for controlling hardware via the supported interfaces, and that for any use with real hardware a Linux kernel with the -rt patchset is needed on the controlling computer. This surely makes moving to another kernel quite a large effort, as the realtime drivers would need to be ported. And the recommended installation method is a Debian-derivative GNU/Linux distribution.
So I would expect getting a good enough port to be a large undertaking, with benefits of using another OS inside that black box even harder to realise because the number of users with hardware access matters for testing.
So it is necessarily an ugly design, then: as we know, it has to resort to kernel modules because Linux itself is ill-suited to abstracting the hardware and letting the heavy lifting happen in user space. Noted.
Maybe the name is not wrong, in hindsight.
Sure, they do have a userspace implementation, and apparently recommend the standard Preempt-RT patchset with user-space control implementation for some cases. It’s just that for many cases RTAI kernel and a kernel-space driver yield lower latency. Indeed, Linux is not a microkernel, so context switches have some costs.
Sure, making something realtime on top of a general-purpose non-realtime OS has its drawbacks, and the best latency will need some amount of special-case work. But it improves the chances of reusing some computer that is already around.
User-space real-time API-wise they claim to support Preempt-RT, RTAI and Xenomai kernels (all Linux-based).
Such machines rely on hard realtime controls, which are very often implemented on microcontrollers. Latency is a huge constraint of the project. The project itself is very old, dating back to the mid ’90s: there wasn’t a lot of CPU power back then (especially on the ARM targets), and there was less knowledge and development around achieving hard real-time.
Are you aware of an open effort using microcontrollers?
GRBL, I guess? Somewhat limited and recently inactive, but apparently usable for many use cases.
There seems to be a maintained fork. There’s not much in terms of changes, but I suspect that’s because it reached a “just works” point.
No strange latency surprises to be had with these microcontrollers.
Yes, I meant this repository as the main development line, and it doesn’t seem to have firmly decided to never get to 5-axis support, so there is a clearly stated missing feature with respect to which it could be active but isn’t. Doesn’t matter for use cases where it fits, of course.
The project wiki (section 4) describes why they chose not to use an external microcontroller as a motion controller.
It also says there was a hard fork which adds support for a particular external microcontroller.
I see… ultimately they do like their scope and meters, on their computer.
I’m unfamiliar with the real-time / CNC space. What other non-GPL/OSS kernel systems have support for the kind of real-time performance required by these machines?
As a reminder, Linux isn’t exactly awesome at real time, and it’s particularly awful without the rt patchset (and the project seems to work without that), so the requirements can’t be that strict, and there should be plenty of systems that meet them.
In the system requirements documentation:
So non-RT Linux allows doing a subset of the work that doesn’t involve driving hardware.
Are you coming at it from a principled perspective or a practical one?
From a practical point of view, several of the newer commercial control systems, like Heidenhain and Siemens and probably more I can’t remember, have a Linux base for the user interface.
Both work fine in daily use at the day job.
And is Windows really a better option? I know Datron’s Next control is lovely and easy to use on the commercial side. Others include Centroid, UCCNC, Kflop, Planet CNC and Mach3 & 4.
All require specific electronics anyway that usually does the heavy lifting in serious (non hobby) use.
From a principled point of view, I’d like to hear more.
Lots of nice new stuff!
Nothing IT related at all.
Taking apart and cleaning up a horizontal milling machine; it is quite filthy, so I expect to spend lots of time with soapy water, de-greaser and various cleaning implements.
After that, reassembly and checking squareness and alignment. I expect both to be quite horrible.
And perhaps using my lathe to create a holder for my lathe’s top-slide, to enable the use of the top-slide to bore or grind out the dramatic gouges in the spindle taper.
To be slightly more clear: I need to run the machine spindle without a tool mounted in the expected place, mount a different kind of tool, and traverse it diagonally rather than in line with the normal machine travels. Also, I’ll be cutting into the machine using the machine itself, with a boring-bar or a grinder.
An approximation on a very different kind of mill.
But realistically, getting cleanup and reassembly done would be enough to make me feel like I accomplished something worthwhile, because the assembled parts to be mounted weigh a few hundred kilos and require the use of a hydraulic engine/shop crane to lift.
The mill is without any manufacturers markings and looks like this.
I pay for newsblur.com
I don’t do backups currently.
I forgot about this post… It also took me over the 50 point limit, so thanks everyone!
xmonad as a window manager in XFCE, when I get to run something other than MS Windows. Which I need to do much too often.
A cleaner example than my own config.
Fantastically horrible! I love it.
Excel + VBA can do really horrible things if one puts one’s mind to it.
I am still confused how it is even accepted in the USA that a banking website allows you to do whatever to your account by means of some password people are likely to use in many places. Here in Belgium we also have online banking, but you are provided a card reader by your bank to use it.
In order to do anything online to your account, you need to have your card, the card reader and your card’s pin code. Logging in requires you to receive a challenge number from the bank’s website, you slide your card in the reader, enter the challenge number and your pin code in the reader and receive another number that you enter in the bank’s website. You repeat this process when making a payment or something similar.
I know, it’s more cumbersome. I do feel rather secure with it though.
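To sketch the idea (this is not the actual protocol the Belgian readers implement; it is a toy challenge-response in Python, with HMAC standing in for whatever the card really does):

    import hashlib
    import hmac
    import secrets

    card_key = secrets.token_bytes(32)   # provisioned on the chip card, never leaves it

    def bank_issues_challenge() -> str:
        # Shown on the bank's website, typed into the offline reader.
        return str(secrets.randbelow(10**8)).zfill(8)

    def reader_response(challenge: str, pin_ok: bool) -> str:
        # The reader only answers after the correct PIN unlocks the card.
        if not pin_ok:
            raise PermissionError("wrong PIN")
        digest = hmac.new(card_key, challenge.encode(), hashlib.sha256).digest()
        return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)

    challenge = bank_issues_challenge()
    print(reader_response(challenge, pin_ok=True))   # typed back into the website

The point the comment makes falls out of the sketch: a phished password is useless without the card, the reader, and the PIN in the same place.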
Some Swedish banks use a similar system.
Thanks @conroy for the invite!
The user-tree is interesting, I think.
Sounds like a nightmare.
Imagine that in the near future you were burgled without obvious signs of breaking and entering…
Tenant: But I locked my door when I left on Friday! I’m sure I did!
Insurance bureaucrat: But your lock, windows and door are all whole, and the lock-log doesn’t have a break-in alert logged; it just shows that you, sir, entered your apartment just before midnight on Saturday.
Tenant: But I was on a freakin' airplane at the time!
Insurance bureaucrat: Our records clearly show that you entered your home, and therefore we can’t pay you for this supposed burglary and the items you claim are missing.
Tenant: GAH!
Insurance bureaucrat: If you persist in claiming this was a burglary, we will have to sue you for fraud. And according to our terms of service you agreed not to mention any insurance claims in public, regardless of their outcome. If you do, it’s slander and we’ll sue.
Have a good day sir and good bye.
Beautiful! clever!
But I don’t think it is what most of the current Mac Pro owners want. Perhaps Apple doesn’t see them as the primary target market?
What do mac pro owners want?
My guess would be a zillion CPU and GPU cores and easily expandable fast storage. Seems to fit the bill. Six Thunderbolt ports make it even easier to add storage than internal hard drive trays.
Sure, there is plenty of expansion via Thunderbolt, but what about internal expansion cards, such as specific cards for video editing?
And while it has two GPUs, it has, as far as I can see in Apple’s images, only one CPU. And the written information also only mentions the processor in the singular.
Sure, you can get a PCI Express to Thunderbolt adapter in an external box. But that’s ugly and yet more money on top of a new machine. Or they can wait for their vendors to release specific Thunderbolt interfaces for their needs. Which will cost both money and time.
But since I am not the target market for the new Mac Pro, I obviously can be wrong.
And perhaps those that really need more GPU power and faster CPUs will buy them. But I think some will have to consider switching to PCs for their specific needs and will be vocal about their disappointment in Apple’s new Mac Pro.
There is also unclutter, which has been around for a while (since 1992 or so…) and does the same thing. This project has shorter and generally nicer code in my opinion, but unclutter has a man-page, probably has had most of its possible bugs removed, and is already available in many package repositories. Debian and Ubuntu make it just an apt-get install unclutter away.
If you read the readme, you’ll note that I wrote xbanish because unclutter doesn’t even work properly anymore.
Ah, sorry, I completely missed that. Oops…
Anyway I use unclutter like this:
    unclutter -idle 1 -root
This only shows my cursor when I move it, which is what I thought xbanish did and is what I want to happen.