1. 11

  2. 8

    Or use cut.

    # Only display 80 columns:
    grep whatever myfile | cut -c 1-80
    
    1. 9

      Ignoring the 2edgy schtick below, there is a good point to be made about learning the tools we have.

      I watched a pretty good developer write ~50-75 lines of C# to iterate through all files in a directory, open only those ending in .XML, and change <foo>bar</foo> to <foo>baz</foo>. This took him about 15 minutes, between reading the manual (was String.Replace needle, haystack or haystack, needle…), testing it, etc. He was stunned when I showed him that the Cygwin shell on his Windows box let him do this in literally one line.
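
      The one-liner itself isn’t shown here, but it was essentially of this shape (a sketch assuming GNU find and sed, since -i in-place editing is GNU-specific):

      # Rewrite <foo>bar</foo> in every .xml/.XML file in the current directory:
      find . -maxdepth 1 -iname '*.xml' -exec sed -i 's|<foo>bar</foo>|<foo>baz</foo>|g' {} +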

      I also remember being schooled by an old UNIX-head who, after seeing my hand-rolled Ruby based data transformer, proceeded to bang out one that was faster, more robust, and probably a tenth of the code with judicious use of paste, cut, tr and sed.

      There’s nothing wrong with reimplementing a tool for the purposes of learning, for sure - but I think lots of people could benefit from digging in to the tools we’ve got at hand and learning them well before writing something new.

      1. 3

        I went through a period of using the traditional Unix tools for as much as possible, and while I still have a soft spot for them, I’ve found myself not really using them much anymore for new projects, for a few reasons:

        1. The base set of standard and quasi-standard tools has some really awkward omissions. Something like Google’s crush-tools can fill some of the holes, but the base Unix set doesn’t really come “batteries included”.

        2. Too much data copying through pipes can sometimes cause poor performance (though it depends a lot on what you’re doing). You can sometimes spend all your time just pushing terabytes of data through pipes. This is exacerbated if you end up having to make diamond-shaped flows in your pipeline with tee and paste/join to get the desired semantics (see the sketch after this list), or else have to insert sort and reverse in various places to munge data in the way a subsequent call wants. This is especially common if you don’t use something like crush-tools and insist on sticking to base POSIX functionality, but it sometimes ends up necessary even with the addition of niceties like funiq.

        3. Base performance is often not great, unless you’re comparing to something really slow. I’ve even found Perl to be faster than the awk/sed/tr combination for many things, and Perl isn’t really a speed king. Also, performance varies hugely between implementations and platforms, which adds another annoyance if you run stuff on more than one platform: your stuff can speed up or slow down by 10x because you moved from GNU/Linux to FreeBSD or vice versa. This is all especially bad if you want Unicode support. If you don’t, the LC_ALL=C hack (also in the sketch below) can often make things more reasonable.

        4. Entirely subjective, but once you get past one-liners, I find the scripts and pipeline chains more difficult to read and maintain than a program where you get actual data structures. I have to have comments everywhere reminding myself that at a particular point in the pipeline, the data is sorted numerically by column 3, and by the way what’s in columns 1, 2, and 4 here anyway? When we paste some things that had to be teed for separate processing, are we sure that the two sides of the paste are sorted the same way so we’re not making Frankenstein records? Etc.

        Those are all of course less of a worry if you’re doing small-scale, one-off stuff though.
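
        To make the diamond-flow and LC_ALL=C points concrete, a minimal sketch (bash-specific process substitution; the file name and column numbers are invented, and the input is read twice rather than tee’d, for simplicity):

        # Diamond-shaped flow: split the input, transform each branch, rejoin.
        # Both branches must emit rows in the same order, or paste quietly
        # builds Frankenstein records.
        paste <(cut -f1 data.tsv | tr 'a-z' 'A-Z') <(cut -f3 data.tsv)

        # The LC_ALL=C hack: byte-wise collation is often much faster than a
        # locale-aware sort when Unicode-correct ordering doesn't matter.
        LC_ALL=C sort -k1,1 data.tsv > sorted.tsv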

        1. 2

          Indeed. That said, I believe using Cygwin on Windows to do a regexp replace is no less an “anti-pattern” than using C#. Modern Windows ships with a robust shell (PowerShell) that can do such things easily. Additionally, for a good C# developer it gives access to the power of .NET they already know and (maybe) love.

          Installing a beast of a package to emulate an environment you are more comfortable with feels little better than using C#, and suffers from the same problem of “not learning the right tool for the job {on the platform}”. For a long time I fought learning PowerShell because I am an old grumpy graybeard, but in the end I realized how ridiculous I was being: that would be like refusing to learn paste, cut, tr and sed on Linux. Being willfully ignorant of the tools your platform provides is silliness. So I uninstalled my crutches on Windows and learned PowerShell, and now on a random Windows box I have console tools I can use with no installation needed.

          foreach ($file in (gci . *.xml -r)) { (gc $file.PSPath) | foreach { $_ -replace "<foo>bar</foo>", "<foo>baz</foo>" } | sc $file.PSPath }
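
          (For those who don’t read PowerShell: gci, gc, and sc are the default aliases for Get-ChildItem, Get-Content, and Set-Content, and the parentheses around gc force the whole file to be read before Set-Content rewrites it.)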

          1. 1

            Brilliant!

            I’ve dodged Windows work for a while, but when I finally have to bite that bullet, I’m happy knowing there’s a wonderfully expressive set of console tools there.

          2. 2

            “Digging in to the tools we’ve got at hand” is literally a never-ending task, and you’d never build anything if you stopped to do that before writing something new.

            I’m all for people getting familiar with the /usr/bin of their particular distribution, but let’s remember the significant variance of implementations of those “standard” tools between distributions… many developers like to idealize just how good/reliable/standard the unix toolset is.

            The thing I like about building things in shells / command-lines is that there’s absolutely nothing wrong with adding a handy utility like the OP’s to the mix. And we needn’t dismiss it as a “learning project” - it’s working software - kudos to the author.

          3. -10

            I was just about to write it. I can’t +1 your comment more. ^^ To completely replace “sll”, use:

            cut -c 1-1024

            Are people really this retarded not to come up with just using the tools provided by the system? Where have we gone? …

            EDIT: I think this is just a troll. Let’s just move on. :P

            1. 11

              kb is right that your tone is abusive and dismissive, even though your comment does contain a useful, if redundant with jlarocco’s comment, implementation of sll. Your comments will be more welcome in the future if you leave out the personal abuse and name-calling (“Are people really this retarded”, “dirty bunch”, “Cultural Marxist”, “SJW’s”) and indirect personal abuse by implication (“Where have we gone?”, “Crying like a sissy”, “the glibc-devs…are scared of simple…interfaces”) and dismissal of people’s learning process as “just a troll”.

              Please don’t write such comments here any more. Instead, write better comments that don’t include personal abuse.

              1. 8

                Good day FRIGN.

                I’m glad to see /u/sin invited you to lobste.rs. I’ve gotten a lot of value out of suckless.org, and I appreciate being in the same community as a member of that site. Given how productive I’ve found suckless.org to be, I’m surprised by how unproductive your recent comments here are.

                I’ve had the experience of searching for a piece of software, only to find some projects that suck so badly they are fully expressed as a configuration switch somewhere else in my stack. Why would someone waste so much time writing useless garbage like that? Well, even I write useless garbage, and doing so was an incredible learning experience, on par with what I’ve learned browsing suckless.

                You and I are both well aware that all software sucks, and all hardware sucks. That’s not what we’re here to debate. Please improve the quality of your comments and make this site better through your participation; I’d love for you to keep contributing here with us.

                1. 6

                  Further, please don’t use the word “retarded”, it’s offensive to many people, including me. Your tone and the words you choose are not fostering a welcome community or advancing the discussion.

                  1. -10

                    Live with it.

                  2. 5

                    Hey, no, it’s not a troll. I just recognized a problem and wrote a solution for it. I added a section to the README where you can add your own implementation: https://github.com/kevinburke/sll#other-ways-to-implement-this

                    Don’t forget to call it “pointless” in your PR

                2. 4

                  The ReadLine() method in Go’s bufio package is actually probably not what you want. Not only is the API awkward (it can hand back partial lines that you have to stitch together via its isPrefix flag), but it allocates. Your code will probably be faster and simpler if you use a bufio.Scanner and then slice the resulting output.
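
                  A minimal sketch of that suggestion (the 1024-byte cutoff is borrowed from the cut/egrep examples in this thread, not from sll itself, and the truncation is byte-wise, so it can split multi-byte runes):

                  package main

                  import (
                      "bufio"
                      "fmt"
                      "os"
                  )

                  func main() {
                      in := bufio.NewScanner(os.Stdin) // no isPrefix bookkeeping, unlike ReadLine
                      out := bufio.NewWriter(os.Stdout)
                      defer out.Flush()
                      for in.Scan() {
                          line := in.Bytes() // a view into the scanner's buffer, no per-line copy
                          if len(line) > 1024 {
                              line = line[:1024] // slice instead of rebuilding the line
                          }
                          out.Write(line)
                          out.WriteByte('\n')
                      }
                      // Note: Scanner rejects lines longer than bufio.MaxScanTokenSize (64 KiB)
                      // unless the limit is raised with in.Buffer(...).
                      if err := in.Err(); err != nil {
                          fmt.Fprintln(os.Stderr, err)
                          os.Exit(1)
                      }
                  }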

                  1. 3

                    egrep -v '.{1024}'

                    1. 1

                      What’s the advantage to using this over a simple script written in something like awk?

                      1. 3

                        none really, thought it would be a fun exercise, & easier to reason about what i’m doing than an awk script :)
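
                        For the record, the awk equivalent would presumably be something like:

                        # Keep only lines shorter than 1024 characters (same filter as the egrep above):
                        awk 'length($0) < 1024'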