1. 1

      Forgot to read your write-up when you posted it originally. Thanks for the reminder, it’s great stuff.

    1. 1

      Sounds like a lot more cding than I do. I tend to cd only once per shell session to a project directory and then run my editor, build tool, git and whatever else from there. I’m sure I spend a lot more time typing make and git than cd.

      1. 40

        Whenever I read tech articles about reducing keystrokes I tend to roll my eyes. cd’ing into directories already takes up a very small portion of my time, so optimizing it will never be worth the effort. Now if you can tell me how to make roadmap estimates that don’t put my team in peril, that would actually help me stop wasting time!

        Edit: It’s a cool tool, just maybe the article is touting it as more of a life saver than it actually is.

        1. 12

          I mean, I do too, but people do actually take this kind of thing seriously. I’ve had several people say they wouldn’t use ripgrep because the command was too long to type, but upon hearing that the actual command was rg, they were much more satisfied. Maybe I missed their facetiousness, but they didn’t appear to be joking…

          1. 5

            Could they not have just alias’d the command if it was “too long”?

            1. 4

              The people in question don’t sound clever enough for that.

              1. 1

                Are you asking me? Or them? ;-)

              2. 4

                I wonder if these are different people than the ones who complain about short unix command names and C function names…

              3. 9

                For those of us with RSI, these little savings add up, and can make for a pretty big difference in comfort while typing.

                1. 8

                  Oh please. If you were really worried about saving a couple of words and keystrokes, you’d set up directories and make aliases that take you specifically where you want to go. Even if you were using a GUI with a mouse, you’d still have to click through all the folders.

                  Overall, paying close attention to your workspace setup and ergonomics will go a lot further toward improving your RSI situation than this little jumper ever will.

                2. 4

                  My thoughts exactly. I have often wasted time trying to optimize something which took so little time to begin with that, even if I reduced the time to nothing, it would have no significant impact on overall performance. And the less-obvious trap is that optimizations like this add additional complexity, which leads to more time spent down the road.

                  1. 9

                    All right, buddy. Cool.

                    Did I say it was a “life saver”? Nope. Did I say it could save you a lot of time? Yup. If cd'ing into directories doesn’t waste your time, cool. Move along, read the next blog post on the list.

                    I’m sorry about your roadmap estimations. Sounds like you’ve got a lot on your chest there.

                    1. 31

                      Let me just take a step back and apologize—nobody likes negative comments on their work and I chose my words poorly and was insensitive. I’m rather burnt out and, in turn, that makes me appear more gruff online. I’m positive that someone will find this useful, especially if they’re managing multiple projects or similar use cases.

                      1. 23

                        I really appreciate you saying that. The whole point of this piece was to share something that literally makes me whistle to myself with joy every time I use it. I hope you find some time to take care of your burnout. It’s no joke and I’ve suffered from it quite a bit in the past three years myself. <3

                        I know it’s easy to look at everything as “this is just like X but not quite the way I like it” and I don’t blame you for having that reaction (like many here). AutoJump is to me the epitome of simple, delightful software that does something very simple in a humble way. I wish I had spent more time extolling the virtues of the simple weighted list of directories AutoJump stores in a text file and that ridiculously simple Bash implementation.

                        The focus on characters saved was a last-minute addition to quantify the claim in the title, which I still think will be beneficial to anyone who has any frustration with using cd often and suspects there is a better way.

                      2. 6

                        If only there was a way to optimize crank posting. So many keystrokes to complain!

                      3. 2

                        the parent tool is probably overkill but a simple zsh function to jump to marked projects with tab completion is pretty awesome to have.

                        alias j="jump "
                        export MARKPATH=$HOME/.marks

                        # Jump to a previously marked directory.
                        function jump {
                            cd -P "$MARKPATH/$1" 2>/dev/null || echo "No such mark: $1"
                        }

                        # Mark the current directory under a name: mark name_of_mark
                        function mark {
                            [ -n "$1" ] || { echo "usage: mark name_of_mark"; return 1; }
                            mkdir -p "$MARKPATH"; ln -s "$(pwd)" "$MARKPATH/$1"
                        }

                        function unmark {
                            rm -i "$MARKPATH/$1"
                        }

                        # If you need it on another OS:
                        #function marks {
                        #    ls -l "$MARKPATH" | sed 's/  / /g' | cut -d' ' -f9- | sed 's/ -/\t-/g' && echo
                        #}

                        # Fix for the above function on OS X.
                        function marks {
                            \ls -l "$MARKPATH" | tail -n +2 | sed 's/  / /g' | cut -d' ' -f9- | awk -F ' -> ' '{printf "%-10s -> %s\n", $1, $2}'
                        }

                        # Tab completion for jump/unmark: complete on the mark names.
                        function _completemarks {
                            reply=($(ls "$MARKPATH"))
                        }

                        compctl -K _completemarks jump
                        compctl -K _completemarks unmark
                        
                        1. 1

                          I’ve tried this, but I keep ending up making shortcuts and then forgetting about them, because I never train myself well enough to use them until they’re muscle memory.

                          I think I’ll just stick to ‘cd’ and also extensive use of ctrl-r (preferably with fzf)

                          1. 1

                            And then you go to a workmate’s computer, or su/sudo/SSH somewhere, and it’s unusable :)

                            1. 1

                              Well, this is one of the most useful shortcuts in my arsenal. Type j <tab> or jump <tab> and it completes all the marked directories. If you get over the initial forgetting-to-use-it curve, it’s amazing and simple (just a folder in your home dir with a bunch of symlinks, and a few helpers to create those).

                        1. 3

                          Remember when being Unix-like meant “everything was a file”? It was nice.

                          Linux is still better than most, honestly (sysctl talks to /sys, for example, instead of being its own thing) but Linux still has netlink and other stuff that aren’t files.

                          Of course, it’s easy to judge, but X and Wayland solve hard problems: it’s not enough to write to a file to put pixels on the screen, you have to multiplex/mediate access, allow for manipulation of potentially broken clients, etc.

                          Plan 9 was really the only one that got it right, and even then it got a few things wrong.

                          1. 5

                            Why is it nice?

                            I hate Linux’s “everything is a virtual filesystem” approach. Looking at mount output on a modern Linux box just feels disgusting. 12 lines (!) of cgroups spam, and then devpts pstore securityfs debugfs configfs hugetlbfs OMGWTFfs. And how could I forget the infamous efivarfs!

                            Also, most files in sysfs are text, so you have the overhead of parsing strings just to read system information.

                            1. 1

                              Also, most files in sysfs are text, so you have the overhead of parsing strings just to read system information.

                              That’s the advantage. I can grep for information, cat it, sort it, etc, etc using tools that I already know how to use, because they’re just files containing text.

                              In the vast majority of applications the slight overhead for doing the string parsing isn’t going to have a significant effect on performance.

                              1. 4

                                Sure, but then when you want to do something less ad-hoc with it, it becomes a pain. The canonical source of that information shouldn’t be text, it should be structured data that can be dumped to text when necessary.

                                1. 3

                                  But I can do the same with the output of sysctl(8). I don’t need a hundred mounts for that.

                            1. 3

                              Dunno why someone wouldn’t just use libSDL in this situation. And then, of course, libSDL_gfx, etc.

                              1. 12
                                SDL2-2.0.7$ cloc src | grep SUM | grep -o '\d\+$'
                                161068
                                
                                bin$ cloc fbclock.c | grep -o '\d\+$'
                                86
                                

                                🤔

                                1. 10

                                  I see several reasons, one being education. I had no idea how the Linux framebuffer system worked before reading this post.

                                  Great post, thanks for writing it!

                                1. 4

                                  This is really neat! It makes me want to dive into writing my own framebuffer utilities. Some thoughts:

                                  For things like a clock or a battery indicator, tmux has a ‘status’ option and screen has a ‘hardstatus’ option. Both of these tools make the console-sans-xorg experience quite enjoyable.
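
                                  For example, a clock in tmux’s status line only takes a couple of lines in .tmux.conf (the format string here is just an illustration; status-right is run through strftime):

                                  # refresh the status line every 15 seconds and show the date/time on the right
                                  set -g status-interval 15
                                  set -g status-right "%Y-%m-%d %H:%M"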

                                  For other framebuffer tools, try jfbview (pdf viewer) or libxine’s fbxine (video player).

                                  1. 3

                                    I’ve forgone tmux and screen because tmux (at least) adds noticeable input lag, and I find neovim’s terminal emulator more convenient (one set of keys for managing windows, unified “clipboard” vim registers). For when I really need a detachable session, I wrote another simple tool.

                                    I’ve used jfbview (or maybe a fork) to read Intel manuals. I don’t think there are sound drivers for my Chromebook so watching videos probably isn’t going to happen. Framebuffer tools really don’t get enough love, though!

                                    1. 3

                                      (at least tmux) adds noticeable input lag

                                      I’m not sure if it’s the only input lag you were noticing, but the biggest annoyance in this respect for me goes away if you add

                                      set -s escape-time 0

                                      to your .tmux.conf. By default tmux pauses for a half-second after ESC before sending it through, in order to allow using ESC+key as equivalent to Meta+key for tmux bindings (like emacs does). Which is probably fine if you don’t use vim, but is very annoying in vim. Setting the delay to 0 does of course mean that you can’t use ESC+key sequences for tmux bindings.

                                    1. 1

                                      This is a different post right?

                                      1. 4

                                        Yes. This post is somewhat of a followup to that one.

                                    1. 7

                                      3 GB/s is impressive, but I’m more curious to hear about the application that gives a Y/n prompt 1,500,000,000 times per second.

                                      1. 16

                                        yes is useful for more than just interfaces: it’s effectively a more flexible /dev/zero.

                                        Optimizing it is obviously golf, but on the other hand, it’s unlikely to hurt anything.

                                        1. 4

                                          There are plenty of applications that write loads of data through pipes, so while this example is kind of useless in and of itself, it does provide a good platform to experiment with pipe perf. The linked Reddit thread even has some good discussion about kernel internals.

                                        1. 1

                                          Ran on macOS just to see what happens:

                                          Architecture:            x86_64
                                          Byte Order:              Little Endian
                                          Total CPU(s):            4
                                          Model name:              MacBookPro11,1
                                          

                                          I appreciate graceful degradation!

                                          1. 1

                                            I updated the code, so it can run on macOS now :-). When you have time, give it a try, thanks!

                                          1. 2

                                            Instead of memfd_create() you can use the POSIX standard shm_open(), so

                                            memfd_create("queue_region", 0)

                                            becomes

                                            shm_open("queue_region", O_RDWR|O_CREAT, 0600)

                                            Add ‘-lrt’ to your LDFLAGS and remember to shm_unlink() it when you’re done. Everything else stays the same, including the performance.
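
                                            Putting it together, a minimal sketch of the shm_open() variant (the region name and size here are just placeholders):

                                            #include <err.h>
                                            #include <fcntl.h>
                                            #include <sys/mman.h>
                                            #include <unistd.h>

                                            int main(void) {
                                                size_t size = 1 << 20;  /* size of the shared queue region */

                                                /* POSIX wants the name to begin with '/' */
                                                int fd = shm_open("/queue_region", O_RDWR | O_CREAT, 0600);
                                                if (fd < 0) err(1, "shm_open");
                                                if (ftruncate(fd, size) < 0) err(1, "ftruncate");

                                                void *q = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                                                if (q == MAP_FAILED) err(1, "mmap");

                                                /* ... use the mapping exactly as with memfd_create() ... */

                                                munmap(q, size);
                                                close(fd);
                                                shm_unlink("/queue_region");  /* remove the name when done */
                                                return 0;
                                            }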

                                            1. 2

                                              I vaguely recall it being less effort to simply open /dev/zero and use a private mmap()ing of that.

                                              Of course if you are using this as an IPC between two processes you’ll have to use a regular file.

                                              1. 1

                                                I don’t think a private map would work here:

                                                MAP_PRIVATE

                                                Create a private copy-on-write mapping. Updates to the mapping are not visible to other processes mapping the same file, and are not carried through to the underlying file.

                                                1. 1

                                                  Meant to say “use a regular file with MAP_SHARED”, good catch. :)

                                                2. 1

                                                  It doesn’t seem like you can mmap /dev/zero. I get ENODEV “Operation not supported by device” when I try. (macOS)

                                                  Edit, showing my work:

                                                  #include <err.h>
                                                  #include <fcntl.h>
                                                  #include <stdlib.h>
                                                  #include <sys/mman.h>
                                                  int main() {
                                                      int fd = open("/dev/zero", O_RDWR);
                                                      if (fd < 0) err(1, "open");
                                                      void *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
                                                      if (map == MAP_FAILED) err(1, "mmap");
                                                      return 0;
                                                  }
                                                  
                                                  1. 1

                                                    You’re confusing /dev/null with /dev/zero.

                                                    1. 1

                                                      Oops, I had used /dev/zero when I first tried it then accidentally swapped it for /dev/null when I came back to give some code. Either way, the result is the same: ENODEV.

                                                      1. 1

                                                        Must be some macOS-specific breakage, because it works on Linux.

                                              1. 5

                                                I think this is a long winded way of saying directory entries are access controlled by directory permissions? There was a bit too much narrative for me to know if the point was that this was surprising or wrong or what.

                                                1. 4

                                                  a long winded way of saying directory entries are access controlled by directory permissions

                                                  Yeah, this is the TL;DR but I posted it mainly because it was fun to read.

                                                  1. 3

                                                      Sure, though I think the presentation obscures the obvious corollary, assuming the goal is to annoy the user: mkdir root-dir; touch root-dir/root-file (run as root) really will leave you with an unremovable file.
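
                                                      Concretely (paths are just an example), as an unprivileged user:

                                                      $ sudo mkdir ~/root-dir && sudo touch ~/root-dir/root-file
                                                      $ rm -f ~/root-dir/root-file   # Permission denied: unlink needs write access to root-dir
                                                      $ rm -rf ~/root-dir            # also fails, since the directory can't be emptied first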

                                                    1. 1

                                                        Oh yeah, and I hadn’t heard of chattr +i <filepath> before, which can be used to make files immutable. This could be quite handy.

                                                    2. 2

                                                      I think it is great. It describes a situation which you rarely hear about anymore: when your user and root aren’t actually the same person!

                                                      I used to live in this situation. My sysadmin and I were always messing with permissions–inside my $HOME and outside, too. He believed in giving each user as much power as was safe. ‘Safe’ meant “can’t bring down the system or read other users’ data”. I learned a lot from him!

                                                      1. 1

                                                        I also recently came across this. It’s neither surprising nor wrong, just something I hadn’t thought about before.

                                                      1. 1

                                                        I wish I understood how all this worked. Why exactly is a mod operation slow? Why exactly is it faster to do this via page tables? Is it because the kernel is already doing this and it effectively requires zero additional work? Is it because the CPU can handle this in hardware?

                                                        I guess I’ve got some research to do.

                                                        1. 4

                                                          Mod isn’t super slow, but you can avoid mod entirely without the fancy page tricks by defining your buffer to be a power of 2. For example, a 4KiB buffer is 4096 = 2^12, so you can calculate the wrap-around with ( cur + len ) & 4095 without using mod.

                                                            You would still need two separate memcpy’s, and a branch to distinguish the wrap-around and non-wrap-around cases (which is normally not a big deal, except when you’re racing against the highly optimized hardware cache in your MMU…)
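
                                                            A sketch of what that looks like (assuming a single write never exceeds the buffer size):

                                                            #include <string.h>

                                                            #define BUF_SIZE 4096                      /* must be a power of two */
                                                            static char buf[BUF_SIZE];

                                                            /* Append len bytes (len <= BUF_SIZE) at *cur, wrapping with a mask instead of %. */
                                                            static void ring_write(size_t *cur, const char *src, size_t len) {
                                                                size_t off   = *cur & (BUF_SIZE - 1);  /* wrap-around without a mod */
                                                                size_t first = BUF_SIZE - off;
                                                                if (len <= first) {
                                                                    memcpy(buf + off, src, len);       /* no wrap: single copy */
                                                                } else {
                                                                    memcpy(buf + off, src, first);     /* wrap: two copies plus a branch */
                                                                    memcpy(buf, src + first, len - first);
                                                                }
                                                                *cur += len;
                                                            }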

                                                          1. 3

                                                              Branches (conditionals such as if/switch statements) can cause performance problems, so if you can structure things to avoid them you can get a considerable bump in speed.

                                                            A lot of people look to software tricks to pull off speedups but this particular data structure can benefit directly from calling upon hardware baked into the CPU (virtual memory mapping).

                                                              Most of the time you have a 1:1 mapping of a 4kB contiguous physical memory block to a single virtual 4kB page. This is not the only configuration, though: you can have multiple virtual memory pages mapping back to the same physical memory block, most commonly seen as a way to save RAM when using shared libraries.

                                                            This 1:N mapping technique can also be used for a circular buffer.

                                                              So you get your software to ask the kernel to configure the MMU to duplicate the mapping of your buffer (page-aligned and page-sized!) immediately after the end of the initial allocation.

                                                            Now when you are at 100 bytes short of the end of your 4kB circular buffer and you need to write 200 bytes you can just memcpy()-like-a-boss and ignore the problem of having to split your writes into two parts. Meanwhile your offset incrementer remains simply:

                                                            offset = (offset + writelen) % 4096
                                                            

                                                            So the speedup comes from:

                                                            • removing the conditionals necessary to handle writes that exceed the end of the buffer
                                                            • doing a single longer write, rather than two smaller ones

                                                              So it is not really that the CPU is handling this in hardware and is therefore faster; the hardware is actually doing no more work than it was before. The performance comes more from a ducks-lining-up exercise.
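
                                                              A minimal sketch of that setup on Linux (assuming memfd_create() is available; the name "ring" and the 4096-byte size are only illustrative):

                                                              #define _GNU_SOURCE                     /* for memfd_create() */
                                                              #include <err.h>
                                                              #include <string.h>
                                                              #include <sys/mman.h>
                                                              #include <unistd.h>

                                                              int main(void) {
                                                                  size_t size = 4096;                 /* buffer: page-aligned and page-sized */
                                                                  int fd = memfd_create("ring", 0);
                                                                  if (fd < 0 || ftruncate(fd, size) < 0) err(1, "memfd_create/ftruncate");

                                                                  /* Reserve twice the size, then map the same pages into both halves. */
                                                                  char *buf = mmap(NULL, 2 * size, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                                                                  if (buf == MAP_FAILED) err(1, "reserve");
                                                                  if (mmap(buf, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED ||
                                                                      mmap(buf + size, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
                                                                      err(1, "mirror");

                                                                  /* A 200-byte write that starts 100 bytes before the end wraps transparently. */
                                                                  size_t offset = size - 100;
                                                                  memset(buf + offset, 'x', 200);     /* one plain write, no splitting */
                                                                  offset = (offset + 200) % size;     /* the offset math stays as above */
                                                                  return 0;
                                                              }

                                                              (If memfd_create() isn’t available, shm_open() plus the same two MAP_FIXED mappings works as well.)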

                                                            1. 2

                                                                Modulo and division (usually one operation) are much slower than the other usual integer operations like addition and subtraction (which are the same thing), though I’m not sure I can explain why in detail. Fortunately, for division and modulo by powers of two, right shift >> and AND & can be used instead.

                                                              For why doing this with paging is so efficient, it is because the MMU (part of the CPU) does the translation between virtual and physical addresses directly in hardware. The kernel just has to set up the page tables to tell the MMU how it should do so.

                                                            1. [Comment removed by author]

                                                              1. 2

                                                                That’s not really a reasonable criticism. That is the author’s thesis statement. They then go on to use the rest of the article to argue in favor of it. The rest of the argument may be (in my opinion, is) flawed, but the thesis statement itself is just the conclusion presented a priori, so you can see what the author is arguing for. (This is normal in most writing. In scientific papers, we traditionally call it the “abstract”.) The mere fact that its grammar admits humorous substitutions is uninteresting, as that is true of almost all sentences.

                                                                1. 1

                                                                  “LaTeX fetish” is a pun though.

                                                                  1. 1

                                                                    Yeah, I get the impression the author wouldn’t have spoken so strongly if they didn’t have such a good pun to back it up with.

                                                                1. 3

                                                                  Wish the pictures were bigger.

                                                                  1. 4

                                                                    We can just use more ML for that! https://github.com/nagadomi/waifu2x

                                                                  1. 2

                                                                    Couldn’t this be implemented entirely in user space on top of unix domain sockets?

                                                                    1. 4

                                                                      Yeah, of course. It’s “doable”/emulatable with any IPC mechanism, but my guess is that a first class kernel implementation provides much more efficient interactions since it’s happening without extra context switches.

                                                                    1. 6

                                                                      Surprised me as well; however, I really like it as a formal standard including subdomains, so one can use it reliably. I can see myself having apps listening on appname.localhost, giving you a meaningful and memorable name. Of course, to my current knowledge, this would still need routing the data through an app proxy if the same TCP port is used.

                                                                      1. 5

                                                                        Ideally we could map .localhost. subdomains to different addresses in 127.0.0.0/8 and use them without conflict.

                                                                      1. 26

                                                                        exa is written in Rust, so it’s small, fast, and portable.

                                                                        -rwxr-xr-x  1 root      wheel    38K 28 Apr 20:31 /bin/ls
                                                                        -rwxr-xr-x@ 1 curtis    staff   1.3M  7 Jul 12:25 exa-macos-x86_64
                                                                        

                                                                        ?

                                                                        1. 9

                                                                          Stripping it helps a bit… but not much though.

                                                                          $ du -hs exa-macos-x86_64  
                                                                          1.3M	exa-macos-x86_64
                                                                          $ strip exa-macos-x86_64     
                                                                          $ du -hs exa-macos-x86_64  
                                                                          956K	exa-macos-x86_64
                                                                          

                                                                          More fun is what it links to:

                                                                          $ otool -L /bin/ls            
                                                                          /bin/ls:
                                                                          	/usr/lib/libutil.dylib (compatibility version 1.0.0, current version 1.0.0)
                                                                          	/usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
                                                                          	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)
                                                                          $ du -hs /usr/lib/libutil.dylib /usr/lib/libncurses.5.4.dylib /usr/lib/libSystem.B.dylib
                                                                           28K	/usr/lib/libutil.dylib
                                                                          284K	/usr/lib/libncurses.5.4.dylib
                                                                           12K	/usr/lib/libSystem.B.dylib
                                                                          $ otool -L /tmp/exa-macos-x86_64
                                                                          /tmp/exa-macos-x86_64:
                                                                          	/usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
                                                                          	/System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 57740.60.18)
                                                                          	/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1349.8.0)
                                                                          	/usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.8)
                                                                          	/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)
                                                                          $ du -hs /usr/lib/libiconv.2.dylib /System/Library/Frameworks/Security.framework/Versions/A/Security /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation /usr/lib/libz.1.dylib /usr/lib/libSystem.B.dylib
                                                                          1.6M	/usr/lib/libiconv.2.dylib
                                                                          9.3M	/System/Library/Frameworks/Security.framework/Versions/A/Security
                                                                          9.7M	/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
                                                                           96K	/usr/lib/libz.1.dylib
                                                                           12K	/usr/lib/libSystem.B.dylib
                                                                          
                                                                          1. 6

                                                                            To be fair, exa is a self-contained executable, while ls probably has a dependency on libc, which it loads dynamically. If Rust ever becomes very popular and its runtime is installed by default everywhere, its executables will also be only a few KB.

                                                                            1. 4

                                                                              FWIW, linking ls from GNU coreutils statically with musl-libc on x86_64 gave me a 147K ELF with no shared object dependencies.

                                                                              1. 3

                                                                                For that to be true Rust would have to have well defined and stable ABI. Which it doesn’t have right now.

                                                                                1. 3

                                                                                  Rust binaries actually do dynamically link to libc. Its standard library, which calls libc, is statically compiled into binaries.

                                                                              1. 4

                                                                                The ‘Bare bone’ part is a very nice introduction to how to start writing an x86-64 OS in any language. Another thing I noticed is the x86 assembly. I don’t have much experience with it, but I noticed that even though it’s a CISC processor (in the end), it’s still used in a somewhat RISC-like manner: move a constant into a register and then move that register into another one; look at, e.g., the enable_paging procedure. I always had the impression that on x86 this could be done with a single instruction.

                                                                                1. 2

                                                                                  I think in a lot of cases this is necessitated by the instruction encoding. x86_64 uses 3 or 4 bits to represent a register, which works well for the 16 general-purpose registers, but to access other registers you need separate instructions.

                                                                                1. 6

                                                                                  Now if only it were of any use to most developers, as it seems to be tightly coupled to macOS.

                                                                                  1. 8

                                                                                    While I know plenty of developers who are on Windows or Linux, I think implying it’s of no use to most developers is a bit of a stretch. The overwhelming majority of web developers I know use Macs, as do most of the mobile developers I know. They combined may be a strict minority, but it should still easily get plenty of use.

                                                                                    1. 5

                                                                                      Anecdata: of the ten laptop backs I can see, all have apple logos on them. They’re not all programmers, but there was definitely some syntax highlighting in the mix when I walked by.

                                                                                      1. 1

                                                                                        Web and mobile developers are irrelevant here, and a minority. Okay, you have a bias from what you see. But most programmers are on Windows, and a version for that alone would help Linux developers, too, because of WINE. I just see these people battling with git and this is a solution that will be completely useless to them. Maybe next time.

                                                                                        1. 3

                                                                                          I suspect that most Windows-using developers don’t use Git and that most Git users actually use either MacOS or Linux.

                                                                                          1. 2

                                                                                              Maybe; I couldn’t find any data just now, but even so, it’s gradually changing in favor of git as people leave CVS, Subversion and such. And the ratio of Windows to $anything_else developers is huge. I know several places where they use git on Windows.

                                                                                          2. 2

                                                                                            Software isn’t required to target the majority.

                                                                                            1. 2

                                                                                              Yeah, I didn’t respond to that particular item, but going off that logic, virtually all Linux GUI software wouldn’t “be of any use to most developers.”

                                                                                            2. 1

                                                                                              Web and mobile developers are irrelevant here,

                                                                                              I genuinely don’t get what you mean. Are you under the misapprehension that they don’t use Git, or that we don’t have mobile and web developers on lobste.rs?

                                                                                              1. 1

                                                                                                I see it as an expression of bias and nothing more, i.e. why mention them at all?

                                                                                                Though let’s not continue in this thread, it’s unproductive and began as a sigh.

                                                                                          3. 3

                                                                                              I do wonder if someone has useful stats on what kinds of systems programmers use… Like, if npm, cargo, rubygems, pip, etc. would keep track of what platforms people are on when they install packages (although the numbers would probably be a bit skewed, since a decent chunk of package installations happen on server systems on which no development is being done).

                                                                                              It wouldn’t surprise me if there were a good number of people on macOS, simply because there are quite a number of amazing macOS-only developer-centric apps. I mean, do you think Kapeli would sell as many Dash licenses if there were hardly any developers out there using the platform? For me at least, Dash has almost revolutionised my workflow.

                                                                                            1. 3

                                                                                              Brew collected some usage stats, but used Google analytics, creating a giant shitstorm of outrage.

                                                                                          1. 5

                                                                                            SEO shitbags rank with email spammers as the absolute lowest pigshit dirtfuck dregs of humanity.

                                                                                            Is this really the standard of article we want to see here?

                                                                                            The author seems pretty ill informed as well:

                                                                                            If people don’t want to see my site with random trash inserted into it, they can choose not to access it through broken and/or compromised networks.

                                                                                            Earlier you recommended letsencrypt, and now suddenly you want me to pick a competent certificate authority

                                                                                            1. 2

                                                                                              The author seems pretty ill informed as well:

                                                                                              Reposting the “ill informed” opinions without refutation or explanation doesn’t really have much value.

                                                                                              Is this really the standard of article we want to see here?

                                                                                              You’re new…maybe wait a bit and contribute more before hand-wringing. :)

                                                                                              1. 2

                                                                                                The article states those opinions without refutation or explanation…

                                                                                                1. 1

                                                                                                  Reposting the “ill informed” opinions without refutation or explanation doesn’t really have much value.

                                                                                                  I included two quotes from the article to explain my point and I think they speak for themselves. Misquoting someone has negative value.