1. 1

    Note: to do this, we would place a group order, and then whichever one of us received the package would handle shipping it out to everyone else.

    1. 4

      Maybe using Amazon Merch would be a better option? Then people don’t have to coordinate shipping.

      Also I think the placement should be a bit better, maybe larger text with larger margins so it doesn’t go to the edge of the shirt? It also seems a bit low.

      1. 3

        Thanks for the feedback :)

        I hadn’t heard of Amazon merch, I’ll look into it. As far as placement and margins go, I’ll fix it tonight when I get home.

        1. 2

          I think teespring (I think it’s teespring) also has a “crowdfunded” shirt option, where if enough people commit to buy the shirt at a certain price, they’ll ship them all out.

          1. 1

            That’s the same with customink, but they want a minimum goal of 50 shirts, as far as I can tell.

    1. 4

      Devise is actually really complex and heavy, and I don’t particularly recommend it for complete beginners. In fact, for a trivial project, even the Devise GitHub page recommends that users first learn how to implement their own authentication system.

      Devise is a beast in its own right, and anything beyond the default settings can quickly become a pain. That said, it really is a great tool, and I do use it quite a bit.

      1. 3

        That said – if you’re building a production app, use Devise (even if you don’t really know what you’re doing).

        Building your own authentication system is rife with pitfalls and traps; it’s only appropriate for apps where a mistake ultimately won’t cost your company/customers money and you your job, IMO.

        Devise is complicated, but a lot of that complexity is a necessary evil of doing authentication generically and appropriately.

        1. 3

          There’s also the built-in has_secure_password.
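
          For context on what that buys you: has_secure_password comes from ActiveModel, and given a password_digest column it adds password=, password_confirmation=, and authenticate to the model, with bcrypt doing the hashing. Here’s a rough, hypothetical sketch of the same idea in plain Ruby (using the stdlib’s PBKDF2 instead of bcrypt so it runs without Rails; the real implementation differs):

          ```ruby
          require "openssl"
          require "securerandom"

          # Toy version of what has_secure_password provides: store only a
          # salted hash, never the plaintext, and re-hash on authenticate.
          # (Rails uses bcrypt; stdlib PBKDF2 keeps this sketch dependency-free.)
          class User
            ITERATIONS = 20_000

            attr_reader :password_digest

            def password=(plain)
              salt = SecureRandom.hex(16)
              @password_digest = "#{salt}$#{hash_password(plain, salt)}"
            end

            # Returns the user on success, false on failure (mirroring Rails' API).
            def authenticate(plain)
              salt, digest = @password_digest.split("$")
              hash_password(plain, salt) == digest ? self : false
            end

            private

            def hash_password(plain, salt)
              OpenSSL::PKCS5.pbkdf2_hmac(plain, salt, ITERATIONS, 32,
                                         OpenSSL::Digest.new("SHA256")).unpack1("H*")
            end
          end

          user = User.new
          user.password = "s3cret"
          puts user.authenticate("s3cret") ? "ok" : "denied"  # prints "ok"
          puts user.authenticate("wrong") ? "ok" : "denied"   # prints "denied"
          ```

          Even this toy skips things the real thing handles (constant-time comparison, confirmation validation, nil passwords), which is part of why rolling your own is risky.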

      1. 7

        That name is so clever.

        1. 2

          Can someone explain it to me? I get the .rs part, but not the whole thing. (Hint: I’m not so proficient in English.)

          1. 7

            It’s the Revised⁵ Report on the Algorithmic Language Scheme, or r5rs for short.

          2. 2

            Even more clever than Cherry.py

          1. 1

            A friend and I paired up and built a Brainfuck interpreter in Nimrod about a year ago. The source is up on GitHub.

            I thought it was a fun afternoon project, and the language certainly has a lot of interesting features.
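
            If anyone wants a feel for the size of that project: a minimal Brainfuck interpreter fits in a few dozen lines. A hypothetical sketch in Ruby (not the Nimrod version mentioned above) could look like:

            ```ruby
            # Minimal Brainfuck interpreter: a byte tape, a data pointer, and a
            # precomputed jump table for matching [ and ] brackets.
            def brainfuck(program, input = "")
              jumps = {}
              stack = []
              program.each_char.with_index do |c, i|
                stack.push(i) if c == "["
                if c == "]"
                  j = stack.pop
                  jumps[i] = j
                  jumps[j] = i
                end
              end

              tape = Hash.new(0)  # unbounded tape, cells default to 0
              inp = input.each_char.to_a
              out = +""
              ptr = 0
              pc = 0
              while pc < program.length
                case program[pc]
                when ">" then ptr += 1
                when "<" then ptr -= 1
                when "+" then tape[ptr] = (tape[ptr] + 1) % 256
                when "-" then tape[ptr] = (tape[ptr] - 1) % 256
                when "." then out << tape[ptr].chr
                when "," then tape[ptr] = (c = inp.shift) ? c.ord : 0
                when "[" then pc = jumps[pc] if tape[ptr].zero?
                when "]" then pc = jumps[pc] unless tape[ptr].zero?
                end
                pc += 1
              end
              out
            end

            # The classic "Hello World!" program:
            hello = "++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]" \
                    ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."
            print brainfuck(hello)  # prints "Hello World!"
            ```

            The only subtle part is matching up the brackets, which the jump table precomputes; everything else is a direct switch over the eight commands.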

            1. 3

              It would be nice to see more wiki pages explaining the state of the project and defining the scope of its ambitions.

              Also, videos of the OS in action would be nice. I don’t have a Pi, but I would like to see this. And many people probably won’t go through the hassle of setting up a build environment, building the thing, and flashing it to a card just to see if it does anything yet.

              1. 1

                It hasn’t been updated since 2012. The project might be dead.

              1. 4

                At my university, Scheme is taught in more than a handful of classes, including the introduction to computer science class. It got, overall, very mixed reviews. Setting aside the quality of the teacher, kids had two major complaints at first: “Why would we learn Scheme when we could be designing apps?” and “(((() ((() () (((())))))))) is so hard to read.”

                By the end of the semester, when most people had learned how to write decently clean code with good indentation, most kids stopped complaining about the latter, but a ton of people still complained about the former. Maybe it’s just because they were forced to sit down for an entire semester and type more parentheses than they ever had before, and they got used to it. Or maybe some of them began treating the code like a tree structure, as in this article. Either way, Lisps have always been very readable to me personally, and I thought this article described why perfectly.

                1. 5

                  Did you find the Lisp more readable than the Python examples? E.g., compare:

                  vec = [10, 20]
                  vec[0] = 7
                  vec[1] += 2
                  print(vec)
                  

                  versus

                  (setq vec (vector 10 20))
                  (setf (elt vec 0) 7)
                  (cl-incf (elt vec 1) 2)
                  (print vec)
                  

                  It seems to me that, while the lisp may not be bad, it’s noticeably less readable: extra noisy brackets at the start/end of the lines, method names that are neither English words nor standard symbols and require domain knowledge to understand, the cumbersome (elt vec x) that blends into the rest of the line. But I suppose that could be an artifact of knowing Python.

                  1. 4

                    I wouldn’t say it’s more or less readable, but I certainly don’t struggle with it either.

                    I do think that example is perfect at showing where Lisp can be difficult to read, especially when you’re quickly skimming. On the one hand, it is very easy to miss the elt parts in the middle of lines, but on the other hand

                    (setq vec
                          (vector 10 20))
                    (setf (elt vec 0)
                          7)
                    (cl-incf (elt vec 1)
                             2)
                    (print vec)
                    

                    is a little obnoxious in vertical space consumption.

                    EDIT: hit tab and then space and accidentally submitted. Formatting was bad too.

                  2. 3

                    First: I completely agree about readability. The parentheses are a short-term annoyance when learning, but quickly become second nature.

                    Second: some students (in anything, not just computing) are frustratingly opposed to learning fundamentals. When learning a new instrument, any teacher worth their salt will make you learn the notes, the scales, how to read music, etc. And many new students (particularly young ones) are likely to find this frustrating. “Why can’t I just play X?” they’ll ask. And some of them will drop the lessons and learn to play on their own, getting the sheet music or tab off the internet and copying the styles and songs of their favorite artists. Some of them will even get pretty good at it, maybe make their own songs, maybe even hit it big. And then they’ll go into interviews and talk about how they did it without real training, and it’ll teach a whole bunch of kids learning it that maybe they don’t need training either. So they’ll give it up.

                    What they’ll miss is that the people who made it big still likely had to work at it. They’re looking for a shortcut. They know “I want to play Van Halen” or “I want to make an app,” and they don’t care about all the other stuff. They want to do that one thing. And the same attitude that told them they don’t need lessons will leave them frustrated when the songs they’re learning are too hard. The shortcut doesn’t work because they won’t.

                    In the end, when they’re complaining about learning Lisp, they’re complaining about doing something new and hard that they weren’t expecting. It doesn’t seem to fit the plan of what they wanted, and it’s so foreign, so weird. And they reject it.

                    Edsger Dijkstra, in “On the Cruelty of Really Teaching Computer Science” (EWD 1036), discusses at length the nature of these “radical novelties” and the ways in which people respond to them. That is all that’s happening here, and it’s not surprising. The only mistake made in these situations is giving in and switching to something “practical” which placates the students because it fits their conceptions and isn’t challenging. It is sad to me that so many universities do so (even UT Austin, where Dijkstra worked, which switched its introductory language from Haskell to Java after Dijkstra’s death). This is a capitulation to the whims of the crowd, forgoing the fundamentals of logic that underlie all computing and jumping instead to the mechanical and “pragmatic” to the detriment of all.

                    1. 1

                      I agree one hundred percent, and I love the transcript, thank you.

                      I think you’re completely right, and I think that’s why things like tablature (for the most part) exist: the average 16-year-old boy trying to woo a classmate with a guitar won’t be bothered to learn music theory (chords, progressions, etc.), and instead will just get the tablature from the internet because it’s easy. He doesn’t care about how chords come together to work; he just wants to hear the song, a final product.

                      I think this attitude is extremely prevalent in computer science education today, especially for the kids in my introduction to programming class. I would be working in the lab, and I would see 5–6 kids at any one time just searching for possible answers on StackOverflow. They didn’t care about the process of learning Scheme; they cared about the results and their grade. But without the process, their results will (probably) be bad. Same thing goes with apps. There are so many iOS questions on StackOverflow and so many tutorials that someone could reasonably design and build an app without understanding anything they type into Xcode. It’s just kind of disappointing.

                      1. 2

                        That’s definitely the downside of the wide availability of code examples, app tutorials, and the like. On the one hand, when read carefully they provide very useful direction. On the other, a person can simply skim through, grab the code, and miss the far more important text around it.

                        For the Web Programming course I help lead (I am the teaching assistant, and helped write the lesson plans and online textbook), the professor and I work very hard to make students read instead of just grabbing any available code. We’ve found that requiring write-ups of the reading and making assignments more specific (make a site that does X using Y, rather than make a site that uses Y) has helped curtail some of the skimming. It’s not perfect, but it’s something.

                        1. 1

                          I agree that on a university level, that attitude of just searching for the fix without understanding the problem is harmful. But at a high-school or middle-school level, I think there’s definitely validity to the model of just getting something working even if the final product isn’t great. As a software developer, I look at a pile of crap code and think about the maintenance overhead, how unreadable it will be to someone in the future, and how difficult it is to make simple modifications. But for a kid who’s just putting something together for fun, the code is disposable and the output is the point.

                          This is the coding equivalent of the kid who knows just enough chords to have fun when he brings his guitar along to the beach; just because he’s never going to become an expert doesn’t mean he’s wasting his time.

                    1. 5

                      It’s weird: earlier this year I switched to emacs as my primary editor after about a decade of using mostly vi(m) (and set -o vi in the shell); however, I’ve never used Evil mode. That’s probably because I didn’t know about it when I was switching, but I think going cold turkey might have helped (forced) me to learn emacs better. Has anyone else done the switch and found otherwise?

                      1. 4

                        When I switched to emacs I would get extremely frustrated by the drop in productivity and having to look everything up. It really is hard to learn something new that covers the same ground as something you are already proficient in.

                        Evil mode is a great escape hatch that saves you from having to launch vim. See, once you launch vim, you stay in vim (also true of Evil mode). Switching to Evil mode is a less obtrusive context switch.

                        I would stay in emacs and attempt to solve the problem, and if I couldn’t do it in under 30 seconds to a minute, I would switch to Evil, solve it with vim keybindings, and then make a note to myself to solve the same problem in emacs while I wasn’t under pressure.

                        For me, Evil helps.

                        1. 4

                          When I made the switch about a year ago, I decided not to use Evil mode, because I would rather take the productivity hit and learn the emacs methods than fall back on vim keybindings, the tool I was trying to migrate away from. Similarly, when people switch to vi(m), everyone says to take the productivity hit and always use h, j, k, and l rather than the arrow keys, until you know them well enough that there’s no productivity gap. Same thing with emacs.

                          I just kept a legal pad next to my desk with around ten functions/commands scratched down (for instance: find-and-replace [M-%] was on my list for a while). Whenever I decided I was proficient with one command, I would cross it off the list and add a new one. It made it easy to learn the things I used pretty consistently, and I ended up with a list of ten commands that were extremely useful but that I didn’t use often enough to have memorized. It turned into a pretty handy reference!

                          1. 2

                            Great technique; it looks like we took two paths to get to the same destination.

                        2. 3

                          Hi! I’m the original author of the article. I guess the biggest obstacle for me wasn’t not knowing Emacs. I can use it without Evil mode just fine, but with it I’m much, much faster. My style of work generally involves writing/moving code around as I think, instead of just staring at the screen, and vim (or vim keybindings) is much faster at that.

                          There is god-mode for Emacs, but I never really tried that, since Evil mode works perfectly for now :) It might be worth investigating though.

                          1. 1

                            I hadn’t thought of that; in retrospect, it very much makes sense. Thanks for pointing that out!