1. 2

    I’m reading “New Rules for Marketing & PR”, trying to get better at how to market my side-projects and to broaden my understanding. It’s.. good, but lord it could have been a third as long. Does anyone have a marketing book you enjoyed or recommend?

    At work I’m prepping a workshop for our SRE team on JVM memory; some mix of theory (mmap -> malloc -> JVM Heap | GC) and practice, e.g. why YourKit and similar profilers are to be avoided, and how to use tools like JFR and -XX:+HeapDumpOnOutOfMemoryError + MAT to find allocation issues.

    Oh! And I’m trying to find time to revisit mmap/msync. A long time ago we shunned OS memory mapping at Neo4j, for many reasons and many of which I now know were misunderstandings on my part. One of the remaining issues I have is about controlling flush ordering; having a user space virtual memory impl allows us to do stuff like “hey, if you need to flush this page, then please notify me because there are other pages that must have gotten flushed before it’s safe for this one to go to disk”. That and IO error handling. Spiking some toys in Rust to try to learn and explore ideas.
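    A minimal sketch of the flush-ordering idea in Python (my actual spikes are in Rust, and every name here is made up for illustration): pages declare which other pages must reach disk before them, and the flusher turns that into a safe write order with a topological sort.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def flush_order(dirty_pages, must_flush_before):
    """Return an order in which dirty pages can safely be flushed.

    must_flush_before maps page -> set of pages that have to hit disk
    before that page may be written (e.g. data pages before the
    commit page that references them)."""
    ts = TopologicalSorter()
    for page in dirty_pages:
        ts.add(page, *must_flush_before.get(page, ()))
    return [p for p in ts.static_order() if p in dirty_pages]

# Page "commit" may only go to disk after "data1" and "data2".
order = flush_order(
    dirty_pages={"data1", "data2", "commit"},
    must_flush_before={"commit": {"data1", "data2"}},
)
assert order.index("commit") > order.index("data1")
assert order.index("commit") > order.index("data2")
```

    The point of a user-space page cache is that this dependency map is owned by the application, which is exactly the hook you lose with plain mmap/msync.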

    1. 1

      Does anyone have streams you recommend watching?

      1. 5

        Also it’s not twitch, but you gotta tune into Andreas Kling hacking on SerenityOS :) https://www.youtube.com/channel/UC3ts8coMP645hZw9JSD3pqQ

          1. 2

            Some that I have enjoyed:

            And finally one because the question was not specifically code related: https://www.twitch.tv/broxh_

            I have found twitch a really interesting platform for creator - viewer engagement. Seeing my username on a subscribe popup and getting a “thanks for the follow” for the first time was a little surreal.

          1. 3

            I’m trying to do something useful around climate change. Working on the solar system at our cabin highlighted some utility in tooling for finding solar equipment that fit $SPEC ORDER BY price; the space is moving super quickly.

            This weekend I put up https://www.analyzesolar.com/. It lets people that are into this stuff describe their storage needs, and then it’ll generate possible configurations from real inventory and price lists at major vendors.

            It looks like garbage, but it works! So this week, my intent is to add copy text, make the form friendly, and if I have time wrap the React app in a Hugo website. So the app does this dynamic stuff, Hugo does the main site, content management etc.

            Would be interested in UI feedback and ideas from people here. Not so much on the current “design”, but rather if you have tips of websites to borrow ideas from. I was considering copying the form you get if you pick “tabular view” on the Swedish price comparison site Price Runner, here: https://www.pricerunner.com/ct/38/Computer-Memory-(RAM)

            Also.. the tech stack is interesting; it’s a set of python crawlers that output timestamped JSON files directly into the source repo. Those JSON files are combined into a single master JSON file that a React app consumes. So there’s no backend, but it is.. dynamic. You run a single make command to run the crawlers, re-generate the site, push it to S3 and invalidate the CloudFront distribution.
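            A sketch of what the combine step can look like (the file layout and names here are invented for illustration, not the real repo): pick the newest timestamped dump per crawler and merge them into the one master JSON the React app fetches.

```python
import glob
import json
import pathlib

def build_master(crawl_dir="data", out_file="master.json"):
    """Merge the newest `<vendor>-<timestamp>.json` from each crawler
    into a single master file for the frontend to consume."""
    latest = {}
    for path in sorted(glob.glob(f"{crawl_dir}/*-*.json")):
        vendor = pathlib.Path(path).stem.rsplit("-", 1)[0]
        latest[vendor] = path  # sorted order: the last one seen is the newest timestamp
    master = {v: json.loads(pathlib.Path(p).read_text()) for v, p in latest.items()}
    pathlib.Path(out_file).write_text(json.dumps(master, indent=2))
    return master
```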

            1. 3

              Just converted some data structures in my green threading/fibres library to intrusive doubly linked lists and now the performance is good enough that I can move on to the next step: hooking it all up to io_uring/epoll/etc. Should be fun!
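              For anyone unfamiliar with the trick, here is the shape of an intrusive doubly linked list sketched in Python (the real thing lives in a systems language, where the payoff is no per-node allocation): the prev/next links are fields on the element itself, so unlinking a known node is O(1) and never searches the list.

```python
class Task:
    """Element with intrusive prev/next links embedded in it."""
    def __init__(self, name):
        self.name = name
        self.prev = self.next = None

class IntrusiveList:
    """Doubly linked list whose links live *inside* the elements,
    so insert and remove are O(1) with no separate node objects."""
    def __init__(self):
        self.head = self.tail = None

    def push_back(self, node):
        node.prev, node.next = self.tail, None
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node

    def remove(self, node):
        # No search needed: the node knows its own neighbours.
        if node.prev:
            node.prev.next = node.next
        else:
            self.head = node.next
        if node.next:
            node.next.prev = node.prev
        else:
            self.tail = node.prev
        node.prev = node.next = None

    def __iter__(self):
        n = self.head
        while n:
            yield n
            n = n.next
```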

              1. 2

                What language and do you have this online somewhere?

              1. 3

                This works in Java as well. I had a bad habit of using it in the Neo4j code base for a long time, thinking I was super clever. Someone eventually had enough of those antics and got rid of most of them, but you can still find a few loops like this in the code, albeit nowadays split up into while a-- > 0, like https://github.com/neo4j/neo4j/blob/4.2/community/record-storage-engine/src/main/java/org/neo4j/internal/recordstorage/PhysicalLogCommandReaderV3_0_10.java#L582

                It’s really a horrible thing to do, actively hostile to whoever is reading it since it’s not at all obvious what’s happening.

                1. 2

                  The logging framework may not be a bottleneck, and other lies your laptop may tell you

                  Fixed that for you. Our main software absolutely and positively runs a lot faster without debug logs and this isn’t some poor choice of logging, it’s simply not meant to be run in debug mode. And tweaking logging to not be a bottleneck can be quite important and also hard, depending on the language/framework/threading model you use.

                  Most consumer machines have a single socket with RAM DIMMs located around it. Accessing any part of RAM has, roughly, uniform latency.

                  Again, I won’t debate “most” overall, but most developer laptops I know have 2 sticks (1 internal). Also isn’t it really about the bus and not the stick?

                  Maybe the headline is just not fitting the text in the best way?

                  1. 11

                    but most developer laptops I know have 2 sticks (1 internal). Also isn’t it really about the bus and not the stick?

                    I think(?) we are talking about two different things because I’m using the term “socket” in a vague way.

                    What I mean is that most servers will have multiple physical CPU sockets. Any given RAM stick will be connected to only one socket; threads that run on that socket will have fast access to that memory. Threads that run on other sockets will need to go “via” the “owner” socket to access those RAM sticks, not via the regular memory bus. On consumer hardware there’s generally just one physical CPU socket - even if you’ve got 16-32 cores on that socket.

                    On the ThinkStation I bought, there are 12 RAM sticks; 6 of them are connected to one CPU socket, 6 to the other.

                    Accessing “remote” memory roughly doubles the access times, give or take. Hence, a program that allocates memory in one thread and uses it in another will appear fine on a laptop, but may have catastrophic performance problems on server hardware, particularly for long sequences of dependent random access, like pointer chasing an object graph in an OO language.

                  1. 6

                    I feel like a lot of what is being discussed here has already been talked about at length in various other posts. Is it not odd that there seems to be a collection of C/C++ users who are misrepresenting Rust’s capabilities? Steve already talked about this in his post You can’t “turn off the borrow checker” in Rust which is mentioned in this article. I’ve seen many false statements across Reddit, HN, Discord, etc, that could easily be resolved by reading the documentation. What is causing this? It’s not like Rust’s documentation doesn’t spell out what it restricts.

                    All Rust checks are turned off inside unsafe blocks; it doesn’t check anything within those blocks and totally relies on you having written correct code.

                    This is objectively false! Granted the original video is in Russian, but if you’re giving a talk about Rust it seems like it would make sense to learn what unsafe actually does before presenting your idea of it as fact.

                    My greater question is: why does this happen this much? Am I disproportionately seeing more false comments about Rust than most people, or is there a real issue here? In contrast, people voicing their opinions on Go ground them in Go’s actual flaws. Lack of generics, error handling, versioning, et al. are mentioned, but when it comes to Rust, the argument shifts. Rust has flaws, and they are discussed, but there is quite a lot of misrepresentation, IMO.

                    1. 14

                      It seems like a fairly normal human reaction, I think. People have invested large portions of their life towards C++ and becoming important people in C++ spaces. In that group of people, most are deeply sensible geeks that have reasonable reactions to Rust. But there will be some that have their own egos tightly coupled with C++ and their place in the C++ community, that see the claims made by Rust people as some form of aggression - attacking the underpinning of their social status.

                      And.. when that happens, our brains are garbage. Suddenly the most rational person will say the most senseless things. We all do this, I think.. most of us anyway. Some are better than others at calming down before they find themselves with all the lizard brain anger organized on a slide deck, clicking through it on stage.

                      1. 2

                        While I love this explanation, I do want to point out the complexity and length of the list of actions one must do to build a misleading slide deck and speak on stage about it with absurd confidence.

                        1. 1

                          Hm, that might be true. I think this also happens to a lot of people attacking GraphQL; they do not want to accept an alternative to REST.

                        2. 6

                          I think these are different crowds: people who use Go instead of X vs. C/C++ people looking into Rust. Based on my very limited experience talking to C/C++ developers, they have this sort of Stockholm syndrome when it comes to programming languages and always try to defend the shortcomings of their favorite language. UB is fine because… overflows are fine because… They do not see any value in Rust because their favorite language already has it all. I do not know that many Go developers, but the ones I know are familiar with the shortcomings of Go and do not try to downplay them. All of this is anecdotal and might not represent reality, but it is one potential explanation of what you observed.

                        1. 3

                          This is not really fork. For a start, fork implies copy-on-write mappings. If a process has a MAP_SHARED mapping (of a file or [anonymous] shared memory object) then both the parent and the child will see the same thing and it will be explicitly synchronised. You could do this via RDMA, but it wouldn’t be cheap.

                          Ignoring file descriptors also means ignoring the most difficult part of doing this right. VM migration is orders of magnitude easier than POSIX process migration because the amount of state in the hypervisor for each VM is vastly less than the state in a *NIX kernel for each process. A VM typically has a handful of virtual or emulated devices, often just a disk and a network. The only state of the disk device (other than the backing store itself) is the queue of pending requests, which is easy to transport. The only state of the network device (other than the external routing tables) is the set of pending requests and in-flight responses, which are easy to migrate.

                          In contrast, each UNIX file descriptor has an underlying object and an unbounded amount of stream state associated with it. Migrating this properly is difficult for three reasons. First, there’s no introspection to automatically copy the state associated with the object. Second, state is shared: if I open a file and fork, then both processes will share the same file descriptor, and reading with one will alter the state of the other. Third, the objects are often intrinsically local. For example, you can copy a file from the local filesystem, but the filesystem is a shared namespace, so you then alter the sharing behaviour between that process and any other process that has the file open.

                          I find it difficult to imagine this being generally useful because any nontrivial process is going to find itself in an undefined state after telefork. The UNIX process model is not the place to start if you want to end up with an abstraction like this. In fact, given the later use cases, an RPC server that runs some WebAssembly provided in the RPC message is closer.

                          1. 7

                            I feel like I explicitly said that handling file descriptors correctly is super hard, although CRIU and DMTCP make attempts that work for the common cases. I also mentioned possible extensions to do both lazy copying and using a MESI-like protocol to do shared memory of pages across machines. What I have is just a fun demo to show what’s possible if you ignore the hard parts, and I say as much.

                            1. 3

                              Just to have said it: That this was a limited tech demo was indeed abundantly clear in the post. Not sure why people are acting as if you’re claiming this to be production grade ready-to-ship software..

                              I really enjoyed reading the article, I can physically feel the excitement you must’ve felt when you first got this demo working. Thanks for writing it up :)

                              1. 1


                              2. 1

                                I’m sorry if I came across as overly critical. It is a neat demo. I’ve done something similar in the past and hit the limitations of the approach quite quickly. I’ve also read a bunch of research papers trying to do something similar as a complete solution, and they all hand-waved away a load of the hard bits, so I’m somewhat prejudiced against the approach.

                            1. 2

                              For larger scripts, I find this useful in the preamble, so you get stacktraces:

                                set -Eeuo pipefail  # -E (errtrace) so the ERR trap also fires inside functions

                                stacktrace() {
                                  local frame="${#FUNCNAME[@]}"
                                  echo >&2 "Stacktrace:"
                                  while [[ "${frame}" -gt 1 ]]; do
                                    frame="$((frame - 1))"
                                    echo >&2 "  File ${BASH_SOURCE[$frame]}#L${BASH_LINENO[$((frame - 1))]} (in ${FUNCNAME[$frame]}())"
                                  done
                                  if [[ "$#" -gt 0 ]]; then
                                    echo >&2 "$1"
                                  fi
                                }
                                trap 'stacktrace "Error trapped rc=${PIPESTATUS[*]}, see trace above"' ERR
                              1. 1

                                Thanks. TIL, I will try it out :)

                              1. 3

                                What is the mechanism that keeps a compromised process from making additional pledges and calls to unveil? I.e. if there was an RCE in some program, could I not include in my payload a call to pledge/unveil before I go off and do evil things?

                                1. 8

                                  pledge calls are one way, meaning they can only ratchet further down (become more restrictive). If I call pledge("stdio") (pseudo call), any subsequent calls to pledge will fail and the kernel will kill the app.

                                  Similar with unveil, I set the access and call pledge("unveil"), now any attempts to unveil will cause a pledge violation. Also calling unveil with null args will prevent further manipulation.

                                  unveil(2) pledge(2)

                                  1. 2

                                    Super clever, thanks for taking the time to explain!

                                    1. 3

                                      I made a mistake, that pledge call can be reduced: pledge("") would be the most ratcheted down.

                                      Also pledge("unveil") allows for making calls to unveil, one would remove “unveil” from the list to further prohibit modifications.

                                  2. 2

                                    Pledge and unveil work together or separately, depending on the app.

                                    First, pledge promises can only be removed, not extended, once the first call to pledge has been made.

                                    Some apps only have pledge promises like stdio, dns, net. When there is no rpath, wpath, cpath or fattr, the app cannot access the filesystem at all, so there is no need for unveil. Exception: you can read and write to file descriptors opened before the pledge call.

                                    Unveil works the other way: the first call to unveil hides the entire filesystem except for the unveiled path.

                                    Further calls to unveil make more files in the filesystem visible, until you call unveil(NULL, NULL), which locks it down, or you call pledge without the unveil promise, which will kill the program if it tries to call unveil in the future.
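                                    The ratcheting behaviour can be modelled in a few lines of Python (this is a toy model of the semantics described above, not a real binding to the OpenBSD syscalls): promises can be dropped but never re-acquired.

```python
class PledgeState:
    """Toy model of pledge(2)'s one-way ratchet. Demonstration only:
    it shows that promises can be dropped, never re-added."""

    def __init__(self):
        self.promises = None  # None = not yet pledged, everything allowed

    def pledge(self, promises):
        requested = set(promises.split())
        if self.promises is not None and not requested <= self.promises:
            # The real kernel refuses attempts to widen the set
            raise PermissionError("cannot add promises back")
        self.promises = requested

    def allows(self, promise):
        return self.promises is None or promise in self.promises

st = PledgeState()
st.pledge("stdio unveil")
st.pledge("stdio")  # dropping "unveil" locks out further unveil calls
assert not st.allows("unveil")
try:
    st.pledge("stdio unveil")  # trying to ratchet back up is refused
    widened = True
except PermissionError:
    widened = False
assert not widened
```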

                                  1. 7

                                    I’m working on building simulation. There’s an interesting optimization problem when you build off-grid around insulation/batteries/solar panels; what’s the ideal combination? There are a lot of rules of thumb around insulation thickness that I think are wrong, because the price of solar panels has collapsed since those rules were instituted.

                                    If you wanted to build a building with its own energy supply in the 90s, the way to do it was with massive amounts of insulation, like in the R-100 range. Builders of high efficiency houses in my area still go by that, but I think it’s wrong; the right mix in 2020 is way less insulation, way more solar panels and batteries.

                                    Problem being that in case you’re wrong you’ve built a really expensive freezing cold box.

                                    I’m using the DOE’s physics simulation engine, EnergyPlus, with a Go frontend, to simulate different scenarios of the cabin I’m building here in Missouri.
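                                    The optimization itself is small enough to brute-force. A toy sketch in Python (the one-line heat model and every price below are placeholder numbers I made up, not output from EnergyPlus): for each insulation level, compute the heating demand, then find the cheapest panel count that covers it.

```python
def cheapest_build(heat_loss_const, panel_yield_kwh, options):
    """Brute-force the insulation vs. solar trade-off.
    Crude model: annual heating demand = heat_loss_const / R-value."""
    best = None
    for r_value, insulation_cost in options["insulation"]:
        demand_kwh = heat_loss_const / r_value  # more R, less heating demand
        for panels, panel_cost in options["panels"]:
            if panels * panel_yield_kwh >= demand_kwh:
                total = insulation_cost + panel_cost
                if best is None or total < best[0]:
                    best = (total, r_value, panels)
    return best

options = {
    # (R-value, cost) and (panel count, cost) - illustrative numbers only
    "insulation": [(20, 5000), (50, 15000), (100, 40000)],
    "panels": [(2, 3000), (6, 9000), (12, 18000)],
}
best = cheapest_build(100_000, 1500, options)
```

                                    With these made-up prices the cheap-insulation-plus-more-panels combination wins, which is the 2020-vs-90s shift in a nutshell; the real question is whether it survives honest physics.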

                                    1. 1

                                      This sounds cool. Out of curiosity, are you factoring in the lifetime expectation of the solar panels and batteries, or are you assuming they last the life of the cabin (however long that is)?

                                      I’d love to build a cabin someday, so this is relevant to my interests.

                                      Edit: jeez, that video is terrifying. There are reasons whole villages got together to raise the frame of a new building, I guess; more labor really would have made it less precarious to handle. Glad you got it done safely.

                                      1. 1

                                        I intend to, but I’m not factoring in lifetime expectation yet. Same applies to insulation; there’s another axis of which type of insulation you use, some lose a lot of their efficiency as they age, some don’t. It’s a really interesting space, but it seems super easy to make a simple mistake in the simulation and come up with totally wrong answers.

                                        Yeah I’m definitely not raising giant A-frame triangles like that ever again. There’s an ongoing debate in the family of whether the ones that fell would’ve killed me if they hit me in the head; I think I would’ve lived, but it’s a bit uncomfortable that the answer isn’t clear.

                                    1. 1

                                      I’ve done some creative substitution work to create good file-based targets in the past and take advantage of Make’s laziness. The sentinel file, though, is a very nice hack that gets some of the benefit without much work. I appreciate that tidbit. As for the rest, I agree with the others that going all in on GNU and Bash can be helpful, but it cuts down on portability. People still do run BSD systems, after all.

                                      1. 2

                                        Writing this post I was made aware of the empty target, which I now think is the traditional name for this pattern. Though it’s not documented as being useful for rules that output multiple files.

                                      1. 1

                                        Since there is a deploy: test rule in the example: will this actually depend on the recipe for test if that is made .PHONY? Or will a file named test still satisfy the rule?

                                        1. 1

                                          If you were to declare test phony, then any rule that declared test as a prerequisite would always be re-run, and would not look for a file named test.. I’m not sure if there’s a good use case for doing that, vs just using sentinel files for the test target.

                                          What we do if we want a “user friendly” name for a target like testing is to have a sentinel target that other rules depend on, and then a dedicated phony target for direct calls:

                                          tmp/.tests-ran: <source files>
                                          > touch tmp/.tests-ran
                                          # This is now just an alias to make it possible to run `make test` for human users
                                          test: tmp/.tests-ran
                                          .PHONY: test
                                          deploy: tmp/.tests-ran
                                          > ..
                                          .PHONY: deploy
                                        1. 2

                                          It’s one of those questions I hate[0] in beginner programming courses. There are a ton of possible solutions and sometimes the stupid hacks are the best ones. Or maybe not. Or you think “this can’t possibly be it” and then it is.

                                          >>> a = 12345
                                          >>> print len("{}".format(a))


                                          [0]: When I started studying, having already programmed for years, we got some sort of chess question. The details are not important, something about how many ways a knight can move. We thought hard about this question for hours and dismissed “brute force” - the officially accepted solution was “brute force”.

                                          1. 0

                                            This isn’t the best solution. Here’s an example of how to do better, assuming that you’re thinking of Python. First, timing for your example:

                                            $ pypy -m timeit -s 'x = 2 ** 1024' 'len("{}".format(x))'
                                            100000 loops, average of 7: 4.95 +- 0.0606 usec per loop (using standard deviation)
                                            $ pypy -m timeit -s 'x = 2 ** 1024' 'len(str(x))'
                                            100000 loops, average of 7: 4.95 +- 0.092 usec per loop (using standard deviation)

                                            To do better, notice that we don’t need to format a string with all of the digits of the number; we only need to know how many digits would have to be allocated. We can do a bit of arithmetic with logarithms, and the builtin .bit_length() method of integers:

                                            $ pypy -m timeit -s 'import math; r = math.log(2)/math.log(10); x = 2 ** 1024' 'int(r * x.bit_length()) + 1'
                                            10000000 loops, average of 7: 26.3 +- 0.247 ns per loop (using standard deviation)

                                            This shows that int(r * x.bit_length()) + 1, while not especially attractive or readable, is quite a bit faster than the obvious solution. You can also verify for yourself that when x = 0, int(r * 0) + 1 == 1 and we avoid the pitfall mentioned in the article. Indeed it is defined on all integers.

                                            The main distinction between my approach and the article’s approach, while both involve logarithms, is that rather than using logarithmic operations directly on the value of the input integer, I am taking a data-structure operation on the data type of integers, which happens to itself yield an integer, and then using further arithmetic on that new derived value.

                                            1. 6

                                              This isn’t the best solution.

                                              What you are demonstrating is a faster solution, which is not synonymous with “better”. The simple solution wink includes is, as you demonstrate, reasonably fast and has an important upside: It’s simple to understand.

                                              If you need something that can do the job faster - by all means. But if this isn’t a bottleneck, you’re doing future maintainers a disservice by choosing a more cryptic solution over a simple to read one.

                                              1. 2

                                                I’m not claiming my solution is best or in any way good - but the scope was an event for beginners in programming.

                                                You can solve it mathematically or pragmatically if you know len() in python and have worked with strings for longer than 3 hours (print "Hello, {}".format(name) is very often the second thing you’ll write after hello world).

                                                If you’re a beginner it’s probably not even on your mind to think of benchmarking or trying to guess how fast python or any other interpreter will run a series of commands/instructions. How many instructions are even involved for each solution? :)

                                            1. 3

                                              Pretty sure this is the first time anyone had ever described Congress as “elegant”. :D

                                              That said, this is really about the rules for organizing a bunch of people trying to get stuff done, as formalized through things like Robert’s Rules of Order. Which is a very interesting idea, given it’s basically a social program executed by humans.

                                              Also reminds me a bit of Magic The Gathering, another human-executed stack machine where mutating the stack is a datum that goes on the stack.

                                              1. 4

                                                Haha, well you gotta call a spade a spade, even if it happens to be a spade everyone agrees is dysfunctional :) The underlying design is really simple and elegant, to me.

                                                As I understand it the design of the US Senate is the foundation for all the other US-based variants of this. So Jefferson modeled the Senate after the UK Parliament, and then the House copied that and then Robert’s Rules and the Mason’s Manual followed.

                                                I’ve spent so much time in audiences at formal meetings just utterly dumbfounded about what was going on. Implementing this parser was such an enjoyable learning experience, and a little dive into history. I didn’t realize Magic also implements a stack machine - do you know if anyone’s written about it from that perspective?

                                                Also, exactly yes - it’s social code. There’s another aspect here, which is that the code to understand these procedures also really, really smells like code for implementing Paxos or Raft. I suppose that’s not terribly surprising since it’s all about consensus.. but it makes me wonder if there are algorithms used in parliamentary procedure that could be applied in distributed computing..

                                                1. 4

                                                  I didn’t realize Magic also implements a stack machine - do you know if anyone’s written about it from that perspective?

                                                  Been a few years since I played MTG, but IIRC the rules themselves name the concept of “the stack,” and spells resolve in LIFO order accordingly. Probably the most classic example of this is Counterspell, which lets a player negate their opponent’s spell while it’s still on the stack… but then someone could counter that counter etc etc.

                                                  The game is also Turing Complete.
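                                                  For the stack-machine angle, the LIFO resolution is easy to sketch in Python (a deliberately simplified model: in the actual rules a counter targets a specific spell, rather than whatever happens to sit beneath it):

```python
def resolve(stack):
    """Resolve a stack of spells in LIFO order. In this toy model a
    "counter" negates the spell directly beneath it, so counters can
    themselves be countered."""
    resolved = []
    while stack:
        spell = stack.pop()          # last in, first out
        if spell == "counter":
            if stack:
                stack.pop()          # the spell beneath fizzles
        else:
            resolved.append(spell)
    return resolved

# Lightning Bolt is countered, but that counter is itself countered:
assert resolve(["Lightning Bolt", "counter", "counter"]) == ["Lightning Bolt"]
# An uncontested counter stops the spell:
assert resolve(["Lightning Bolt", "counter"]) == []
```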

                                              1. 4

                                                Really enjoyed this! Super relevant to some of the stuff we’re doing at work currently, thanks for taking the time to research and write this up.

                                                1. 2

                                                  Glad to hear! I’m hoping to get some of the code I wrote while doing this research into production this month, and then we’ll see if the theory’s any good. :-P (Either way, I should make another post later with what I find, and a usable extract of the code.)

                                                  If you end up trying out a traffic-based approach, I’d love to hear how it works out.

                                                  1. 3

                                                    The bulk of the routing we’re doing is on the database protocol we built for Neo4j; I’m not directly involved in developing that anymore, but very much am involved in load balancing with it.. one of the downsides of custom protocols is that you don’t get built-in introspection ability in the big load balancers like you do with HTTP. Maybe it’s time to write some haproxy extensions.. :)

                                                1. 5

                                                  I’m stuck in a weird loop where I want to work on more code/projects on the side, but I’m not sure if it’ll just be another false start that I abandon. So I’ll contemplate that.

                                                  1. 3

                                                    I meditated and I think the reason is that:

                                                    1. I’ve worked for a long time (up to 5 years in some cases) on my existing open source projects
                                                    2. Therefore, they look polished (compared to their initial state) - unit/integration tests, good README, docs, performance optimizations
                                                    3. When I start a brand new project, what I naturally have after just the initial handful of commits is an unpolished PoC
                                                    4. I feel like they’re bad projects, especially compared to the old ones
                                                    5. I abandon them

                                                    I might need to release something that looks hacky, and just persist in committing to it and letting it evolve over time.

                                                    1. 2

                                                      Really feel this.. I’ve been stricter on myself the last two years or so, not giving in to as many whimsical projects. At the same time, I’ve both enjoyed working on and learned a lot from false starts in the past. My thinking is stuck in this jam where on the one hand I want to work on things to completion and to do things that seem “important” - but on the other hand, if I get joy in the moment from some random rabbit hole, is that so bad?

                                                    1. 3

                                                      As for what to do to improve CO2 levels at Recurse Center, one option would be to hire an HVAC person to install CO2-driven ventilation. Depending on how the building is currently ventilated, you may be able to install a simple damper that opens to let the HVAC system pull in fresh air when CO2 levels get too high, and that closes when they are reasonable, trying to strike a balance between energy efficiency and healthy air.

                                                      Something like this: https://hvacsolutionsdirect.com/catalog/Duct-Smoke-CO-Detectors/CO-Duct-Detector-Kit/Young-Regulator-DEMAND-AIR-CO2-FRESH-AIR-Damper-SKU2669

                                                      There are fancier variants as well if energy efficiency is a concern.. but the basic idea anyway is to have something that regulates fresh air intake based on CO2.

                                                      Or.. you know, if someone happened to have a Raspberry Pi with a CO2 sensor laying around, just buying a motorized damper and controlling it with the Pi :)
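                                                      If anyone does go the Pi route, here’s a minimal sketch of the control logic. Everything here is an assumption on my part - the 1000/800 ppm thresholds, and the `read_co2_ppm()` / `set_relay()` placeholders, which you’d swap for your actual sensor driver and GPIO relay call. The one real idea is hysteresis: two thresholds instead of one, so the damper doesn’t chatter open/closed around a single set point.

```python
# Hysteresis control for a CO2-driven fresh-air damper.
# Thresholds are illustrative; read_co2_ppm()/set_relay() are placeholders
# for your sensor driver (e.g. serial) and GPIO relay code.
import time

OPEN_AT_PPM = 1000   # open the damper when CO2 rises above this
CLOSE_AT_PPM = 800   # close it again only once CO2 falls below this

def decide_damper(co2_ppm: float, currently_open: bool) -> bool:
    """Return True if the damper should be open.

    The gap between the two thresholds is the hysteresis band: while
    CO2 sits between them, we hold the current state rather than
    flapping the damper open and closed around one set point.
    """
    if co2_ppm > OPEN_AT_PPM:
        return True
    if co2_ppm < CLOSE_AT_PPM:
        return False
    return currently_open  # in the dead band: no change

def control_loop(read_co2_ppm, set_relay, poll_seconds=60):
    """Main loop sketch; pass in your own sensor and relay functions."""
    damper_open = False
    while True:
        damper_open = decide_damper(read_co2_ppm(), damper_open)
        set_relay(damper_open)
        time.sleep(poll_seconds)
```

The pure `decide_damper` function keeps the policy separate from the hardware I/O, which also makes it trivial to test without a Pi on your desk.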

                                                      1. 1

                                                        Don’t do that. You want an ERV or an HRV. A simple air exchanger will cause humidity issues. As a bonus, you’ll get filtered air with an ERV or an HRV.

                                                        1. 1

                                                          Nah. HRV does nothing for humidity, it’s the same as regular ventilation as far as that goes. ERV will help retain whatever the indoor humidity is. So if you’re in a cold climate with no A/C or dehumidification, it’ll raise indoor humidity since it’ll “hold in” humidity that otherwise would have moved to balance with outside air.

                                                          Not necessarily a bad thing if you’re in a dry climate, but not something that helps with moisture problems.

                                                          If you’re in a hot/humid climate and you have dehumidification, then an ERV will help offload the dehumidifier/AC.. but then the A/C was already doing the work to keep mold at bay.

                                                          Basically once you start mixing in energy recovery, humidity becomes complex. It’s good stuff, but if you just want to keep CO2 levels down then just a regular old fresh-air intake for the HVAC is much cheaper and simpler.

                                                      1. 1

                                                        I just ordered an Arduino kit to replace the burned out control board for my guest house A/C.. I had been hoping to have it chat with an indoor Zigbee temp/humidity sensor, but Zigbee seems much more complex than I thought it was. There is like.. Zigbee, and then use-case-oriented standards on top, but no info on sensors about which Zigbee standard they are using?

                                                        I couldn’t work out if an off the shelf Zigbee temp/humidity sensor would work with the XBee stuff, and the XBee sensors are 5x the price.

                                                        So, I just ordered a regular wired sensor for now, the smart home will apparently have to wait.. still excited though!

                                                        1. 5

                                                          Busy removing about 7 m³ / up to 1 m of hard granite bedrock behind the barn to be able to pour concrete for a floor there. Since ‘professionals’ like to charge several arms and legs for this type of work I’m tackling it myself with a hefty drill, a number of 20-24-34 mm drill bits, a set of rock splitting wedges fitting those hole sizes and a few large hammers. I’m using my ancient Fordson Major backhoe to grapple with the (sometimes rather large and heavy) segments of rock which I manage to split off the (shrinking) hillock.

                                                          1. 1

                                                            What drill, specific bits and wedges do you recommend? I’m keen to improve drainage in a crawlspace I put over red granite and have been pondering how to split the rock..

                                                            1. 3

                                                              I’m using a drill which is on - or below - the lower end of what is advisable for the type of work I’m doing with an impact energy of 9J. 15J or higher is what normally is called for, something like an Atlas Copco Cobra Combi ([1], a petrol-driven jackhammer/rock drill). I planned to rent a Cobra but could not get hold of one, instead I bought an Einhell TE-RH 38E rotary hammer drill [2] with SDS MAX chuck (I’ve broken too many SDS+ drill bits to rely on those for heavier work) for about the same money as I’d have spent to rent a Cobra. The work takes a bit longer with this machine but a) that is unavoidable given the dearth of machines for rent and b) not much of a problem as there is no rental time pressure.

                                                              The splitting wedges [3] look like the ones offered on eBay [4] for a few $ but they are far larger. They more or less work as advertised as long as you make sure to either drill loads of holes or drill a few at chosen spots and in the direction of the natural fault lines in the rock. The bigger (34 mm) versions I use should be hammered in with a large maul, for smaller versions (20 mm and 24 mm) a 1.5kg maul is sufficient. Drill holes at ~20 cm distance and of sufficient depth (how deep depends on the rock to be split and the wedge to be used), insert the wedges and hammer them in. Wait a bit, hammer a bit more, wait, a bit more and the rock will start to crack - it can take a few minutes for the cracks to appear, be patient and don’t try to force it. Repeat this for the next layer and the next, etc.

                                                              [1] https://www.atlascopco.com/sv-se/construction-equipment/products/handheld/breakers/petrol-breakers/cobra-combi

                                                              [2] http://products.einhell.pt/pt_en/tools/rotary-hammer/te-rh-38-e.html

                                                              [3] https://xn--sten-sprckning-dib.se/wp-content/uploads/2013/03/NY_Modell_av_34MM_KIL.jpg

                                                              [4] https://www.ebay.co.uk/itm/14Pcs-9-16-Plug-Wedges-and-Feathers-Shims-Quarry-Rock-Stone-Splitter-Hand-Tool/322519602703?epid=2118156785&hash=item4b17aa960f:g:c8AAAOSwiiVb0Xqu