1. 3

Abstract: “In the cloud computing era, internet resources are becoming abundant, widening the gap between remote application users and virtualized resources. Considerable effort has been invested in distributed file systems and web caching, with various optimization strategies to shrink and ultimately eliminate this gap, but few existing studies have combined the two to enhance the performance of web server systems. In this paper, we examine the feasibility of such an integration. After working through significant design challenges, we prototype an evaluation system that delivers higher throughput and lower response delay while integrating the functionality of a file system, a web caching mechanism, and web services. We hope our experience exploring web acceleration serves as a useful reference for researchers working on high-performance web services and FPGA applications.”

  1.  

  2. 2

    Not a bad idea to have a dedicated hardware module tailored to the web; it would be especially helpful for distributed systems. We’ve had a problem before where pages in memory deadlocked everything: the OS spent so long defragmenting memory that it took down the whole app.

    1. 1

      I was thinking one could simultaneously improve performance and reduce the attack surface, too.

      1. 1

        Does it reduce the attack surface because hardware is more easily verifiable?

        1. 1

          There’s less functionality in there, since it includes just what you need. The hardware is FSMs converted to logic, and both FSMs and the resulting logic support strong, automated verification. Finally, a hardware implementation is inherently parallel, so it might let you do things like input-check all headers simultaneously. That in turn could enable checks or protections that would be too slow on a general-purpose CPU: some mitigations cost a 50-70% performance hit in software but only 1-10% in hardware.
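
          To make that concrete, here’s a rough software model of the FSM idea (purely illustrative C, not taken from the paper): an HTTP header-line validator whose next state depends only on the current state and one input byte. Logic in that shape synthesizes cleanly to an FPGA, and in hardware one copy of the FSM could be instantiated per header field so all of them are checked in parallel. The names `step` and `header_ok` are just for this sketch.

          ```c
          /* Illustrative software model of an FSM-style HTTP header-line check.
           * In hardware, step() would become combinational logic, and the FSM
           * could be replicated per header field to validate them in parallel. */
          #include <stdio.h>
          #include <ctype.h>

          typedef enum { S_NAME, S_COLON_SP, S_VALUE, S_CR, S_DONE, S_ERROR } state_t;

          /* Advance the FSM by one input byte. */
          static state_t step(state_t s, unsigned char c) {
              switch (s) {
              case S_NAME:
                  if (c == ':') return S_COLON_SP;
                  if (isalnum(c) || c == '-') return S_NAME;   /* simplified token chars */
                  return S_ERROR;
              case S_COLON_SP:
                  if (c == ' ') return S_VALUE;
                  return S_ERROR;
              case S_VALUE:
                  if (c == '\r') return S_CR;
                  if (c >= 0x20 && c != 0x7f) return S_VALUE;  /* printable bytes only */
                  return S_ERROR;
              case S_CR:
                  if (c == '\n') return S_DONE;
                  return S_ERROR;
              default:
                  return S_ERROR;
              }
          }

          /* Returns 1 if buf holds exactly one well-formed header line. */
          static int header_ok(const unsigned char *buf, size_t len) {
              state_t s = S_NAME;
              for (size_t i = 0; i < len && s != S_ERROR; i++)
                  s = step(s, buf[i]);
              return s == S_DONE;
          }

          int main(void) {
              const unsigned char good[] = "Content-Length: 42\r\n";
              const unsigned char bad[]  = "Content-Length\x01: 42\r\n";
              printf("good: %d\n", header_ok(good, sizeof good - 1)); /* prints 1 */
              printf("bad:  %d\n", header_ok(bad,  sizeof bad  - 1)); /* prints 0 */
              return 0;
          }
          ```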