A week ago, I received my Raspberry Pi in the mail. Due to real-life commitments, I wasn’t able to test this small ARM computer right away.

raspberry package

So after buying a cheap USB Wi-Fi dongle and assembling the casing, I decided to try a probable real-world use of the Pi.

raspberry assembled

On slow networks, traffic optimization is a necessity. It may seem less relevant in this age of fast broadband connections, but it is good to know how to do it.

To make browsing bearable over slow connections, we use a program named ziproxy.

Ziproxy is a forwarding, non-caching HTTP proxy targeted at traffic optimization. This open source application works by recompressing pictures and compressing other data on the fly.
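The text side of that optimization is easy to picture: HTML compresses very well because markup is highly repetitive. A minimal Python sketch (not part of ziproxy itself, just an illustration of the kind of savings gzip achieves on a toy page):

```python
import gzip

# A toy HTML payload; real pages compress similarly well
# because markup repeats the same tags over and over.
html = b"<html><body>" + b"<p>hello from the Pi</p>" * 200 + b"</body></html>"

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)

print("original:", len(html), "bytes")
print("gzipped: ", len(compressed), "bytes")
print("ratio:   {:.1%}".format(ratio))
```

Images are a different story: ziproxy recompresses them at lower quality rather than gzipping them, since formats like JPEG are already compressed.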

Aside from traffic optimization, I wanted to add caching (to deliver already-visited sites without refetching the data from the origin site). Since ziproxy is a non-caching HTTP proxy, I decided to add the ever-popular Squid caching proxy to the mix.

This is the simplest way to do it (note that I didn’t look into securing the Squid and ziproxy settings, since my Raspberry Pi was behind a Tomato-flashed router and wasn’t accessible from the outside).

Let us install ziproxy and squid:

# sudo apt-get install ziproxy squid

Open up ziproxy.conf:

# sudo nano /etc/ziproxy.conf

Uncomment the lines below and change them as follows, so that ziproxy hands every request on to the local Squid instance:

NextProxy="127.0.0.1"
NextPort = 3128

3128 is the default port of the Squid proxy server. Let us leave Squid’s default values as is.
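On the Squid side nothing needs editing; for reference, the relevant line in the stock Debian /etc/squid/squid.conf already listens on the port ziproxy forwards to (shown here only as a config fragment, not something you need to change):

```
http_port 3128
```

A side benefit of this layout: other machines on the LAN can point straight at port 3128 to get caching without image recompression.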

We need to restart the ziproxy daemon:

# sudo service ziproxy restart

We are now ready to test this out. Depending on the IP address assigned to your Raspberry Pi, change your browser’s proxy settings to point to it (the settings below are from a Firefox screenshot):
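Non-browser clients can use the same settings. A small Python sketch of the proxies mapping understood by HTTP libraries such as requests — the address 192.168.1.50 is a placeholder for your Pi’s actual IP, and 8080 is ziproxy’s default listening port (adjust if you changed Port in ziproxy.conf):

```python
def proxy_settings(host, port=8080):
    """Build the proxies mapping used by HTTP libraries like requests,
    e.g. requests.get(url, proxies=proxy_settings("192.168.1.50"))."""
    proxy_url = "http://{}:{}".format(host, port)
    return {"http": proxy_url, "https": proxy_url}

# Placeholder address for the Pi on the LAN.
print(proxy_settings("192.168.1.50"))
```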

firefox settings

We can see that the proxy on the Raspberry Pi is working as expected (I used a link to my blog as a test bed):

This is a screenshot without passing through the proxy:


And this is the screenshot while passing through the proxy:


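If you want numbers rather than screenshots, compare the page weight with and without the proxy. A tiny helper for the arithmetic (the byte counts in the example are made up for illustration):

```python
def savings(original_bytes, proxied_bytes):
    """Percentage of traffic saved by the compressing proxy."""
    return 100.0 * (original_bytes - proxied_bytes) / original_bytes

# Hypothetical page weights, for illustration only.
print("{:.1f}% saved".format(savings(850000, 320000)))
```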
8 thoughts on “Running a Caching Proxy Server with Traffic Optimization Using Ziproxy and Squid In Raspberry PI”
  1. I ran a Squid VM virtually all the time when I was on Smart (Rocket Wifi, but in Pasay it’s just plain useless). Our point-to-point radio outside of the NCR is a bit more usable, but still nothing compares to a DSL land line… If we ever do move to the Philippines for a longer time, that will be one of my main issues! But you’re right, the beauty of being a geek is the ability to optimize and scale! For cheap!

  2. Hey Ronald,

    Sun broadband in my experience is much better (Globe and Smart, are not really fast in my area – although Smart is quite fast in Panglao, Bohol – 🙂 ). However, nothing beats a DSL land line setup, like you said. 😀

  3. Hey, can you tell me why are you chaining zip proxy to squid and not vice versa?

    I believe if we chain squid to zip proxy then you’ll be able to get already compressed images from cache. Otherwise, each image (even the one coming from cache) gets ‘reprocessed’…

  4. Hi Aleksey (sorry for the late comment, I was so busy), I based this on documentation for a WAN accelerator with Squid on the ziproxy website (using only one ziproxy instead of the two shown – the Squid proxy was placed in between the two ziproxy instances). I placed Squid in the first section because I wanted the option of connecting some other computers on my network so they don’t pass through ziproxy (no compression of the images) but still have the advantage of caching. It’s just a preference on my part, but feel free to do what you suggested as well.

  5. I still can’t figure out why, when level 3 switches became the commonly known ‘router’, they lost the cache that had once been a common built-in element. It is in the best interests of the world’s ISPs to reduce repeated data transfers to a given local network. In fact, that was an element of the push for routers that have caching proxies built in. It lowers the quantity of data that is unnecessarily resent. Your local net only needs (for instance) to get elements of the main Google search page when they change on the site. Anything more is extra unneeded load on the infrastructure… including the ISP’s max network bandwidth.

    In fact the cable modem backbone (which was built for speed) does include intelligent caching to increase overall speed by reducing redundant data transfers. Imagine how often people in a home or a business fetch the same image files… partially because the machines on the local net each have their own cache with no unified cache… and partially because at the very same time that high-speed broadband and unlimited accounts became common in the USA (which is less than 1/21st of the earth’s population and not representative of most of the planet’s web access) the quality of browser-based web caching began more and more to suck beyond belief… making things as simple as the Google logo image load from the Google server every single time the page is visited, even though the file path and content are the same. This is why there are so many browser plugins that make the newest versions of common browsers have caching that functions the way it did in the version 1 and 2 browsers (i.e. – properly).
