How to cache openSUSE repositories with Squid
fetcher206

This is the daemon that works with squid to retrieve complete copies of files that were otherwise retrieved with segmented downloads. Although squid is unable to assemble individual segments into a complete file, it is able to satisfy partial (range) requests once a complete copy of the file is in the cache.

In other words, the solution is to make sure squid gets a complete copy of every file that is retrieved with segmented downloading. fetcher206 does this by reading a squid logfile and fetching complete copies of those files with wget.

fetcher206?? Well, the daemon had to have a name, and as it looks for completed partial HTTP requests, and these are indicated by HTTP status code 206 (Partial Content), I ended up with fetcher206.

For now the daemon is written in PHP. At some point I want to rewrite it in C, but I find PHP very useful for fast prototyping. There is room for improvement, but it does a pretty decent job as it is.

Daemon pseudo-code:

read config
while true
  check job queue and job list
  if logfile has new data
    look for TCP/206 entries; if the host is an openSUSE mirror, update the job list
  end if
done
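The loop above can be sketched in shell. This is only an illustration of the idea, not the actual PHP daemon; the sample log line and mirror host name are made up, and the field positions follow the f206 logformat defined later in this article.

```shell
#!/bin/sh
# Sketch of fetcher206's inner loop (the real daemon is PHP).
# Field layout follows the f206 logformat: time status/code method url mimetype.

# Print the URL of every GET request that completed with HTTP status 206.
extract_206_urls() {
    awk '$2 ~ /\/206$/ && $3 == "GET" { print $4 }'
}

# Hypothetical log line in the f206 format:
sample='2012-05-17T11:11:11 TCP_MISS/206 GET http://mirror.example.org/suse/bash.rpm application/x-rpm'

echo "$sample" | extract_206_urls
# prints: http://mirror.example.org/suse/bash.rpm

# The real daemon would then fetch the complete file through squid, e.g.:
#   http_proxy=http://127.0.0.1:3128 wget -q -O /dev/null "$url"
```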
 

Revision as of 11:11, 17 May 2012

Summary: how to make your local squid web cache work with openSUSE repositories and the openSUSE network installation process. In effect, how to run a fully autonomous local repository mirror.

Background

In my company, we do quite a lot of testing of openSUSE, and over the last three to four years, we have increasingly switched to installing over the network. Prior to that, we would install from DVD images over NFS served by a local server. However, over the last couple of years, we've been working a lot more with Factory and the regular snapshots that lead up to a final/gold release. With those it is much easier to just point the installation process to the right URL and have everything downloaded there and then.

When we're testing installation or new hardware, we often have to repeat the installation process many times on different machines. Not because it doesn't work as such, but because we might be testing or debugging our own add-ons, or collecting diagnostics. Sometimes we install on virtual machines, sometimes on desktops, more often on server hardware in our downstairs datacentre. We have a local squid web cache, but after switching to network installs, I have often been annoyed by how ineffectively it caches the openSUSE repositories. Once I've done one installation, the downloads for a subsequent one should obviously happen a lot faster, in fact at wire speed. Well, they don't, and that's annoying when you know they should have been cached.

The immediate alternative would be to run a local copy of the openSUSE repositories, but that requires a process for keeping the local mirror up-to-date, plus a bit of manual interaction (adding the right URL when installing). This is all entirely feasible, but I thought using squid would be a more elegant and (hopefully) fully autonomous solution, so I decided to figure out why our squid wasn't coping.

Well, squid and the openSUSE network installation process just don't work together very well. Not out-of-the-box, anyway. The repository at download.opensuse.org is served by a load-distribution system combining MirrorBrain and metalinks. I won't go into further detail; suffice it to say that packages are downloaded using segmented downloading spread over multiple mirrors, which together make it impossible for squid to do much caching.

The problem

Well, two problems really:

  • the openSUSE repository is mirrored around the world. MirrorBrain does a good job of picking the most suitable mirrors depending on your location, which also gives a good distribution so individual mirrors aren't overloaded. However, squid does not know that multiple mirror sites serve the same file, so caching is rendered largely ineffective.
  • the segmented download means a package is downloaded in bits from multiple mirrors. This is good for speeding up the download and making good use of the available downstream bandwidth. The problem is that squid is only able to cache whole files, not parts of files, so now caching is completely useless.

I have solved both of these problems:

  • using a squid url rewriter, I map all the mirror locations on to a single one.
  • using a squid logfile and a custom-written daemon, I do complete downloads of all the files that are being fetched with segmented downloading.

Summary

For anyone, an individual or a group of people, doing repeated ad-hoc installations of openSUSE (typically Factory), using this squid setup means

  • significantly faster installation due to downloads at wire speed
  • significant bandwidth savings due to a working cache
  • less load on openSUSE mirrors due to a working cache
  • zero local mirror management (assuming a working squid setup).
  • no need to worry about where to install from

Others doing, say, repeated updates or software installations should enjoy similar benefits (once the packages have been cached).

Download

For the impatient, I've tar'ed everything into a single download. This contains the daemon code, a sample config file and the scripts for keeping up with the list of openSUSE mirrors. It's not as easy as just plonking another package into your openSUSE system with YaST or zypper, but the following step-by-step guide will hopefully help.

fetcher206.tar.gz

Step by step

Squid

The Squid web proxy is the key element in this setup, so step one is setting up a working Squid installation. The setup here works for Squid 2.7. If you already have a working Squid installation, skip to the next step.

Setting up Squid is not as complicated as it may appear, but you'll have to consult the squid documentation; a full walkthrough is outside the scope of this article. Whether you prefer directing access using environment variables (http_proxy et al.) or run a transparent proxy (like I do) is not really important.
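One thing worth checking is that the cache is allowed to hold large objects, or the bigger RPMs will never be cached. The squid.conf fragment below is only a sketch; the cache path, sizes and refresh pattern are assumptions you should adapt to your own disk and needs:

```
# allow individual objects up to the size of the largest packages you expect
maximum_object_size 512 MB
# 20 GB on-disk cache (adjust path and size to taste)
cache_dir ufs /var/cache/squid 20480 16 256
# keep .rpm files fresh for a long time; released package files never change in place
refresh_pattern -i \.rpm$ 10080 100% 43200
```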

jesred

jesred is the URL rewriter. It's fairly mature and fully functional (original webpage). I had to make a couple of changes to make it fully compatible with squid 2.7.

For the moment it does not come packaged, so you'll have to build it from source:

tar xzvf <tarball>
cd jesred-1.3
make

Installation: when you're done, copy the binary jesred into /usr/bin.

Configuration: add the following two lines to /etc/squid/squid.conf

storeurl_rewrite_program /usr/bin/jesred
storeurl_rewrite_children 5

The config file for jesred: /etc/squid/jesred.conf

allow = /etc/squid/redirector.acl
rules = /etc/squid/opensuse-redirect.rules
redirect_log = /var/log/squid/redirect.log
rewrite_log = /var/log/squid/rewrite.log

Using /etc/squid/redirector.acl you can control which clients' requests the rewriter should process:

# rewrite all URLs from
192.168.0.0/21
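To illustrate the idea behind the rules file: it maps every mirror's URL onto a single canonical host, so squid only ever stores one copy of each file. The rules below are a made-up sketch, not the actual rule set generated from the mirror database; jesred matches the URL against a regular expression and substitutes the captured group:

```
# map any mirror's /distribution or /repositories path onto download.opensuse.org
regex ^http://[^/]+/(distribution/.*)$   http://download.opensuse.org/\1
regex ^http://[^/]+/(repositories/.*)$   http://download.opensuse.org/\1
```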

Logfile for fetcher206

Amend /etc/squid/squid.conf as follows:

logformat f206 %{%Y-%m-%dT%H:%M:%S}tl %Ss/%03Hs %rm %ru %mt
access_log /var/log/squid/fetch206.log f206

This log will be read by fetcher206.

To prevent it growing too big, add the following to a new file in /etc/logrotate.d/ :

/var/log/squid/fetch206.log {
   compress
   dateext
   maxage 365
   rotate 5
   size=+4M
   notifempty
   missingok
   create 640 squid root
   sharedscripts
   postrotate
    /etc/init.d/squid reload
   endscript
}

squid delay pool

This is an optional step. Depending on your available downstream bandwidth, you may want to restrict how much of it fetcher206 uses for retrieving repository files. This prevents:

  • slowing down the installation currently in progress, and
  • saturating the internet connection.
delay_pools 1
delay_class 1 1
delay_access 1 allow localhost
delay_parameters 1 1000000/1000000

Add the above to /etc/squid/squid.conf - it defines one delay_pool, only accessible from localhost (which is where fetcher206 will be running wget) with a maximum bandwidth of 1MByte/sec.

If you have other http/proxy traffic originating from localhost, you could just add another 127.0.0.x address, and use that specifically for fetcher206.
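In that case you can tie the delay pool to that extra address rather than to all of localhost. A sketch, assuming fetcher206's wget binds to 127.0.0.2 (the acl name is my own invention):

```
acl fetcher206_src src 127.0.0.2
delay_access 1 allow fetcher206_src
delay_access 1 deny all
```

wget can be pinned to the extra address with its --bind-address=127.0.0.2 option.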

mirror database

We need a current list of the available openSUSE mirrors. This can be retrieved from mirrors.opensuse.org.

mkdir -p /var/lib/fetcher206
cp tarball/Makefile /var/lib/fetcher206
make -C /var/lib/fetcher206
cp tarball/opensuse_mirrors /etc/cron.d
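The opensuse_mirrors file installs a cron job that keeps the mirror database current. Its exact contents are in the tarball; a hypothetical equivalent would look like this:

```
# /etc/cron.d/opensuse_mirrors (hypothetical example - use the file from the tarball)
# refresh the mirror database once a day at 04:17
17 4 * * * root make -C /var/lib/fetcher206
```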

reload squid

When you've come this far, it's time to reload squid with

squid -k reconfigure

fetcher206

fetcher206 is, for the time being, a PHP script. Install it by simply copying it into /usr/bin. It has a few hard-coded options, such as the number of wget processes to run concurrently and the name of the logfile.

fetcher206 does not yet have a systemd service unit, nor an LSB init-script. For the time being, you simply start it with:

startproc -s -q /usr/bin/fetcher206