Can a LAN have an update server?

Asked by LEGOManiac

For clarification, I'm thinking along the lines of Microsoft's Windows Server Update Services (WSUS). A WSUS server sits on the LAN and downloads the various Windows updates. Local admins then decide which updates they want to publish to the local LAN. Published updates are then downloaded to workstations at the LAN's substantially higher speed, rather than every workstation downloading them over the internet. It also takes a load off Microsoft's servers.

So, does such a thing exist for Ubuntu?

I have 6 machines here that have spent the better part of the last 8 hours downloading 9.10. I'd like to just download it once to my server and then have it distributed locally.

Is this possible? If so, how do I do it? The server, BTW, is running 8.04 LTS.

Question information

Language: English
Status: Solved
For: Ubuntu apt-cacher-ng
Assignee: No assignee
Solved by: marcobra (Marco Braida)

This question was reopened

Best marcobra (Marco Braida) (marcobra) said :
#1

Sure: install apt-cacher-ng and configure your clients to use it ...

Package: apt-cacher-ng
New: yes
State: installed
Automatically installed: no
Version: 0.4-1
Priority: optional
Section: universe/net

Description: Caching proxy for distribution of software packages
 Apt-Cacher NG is yet another implementation of an HTTP proxy for software packages, primarily targeted at Debian/Ubuntu packages but may also be used with other types.

 It follows similar principles as others (Apt-Cacher, Apt-Proxy, Approx) and serves the same purpose: a central machine presents the proxy for a local network and clients
 configure their APT setup to download through it. Apt-Cacher keeps a copy of all useful data that has been passed through it and when a similar request appears, the old copy of
 the data is delivered without redownloading it from the Debian mirrors.

 Apt-Cacher NG is more than a simple rewrite of Apt-Cacher. It was redesigned from scratch and is written in C++ with main focus on maximizing throughput with low requirements on system resources.
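
To get it, install the package on the machine that will act as the cache:

sudo apt-get install apt-cacher-ng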

Hope this helps

marcobra (Marco Braida) (marcobra) said :
#2

Please read the main developer's site to learn how to configure it: http://www.unix-ag.uni-kl.de/~bloch/acng/

I use it...

Hope this helps

marcobra (Marco Braida) (marcobra) said :
#3

You don't need to install apt-cacher-ng on the clients; they simply need a configuration file.

Install the server on just one PC and configure it (here is some of my configuration):

sudo vi /etc/apt-cacher-ng/acng.conf
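
# Port: the TCP port apt-cacher-ng listens on
# BindAddress: the addresses it binds to (localhost plus the
# server's LAN address in this example)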

Port:3128

BindAddress: localhost 192.168.1.40

Save and exit, then start the service:

sudo /etc/init.d/apt-cacher-ng start

Then configure the server itself and all the clients to use it: on every PC, create a proxy file in /etc/apt/apt.conf.d/

sudo vi /etc/apt/apt.conf.d/proxy

and put a line like this in it:

Acquire::http::Proxy "http://192.168.1.40:3128/";
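
A quick way to check that the cache is answering is to fetch its report page from any machine on the LAN (the address and port are the example values above; the report page is the one mentioned later in this thread):

wget -O - http://192.168.1.40:3128/acng-report.html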

Hope this helps

LEGOManiac (bzflaglegomaniac) said :
#4

Thanks for all the clarification. Before I close this, I just want to clarify one thing:

I'm running Squid on my server. Does this imply that the Update-Manager is going to use my proxy server to download the updates? I've been wondering A) why updating 4 systems to 9.10 is taking sooo looong and B) why my server seems to be getting hammered. It's a PIII and it doesn't take much to load it. I had been assuming that Update Manager was going directly to the internet to download updates. It never occurred to me it might be using my proxy server.

marcobra (Marco Braida) (marcobra) said :
#5

Obviously, if you run Squid and apt-cacher-ng on the same IP (same PC), you need to give apt-cacher-ng a listen port different from Squid's.

The proxy value in the clients' /etc/apt/apt.conf.d/proxy file is used only when you download deb packages from the net.

I use a crontab entry on the server to download (download only, not install) upgrades at night, so the packages are already cached when I choose to upgrade the server.

Here is the crontab row; it downloads the upgrades for the server at 7:15 in the morning:

sudo crontab -e -u root
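
# m h dom mon dow command
# at 7:15 a.m. fetch (download only; that is what -d does) upgrades for this server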

15 7 * * * /usr/bin/apt-get update; /usr/bin/apt-get upgrade -d -y; /usr/bin/apt-get dist-upgrade -d -y

----

Then I set up one client (the one with the most packages installed) to start every day at 6:00 a.m. via a BIOS wake-up event, and on that client I set a crontab row like this:

# m h dom mon dow command
# at 6:15 a.m. start the download of this client's upgrades every day
15 6 * * * /usr/bin/apt-get update; /usr/bin/apt-get upgrade -d -y; /usr/bin/apt-get dist-upgrade -d -y

# only saturday and monday shutdown this system at 7:00 a.m
0 7 * * 6,0 /sbin/shutdown -h now

Hope this helps

marcobra (Marco Braida) (marcobra) said :
#6

Sorry, the row...

# only saturday and monday shutdown this system at 7:00 a.m

must be

# only saturday and sunday shutdown this system at 7:00 a.m

Hth

marcobra (Marco Braida) (marcobra) said :
#7

Sometimes I use this terminal command on the clients to upgrade them:

export http_proxy=http://192.168.1.40:3128/; sudo apt-get update; sudo apt-get upgrade

This also gets the repository index content served from the apt-cacher-ng cache, not only the deb files.

This might create an issue when a meta package needs to download some plain http content itself (msttcorefonts, for example).

If the http_proxy variable points at the deb cache (which usually does not permit being used as a real http proxy), those packages fail to install...

So, after the failure, setting http_proxy to the real Squid proxy and running sudo apt-get -f install gets the failed packages installed. That is why I use a more complete command like this:

export http_proxy=http://192.168.1.40:3128/; sudo apt-get update; sudo apt-get upgrade; export http_proxy=http://192.168.1.40:3142/; sudo apt-get -f install
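
If you do this often, the two-step dance fits in a small script. A minimal sketch based on the commands above (the addresses and ports are the example ones from this thread; run it as root):

#!/bin/sh
# fetch everything through the apt-cacher-ng cache first
CACHE=http://192.168.1.40:3128/   # apt-cacher-ng (example address above)
SQUID=http://192.168.1.40:3142/   # the real squid proxy
http_proxy=$CACHE apt-get update
http_proxy=$CACHE apt-get upgrade -y
# meta packages such as msttcorefonts fetch plain http content, which
# the deb cache will not serve; finish those off through squid
http_proxy=$SQUID apt-get -f install -y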

Hope this helps

marcobra (Marco Braida) (marcobra) said :
#8

Obviously a different and more elegant way is to set up Squid itself to keep the repository index cached...
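
For example, a couple of refresh_pattern lines in squid.conf along these lines should do it (a sketch only; the lifetimes are my own guesses, not values from this thread):

# .deb files never change once published, so cache them for a long time
refresh_pattern \.deb$ 129600 100% 129600
# let the repository index files expire quickly so new updates are seen
refresh_pattern (Packages|Sources|Release)(\.(gz|bz2))?$ 0 20% 60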

Hth

LEGOManiac (bzflaglegomaniac) said :
#9

Thanks for the help. I'm trying to implement it. I've downloaded the latest .deb (0.4-1) and installed it on port 6809. It runs. Netstat reports that apt-cacher-ng is listening on the correct IP on port 6809.

Yet when I try to connect with my browser (http://192.168.0.201:6809/acng-report.html), the page times out. Any idea why?

On the off chance that it required something in the cache before it could report on anything, I ran this on one of the workstations:

export http_proxy=http://192.168.0.201:6809/; sudo apt-get update; sudo apt-get upgrade

and used Synaptic to install a package. I expected to see an entry in /var/cache/apt-cacher-ng/ but it was empty.

Then I thought that perhaps Synaptic does its own thing, and found the proxy settings under Settings->Preferences->Network and set the proxy IP and ports accordingly. I then downloaded another package and Synaptic hung.

I rebooted the server (more out of desperation than because I thought it would change anything) but still no-go.

LEGOManiac (bzflaglegomaniac) said :
#10

Bang, Bang, Bang (Me banging my head on the desk)

In my defense, it was after 1:00 in the morning when I wrote the above message.

Coming back to my desk this morning, it immediately occurred to me that I had installed apt-cacher-ng on the firewall and, as such, I needed to define a service port for it and a firewall rule allowing hosts to use that service port. Duh, duh, duh.
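
For anyone hitting the same wall, the missing piece was a rule along these lines (hypothetical; it assumes the LAN interface is eth1 and uses the port 6809 chosen above):

iptables -A INPUT -i eth1 -p tcp --dport 6809 -j ACCEPT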

It works now, at least as far as the web browser connecting to it is concerned.

I now have an unrelated DNS problem so once I get that resolved, I'll check that the updates are being cached.

LEGOManiac (bzflaglegomaniac) said :
#11

Thanks marcobra (Marco Braida), that solved my question.