Showing posts with label Server. Show all posts

Monday, July 12, 2010

Speeding up your Updates with Ubuntu and APT Cacher NG

If you are in a building with more than one computer running Ubuntu, you may have wondered why everyone has to download updates separately. The answer is, they don't. Here's how to accomplish it with a tool called apt-cacher-ng.

apt-cacher-ng is a fork of a project called apt-cacher, which in turn is an alternative to apt-proxy. In my testing, apt-proxy proved poorly maintained and unreliable.

Setting up the APT Cacher Server

Begin by installing apt-cacher-ng from Synaptic Package Manager, or from the Terminal with the following command:

sudo apt-get install apt-cacher-ng

By default, the version of apt-cacher-ng included with Ubuntu does not cache security updates. We can easily add the security updates, however, by following these steps:

As root, create a new file called /etc/apt-cacher-ng/ubuntu_security

sudo nano /etc/apt-cacher-ng/ubuntu_security

This file will be a list of mirrors from which the updates may be downloaded. We only want to insert a single line, the location of Ubuntu's official security updates server:

http://security.ubuntu.com/ubuntu

Save and close the file.

Next, edit the file /etc/apt-cacher-ng/acng.conf

sudo nano /etc/apt-cacher-ng/acng.conf

While we're in this file, you will see a couple of lines like this:

# Set to 9999 to emulate apt-proxy
Port:3142

I recommend that you follow the instructions and set the port to 9999 to emulate apt-proxy. Not only will this make your server compatible with systems expecting an apt-proxy server, but it will also make the port number easier for you to remember.
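After making that change, the section should read like this (the Port directive is the only line touched; the comment stays as-is):

```
# Set to 9999 to emulate apt-proxy
Port:9999
```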

Next, look for this section:

# Repository remapping. See manual for details.
# In this example, backends file is generated during package installation.
Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian
Remap-uburep: file:ubuntu_mirrors /ubuntu ; file:backends_ubuntu
Remap-debvol: file:debvol_mirror*.gz /debian-volatile ; file:backends_debvol

We're going to add one more line at the end of this section to describe the Ubuntu Security repository that we are adding (make sure the following all goes on one line):

Remap-ubusec: file:ubuntu_security /ubuntu-security ; http://security.ubuntu.com/ubuntu

Now, save changes to this file and restart the apt-cacher-ng service.

sudo service apt-cacher-ng restart

If all went well, the server is now working, and you may proceed to setting up the clients. I strongly recommend making the server a client of its own apt-cacher, as there is no reason for that system to download the updates from the Internet twice.

Setting up an APT Cacher Client

The basic idea of setting up a client is to change all the lines in /etc/apt/sources.list to point at the local apt-cacher-ng server instead of the Internet servers. I recommend one additional step first so that you can easily switch between different servers: create a hostname alias called "apt-cacher" that points at your apt-cacher-ng server, and then simply re-point that hostname whenever you want to switch servers.

sudo nano /etc/hosts

We will add our own entry just after these two lines:

127.0.0.1 localhost
127.0.1.1 your-computer-name

If the apt-cacher-ng server is running on the same computer that you are setting up as a client, its IP address will be 127.0.0.1; otherwise, you need to find (or set) the static LAN IP address of your server. I will assume it is 192.168.1.10 in this example, because that's what it is in our building here at CCC. The line you will add will look like this:

192.168.1.10 apt-cacher

Save and close the hosts file.

Now we will replace the entries in the sources.list to point at the cacher:

sudo nano /etc/apt/sources.list

The easiest way to handle this will be a search and replace.

First, replace every instance of "us.archive.ubuntu.com" with "apt-cacher:9999" (no quotes on either). If you are using nano, press Ctrl+W then Ctrl+R, enter the search string and press Enter, enter the replacement string and press Enter again, and when prompted, press "A" for All.

Next, replace every instance of "security.ubuntu.com/ubuntu" with "apt-cacher:9999/ubuntu-security" (again, no quotes). If you are using nano, use the same steps as given above.
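If you'd rather script these edits than do them interactively in nano, the same two replacements can be sketched with sed. This example runs against a scratch file (/tmp/sources.list.test is just an illustrative path); adapt it to /etc/apt/sources.list with sudo once you are happy with the result:

```shell
# Build a tiny sample sources.list to demonstrate the substitutions.
printf '%s\n' \
  'deb http://us.archive.ubuntu.com/ubuntu/ lucid main restricted' \
  'deb http://security.ubuntu.com/ubuntu lucid-security main restricted' \
  > /tmp/sources.list.test

# Point the security lines at the ubuntu-security remap...
sed -i 's|security\.ubuntu\.com/ubuntu|apt-cacher:9999/ubuntu-security|g' /tmp/sources.list.test
# ...and everything else at the cacher.
sed -i 's|us\.archive\.ubuntu\.com|apt-cacher:9999|g' /tmp/sources.list.test

cat /tmp/sources.list.test
```

The two deb lines in the output should now point at apt-cacher:9999, matching the result of the nano replacements above.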

(In this example, we won't be handling the partner repository or any third party repositories which you might have installed.)

Save and close your sources.list. Your final sources.list if you are running Ubuntu 10.04 Lucid Lynx will look something like this:

# deb cdrom:[Ubuntu 10.04 LTS _Lucid Lynx_ - Release i386 (20100429)]/ lucid main restricted
# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.

deb http://apt-cacher:9999/ubuntu/ lucid main restricted
deb-src http://apt-cacher:9999/ubuntu/ lucid main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://apt-cacher:9999/ubuntu/ lucid-updates main restricted
deb-src http://apt-cacher:9999/ubuntu/ lucid-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team. Also, please note that software in universe WILL NOT receive any
## review or updates from the Ubuntu security team.
deb http://apt-cacher:9999/ubuntu/ lucid universe
deb-src http://apt-cacher:9999/ubuntu/ lucid universe
deb http://apt-cacher:9999/ubuntu/ lucid-updates universe
deb-src http://apt-cacher:9999/ubuntu/ lucid-updates universe

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://apt-cacher:9999/ubuntu/ lucid multiverse
deb-src http://apt-cacher:9999/ubuntu/ lucid multiverse
deb http://apt-cacher:9999/ubuntu/ lucid-updates multiverse
deb-src http://apt-cacher:9999/ubuntu/ lucid-updates multiverse
## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu lucid partner
# deb-src http://archive.canonical.com/ubuntu lucid partner

deb http://apt-cacher:9999/ubuntu-security lucid-security main restricted
deb-src http://apt-cacher:9999/ubuntu-security lucid-security main restricted
deb http://apt-cacher:9999/ubuntu-security lucid-security universe
deb-src http://apt-cacher:9999/ubuntu-security lucid-security universe
deb http://apt-cacher:9999/ubuntu-security lucid-security multiverse
deb-src http://apt-cacher:9999/ubuntu-security lucid-security multiverse

After you have saved your sources.list, run the following command:

sudo apt-get update

This will download the package index from the apt-cacher server. If everything is working, you should see lots of "Get", "Hit", and "Ign" lines coming from http://apt-cacher (plus other lines for partner and third-party servers). Remember that the update command will still run at normal speed, because it has to fetch the indexes from the Internet every time to determine whether the cache needs to download any new files. The first download of any given package will also still happen at normal speed.

To try it out, run sudo apt-get upgrade to download any available updated packages, or just install a new package of your choice. The cache should now be used by the command-line tools, Update Manager, and Synaptic Package Manager. Just remember not to adjust your repository checkboxes under the "Ubuntu Software" tab in Synaptic's Settings:Repositories menu, because Synaptic no longer knows which repositories we've enabled there. (They are visible, however, on the "Other Software" tab, and you may adjust them there.)


How To Switch Locations

If you are using a laptop or netbook, you may appreciate the ability to quickly switch from one location to another, and even to download updates when you are not near your regular apt-cacher-ng server at all.

I have come up with a strategy to accomplish this. While your current apt-cacher-ng server is still accessible, repeat the steps above under "Setting up the APT Cacher Server" on your local machine. Then you can simply edit your /etc/hosts file and point the apt-cacher entry at 127.0.0.1 when you're on the run, or back at your server's IP when you are at home or work. That one-line change lets you keep using all of the APT tools smoothly.
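As a sketch of that one-line switch, here is what the edit looks like when done with sed. It is shown against a scratch copy; for real use, point it at /etc/hosts and run with sudo (192.168.1.10 is just the example address from above):

```shell
HOSTS=/tmp/hosts.test   # scratch copy; use /etc/hosts (with sudo) for real

# Start from a hosts file pointing at the office server.
printf '127.0.0.1 localhost\n192.168.1.10 apt-cacher\n' > "$HOSTS"

# "On the go": re-point the apt-cacher alias at the local cache.
sed -i 's|^.*[[:space:]]apt-cacher$|127.0.0.1 apt-cacher|' "$HOSTS"

grep 'apt-cacher$' "$HOSTS"
```

Swapping the replacement address back to your server's IP reverses the switch when you return.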

There is one disadvantage, especially for netbooks or devices with smaller hard drives: files downloaded while "on the go" are stored in two caches on your hard drive, one in the apt-cacher-ng folder and the other in the system's regular APT cache under /var/cache/apt.

Because of this, you may want to occasionally issue the command:

sudo apt-get clean

This will clean the system cache of installed packages out of /var/cache/apt, saving some disk space. This is generally a good idea to do on any Ubuntu system with disk space limitations, whether or not you are running apt-cacher-ng.

Also, if you ever need to delete the files under /var/cache/apt-cacher-ng, you may safely do so; apt-cacher-ng will simply download them again the next time they are needed.

Congratulations on successfully setting up apt-cacher-ng -- I hope! It has saved hours and hours of downloading here in our classroom.

Monday, January 8, 2007

Ubuntu Edgy Eft Xvnc Disconnect Problem

Diagnosis of Problem:

I installed the Automatic Updates on Ubuntu Edgy Eft recently, around 2007-01-06, and after a reboot, my Xvnc running through xinetd stopped accepting connections. It disconnects immediately after connecting, or immediately after receiving the password. Log files turn up almost nothing: an "xinetd[nnnn]: warning: can't get client address: Transport endpoint is not connected" error shows up in /var/log/daemon.log and /var/log/syslog, and upon telnetting to the VNC port I received nothing but RFB 003.008 (the usual VNC protocol greeting) followed by an immediate drop of the connection. We know xinetd does pass the connection to VNC, because the greeting is given, but an examination of the running processes will not show Xvnc in the list, because it closes immediately after opening.

Running Xvnc server manually with the appropriate options and connecting to it with vncviewer resulted in a gray screen (so-called "root-weave") with an X or a watch cursor on it, and the gdm (Gnome) session never starts.

The Solution:

Temporary solution: This happened because of an upgrade to the vnc4server package. Run Synaptic Package Manager, search for vnc4server, click on it, then go to Package > Force Version and choose the previous version. Downgrade to it and you should be all right for now. Wait until the next version comes out before you attempt to update this package again.

How I Found the Solution:

It took me many hours, but I found the answer on https://launchpad.net/ubuntu/+source/vnc4/+bug/78282

The person who first reported the bug incorrectly listed the date of the upgrade as 2006-01-06 (happy new year feranick).

I hope this helps! Drop a comment to let me know if this post eased your pain.

Wednesday, December 6, 2006

DNS with bind9 on Ubuntu

I just finished setting up bind on Pericles, and it wasn't too bad.

Bind was already installed via the LAMP option on the Ubuntu Server disc. Its configuration files are found in /etc/bind. I have a tool, written as a Windows console application, that dynamically generates my forward files from templates, so I ported that over and ran it with wineconsole. It worked. I did have to make a change in the named.conf.local file, because bind on Linux seems to require a full path in the zone lines:
zone "whatever.tld" IN { type master; file "/etc/bind/forward/whatever.tld.zone"; };

On Windows, I didn't need /etc/bind/ prefixing those, because the paths were relative to the conf file. No big deal, however; it was an easy change.
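For completeness, a minimal forward zone file for such an entry might look like the sketch below. The hostnames, serial, and addresses here are placeholders for illustration, not my actual setup:

```
$TTL 86400
@       IN  SOA ns1.whatever.tld. hostmaster.whatever.tld. (
            2006120601 ; serial (YYYYMMDDnn)
            3600       ; refresh
            900        ; retry
            604800     ; expire
            86400 )    ; negative-caching TTL
        IN  NS  ns1.whatever.tld.
        IN  A   192.0.2.10
www     IN  A   192.0.2.10
```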

I dropped a script into /usr/local/sbin called redns

It does the following:
/etc/init.d/bind9 stop
/usr/local/sbin/dnsgen.sh
/etc/init.d/bind9 start

This simply stops bind, regenerates the forward files using my tool (the dnsgen.sh file launches it with wineconsole), and then starts up bind again.

If you get an rndc error, here's how I fixed it.

I will port the dnsgen tool over to a native application at some future point, but I'm in a hurry right now because my WAMP server is starting to have MySQL blackouts requiring a reboot. It seems to be something to do with a file handle getting a lock stuck on it, because stopping and starting the MySQL daemon doesn't improve the situation.

Configuring PHP

I'm now trying to get PHP working on Pericles.

The first thing I should point out, is that I started by doing a vanilla LAMP install from the Ubuntu Server disc. This means I already had Apache2 and PHP5 installed "out of the package", but they aren't configured adequately for my needs.

I dropped a simple file in /var/www that would echo the output of phpinfo() so that I could compare it with my existing WAMP server's setup.

If you don't have Apache working yet...

You can use php5-cli (install this with Synaptic) as an alternative to view the phpinfo() at a shell prompt:

php
<?php phpinfo(); ?>
(Press CTRL+D)


A plain text rendering of phpinfo() will appear, which you can scroll back to view in your console buffer.

Here are the major differences that I need to adjust:

- magic_quotes_gpc needs to be turned off. It's a stupid thing; blast them for making it default to on.
- post_max_size and upload_max_filesize need to be increased, because I have people uploading large megapixel images through HTTP forms. 2M doesn't quite cut it any more these days.
- The gd module needs to be enabled.
- The zip module, or a substitute for it, needs to be enabled (I use this to automatically unpack files uploaded to a designated FTP account for daily processing).

I notice a few other differences, but I think they're minor. If I run into problems with them later, I'll follow up with details on how to fix them.

I discovered that by installing the php5-gd package with Synaptic, and then restarting Apache I gained gd2 support. That was easy. To restart Apache:
sudo apache2 -k graceful

This does a graceful restart (it won't force any connections that are still open to close). I realize this is a new box, so there won't be any connections hanging open anyway, but it is good to get into this habit early on.

I reloaded the phpinfo() test file, and there is now a section for gd which says version: 2.0 or higher, and everything looks enabled (freetype, t1lib, gif, jpg, png, wbmp)

I think the zip module I was using was part of PECL. PECL is similar to PEAR, and now that I think about it, I'll need PEAR support too, so if it hasn't already been done, install php5-cli and php-pear with Synaptic. We need php5-cli because pear is a command-line utility and requires the command-line version of PHP in order to run. Don't worry: php5-cli and libapache2-mod-php5 peacefully coexist. I opened a shell, typed pear, and there it is!

On a whim, I typed pecl, and it also runs. It looks like pear and pecl come as a pair (no pun intended).

Back to Synaptic, we need to install php5-dev because we will need a tool called phpize in order to complete the next step. php5-dev has several dependencies that it will automatically install.

Now, the magic command:
sudo pecl install zip

When it's finished, it will say: You should add "extension=zip.so" to php.ini
cd /etc/php5/apache2
sudo editor php.ini

Do what it says. Add the line extension=zip.so at the very end of the file, because that's where the automatically added extensions (mysql, mysqli, gd) ended up. You may want to also add the same line into the /etc/php5/cli/php.ini so that you have zip support when you use php for shell scripting.
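Since the same line has to land in two files, a small loop can do both at once. This is just a sketch: it runs against scratch copies under /tmp, so substitute the real paths (/etc/php5/apache2/php.ini and /etc/php5/cli/php.ini) and run with sudo on the actual server:

```shell
# Scratch copies standing in for the two real php.ini files.
mkdir -p /tmp/php5/apache2 /tmp/php5/cli
echo '; sample ini' > /tmp/php5/apache2/php.ini
echo '; sample ini' > /tmp/php5/cli/php.ini

for ini in /tmp/php5/apache2/php.ini /tmp/php5/cli/php.ini; do
    # Append only if the line is not already present, so re-runs are harmless.
    grep -q '^extension=zip.so' "$ini" || echo 'extension=zip.so' >> "$ini"
done
```

The grep guard makes the loop idempotent, which is handy if you later script your PHP setup.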

Save your changes and:
sudo apache2 -k graceful

If you look at the phpinfo() output again, you'll now see the zip section near the bottom. This is really easy.

While we're editing these two ini files, let's search for and change the following lines to these new values:

memory_limit = 16M
post_max_size = 16M
magic_quotes_gpc = Off
upload_max_filesize = 10M

Remember to set these for both /etc/php5/apache2/php.ini and /etc/php5/cli/php.ini

Another tool I sometimes use (if I need to programmatically submit a POST):
sudo pear install HTTP_Request

Well, I think that's everything I use. Restart apache2 one last time and see if it all works.

Tuesday, December 5, 2006

Firewall on Ubuntu using iptables

I decided to start by adding a firewall to Pericles, and a little searching revealed that iptables is exactly what I need for the very simple setup I am planning on running. Even if you run a separate firewall or router as a gateway, it may not be a bad idea to install iptables on your machine as well so that you can have full control over what goes in and out in the event that you ever have any guest machines connected on the network inside of the firewall.

The Ubuntu server distribution came with iptables preinstalled; I just had to create scripts to set up the firewall and have them start automatically when the machine boots.

I started here:

Easy Firewall Generator for IPTables

I generated a simple script, enabling SSH, DNS, Web Server, and a couple of other services I use on the first Ethernet interface (eth0), copied and pasted it into an editor (running under sudo) and modified it slightly:

I searched and found the line for the HTTP service:

$IPT -A tcp_inbound -p TCP -s 0/0 --destination-port 80 -j ACCEPT

I copied and pasted this and changed the port number to a few other ports I need open for specialized purposes. (Since I do more than just basic web hosting, I have clients using custom software that connect to specific ports.)

If you need to open a range, use something like 3000:3010 in place of the 80 in the above line.
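For example, a hypothetical rule opening TCP ports 3000 through 3010 (the port numbers here are placeholders) would be:

```
$IPT -A tcp_inbound -p TCP -s 0/0 --destination-port 3000:3010 -j ACCEPT
```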

Try to open as few ports as possible. That's kind of the point of a firewall.

Also, search for "ping", and you'll find a note on a line you can uncomment to allow pinging to your server. I prefer to allow pinging, you may choose not to. If you want pinging, uncomment it so it looks like this:

$IPT -A icmp_packets -p ICMP -s 0/0 --icmp-type 8 -j ACCEPT

Now save the finished file as /etc/init.d/iptables (which did not exist when I started)

Set the permissions so that it matches the rest of the files in /etc/init.d:
sudo chmod 755 /etc/init.d/iptables

Please test your firewall by running sudo ./iptables start from the shell prompt (in /etc/init.d). Remember, it won't cut off connections that are already established, so open a second SSH session or the like to verify that you can still access your box before deciding to make this firewall permanent. I recommend leaving some distinguishable port closed so you can verify that iptables is working. For example, I disabled ICMP ping, and when I pinged the box and saw "Request timed out", I knew my firewall was working; then I edited the iptables script to enable pinging again.

Once you are satisfied that it is working according to your desires, you need to add iptables to the list of daemons to automatically start for the various runlevels when your machine is booted up:
sudo update-rc.d iptables defaults

Finally, reboot your system and make sure the firewall comes up:
sudo reboot now

WAMP to LAMP

I'm starting a series to document my progress on converting a very specialized WAMP server over to a LAMP server.

WAMP = Windows, Apache, MySQL, PHP
LAMP = Linux, etc.

I am going to name this box Pericles (Mostly for the sake of giving it a tag in the blog so that you can read all about its life by clicking Pericles here or on the sidebar.)

To begin with, I want to explain why I was using a WAMP server in the first place. Most people either go all Microsoft or all Open Source. This server was born to fill one pressing need: a friend of mine had a website already written in ASP and needed a new hosting provider, and neither of us had time for anything as serious as a rewrite at that point. So we found a neat plug-in for Apache on Windows that allowed it to execute ASP code. It worked, and our small web-hosting business was born.

Fast-forward two years.

We now host around twenty websites on this box, and the need for ASP is gone because the original site has been rewritten in PHP. Furthermore, the specs on that box were outdated when we started, and it is time for a faster CPU. We bought a new system at the Day After Thanksgiving sale and we are now ready to go Open Source.

We've installed Ubuntu 6.10 (Edgy Eft) from the Server disc, then added the ubuntu-desktop package manually to get a GUI desktop. We need the GUI for a couple of reasons: 1) We're new at administering Linux and it helps us feel a little more confident. 2) I've developed a few tools over the past two years that handle frequent back-end tasks as Windows applications. I don't have time to port them all at once, so I will port them piece by piece and run the unported tools under Wine in the meantime.

Once I got Edgy booting smoothly and configured for our graphics card (it required a resolution tweak using dpkg-reconfigure xserver-xorg to look correct on our LCD), we had our official starting point.

To follow our ongoing drama, click on the Pericles category/label in the sidebar.