My daily routine involves helping customers with programming and scripting issues. Normally we don't support applications written outside our company, but I don't mind going through people's code to help out when I get a few spare minutes.
One of our Linux managed hosting customers was creating a PHP script to connect to an FTP server at their office. That should be simple and trivial, right? The important code that they were using looked like:
// create a basic connection
$ftpconn = ftp_connect($ftp_ip, $ftp_port);
// login with username and password
ftp_login($ftpconn, $ftp_user, $ftp_pass);
// Switch into passive mode (required for servers behind firewalls / NAT devices)
ftp_pasv($ftpconn, true);
// Print a directory listing
print_r(ftp_nlist($ftpconn, "/"));
Unfortunately, the code kept timing out at the ftp_nlist() call. After a few minutes the script would time out and return a warning:
Warning: ftp_nlist() [function.ftp-nlist]: php_connect_nonb() failed: Operation now in progress (115) in /home/script.php on line 27
My first thought was that there must be a firewall issue with the FTP server that was blocking certain ports. I decided to check if I could connect to the server directly from my local computer using FileZilla. To my surprise, I was able to connect, browse, upload and download without any problems. Now I was intrigued. I decided to see if I could implement the FTP client using PHP’s curl extension – and again I was able to connect, browse, upload and download without any problems. So, given that I was able to connect to the FTP server with FileZilla and curl I knew that the issue was somewhere in PHP’s ftp implementation and not with the FTP server. But I still had no idea what the problem was.
I turned my attention to the FTP logs in FileZilla and noticed:
Command: PASV Response: 227 Entering Passive Mode (10,1,2,3,79,212) Status: Server sent passive reply with unroutable address. Using server address instead.
The 'Status' line clearly hinted that FileZilla knew something was wrong – the FTP server had sent back a reply containing a private RFC1918 IP address (10.X.X.X, 192.168.X.X, etc.), which is not routable over the internet. To understand what is happening here, it is helpful to know about Active FTP vs Passive FTP. The important thing to know is that when using FTP in passive mode, the FTP client sends a 'PASV' command to the server. When the server receives a 'PASV' command, it opens a random network port for the client to send further data to, and returns to the client the IP address and port that the client should use for further communications. The problem is that when the FTP server sits behind a NAT device, it only knows about its local RFC1918 IP address (10.X.X.X, 192.168.X.X, 172.16.X.X, etc.) and not its public internet IP address – so the server responds with the only IP address it knows about, the RFC1918 one. This is problematic because RFC1918 addresses are not routable over the internet, so any FTP client that uses the unroutable IP address returned by the FTP server will be unable to send further data packets to it.
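The arithmetic behind the PASV reply is simple: the first four comma-separated numbers are the octets of the IP address, and the last two encode the data port as p1 * 256 + p2. A quick shell sketch decoding the reply from the FileZilla log above (an illustration only, not part of the customer's script):

```shell
# Decode the PASV reply seen in the FileZilla log above.
reply="227 Entering Passive Mode (10,1,2,3,79,212)"
nums=$(echo "$reply" | sed 's/.*(\(.*\)).*/\1/')   # -> 10,1,2,3,79,212
ip=$(echo "$nums" | cut -d, -f1-4 | tr ',' '.')    # first four numbers = IP address
p1=$(echo "$nums" | cut -d, -f5)
p2=$(echo "$nums" | cut -d, -f6)
port=$((p1 * 256 + p2))                            # last two numbers = data port
echo "data connection: $ip:$port"
# prints: data connection: 10.1.2.3:20436
```

The client then opens its data connection to that address and port, which is exactly why an unroutable 10.X.X.X address in the reply breaks the transfer.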
So, I was pretty confident that the problems this customer was seeing were due to PHP attempting to communicate with the FTP server on the private 10.1.2.3 RFC1918 IP address. To test that theory, I fired up tcpdump to take a look at packets going through the network interface and sure enough I found the server attempting twice to send packets to 10.1.2.3:
04:49:42.134381 IP X.X.X.X.36202 > 10.1.2.3.20426: S 1619271447:1619271447(0) win 5840 <mss 1460,sackOK,timestamp 3080816617 0,nop,wscale 7>
04:49:45.133886 IP X.X.X.X.36202 > 10.1.2.3.20426: S 1619271447:1619271447(0) win 5840 <mss 1460,sackOK,timestamp 3080819617 0,nop,wscale 7>
And yes, the time stamp says 4:49AM – I have much more fun debugging than I do sleeping :)
So now we understand why the problem was happening, but how do we solve it? Helpfully, FileZilla told us exactly what it did to solve the problem – 'Using server address instead'. FileZilla simply ignored the IP address returned by the FTP server in response to the PASV command and used the address it had originally connected to instead.
I scoured the PHP ftp extension documentation and source code in a fruitless search for any hint of a setting that would instruct PHP to ignore the IP address returned by the FTP server. Going through the C code that makes up the FTP extension in PHP, I realized that it would be relatively simple to modify the code and add an option that causes PHP to ignore the unroutable IP addresses returned by an FTP server.
I’ve created a patch for PHP’s FTP extension that adds the USEPASVADDRESS option. When this option is set to TRUE (the default), PHP behaves as it does now – continuing to use the IP address returned by the FTP server in response to the PASV command. When this option is set to FALSE, PHP ignores the IP address returned by the FTP server in response to the PASV command and instead uses the IP address that was supplied in the ftp_connect() (or ftp_ssl_connect()) call.
This option can be set and retrieved using the ftp_set_option() and ftp_get_option() functions:
ftp_set_option($ftpconn, USEPASVADDRESS, true);
echo "USEPASVADDRESS Value: " . (ftp_get_option($ftpconn, USEPASVADDRESS) ? '1' : '0');
ftp_pasv($ftpconn, true);
The option should be set before calling ftp_pasv($ftpconn, true);
You can download the patch here – it was written for and tested on PHP 5.3.8; however, it also works against PHP 5.2.
To apply the patch:
cd /usr/src/php-5.3.8
patch -p0 < ftp_usepasvaddress.patch
Bug #55651 has been submitted to the PHP developers to hopefully include in future releases.
Hosting nearly 13,000 domains on our shared hosting and reseller hosting platform means that we investigate dozens of compromised accounts each month. Once we find a hacked account, we could simply notify the account owner, restore the account from a backup taken before it was hacked, and warn the customer that they must update the scripts in their account. However, the security of each account is closely tied to the security and stability of the entire server. A hacked account can be used to launch attacks on other websites; it can monopolize CPU, memory and bandwidth resources and degrade service for other customers; or, worse, the hacking of a non-privileged account on a server (i.e. a customer account) can lead to a server-wide root exploit if the attacker stays quiet until a privilege escalation in the operating system is found.

For these reasons, we investigate each and every security incident and hacked account that we find, to ensure that the attack vector is closed and the account is secured. Thankfully, our backup system, centralized logging and other security systems allow us to investigate and recover from such attacks on our hosting servers. The ultimate goal of any hacking-related investigation is to locate and fix the attack vector (how the hackers got in) and restore the customer files from a time before the attack (we can usually go back up to a year or so thanks to our R1Soft Backup System!).
Since we do not offer SSH access, generally there are 2 ways through which accounts on our web hosting platform are compromised – both of which will allow attackers to take full control of the compromised account:
- FTP login credentials (username and password) have been stolen by a trojan or virus installed on the computer of someone with FTP access to the account, or the FTP username and password (which are sent unencrypted over the Internet) were intercepted by an eavesdropper or a compromised computer on an unsecured WiFi network, hub-based network or unsecured Internet connection.
- A Perl or PHP script uploaded by the account owner is outdated or vulnerable to exploitation, or the account owner has left a file manager (with the ability to edit, create or upload files) on their website without password protection.
With that in mind, let’s investigate a hacked account! Today I received the following notification from one of our cPanel servers (the username ‘bob’ has been changed to hide the identity of the account):
This is an automated status warning from server.
The process (24133) has exceeded defined resource limits, as such a kill signal was invoked from the process resource monitor.
– Event Summary:
PID : 24133
CMD : /usr/local/apache2/bin/httpd
CPU%: 97 (limit: 65)
MEM%: 0 (limit: 25)
PROCS: 1 (limit: 150)
What caught my eye here is the command of the running process (CMD): /usr/local/apache2/bin/httpd. That is suspicious because I happen to know that the server in question runs Apache 1.3, with the Apache web server httpd binary at /usr/local/apache/bin/httpd.

After logging into the server I found no running processes by the hacked user to investigate, so I immediately suspended the account and copied the account’s website access logs (so that they would not be rotated out by cPanel). Had there been processes running under the hacked account, I would have changed into the /proc/$processID directory and looked at where the process executable file really is (/proc/$processID/exe links to the running process file), looked at the contents of /proc/$processID/cmdline to see what command line arguments were used to execute the file, looked at /proc/$processID/environ to see what environment variables were set, and looked at the /proc/$processID/fd/ directory to see what files the process was reading and writing.

I did a quick search of the FTP logs (/var/log/messages and /var/log/messages.*) for any FTP activity on the account. Nothing. So, at this point, confident that FTP was not the attack vector, I was fairly sure that the account owner had left an outdated or vulnerable PHP or Perl script in their account which was exploited by hackers. I’ve seen this thousands of times. I then ran a command to find any files that the hackers may have created or modified recently in the account.
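The excerpt cuts off before the command itself, so here is only a hedged sketch of the kind of search described – a find over the account’s home directory for recently modified files (the path /home/bob and the 7-day window are assumptions, not the command actually used):

```shell
# Hypothetical sketch: list regular files under the account's home directory
# that were modified within the last 7 days, with full details (-ls).
find /home/bob -type f -mtime -7 -ls
```

Sorting the results by modification time and comparing them against the website access logs is usually what pins down the exploited script.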
OpenVZ is a (stripped down) free, open-source version of the Virtuozzo linux virtualization software. The modified OpenVZ kernel allows server operators to partition their servers into multiple Virtual Environments, each of which can run a different Linux distribution.
As of now (Oct 10th, 2006) the latest Kernel listed on the RHEL4 download page (version 2.6.9-023stab016.2) is vulnerable to a root exploit that was first reported in July of 2006. That means that OpenVZ has had the vulnerable kernel available for download for around 3 months!
Response from OpenVZ: (*UPDATE*)
The response from OpenVZ was quick & effective – we contacted them at around 10PM on Oct 10th and by 6AM on October 11th (~ 8 hours) they released an updated version (2.6.9-023stab030.1). This does not negate the fact that a vulnerable kernel was left available for download for ~3 months, but I am quite pleased with their response.
update 2: OpenVZ sent an email to their list today (October 11th) at around 1PM EST saying “Everybody using 023 kernel is advised to upgrade.” – perhaps they should have mentioned the root exploit in the email as a reason to drive people to upgrade.
This only affected the OpenVZ kernels, not the Virtuozzo kernels. Our paid Virtuozzo installations were on the 2.6.8 branch, which was not affected. A handful of our OpenVZ servers running 2.6.9 were vulnerable; we updated them immediately. Unfortunately, we became aware of this because one of the servers was actually exploited.
Server Security & Incident Tracking:
It goes without saying that if an attacker manages to get root access to a server, somewhere a sysadmin will forgo a night of sleep trying to recover.
‘root’ access to a server is absolute – root is the ultimate Unix user. Once an attacker gains root access, he/she can do anything. Cleaning a box that has had a root exploit is a nightmare, and many would argue it is not even possible. Because the ‘root’ user can modify anything on the system, any system binary can be replaced with a trojaned version. Any configuration file can be changed to allow an attacker access through an unexpected port, SSH keys can be added to let an attacker in, and cronjobs can be put in place to ensure that their exploits stick around even if a sysadmin deletes them. An attacker can add a new user to /etc/passwd with uid ‘0’ (root). The list goes on (and I don’t want to give malicious people any more ideas!)
Having a malicious entity gain ‘root’ access to a server is a worst-case scenario for any system administrator.
How do you know if you were rooted?
There are many obvious signs:
A few months ago we noticed that after updating either Virtuozzo or OpenVZ utilities, we would no longer be able to reboot Redhat 7 virtual environments (VEs – or VPS [virtual private servers]).
We tracked this down to the fact that Virtuozzo and OpenVZ have the code:
CP='/bin/cp -f --preserve=mode,ownership'
in the files: dists/scripts/redhat-add_ip.sh & dists/scripts/functions
The above scripts are executed in the VEs to set up networking. The problem is that RedHat 7's cp supports only 'cp --preserve' and not 'cp --preserve=…', so the startup scripts can't run and set up networking in the VEs.
The solution is easy:
Just modify dists/scripts/redhat-add_ip.sh and dists/scripts/functions (in /etc/vz for OpenVZ, or /etc/sysconfig/vz for Virtuozzo) and remove the "=mode,ownership" text. That will fix it.
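If you prefer not to edit the files by hand, the same change can be scripted – a sketch for the OpenVZ paths (swap in /etc/sysconfig/vz for Virtuozzo; the .bak files are backups in case anything goes wrong):

```shell
# Strip the unsupported "=mode,ownership" suffix so RedHat 7's cp accepts the flag.
sed -i.bak 's/--preserve=mode,ownership/--preserve/' \
    /etc/vz/dists/scripts/redhat-add_ip.sh \
    /etc/vz/dists/scripts/functions
```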
Important: You must manually make the above changes every time Virtuozzo or OpenVZ releases a new version of the virtuozzo tools, because the update will overwrite them.
(update) Bug Report submitted to OpenVZ:
We opened a bug report with OpenVZ and they've responded, however we're unsure if they will fix it – we asked for clarification but we've received no response.
This post focuses on the S-word. SPAM.
How can ISPs & Web Hosts stop spam? How can we fight back? What tools can we use to fight back? What methods can be used on the server-level to protect end-user inboxes?
There are some things in life that just make me smile:
- Ice cream
- Watching Borat sing “Everybody dancing now”
- Brokeback Mountain becomes Spongebob brokeback
But this makes me smile from ear to ear:
Earthlink awarded $11Million from Spammer
Spam is a problem that is plaguing not only end-users but web hosts, ISPs, backbone providers and network administrators as well.
While the CAN-SPAM legislation is weak, it provides an essential first-step towards setting up the battle in the legal arena to fight spam. Making it illegal to forge headers and return addresses provides companies with a legal basis for prosecuting spammers in the United States. Obviously more needs to be done, but the CAN-SPAM act is better than nothing.
It is obviously every good netizen’s (net citizen’s?) dream to eliminate SPAM. SPAM has turned one of the quickest and most far-reaching methods of communication into a daily hassle and waste of time.
SPAM is damaging the internet community in many ways. A few of the main problems caused by spam:
- End user frustration. End-users are frustrated by the amount of SPAM in their inbox and eventually, instead of experiencing a life-changing method of communicating with relatives in another country or engaging in commerce, end-users are forced to sift through myriads of messages to weed out the ones that they want to read. In extreme cases, this deluge of spam may even cause light-weight users to simply stop using E-mail.
- Hijacked computers. A large portion of bulk email is sent from hijacked and compromised computers. While there are many spammers who rent their own servers, there are networks of hijacked PCs which are sold in blocks of thousands for use by spammers. Ignoring the fact that such behavior is illegal, anyone who has used or tried to disinfect a hijacked PC knows that they often slow to a crawl, crash, or consume an entire household’s worth of bandwidth, degrading the performance of other computers. Just like the point above, this frustration will lead many users to abandon their computers or waste money on having them repaired.
- Lost emails. A direct result of SPAM is the loss of legitimate and valuable emails.
- Accidental Deletion. Legitimate emails are often lost in the process of a user repeatedly clicking ‘Delete’ while clearing their Inbox of SPAM.
- Spam Filters. To combat SPAM, many E-mail service providers filter incoming email for SPAM and viruses. It is unrealistic to believe that SPAM filters will never accidentally tag a legitimate email as spam. When this happens, either the email is discarded by the E-mail provider’s servers, or the message winds up in the Spam folder where it may be discarded before the end-user can review it and realize that it was not spam.
In today’s web hosting world there is a de facto standard control panel: cPanel. A large segment of reseller hosting and shared hosting customers, and many of those looking for reseller web hosting accounts in particular, specifically look for cPanel hosts.
Because cPanel is one of the most established control panels in the web hosting market, if a customer transfers to a new host, choosing a host with cPanel will make it easy for them to migrate their settings and will minimize the learning curve with the new host.
cPanel has become a force in the market – they have easily passed the critical mass of customers they need to be a dominant market power: they can charge whatever price they want, be slow with bug fixes, slow with new features and slow with updates.
There are many problems with cPanel… a very brief list would be:
* While some of cPanel is open-source, there are a lot of encoded, compiled routines that are vital to its functioning. If you find a bug (and believe me there are many), you have to wait for cPanel to decide that they want to fix it.
* A lot of the cPanel code is compiled Perl – this makes extremely large and extremely slow binaries that need to run each time cPanel or WHM is called.
* cPanel offers no clustering support (I don't call distributed name servers 'clustering')… scalable hosts need the ability to have separate email servers, MySQL servers, mailing list servers, etc. Because some vital routines are hard-coded into cPanel, it can't even be ported, upgraded or patched to do distributed hosting without major problems.
* Because cPanel tries to offer everything to everyone (and run on over a dozen Linux/Unix platforms [and Windows!]), you wind up with an installation that is simply bloated well beyond what most hosts need. Can you fathom cPanel + Windows? It's a sysadmin nightmare. What sane web hosting system administrator would want this burden on their shoulders?
My advice to cPanel is simple: Stop trying to support dozens of operating environments, choose an OS, support it, fix it and maintain it.
There are simply so many bugs that cPanel has confirmed but not fixed. For example, this bug report was submitted by us in November 2005, confirmed by cPanel on Dec 1st, 2005, and is still unresolved as of today, Sept 14th, 2006.
Instead of spending their time fixing known (and confirmed) bugs and improving their software, cPanel decided to work on their own script-deployment system (cPAddons)… that would be a very useful feature, except that Fantastico for cPanel already provides around 50 pre-installed scripts, blogs, message boards and more. *shock* – cPanel has wasted its time.
Reseller hosting customers have expectations of their providers: speed and reliability from the servers, and quick resolutions from the hosting company. cPanel's compiled binaries have slowed our servers down and bloated them with useless software, and cPanel's (extremely) slow response times have simply forced us to give responses such as "this is a cPanel bug, our hands are tied until cPanel resolves this issue".
The above is an excellent summary as to why our shared web hosting system runs on our own in-house developed control panel, SimpleCP. Running our own control panel on our shared hosting servers gives us power, flexibility, scalability & performance that we could never dream of with cPanel. It is for those reasons as well that we will be creating a fast, clustered/distributed and responsive replacement for cPanel for our reseller customers.
Let’s be honest: I love Pink Floyd‘s old music (Dark Side Of The Moon, The Wall, etc.) and I had the amazing privilege of floor seats to Roger Waters’ performance at Madison Square Garden on Tuesday, Sept 12th!
To hear Roger Waters play his old Pink Floyd music is a dream come true for a fan like me. Knowing that he has recently released new music, my sole hope for this concert was that he’d play old music and none of his new music.
I was relieved to read in the pamphlet (which was placed on my seat) that the line-up for the night was 2 sets – the first a random assortment of old Pink Floyd music, and the second Dark Side Of The Moon in its entirety with the original Pink Floyd drummer, Nick Mason!
The show started off amazingly – the music, the crowd and the song selection were all fantastic. Then Roger decided to let us know he’d be playing a new song, “Leaving Beirut” [the song was literally just a musical version of a comic strip he wrote]… OK… the words are obviously political and slanted… then comes “Oh George! Oh George! That Texas education must have f***d you up when you were very small.” WHAT?
At that point I was furious… the song ended with half of Madison Square Garden booing Roger Waters. Politics aside, no one is paying to hear Roger Waters’ political views or propaganda – everyone at MSG was there to hear his old Pink Floyd music, period.
The show progressed, and during a lot of negative words and parts of songs (“All that you hate”… “All that you distrust”, etc.), big pictures of President Bush were shown. Once again, politics aside, no one purchased Roger Waters tickets to hear his political views.
There is a reason that these people are musicians and not politicians. Musicians like him are simply abusing the fact that people like their music in order to promote their agendas and views.
I still love Roger Waters’ old work & the Pink Floyd music, but boy do I hate the fact that he thinks he can use it to push a political agenda.
The story goes that the concept for “The Wall” came to Roger after he spit in the face of an unruly fan… perhaps it’s time that the fans start spitting back.
After the problem was first reported in April of 2006 (in this forum post), Mahesh Slaria from Kayako has told the community how to fix it by changing some code (it is not fixed in the current stable build).
For a while now we've had customers with @yahoo.com email addresses complaining about receiving blank emails from our ticket system… after looking at the Kayako forums (we use Kayako support-suite) we found that other Kayako customers were experiencing this as well.
Finally, around 4-5 months later we get a solution:
"Blank email issue is just due to HTML encoding, you can change 'html_encoding' to '7bit' or '8bit' in en-us.php at locale folder of SupportSuite."
Sure enough, we tried it out and it fixed the problem!
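The edit can be made by hand in the locale file; if you want to script it, a sketch along these lines would work, assuming the setting appears as a quoted key/value pair in en-us.php (the exact layout of the file may differ – check before editing; the .bak file is a backup):

```shell
# Hypothetical sketch: switch html_encoding to 8bit in the SupportSuite locale file.
# The pattern assumes a "'html_encoding' => '...'" style entry.
sed -i.bak "s/'html_encoding'[^,]*/'html_encoding' => '8bit'/" \
    locale/en-us.php
```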
The latest buzz in the datacenter & server markets these days can be summed up as: Woodcrest.
Woodcrest is Intel‘s code-name for its newest Dual Core Xeon processor – which boasts two CPU cores on one chip, 4MB of shared cache, and a claimed 40% better performance with a 40% reduction in power used. Wow. To every datacenter-centered person, that is a dream come true. With rising power and cooling costs, increasing performance while reducing power is essential.
With the latest reviews on hardware websites showing that these new chips blow away the competition in almost every area, we decided to use the new Woodcrest processor in a new reseller web hosting server that we’re bringing online.
The first job was finding a Woodcrest processor that would provide enough power and not bankrupt us (some of these Woodcrest CPUs are in the $1000 arena!)… we chose the Intel 5130 Dual Core Xeon. Then the going got tough.
All of the Woodcrest motherboards that we could find were in the $300 – $500 range. We chose the Tyan S5372 i5000VS dual Woodcrest motherboard – it looked like a good choice.
Of course, the new Woodcrest CPUs and motherboards require FB-DIMM DDR2 (that is Latin for: expensive)… loading the machine with 4GB of RAM was a tad expensive…
We use 2U cases from an excellent provider called ServerCase.com
Luckily we keep spare cases, because they ship from California and it can take up to a week for a case to reach us on the East Coast.
Thanks to 3 day shipping methods, our parts were delivered very quickly. We started to build the server and ran into 3 issues:
Problem 1) The new motherboard’s screw holes didn’t line up with the 2U ATX case from ServerCase.com… there were 3 brackets coming up from the bottom of the case for screws, but there were no matching holes on the motherboard. Needless to say, that is bad… what to do? We didn’t want to wait to buy another case and delay the build.
Solution: We used a drill to drill through the bottom of the case and remove the 3 offending metal brackets.