Sunday, December 9, 2012

How to properly encode an email subject

When we want to send an email with non-US characters in the subject, it is important to encode it properly. Below is the simplest code that does just that (thanks, Gasper, for the help).
public function email_subject_encode($in_str, $charset = 'UTF-8') {

    # Remove all invalid UTF-8 byte sequences.
    $in_str = mb_convert_encoding($in_str, $charset, $charset);

    # Remove non-printable characters (i.e. below ASCII code 32).
    $in_str = preg_replace('/[\x00-\x08\x0B\x0C\x0E-\x1F]/u',
                           '', $in_str);

    # Encode the subject as an RFC 2047 encoded-word.
    $out_str = '=?' . strtolower($charset) . '?b?' . base64_encode($in_str) . '?=';

    return $out_str;
}
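The same encoded-word format can be produced from the shell with base64, which is a quick way to sanity-check the function's output. This is just a sketch; the sample subject is made up.

```shell
# Build an RFC 2047 encoded-word for a UTF-8 subject, mirroring the PHP above.
subject="Šala dneva"
encoded="=?utf-8?b?$(printf '%s' "$subject" | base64)?="
echo "$encoded"
```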

Tuesday, December 4, 2012

Parsing date string

If you are a non-US user you will stumble on the problem of how to parse date strings in a non-US format, e.g. 31.12.2012. The code below does just that: first we use a regular expression to extract the day, month and year, and if it is a valid date we convert it to a DateTime object.

public function ParseForDateTimeValue($strText) {
  if ($strText != "") {
    // Match dates in day.month.year format, e.g. 31.12.2012.
    if (preg_match('/^(\d{1,2})\.(\d{1,2})\.(\d{4})$/',
        $strText, $arr_parts)) {

      $month = ltrim($arr_parts[2], '0');
      $day = ltrim($arr_parts[1], '0');
      $year = $arr_parts[3];

      if (checkdate($month, $day, $year)) {
        return new DateTime(date('Y-m-d H:i:s',
          mktime(0, 0, 0, $month, $day, $year)));
      }
    }
  }
  return NULL;
}
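The same extraction idea works in bash with `BASH_REMATCH`, which can be handy in deploy scripts like the ones in the bash series below. A sketch, not from the original post:

```shell
# Pull day/month/year out of a dd.mm.yyyy string, stripping leading zeros.
str="31.12.2012"
if [[ $str =~ ^([0-9]{1,2})\.([0-9]{1,2})\.([0-9]{4})$ ]]; then
   day=${BASH_REMATCH[1]#0}
   month=${BASH_REMATCH[2]#0}
   year=${BASH_REMATCH[3]}
   echo "$year-$month-$day"   # prints "2012-12-31"
fi
```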

Wednesday, October 17, 2012

Bash programming tips - part 5

Combining parts 1, 2, 3 and 4 we almost have a script. Now to parameter parsing. There is more than one way to parse input parameters. You should check this post for an excellent tutorial on parameter parsing; I will just discuss my implementation. First, let me repost the snippet from part 1.

while [ $# -ne 0 ]; do
   case "$1" in
      (-o|--option) shift; _VALUE="$1";;   # option that takes a value
      (--) usage; exit 1;;
      (-*) usage; exit 1;;
      (*)  usage; exit 1;;
   esac
   shift
done

As shown in the code above, we loop through the script parameters one by one until there are none left. We match each parameter against the expected options (-o, for example) and shift in the parameter value if the option demands it. We display usage if we encounter an unexpected option. With a slight modification we can support multiple values for a single parameter. I prefer shifting over e.g. getopts because this way I can support long option versions (e.g. --help), but that is a matter of preference. This version does not support whitespace in parameter values.
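The loop above can be wrapped in a small runnable sketch. The function and variable names (`parse_args`, `_TARGET`) are illustrative, not from the post:

```shell
usage() { echo "usage: $0 [-t|--target VALUE]"; }

parse_args() {
   _TARGET=""
   while [ $# -ne 0 ]; do
      case "$1" in
         (-t|--target) shift; _TARGET="$1";;  # option that consumes a value
         (--) shift; break;;                  # end of options
         (-*) usage; return 1;;               # unknown option
         (*)  break;;                         # first positional argument
      esac
      shift
   done
   echo "$_TARGET"
}

parse_args -t production   # prints "production"
```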

Tuesday, October 16, 2012

Bash programming tips - part 4

In parts 1, 2 and 3 we defined the script outline, set up the configuration and defined helper functions. Now we will discuss error handling. It is amazing that almost none of the scripts I have seen have any. It is really simple; just look at the code snippet below.

#catch script return value
_ERROR=$?

#check if commands executed successfully
if [ $_ERROR -ne 0 ]; then
   debug "ERROR" "We have an error. Handle it. !!!!"
fi
With _ERROR=$? we store the exit code of the last executed command in the variable _ERROR. In the Linux world all commands return 0 on success and a positive integer on failure.
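Here is the pattern in a runnable form; `debug` and the failing command are stand-ins, not code from the post:

```shell
debug() { echo "$@"; }

test -f /nonexistent/file   # a command that fails

#catch script return value
_ERROR=$?

#check if the command executed successfully
if [ $_ERROR -ne 0 ]; then
   debug "ERROR" "command failed with code $_ERROR"
fi
```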

Bash programming tips - part 3

In parts 1 and 2 we presented the script outline and the config section. Now to introduce the helper functions.

First is usage

## Usage
function usage() {
cat << USAGE
       deploy - deploy projects to servers

       ./ PROJECT_NAME [-t|--target] TARGET [--tag] [-d|--dump]
       ./ PROJECT_NAME [-t|--target] [-m|--maintenance]
       ./ PROJECT_NAME [-t|--target] [-d|--dump]

       Deploy Drupal projects (${_PROJECTS[@]}) to production.

       Mandatory arguments to long options are mandatory for short options too.

       -t, --target        production or hostname for staging
       -m, --maintenance   put the destination server into maintenance mode
       -d, --dump          make a backup of the project database at the target
       -f, --features      list of features to revert
       -h, --help          display this help and exit
           --tag           tag to deploy
           --force         force the command (i.e. dump on master)

       Deploy to production from a tag on the master branch and make a db backup
           / project1 -t production --tag 20120829 -d

       Put production into maintenance mode
          / project1 -t production -m

       Deploy a development branch to the staging server
         / -t project1 --tag user/branch

       Written by Author1, Author2, Author3
USAGE
}

Since the example is an extract of a deploy script, usage returns something related to that functionality. This function is quite simple; the only cool thing about it is that we list the array elements with ${_PROJECTS[@]} to display the available projects.

For input validation we can use something like the snippet below since, as mentioned in the previous post, I believe it is good practice to be able to run a script without parameters and get its usage.

## Validate input
function validate_input() {

   # Exit if there are not enough arguments
   if [ $# -lt 3 ]; then
      usage
      exit 0
   fi

   # If the first argument is not a project name, show usage
   in_array "$1" "${_PROJECTS[@]}" && return 0 || { usage; exit 1; }
}
The function above first checks that the script received enough parameters; the first one must be the project name.
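The post calls an in_array helper but never shows its body; this is one plausible implementation (an assumption, not the original code):

```shell
# Return 0 if the first argument equals any of the remaining arguments.
function in_array() {
   local needle="$1"; shift
   local item
   for item in "$@"; do
      [ "$item" = "$needle" ] && return 0
   done
   return 1
}

_PROJECTS=(project1 project2 project3)
in_array project2 "${_PROJECTS[@]}" && echo "project2 is known"
```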

For debug output to console we can use something like:

#output messages to console
function debug() {

   local _LEVEL=0
   case "$1" in
     ERROR )
         _LEVEL=2 ;;
     INFO )
         _LEVEL=1 ;;
     DEBUG )
         _LEVEL=0 ;;
   esac

   if [ $_DEBUG -gt 0 ] && [ $_LEVEL -ge $_DEBUG_LEVEL ]; then
      while [ $# -ne 0 ]; do
         echo -n " $1"
         shift
      done
      echo
   fi
   return 0
}

A good place to put these helpers is a separate file, in order to keep the code clean.

To have a fully working script you will probably also need at least a function to become sudo or to capture user input.

Bash programming tips - part 2

In part 1 we introduced the basic script outline. In the configuration section we specify variables and configuration options. The example below is an extract (short version) of a deployment script configuration, but it is a good example of how complex a configuration can be.

# <config>

#turn debugging on/off
#three levels: ERROR=2 / INFO=1 / DEBUG=0
_DEBUG=1
_DEBUG_LEVEL=1

#custom return codes

_PROGNAME=$(basename $0)

#local environment variables

#script specific parameters

declare -a _FEATURES=()
declare -a _INTERFACES=()

#user for deploy

# Input dependent arrays

#all possible known projects
_PROJECTS=(project1 project2 project3 project4)

# Project source
declare -A _PROJECT_PATH

# Web servers per project
declare -A _PROJECT_WEB_SERVERS
_PROJECT_WEB_SERVERS[${_PROJECTS[0]}]="server1 server2"
_PROJECT_WEB_SERVERS[${_PROJECTS[1]}]="server1 server2"
_PROJECT_WEB_SERVERS[${_PROJECTS[2]}]="server1 server2"
_PROJECT_WEB_SERVERS[${_PROJECTS[3]}]="server1 server2"



In the script above I've used a powerful bash feature - arrays. This allows me to group configuration easily, without having to use long names like PROJECT1_WEB_SERVER1 to keep things clear.
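Reading the grouped values back is just as easy. A small sketch (names mirror the config above, the server values are made up):

```shell
#!/bin/bash
# Associative arrays require bash 4+.
declare -A _PROJECT_WEB_SERVERS
_PROJECTS=(project1 project2)

_PROJECT_WEB_SERVERS[${_PROJECTS[0]}]="server1 server2"
_PROJECT_WEB_SERVERS[${_PROJECTS[1]}]="server3 server4"

# Look up all web servers for a given project and loop over them.
for server in ${_PROJECT_WEB_SERVERS[project1]}; do
   echo "deploying to $server"
done
```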

Bash programming tips - part 1

As an industry standard, all scripts that are not for single use should consist of at least two things - usage and configuration.

As good practice I would expect a script, when executed without arguments, to show help in man format, although most standard shell commands like ls and pwd execute immediately without requiring any parameters.

To start writing a bash script we need an editor (Kate, vim, etc.) and bash, which is part of any modern Linux distribution.

Typical bash script looks something like:


#!/bin/bash

## Includes
source scripts/

# Main

## Validate input
validate_input "$@"

# Settings
while [ $# -ne 0 ]; do
   case "$1" in
      (-o|--option) shift; _VALUE="$1";;   # option that takes a value
      (--) usage; exit 1;;
      (-*) usage; exit 1;;
      (*)  usage; exit 1;;
   esac
   shift
done

exit 0

The code above is pretty much self-explanatory. Bash allows you to include external files in a script, and this is good practice to keep code clean and readable. This is where you should put your configuration (at least if it is not trivial, so use your brain).
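The include pattern looks like this in runnable form. The config file name and its contents are illustrative, not from the post:

```shell
#!/bin/bash
# Keep configuration in its own file and pull it in with source.
cat > /tmp/deploy_config.sh << 'CONF'
_DEBUG=1
_PROJECTS=(project1 project2)
CONF

source /tmp/deploy_config.sh
echo "debug=${_DEBUG}, first project=${_PROJECTS[0]}"
```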

At the beginning of the main section I usually put some sort of input validation (discussed in detail in later posts) to terminate the script immediately if the input is invalid. Next is parameter parsing, either manual like above or with builtins like getopts. The script then continues depending on the selected options.

Friday, September 21, 2012

How to remove all non printable and non UTF8 characters from string

In PHP this is quite simple, but you can spend hours searching online for a solution, especially if you want to keep non-US characters.

   $a='some string that you want to clean';
   #remove all non utf8 characters
   $a = mb_convert_encoding($a, 'UTF-8', 'UTF-8');
   # Remove non-printable characters (i.e. below ASCII code 32).
   $a = preg_replace('/[\x00-\x08\x0B\x0C\x0E-\x1F]/u', '', $a);

I hope I saved someone some time (thanks, Gregor, for the help).

Tuesday, August 21, 2012

Configuring Drupal search_api_solr module

In my previous post I described in detail how to set up Solr.
In order to install the Drupal search_api_solr module we need to install it using Drush or in some other way.
After the installation is complete, we need to move schema.xml and solrconfig.xml from the module to Solr.
We define Solr cores in solr.xml at the Solr root (/var/solr in our case).
A sample configuration is:
<?xml version="1.0" encoding="UTF-8" ?>

<solr persistent="false">
  <!--
    adminPath: RequestHandler path to manage cores.
    If 'null' (or absent), cores will not be manageable via request handler
  -->
  <cores adminPath="/admin/cores" defaultCoreName="core0">
    <core name="core0" instanceDir="core0" />
  </cores>
</solr>

Now we move schema.xml and solrconfig.xml from module to solr.
cp module_installation_folder/search_api_solr/schema.xml /var/solr/core0/conf/.
cp module_installation_folder/search_api_solr/solrconfig.xml /var/solr/core0/conf/.
If we check schema.xml we will see that we also need to define /var/solr/core0/conf/protwords.txt
and download mapping-ISOLatin1Accent.txt to map non-ASCII characters to their ASCII equivalents.
In Drupal, go to http://localhost/admin/config/search/search_api/server/solr_server/edit to tell Drupal where your Solr is.
You can test your installation by visiting http://localhost:8983/solr/, as defined in the Tomcat context described in the previous post.
All we need now is to fill Solr with data using the drush sapi-i command (thanks, Gasper, for the help).

Installing Solr on Ubuntu

If you want to install Solr on your web server you will need some sort of Java application server (JBoss, Jetty, Tomcat...) to contain the app, that is, Solr.

A good place to start is of course the official documentation at Apache; I used this guide to help me get started.

I personally chose Tomcat as my container because of previous experience.

The exact procedure is:
# install tomcat 
apt-get install tomcat6

# get Solr and extract it
cd /tmp
wget http://archive.apache.org/dist/lucene/solr/3.6.1/apache-solr-3.6.1.tgz
tar xzf apache-solr-3.6.1.tgz

#create folder for Solr and move installation to it 
mkdir -p /var/solr
cp apache-solr-3.6.1/dist/apache-solr-3.6.1.war /var/solr/solr.war
cp -R apache-solr-3.6.1/example/multicore/* /var/solr/
chown -R tomcat6 /var/solr/

#disable tomcat security
echo 'TOMCAT6_SECURITY=no' | sudo tee -a /etc/default/tomcat6

We have now installed Tomcat and Solr, but Tomcat is not yet aware of Solr. We need to define the app context for Tomcat.

In Catalina/localhost define solr.xml (or use any name you like):

vim /etc/tomcat6/Catalina/localhost/solr.xml 
And put the following data in it:
<Context path="/solr" docBase="/var/solr/solr.war"
   debug="0" privileged="true" allowLinking="true" crossContext="true">
  <!-- make symlinks work in Tomcat -->
  <Resources className="org.apache.naming.resources.FileDirContext" allowLinking="true" />

  <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />
</Context>

In this file we define that at the URL /solr (Context path) resides the application located at /var/solr/solr.war (docBase). We also define an environment variable for the Java runtime to use.

If you want to change the port, go to /etc/tomcat6/server.xml and look for the Connector port:

<Connector port="8983" protocol="HTTP/1.1" 
               redirectPort="8443" />
Set it to the value of your choice.

To make things simple, I compiled a short shell script that does all of the above:


echo "1. Installing tomcat6"
sudo apt-get install tomcat6

echo "2. change ownership of files"
sudo chown -R tomcat6 /var/solr/

echo "3.tomcat6 configuratin"
echo 'TOMCAT6_SECURITY=no' | sudo tee -a /etc/default/tomcat6
sudo sed -i 's/Connector port="8080"/Connector port="8983"/g' /etc/tomcat6/server.xml

sudo cat > /tmp/solr.xml  << NIZ
<Context path="/solr" docBase="/var/solr/solr.war"
   debug="0" privileged="true" allowLinking="true" crossContext="true">
  <!-- make symlinks work in Tomcat -->
  <Resources className="org.apache.naming.resources.FileDirContext" allowLinking="true" />

  <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />

sudo mv /tmp/solr.xml /etc/tomcat6/Catalina/localhost/solr.xml

echo "4. Restarting tomcat6"
sudo service tomcat6 restart

Friday, July 27, 2012

Uhuru cloud hosting experience

I was invited by uhurucloud to test their service in order to get 1 year of free hosting. So here it is:

  1. After creating my account and logging in there was the admin panel.
  2. When I decided to add a new app there was the first surprise:
  3. There are very few options, and there is no Drupal :(

  4. I read some documentation, watched the video, and decided to make a WordPress app to get PHP and MySQL.
  5. Now I got 2 services:
  6. But when they were clicked nothing happened. I went back to the documentation and realized that I can use the Visual Studio plugin, the MMC app or console access.

  7. I am a Linux user, so I expected some sort of SSH shell access and jumped to the console part. There is no Linux support, just instructions on how to install Ruby on Mac and Windows and the admin API. Great :(
  8. Then I got my wife's PC to try the MMC console. After successfully installing the .NET framework and starting the app, I successfully connected to the service.
  9. The console is quite nice but useless. The only things I could do were start/stop the service, browse files (not upload them) and open tunnels, with very little instruction on how to use them...
  10. At this point I gave up. There is no way I can even start to use the service.

    What is missing?

    • To have even the simplest Drupal site I need console access with full drush support, git, the ability to download modules, etc.
    • A simple way of uploading files (sftp, ftps...)
    • For corporate use (to have something similar to our production) the cloud must provide a full Linux environment with root access in order to install all the necessary packages, set up the environment, configure load balancers, reverse proxies, etc.

Please note that the cloud is at the testing stage and the services will improve.

Tuesday, April 3, 2012

How to compile a pecl extension as 32-bit on 64-bit Linux

  • Download the source of the pecl package.
  • Untar it to some temporary location.
  • Go to the folder with the extracted tar and run phpize to generate the ./configure script.
  • Set flags to compile it as a 32-bit application:
    CFLAGS=-m32 CPPFLAGS=-m32 CCASFLAGS=-m32 ./configure
  • make
  • make install

Don't forget to install 32 bit compiler:

sudo apt-get install g++-multilib

UPDATE!!! If you connect to external services like memcached, Solr, etc., you will hit a dead end because of linking problems. There are 2 things you can do:

  • Create a 32-bit virtual machine, install the 32-bit lampp, compile all the pecl extensions you need and then transfer them to the 64-bit machine; you will end up copying 32-bit dependency libraries to the 64-bit machine (like I did).
  • Build your own 64-bit lampp stack from source, or simply use one already made, like Bitnami.

To conclude, I've learned that you can have a 32-bit lampp stack on a 64-bit system as long as you stick to the default (PHP, MySQL, Apache) bundle. If you need external services (memcached, Solr...) and want to compile extensions that reference them, you are stuck.
P.S. Don't use lampp unless you really need it (to support some legacy code...). Otherwise use the packages that come with your system.