Sunday, December 9, 2012
How to properly encode email subject
<?php
public function email_subject_encode($in_str, $charset = 'UTF-8') {
    #remove all characters invalid for the given charset
    $in_str = mb_convert_encoding($in_str, $charset, $charset);
    #remove non printable characters (i.e. below ascii code 32)
    $in_str = preg_replace('/[\x00-\x08\x0B\x0C\x0E-\x1F]/u', '', $in_str);
    #encode email subject as an RFC 2047 encoded-word, using the given charset
    $out_str = "=?" . strtolower($charset) . "?B?" . base64_encode($in_str) . "?=";
    return $out_str;
}
?>
Tuesday, December 4, 2012
Parsing date string
If you are a non-US user, you will stumble on the problem of parsing date strings in a non-US format, i.e. 31.12.2012. The code below does just that. First we use a regular expression to extract the day, month and year, and if they form a valid date we convert it to a DateTime object.
<?php
public function ParseForDateTimeValue($strText) {
    if ($strText != "") {
        // RegExp taken from php.net; ereg is deprecated, so we use preg_match
        if (preg_match('#^([0-9]{1,2})[/\.]\s*([0-9]{1,2})[/\.]\s*([0-9]{2,4})$#', $strText, $arr_parts)) {
            $month = ltrim($arr_parts[2], '0');
            $day = ltrim($arr_parts[1], '0');
            $year = $arr_parts[3];
            if (checkdate($month, $day, $year)) {
                return new DateTime(date('Y-m-d H:i:s', mktime(0, 0, 0, $month, $day, $year)));
            }
        }
    }
    return NULL;
}
?>
Wednesday, October 17, 2012
Bash programming tips - part 5
Combining parts 1, 2, 3 and 4 we almost have a script. Now to parameter parsing. There is more than one way to parse input parameters; you should check this post for an excellent tutorial on parameter parsing. I will just discuss my implementation. Here is the snippet from the first part again.
while [ $# -ne 0 ]; do
    case "$1" in
        -o|--option)
            shift
            _TARGET="$1"
            ;;
        -h|--help) usage; exit 1 ;;
        --) usage; exit 1 ;;
        -*) usage; exit 1 ;;
        *)  usage; exit 1 ;;
    esac
    shift
done
As shown in the code above, we loop through the script parameters one by one until there are none left. We match each parameter against the expected options (-o for example) and shift to the option's value when the option demands it. We display usage if we encounter an unexpected option. With a slight modification we can support multiple values for a single parameter, as shown in the sketch below. I prefer shifting over getopts, for example, because this way I can support long options (i.e. --help), but that is a matter of preference. This version does not support white spaces in parameter values.
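A hedged sketch of that modification, collecting repeated values into an array (the -f/--features option and the _FEATURES array are borrowed from the config in part 2):

#sketch: collect multiple values for one option into an array
#assumes: declare -a _FEATURES=() from the config section
while [ $# -ne 0 ]; do
    case "$1" in
        -f|--features)
            shift
            #consume values until the next option or the end of arguments
            while [ $# -ne 0 ] && [ "${1:0:1}" != "-" ]; do
                _FEATURES+=("$1")
                shift
            done
            ;;
        *)
            shift
            ;;
    esac
done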
Tuesday, October 16, 2012
Bash programming tips - part 4
In parts 1, 2 and 3 we defined the script outline, set up the configuration and defined helper functions. Now we will discuss error handling. It is amazing that almost none of the scripts I have seen have any. It is really simple, just look at the code snippet below.
#catch script return value
_ERROR=$?
#check if commands executed successfully
if [ $_ERROR -ne 0 ]
then
    debug "ERROR" "We have an error. Handle it. !!!!"
fi
With _ERROR=$? we store the exit code of the last executed command in the variable _ERROR. In the Linux world all commands return 0 on success or a positive integer on failure.
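For example, wrapped around a real command it looks like this (the rsync call is just an illustrative placeholder; debug is the helper from part 3):

#illustrative example: the rsync source/target are placeholders
rsync -az /var/www/vhosts/project1/www/ server1:/var/www/vhosts/project1/www/
#catch the return value of the last command
_ERROR=$?
if [ $_ERROR -ne 0 ]
then
    debug "ERROR" "rsync failed with exit code $_ERROR"
    exit $_ERROR
fi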
Bash programming tips - part 3
In parts 1 and 2 we presented the script outline and the config section. Now we introduce the helper functions.
First is usage:
## Usage
function usage() {
    clear
    cat << USAGE
NAME
    deploy - deploy projects to servers

SYNOPSIS
    ./deploy.sh PROJECT_NAME [-t|--target] TARGET [--tag] [-d|--dump]
    ./deploy.sh PROJECT_NAME [-t|--target] [-m|--maintenance]
    ./deploy.sh PROJECT_NAME [-t|--target] [-d|--dump]

DESCRIPTION
    Deploy Drupal projects (${_PROJECTS[@]}) to production.

    Mandatory arguments to long options are mandatory for short options too.

    -t, --target        production or hostname for staging
    -m, --maintenance   put destination server into maintenance mode
    -d, --dump          make backup of project database at target
    -f, --features      list of features to revert
    -h, --help          display this help and exit
    --tag               tag to deploy
    --force             force command (i.e. dump on master)

EXAMPLES
    Deploy to production from tag on master branch and make db backup
        ./deploy.sh project1 -t production --tag 20120829 -d
    Put production into maintenance mode
        ./deploy.sh project1 -t production -m
    Deploy development branch to staging server
        ./deploy.sh project1 -t server1 --tag user/branch

AUTHOR
    Written by Author1, Author2, Author3
USAGE
}
Since the example is an extract of a deploy script, usage returns help related to that functionality. This function is quite simple; the only cool thing about it is that we list the array elements with ${_PROJECTS[@]} to display the available projects.
For input validation we can use something like the snippet below since, as mentioned in a previous post, I believe it's good practice to be able to run a script without parameters and get usage.
## Validate input
function validate_input() {
    #exit if there are not enough arguments
    if [ $# -lt 3 ]; then
        usage
        exit 0
    fi
    #if the first argument is not a project name, show usage
    if in_array "$1" "${_PROJECTS[@]}"; then
        return 0
    else
        usage
        exit 1
    fi
}
The function above checks that the script received enough arguments and that the first one is a valid project name.
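The in_array helper is not shown in this post; a minimal sketch of it could look like this:

#check if the first argument is present in the rest of the arguments
function in_array() {
    local _NEEDLE="$1"
    shift
    local _ITEM
    for _ITEM in "$@"; do
        [ "$_ITEM" = "$_NEEDLE" ] && return 0
    done
    return 1
}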
For debug output to the console we can use something like:
#output messages to console
function debug() {
    local _LEVEL=0
    case "$1" in
        ERROR ) _LEVEL=2 ;;
        INFO )  _LEVEL=1 ;;
        DEBUG ) _LEVEL=0 ;;
    esac
    if [ $_DEBUG -gt 0 ] && [ $_LEVEL -ge $_DEBUG_LEVEL ]
    then
        while [ $# -ne 0 ]; do
            echo -n " $1"
            shift
        done
        echo
    fi
    return 0
}
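Called like this, a message is printed only when _DEBUG is on and the message level is at or above _DEBUG_LEVEL (the variables come from the config in part 2):

#prints only when _DEBUG=1 and the level is high enough
debug "INFO" "Deploying" "$_TAG" "to" "$_TARGET"
debug "ERROR" "Lock file $_LOCK already exists"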
A good place to put these helpers is a separate file, in order to keep the code clean.
To have a fully working project you will probably also need at least a function to become sudo or to read user input.
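For the latter, a minimal confirmation helper using the _YES/_NO return codes from the config in part 2 might look like this (a sketch, not from the original script):

#ask the user for confirmation, return the custom _YES/_NO codes
function confirm() {
    local _ANSWER
    read -p "$1 [y/n]: " _ANSWER
    case "$_ANSWER" in
        y|Y|yes ) return $_YES ;;
        *       ) return $_NO ;;
    esac
}

#usage: confirm "Deploy to production?"; [ $? -eq $_YES ] && echo "deploying"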
Bash programming tips - part 2
In part 1 we introduced the basic script outline. In the configuration section we specify variables and configuration options. The example below is an extract (short version) of a deployment script configuration, but it is a good example of how complex a configuration can get.
# <config>
###
# CONSTANTS
###
#turn debugging on/off
_DEBUG=1
#levels ERROR=2/INFO=1/DEBUG=0
_DEBUG_LEVEL=0
#custom return codes
_YES=100
_NO=200
_LOCK=/tmp/.lock.deploy
_PROGNAME=$(basename $0)
_MASTER=
_CLUSTER_MODE="cluster"
_SINGLE_MODE="single"
#local environment variables
_LOCALIP=
_LOCALUSER=
_HOSTNAME=
#script specific parameters
_TARGET=
_DUMP=
_MAINTENANCE=
_TAG=
#arrays
declare -a _FEATURES=()
declare -a _INTERFACES=()
#user for deploy
_REMOTE_USER=root

###
# Input dependent arrays
###
#all possible known projects
_PROJECTS=(project1 project2 project3 project4)

# Project source
declare -A _PROJECT_PATH
_PROJECT_PATH[${_PROJECTS[0]}]="/var/www/vhosts/project1/www/"
_PROJECT_PATH[${_PROJECTS[1]}]="/var/www/vhosts/project2/www/"
_PROJECT_PATH[${_PROJECTS[2]}]="/var/www/vhosts/project3/www/"
_PROJECT_PATH[${_PROJECTS[3]}]="/var/www/vhosts/project4/www/"

declare -A _PROJECT_WEB_SERVERS
_PROJECT_WEB_SERVERS[${_PROJECTS[0]}]="server1 server2"
_PROJECT_WEB_SERVERS[${_PROJECTS[1]}]="server1 server2"
_PROJECT_WEB_SERVERS[${_PROJECTS[2]}]="server1 server2"
_PROJECT_WEB_SERVERS[${_PROJECTS[3]}]="server1 server2"

declare -A _PROJECT_DUMP_LOCATION
_PROJECT_DUMP_LOCATION[${_PROJECTS[0]}]="/tmp"
_PROJECT_DUMP_LOCATION[${_PROJECTS[1]}]="/tmp"
_PROJECT_DUMP_LOCATION[${_PROJECTS[2]}]="/tmp"
_PROJECT_DUMP_LOCATION[${_PROJECTS[3]}]="/tmp"

declare -A _PROJECT_OTHER_SERVERS
_PROJECT_OTHER_SERVERS[${_PROJECTS[0]}]="10.0.0.1 10.0.0.2"
_PROJECT_OTHER_SERVERS[${_PROJECTS[1]}]="10.0.0.1 10.0.0.2"
_PROJECT_OTHER_SERVERS[${_PROJECTS[2]}]="10.0.0.1 10.0.0.2"
_PROJECT_OTHER_SERVERS[${_PROJECTS[3]}]="10.0.0.1 10.0.0.2"
In the script above I've used a powerful bash feature: arrays. This allows me to group configuration easily, without having to use long names like PROJECT1_WEB_SERVER1 to keep clarity.
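For example, later in the script the configuration is just a lookup by project name (a sketch; $1 is the project name argument):

#look up the configuration for the selected project ($1)
_PATH=${_PROJECT_PATH[$1]}
_DUMP_DIR=${_PROJECT_DUMP_LOCATION[$1]}
#iterate over the web servers of that project
for _SERVER in ${_PROJECT_WEB_SERVERS[$1]}; do
    echo "deploying $_PATH to $_SERVER as $_REMOTE_USER"
done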
Bash programming tips - part 1
As an industry standard, all scripts that are not for single use should consist of at least two things: usage and configuration.
As good practice, I would expect a script executed without parameters to show help in man format, even though most standard shell commands like ls and pwd execute immediately without requiring any parameters.
To start writing a bash script we need an editor (Kate, vim, etc.) and bash, which is part of any modern Linux distribution.
Typical bash script looks something like:
#!/bin/bash

## Includes
#
source scripts/_config.sh

#
# Main
#

## Validate input
validate_input "$@"

# Settings
while [ $# -ne 0 ]; do
    case "$1" in
        -o|--option)
            shift
            _TARGET="$1"
            ;;
        -h|--help) usage; exit 1 ;;
        --) usage; exit 1 ;;
        -*) usage; exit 1 ;;
        *)  usage; exit 1 ;;
    esac
    shift
done

exit 0
The code above is pretty much self-explanatory. Bash allows you to include external files in a script, and this is good practice to keep code clean and readable. This is where you should put your configuration, at least if it is not trivial.
At the beginning of the main section I usually put some sort of input validation (discussed in detail in later posts) to terminate the script immediately if the input is invalid. Next is parameter parsing, either manually as above or with builtins like getopts. The script then continues depending on the selected options.
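For comparison, a hedged sketch of the same parsing done with the getopts builtin (short options only; the option letters match the deploy script from this series):

#equivalent parsing with the getopts builtin (no long options)
while getopts "t:mdh" _OPT; do
    case "$_OPT" in
        t ) _TARGET="$OPTARG" ;;
        m ) _MAINTENANCE=1 ;;
        d ) _DUMP=1 ;;
        h|* ) usage; exit 1 ;;
    esac
done
shift $((OPTIND - 1))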
Friday, September 21, 2012
How to remove all non printable and non UTF8 characters from a string
$a = 'some string that you want to clean';
#remove all non utf8 characters
$a = mb_convert_encoding($a, 'UTF-8', 'UTF-8');
#remove non printable characters (i.e. below ascii code 32)
$a = preg_replace('/[\x00-\x08\x0B\x0C\x0E-\x1F]/u', '', $a);
I hope that I saved someone's time (thanks Gregor for help).
Tuesday, August 21, 2012
Configuring Drupal search_api_solr module
To use Drupal's search_api_solr we first need to install the module, using Drush or some other way.
After installation is complete, we need to move schema.xml and solrconfig.xml from the module to Solr.
We define Solr cores in solr.xml at the Solr root, /var/solr in our case.
Sample configuration is:
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <!--
    adminPath: RequestHandler path to manage cores.
    If 'null' (or absent), cores will not be manageable via request handler
  -->
  <cores adminPath="/admin/cores" defaultCoreName="core0">
    <core name="core0" instanceDir="core0" />
  </cores>
</solr>

Now we move schema.xml and solrconfig.xml from the module to Solr:
cp module_installation_folder/search_api_solr/schema.xml /var/solr/core0/conf/.
cp module_installation_folder/search_api_solr/solrconfig.xml /var/solr/core0/conf/.

If we check schema.xml we will see that we also need to define /var/solr/core0/conf/protwords.txt:
& < > ' "

We must also download mapping-ISOLatin1Accent.txt to map non-ASCII characters to their ASCII equivalents.
In Drupal, go to http://localhost/admin/config/search/search_api/server/solr_server/edit to tell Drupal where your Solr is.
You can test your installation by visiting http://localhost:8983/solr/, as we defined it in the Tomcat context described in the previous post.
All we need now is to fill Solr with data using the drush sapi-i command (thanks Gasper for help).
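For reference, the final step from the shell could look like this (a sketch; the sapi-l alias to list indexes is an assumption, verify the aliases with drush help for your search_api version):

#list the available indexes (sapi-l; an assumed alias, verify with drush help)
drush sapi-l
#index the site content into Solr
drush sapi-i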
Installing Solr on Ubuntu
If you want to install Solr on your web server, you will need some sort of Java application server (JBoss, Jetty, Tomcat ...) to host the app, that is, Solr.
A good place to start is of course the official documentation at Apache; I used this guide to help me get started.
I personally chose Tomcat as my container, because of previous experience.
The exact procedure is:

# install tomcat
apt-get install tomcat6

# get Solr and extract it
cd /tmp
wget http://archive.apache.org/dist/lucene/solr/3.6.1/apache-solr-3.6.1.tgz
tar xzf apache-solr-3.6.1.tgz

#create folder for Solr and move installation to it
mkdir -p /var/solr
cp apache-solr-3.6.1/dist/apache-solr-3.6.1.war /var/solr/solr.war
cp -R apache-solr-3.6.1/example/multicore/* /var/solr/
chown -R tomcat6 /var/solr/

#disable tomcat security
echo 'TOMCAT6_SECURITY=no' | sudo tee -a /etc/default/tomcat6
We have now installed Tomcat and Solr but Tomcat is not aware of Solr. We need to define the app context for Tomcat.
In Catalina/localhost define solr.xml (or use any name you like):
vim /etc/tomcat6/Catalina/localhost/solr.xml

And put the following data in it:
<Context path="/solr" docBase="/var/solr/solr.war" debug="0" privileged="true"
         allowLinking="true" crossContext="true">
  <!-- make symlinks work in Tomcat -->
  <Resources className="org.apache.naming.resources.FileDirContext" allowLinking="true" />
  <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />
</Context>

In this file we define that the application located at /var/solr/solr.war (docBase) resides at the URL /solr (Context path). We also define an environment variable for the Java runtime to use.
If you want to define the port, go to /etc/tomcat6/server.xml and look for Connector port:
<Connector port="8983" protocol="HTTP/1.1"
           connectionTimeout="20000"
           URIEncoding="UTF-8"
           redirectPort="8443" />

Set it to the value of your desire.
To make things simple, I compiled a simple shell script to do all of the above:
#!/bin/bash
echo "1. Installing tomcat6"
sudo apt-get install tomcat6

echo "2. change ownership of files"
sudo chown -R tomcat6 /var/solr/

echo "3. tomcat6 configuration"
echo 'TOMCAT6_SECURITY=no' | sudo tee -a /etc/default/tomcat6
sudo sed -i 's/Connector port="8080"/Connector port="8983"/g' /etc/tomcat6/server.xml
cat > /tmp/solr.xml << NIZ
<Context path="/solr" docBase="/var/solr/solr.war" debug="0" privileged="true"
         allowLinking="true" crossContext="true">
  <!-- make symlinks work in Tomcat -->
  <Resources className="org.apache.naming.resources.FileDirContext" allowLinking="true" />
  <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />
</Context>
NIZ
sudo mv /tmp/solr.xml /etc/tomcat6/Catalina/localhost/solr.xml

echo "4. Restarting tomcat6"
sudo service tomcat6 restart
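After the script finishes, you can quickly check that Tomcat is up and serving Solr on the new port:

#verify that tomcat6 is running and Solr answers on port 8983
sudo service tomcat6 status
curl -s http://localhost:8983/solr/ | head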
Friday, July 27, 2012
Uhuru cloud hosting experience
I was invited by uhurucloud to test their service in order to get 1 year of free hosting. So here it is:
- After creating my account and logging in there was the admin panel:
- When I decided to add a new app, there was the first surprise: there are very few options, and there is no Drupal :( And when the options were clicked, nothing happened. Back in the documentation, I realized that I can use the Visual Studio plugin, the MMC app, or console access.
- I read some documentation and watched the video, and decided to make a WordPress app, to get PHP and MySQL.
- Now I got 2 services:
- I am a Linux user, so I expected some sort of SSH shell access and jumped to the console part. There is no Linux support, just instructions on how to install Ruby on Mac and Windows and the admin API. Great :(
- Next I got my wife's PC to try the MMC console. After successfully installing the .NET framework and starting the app, I successfully connected to the service.
- The console is quite nice but useless. The only things I could do were start/stop the service, browse files (not upload them) and open tunnels, with very little instruction on how to use them ...
- At this point I gave up. There is no way I can even start to use the service.
What is missing?
- To have even the most simple Drupal site I need console access with full drush support, git, the ability to download modules etc.
- A simple way of uploading files (sftp, ftps ...)
- For corporate use (to have something similar to our production), the cloud must provide a full Linux environment with root access in order to install all the necessary packages, set up the environment, configure load balancers, reverse proxies etc.
Please note that the cloud is at the testing stage and services will improve.
Tuesday, April 3, 2012
How to compile a pecl extension as 32 bit on 64 bit Linux
- Download the source of the pecl package.
- Untar it to some temporary location
- Go to the folder with the extracted tar and run phpize to generate the ./configure script.
- Set flags to compile it as a 32 bit application:
CFLAGS=-m32 CPPFLAGS=-m32 CCASFLAGS=-m32 ./configure
- make
- make install
Don't forget to install the 32 bit compiler:
sudo apt-get install g++-multilib
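Putting the steps together, the whole procedure looks like this (a sketch; the memcache package name and version are just illustrative placeholders):

#illustrative example: the pecl package name/version is a placeholder
cd /tmp
wget http://pecl.php.net/get/memcache-3.0.6.tgz
tar xzf memcache-3.0.6.tgz
cd memcache-3.0.6
phpize
#set flags to compile it as a 32 bit application
CFLAGS=-m32 CPPFLAGS=-m32 CCASFLAGS=-m32 ./configure
make
sudo make install
#verify that the resulting extension really is 32 bit
file modules/memcache.so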
UPDATE!!! If you connect to external services like memcached, Solr etc., you will hit a dead end because of linking problems. There are 2 things you can do:
- Create a 32 bit virtual machine, install the 32 bit lampp, compile all the pecl extensions you need and then transfer them to the 64 bit machine; you will end up copying 32 bit dependency libraries to the 64 bit machine (like I did).
- Make your own 64 bit lampp stack from source, or simply use one already made, like Bitnami.
To conclude, I've learned that you can have a 32 bit lampp stack on a 64 bit system as long as you stick to the default (PHP, MySQL, Apache) bundle. If you need external services (memcached, Solr ...) and want to compile extensions that reference them, you are stuck.
P.S. Don't use lampp unless you really need it (to support some legacy code ...). Otherwise use the packages that come with your system.