CLOUDMAILIN HIPCHAT HUBOT SCRIPT
It took a really long time to deduce this workaround… :S
Set the CloudMailin delivery target to
http://blah.herokuapp.com/hubot/cloudmailin/room_id
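Once the URL is set, a hand-rolled POST can stand in for CloudMailin while testing the endpoint. This is only a sketch: the form field names (from, subject, plain) and the urlencoded body are assumptions based on CloudMailin's multipart format, so match them to whatever your Hubot script actually reads, and swap in your real app URL and room id.

# Fake a CloudMailin delivery to the Hubot endpoint for testing.
# Field names below are assumptions -- align them with the fields your
# Hubot script reads from req.body.
import requests

url = "http://blah.herokuapp.com/hubot/cloudmailin/room_id"
payload = {
    "from": "sender@example.com",
    "subject": "test mail",
    "plain": "hello from a fake CloudMailin POST",
}
resp = requests.post(url, data=payload)
print resp.status_code, resp.text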
PYTHON - WRITE COMPRESSED LOG FILE INTO HDFS FOR HADOOP HIVE MAPREDUCE
import pyhdfs
from cStringIO import StringIO
import binascii
import gzip
-snip-
#Set hdfs connection info
hdfsaddress = "namenode"
hdfsport = 12345
hdfsfn = "filename"
#gzip compression level
clevel = 1
-snip-
logger.info("Writing compressed data into " + hdfsfn + ".gz")
#open hdfs file
fout = pyhdfs.open(hdfs, hdfsfn + ".gz", "w")
#compress the data and store it in compressed_data
buf = StringIO()
f = gzip.GzipFile(mode='wb', compresslevel=clevel, fileobj=buf)
try:
    f.write(concatlog)
finally:
    f.close()
compressed_data = buf.getvalue()
#write compressed data into hdfs
pyhdfs.write(hdfs, fout, compressed_data)
#close hdfs file
logger.info("Writing task finished")
pyhdfs.close(hdfs, fout)
-snip-
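For reference, the in-memory gzip step on its own looks like the sketch below (standard library only, no pyhdfs; the sample log line is made up). It is handy for sanity-checking the compression before writing anything to HDFS.

import gzip
from cStringIO import StringIO

def gzip_bytes(data, clevel=1):
    #Compress a string in memory and return the raw gzip bytes
    buf = StringIO()
    f = gzip.GzipFile(mode='wb', compresslevel=clevel, fileobj=buf)
    try:
        f.write(data)
    finally:
        f.close()
    return buf.getvalue()

if __name__ == '__main__':
    sample = "2013-01-01 00:00:00 INFO sample log line\n" * 3
    gz = gzip_bytes(sample)
    #Round-trip to confirm the gzip member is well formed
    assert gzip.GzipFile(mode='rb', fileobj=StringIO(gz)).read() == sample
    print "compressed %d bytes down to %d" % (len(sample), len(gz))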
FACEBOOK SCRIBE WITH HDFS
packages:
libevent
hadoop-0.20-libhdfs
JDK for hdfs support
Boost
http://sourceforge.net/projects/boost/
./bootstrap.sh
./bjam
./bjam install
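Once everything is built, a quick way to confirm Scribe accepts messages (whether the HDFS store then flushes correctly is down to your store config) is the stock Thrift client sketch below. It assumes the Thrift-generated Python bindings from the Scribe source tree are on PYTHONPATH and Scribe is listening on the default port 1463; the LogEntry constructor differs slightly between Scribe releases, so adjust if yours predates keyword arguments.

# Send one test message to a local Scribe server.
# host, port and category are placeholders -- adjust to your setup.
from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
from scribe import scribe

socket = TSocket.TSocket(host='localhost', port=1463)
transport = TTransport.TFramedTransport(socket)
protocol = TBinaryProtocol.TBinaryProtocol(trans=transport,
                                           strictRead=False,
                                           strictWrite=False)
client = scribe.Client(iprot=protocol, oprot=protocol)

transport.open()
entry = scribe.LogEntry(category='test', message='hello scribe\n')
result = client.Log(messages=[entry])
transport.close()

if result == scribe.ResultCode.OK:
    print "message accepted"
else:
    print "scribe returned", result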
FLUME DFO LOCAL STORAGE USAGE CHECK
It might be useful when the Flume driver has failed.
#!/usr/bin/perl
#Check flume DFO directory size
sub trim($);
use strict;
use warnings;
my $exit=0;
#du -sk reports the directory size in KB; "flume" here is the DFO spool directory
my $backlog_size = `du -sk flume | awk '{print \$1}'`;
$backlog_size = trim($backlog_size);
if ( !$ARGV[0] || !$ARGV[1])
{
########################### Usage of the plugin
print "Usage: check_flume_backlog <critical_size_kb> <warning_size_kb>\n";
exit 0;
}
######################### Case 1 if State is Critical
if ($backlog_size > $ARGV[0])
{
print "Critical: " . $backlog_size . "KB\n";
exit 2;
}
######################## Case 2 if State is Warning
if($backlog_size > $ARGV[1] || $backlog_size == 0)
{
print "Warning: " . $backlog_size . "KB\n";
exit 1;
}
######################## Case 3 if State is OK
if($backlog_size < $ARGV[0] && $backlog_size < $ARGV[1])
{
print "OK: " . $backlog_size . "KB\n";
exit 0;
}
sub trim($)
{
my $string = shift;
$string =~ s/^\s+//;
$string =~ s/\s+$//;
return $string;
}
And for centralised monitoring..
DEFAULT FIXED VERSION VALUE WHEN CREATING AN ISSUE IN JIRA
You can try adding some JavaScript to the field that will perform the required operation for you; in this case it should be the 'Fix Version' field. You can refer to this documentation as a guideline:
http://confluence.atlassian.com/display/JIRACOM/Using+JavaScript+to+Set+Custom+Field+Values
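If a programmatic route is acceptable instead of the JavaScript trick, newer JIRA versions can also have the Fix Version set explicitly at creation time through the REST API. The sketch below is only an alternative illustration; the base URL, project key, issue type, version name and credentials are all placeholders.

# Create an issue with fixVersions pre-populated via the JIRA REST API.
# All values are placeholders for illustration.
import requests

JIRA_URL = "https://jira.example.com"
payload = {
    "fields": {
        "project": {"key": "PROJ"},
        "summary": "Issue created with a default Fix Version",
        "issuetype": {"name": "Task"},
        "fixVersions": [{"name": "1.0"}],
    }
}
resp = requests.post(JIRA_URL + "/rest/api/2/issue",
                     json=payload,
                     auth=("user", "password"))
print resp.status_code, resp.json()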
DELETE MPLAYERX HISTORY
http://code.google.com/p/mplayerx/issues/detail?id=517
Launch Terminal and run the command from the issue above to clear the history - tested.
ATLASSIAN CROWD AUTHENTICATION DIRECTORY ADDRESS FORCE UPDATE
When you migrate an Atlassian product (Confluence, JIRA, etc.) to another server, the backup file includes a hard-coded Crowd address, which will not work in some cases.
CISCO ROUTER CONFIG FOR ACME PACKET
sip-ua
 authentication username -snip- password 7 -snip-
 registrar dns:realm.domain expires 3600
USING ADDITIONAL PUBLIC IP ADDRESS OVER PPPOE NAT WITH CISCO ROUTER
To allocate the router an additional non-NAT public IP address (for AWS VPC, etc.).