Google Is Becoming Less Relevant

Back in September Google released its new Hummingbird algorithm, apparently affecting 90% of search queries, as it strives for greater accuracy and more relevant results. It's aimed at 'conversational searches', like what is the best cake?, rather than simple terms such as best cake. (When was the last time you made a search request like that?)

Then on the 4th of October they released Penguin 2.1, aimed at penalising spammy backlinks. I assume the theory is that better sites have better backlinks (as they have been around longer?) and newer sites (trying to use spammy backlinks to gain traction?) can't be as good or as relevant. Fairly naive thinking, though I guess it might help in 'cleaning up the web'?

Quality Of Results

If Google's main aim is to provide better quality results, then let's look at an honest search query that I've just made: nginx sip proxy. At work we need to find out if we can proxy SIP through Nginx using WebSockets, so this is exactly the kind of query I'd use to start my information search!

Google Results


The first result from Google is from the Nginx documentation, but that page doesn't have the word SIP on it anywhere, so how can it be useful to me? Sure, it's very authoritative (although actually out of date) about Nginx and HTTP proxies, but it's the least helpful result ever.

Bing Results


The first result from Bing is more relevant: a topic on the Nginx forums asking about exactly what I'm after, using Nginx as a SIP proxy. The results after that are also a lot more relevant, with other people discussing using Nginx as a SIP proxy!

Google Are Making The Internet Worse

In all likelihood, Google will start ranking this page for nginx sip proxy by the end of the day. If they do, I will tack everything I learn onto it to make it actually helpful! Google seem to be on a quest to create a really high barrier to entry for new web developers, preferring to return old, out-of-date information, whilst at the same time dumbing down their search results and losing sight of what made them great in the first place:

  • Clean no frills website
  • Relevant, meaningful content
  • Being a search engine, not a web portal

How To Take Control Of Another Computer

There are several things to think about when taking control of another computer: the operating system running on it, the speed of your network connection and the tools you have at your disposal.


Operating System

There are three main choices of operating system that the computer you want to take control of might be running: Windows, Mac OS X and Linux. Fortunately Mac OS X is based on BSD, so the tools you would use to take control of it are the same as you would use for Linux, simplifying things somewhat!

Network Speed

If you have a fast network it's possible to use remote desktop tools, such as Windows Remote Desktop or Virtual Network Computing (VNC), or on a Mac or Linux server you can do X over SSH.


Windows

To connect to a Windows machine, use the Remote Desktop Connection client: click the Start button, click All Programs or Programs, click Accessories, and then click Remote Desktop Connection. The target machine needs Remote Desktop enabled in its System Properties first.

Mac / Linux

How To Tunnel X Over SSH

If you want to know how to take control of another computer that has X Windows on it, e.g. a Mac or a Linux machine, or even a really old Solaris or other style of Unix, then you need to make sure the machine you want to take control of has the following in its sshd_config file:


X11Forwarding yes

You can then connect to the remote machine with


ssh -X hostname

Then any GUI applications you run via the command line will magically appear locally on your machine.
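Putting it together, a minimal end-to-end session might look like this (the user/host names and xclock are placeholders, and the restart is only needed if you had to change sshd_config):

```shell
# On the remote machine: confirm X11 forwarding is enabled, then restart sshd
grep -i x11forwarding /etc/ssh/sshd_config   # want: X11Forwarding yes
service sshd restart

# On your local machine: connect with X forwarding enabled
ssh -X user@remotehost

# Any GUI app launched in that session displays on your local screen
xclock &
```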

sudo: sorry, you must have a tty to run sudo

We're using an old version of Upstart, on CentOS, to manage stopping and starting our Node.js daemons, and one of the things the script does, like any good daemon, is change the user of the daemon process from root to something more applicable. Security and all that 😉

The scripts look a little like this


#!upstart
description "Amazing Node.js Daemon"
author "idimmu"

start on runlevel [2345]
stop on shutdown

env PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
env NAME="amazing-daemon"

script
export HOME="/root"
cd /opt/idimmu/$NAME
echo $$ > /var/run/$NAME.pid
exec sudo -u idimmu /usr/bin/node /opt/idimmu/$NAME/server.js >> /var/log/$NAME/stdout.log 2>&1
end script

pre-start script
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (upstart) Starting $NAME" >> /var/log/$NAME/stdout.log
end script

pre-stop script
rm /var/run/$NAME.pid
echo "[`date -u +%Y-%m-%dT%T.%3NZ`] (upstart) Stopping $NAME" >> /var/log/$NAME/stdout.log
end script

Which is nice, as it means we can use Upstart to stop/start/status daemons really nicely. The equivalent init.d script looked really horrible.

But there's one massive caveat, which we always encounter when building a brand new box from scratch.


[2013-09-27T10:50:10.174Z] (upstart) Starting amazing-daemon
sudo: sorry, you must have a tty to run sudo


So it all falls apart due to the following error:

sudo: sorry, you must have a tty to run sudo

Basically sudo is stopping the process from running because Upstart doesn't have a TTY. This is easily fixable: just edit /etc/sudoers using visudo and comment out


Defaults requiretty

i.e.


#Defaults requiretty

Now we can use Upstart to start the daemon and check its status to confirm it's running! More recent versions of Upstart don't need this hack. One day I'll live in the future, but not today.


deploy:amazing root$ start amazing
amazing start/running, process 3965
deploy:amazing root$ status amazing
amazing start/running, process 3965

Bamo, problem solved!
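If you'd rather not disable requiretty for everyone, sudoers also supports per-user Defaults; a narrower sketch (assuming the daemon runs as the idimmu user, as in the script above) would be:

```
# In visudo: leave the global default in place ...
Defaults requiretty
# ... but exempt just the user the daemon runs sudo as
Defaults:idimmu !requiretty
```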

pv – Pipe Viewer – My New Favourite Command Line Tool

I've got a rather large dataset that I need to do a lot of processing on, over several iterations. It's a 20GB gzipped file of flat text, and I'm impatient and don't like not knowing things!

My new favourite Linux command line tool, pv (pipe viewer) is totally awesome. Check this out:


pv -cN source < urls.gz | zcat | pv -cN zcat | perl -lne '($a,$b,$c,$d) = split /\||\t/; print $b unless $b =~ /ac\.uk/; print $c unless $c =~ /ac\.uk/' | pv -cN perl | gzip | pv -cN gzip > hosts.gz
zcat: 93.4GiB 1:33:18 [26.6MiB/s] [ <=> ]
perl: 85.7GiB 1:33:18 [25.3MiB/s] [ <=> ]
source: 13.2GiB 1:33:17 [3.57MiB/s] [===============================================> ] 67% ETA 0:44:41
gzip: 12.7GiB 1:33:18 [3.51MiB/s] [ <=> ]

I’m basically splitting some text, removing stuff I don’t want and doing:


zcat urls.gz | perl -lne '($a,$b,$c,$d) = split /\||\t/; print $b unless $b =~ /ac\.uk/; print $c unless $c =~ /ac\.uk/' | gzip > hosts.gz

But at appropriate moments I've piped the output into the pv pipe viewer tool to report on some metrics. FYI, the -N flag lets me set a name for the pv instance, and the -c flag enables cursor positioning so we can use multiple instances of pv at once!
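If you've not met pv before, the simplest use is as a drop-in for cat; a minimal sketch (bigfile.txt is a placeholder):

```shell
# Single pv: progress bar, throughput and ETA while compressing a file
pv bigfile.txt | gzip > bigfile.txt.gz

# Two named instances (-N) with cursor positioning (-c): one stats line each,
# measuring the raw read rate and the compressed write rate separately
pv -cN raw bigfile.txt | gzip | pv -cN gz > bigfile.txt.gz
```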

The reason pipe viewer is totally cool is the extra sneaky data we get!

Pipe Viewer Is Magic

Because the first instance of pv is reading our urls.gz file itself, it can display how much of the file it has processed and roughly when it will complete. MOST USEFUL THING EVER! Also, I had no idea how large the uncompressed dataset was, and was hesitant to extract it as I wasn't sure how big it would be. We can see from the pv instance named zcat that zcat has so far spat out 93.4GiB of data; at 67% through, we can predict the file is probably around 140GB if we extract it. How cool is that? We can also tell from the pv named perl that after splitting and removing the data we don't want, we've so far shaved off nearly 8GiB, which is kinda interesting to splurge over for a bit. Lastly, with the pv instance named gzip, pipe viewer is telling us the size of the output file we've generated so far.

This is totally rad.

Note: many thanks to Norway for forcing me to rewrite my initial one-liner of


zcat urls.gz | sed 's/|/ /g' | while read a b c d ; do echo $b ; echo $c ; done | grep -v ac.uk$ | gzip > hosts.gz

by glaring at me.

Enable Linux Core Dump

One of our applications (FreeSWITCH) just randomly crashed for no apparent reason and didn't write anything to its log files. The service we're trialling is currently in beta, so there's room to muck about and do some diagnostics. I want to make the kernel dump a core file whenever FreeSWITCH dies, in case it happens again, so that we have some stuff to work with after the fact. It'll also shut up my QA manager.

Check The Current Linux Core Dump Limits

ulimit is used to specify the maximum size of generated core dumps; this is to stop apps dumping a million GB of memory and blowing your disk up. By default it's 0, which means nothing gets written to disk and no dump is created!


hstaging:~ # ulimit -c
0

Change The Linux Core Dump Limits To Something Awesome

To set the size limit of the Linux core files to 75000 blocks (ulimit -c is measured in blocks, not bytes), you can do something like this:


hstaging:~ # ulimit -c 75000
hstaging:~ # ulimit -c
75000

but I'm a maverick, and this does exactly what you think it does:


hstaging:~ # ulimit -c unlimited
hstaging:~ # ulimit -c
unlimited

Enable Linux Core Dump For Application Crashes And Segfaults And Things

OK, so we want this to persist across reboots, which basically means we have to stick the ulimit command in /etc/profile. I'm putting this at the bottom of mine:


#corefile stuff
ulimit -c unlimited > /dev/null 2>&1

This will stop anything weird getting spat out to the screen, and the comment nicely tells us that it's core file stuff.
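Note that /etc/profile only covers login shells; another place to set the limit is /etc/security/limits.conf (a sketch, assuming pam_limits is enabled, which covers PAM sessions rather than daemons started directly by init):

```
# /etc/security/limits.conf: allow unlimited core files for all users
*    soft    core    unlimited
*    hard    core    unlimited
```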

For our next trick we'll set some sysctl flags, so in /etc/sysctl.conf add


#corefile stuff
kernel.core_uses_pid = 1
kernel.core_pattern = /tmp/core-%e-%s-%u-%g-%p-%t
fs.suid_dumpable = 2

This basically says: when an application crashes, create a core dump file in /tmp with a useful name pattern.


kernel.core_uses_pid = 1 - add the pid of the crashed app to the filename.
fs.suid_dumpable = 2 - enable linux core dumps for setuid processes.
kernel.core_pattern = /tmp/core-%e-%s-%u-%g-%p-%t - crazy naming pattern for a successful core dump, here's roughly what all the bits mean:
%e - executable filename
%s - number of signal causing dump
%u - real UID of dumped process
%g - real GID of dumped process
%p - PID of dumped process
%t - time of dump (seconds since 0:00h, 1 Jan 1970)

Super useful. Then run sysctl -p so it takes effect, yo!


hstaging:~ # sysctl -p

kernel.core_uses_pid = 1
kernel.core_pattern = /tmp/core-%e-%s-%u-%g-%p-%t
fs.suid_dumpable = 2

Enabling Linux Core Dump For All Apps

Now here's the last part. When you want an application to core dump, you set an environment variable before you start it, telling the kernel to sort itself out and get ready to dump. If you want all apps on the server to generate core dumps, then you're going to want to specify this variable somewhere near the top of the process chain. The best place for this on a Red Hat style box is /etc/sysconfig/init, so stick the following in that file:


DAEMON_COREFILE_LIMIT='unlimited'

Now might be a good time to reboot, to force it to be set across all applications and things.

Enabling Linux Core Dumps For A Specific Application

This is the slightly less rebooty version of the above. Rather than force the environment variable to be loaded when the box starts, we just stick it in the init script for the daemon, and then restart the daemon.

In /etc/init.d/functions the RedHat guys have already stuck in


corelimit="ulimit -S -c ${DAEMON_COREFILE_LIMIT:-0}"

So we need to make sure we put our DAEMON_COREFILE_LIMIT above that. Simples. In our case it's in /etc/init.d/freeswitch with


DAEMON_COREFILE_LIMIT='unlimited'

Distros That Aren’t RedHat

DAEMON_COREFILE_LIMIT is a RedHatism. If you’re running something cool, like Ubuntu, you’ll want to use


ulimit -c unlimited >/dev/null 2>&1
echo /tmp/core-%e-%s-%u-%g-%p-%t > /proc/sys/kernel/core_pattern

instead.

Testing Core Dumps

This is EASY: we just start the daemon, send a segfault signal, and look in the right place!


hstaging:tmp # /etc/init.d/freeswitch start
hstaging:tmp # /etc/init.d/freeswitch status
freeswitch (pid 8257) is running...
hstaging:tmp # kill -s SIGSEGV 8257
hstaging:tmp # ls /tmp/core*
core-freeswitch-11-493-492-8257-1371823178

Now you give this file to your developers and take a bow!
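Or take a peek yourself first: you can open the core against the binary that produced it with gdb and pull out a backtrace (the binary path here is an assumption; point it at wherever your freeswitch executable actually lives):

```shell
# Resolve symbols against the crashed binary, print a full backtrace, and exit
gdb -batch -ex 'bt full' /usr/local/freeswitch/bin/freeswitch \
    /tmp/core-freeswitch-11-493-492-8257-1371823178
```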

CouchDB {"error":"insecure_rewrite_rule","reason":"too many ../.. segments"}


Whilst working on an AMAZING NPM repository mirror yesterday (which totally works, despite not really offering the performance benefit I'd hoped for, because npm is rubbish), I came across this error whilst doing things:


16 http GET http://localhost:5984/registry/_design/app/_rewrite/-/all/since?stale=update_after&startkey=1371737164294
17 http 500 http://localhost:5984/registry/_design/app/_rewrite/-/all/since?stale=update_after&startkey=1371737164294
18 error Error: insecure_rewrite_rule too many ../.. segments: registry/_design/app/_rewrite/-/all/since
18 error at RegClient. (/root/.nvm/v0.8.15/lib/node_modules/npm/node_modules/npm-registry-client/lib/request.js:259:14)
18 error at Request.init.self.callback (/root/.nvm/v0.8.15/lib/node_modules/npm/node_modules/request/main.js:120:22)
18 error at Request.EventEmitter.emit (events.js:99:17)
18 error at Request. (/root/.nvm/v0.8.15/lib/node_modules/npm/node_modules/request/main.js:648:16)
18 error at Request.EventEmitter.emit (events.js:126:20)
18 error at IncomingMessage.Request.start.self.req.self.httpModule.request.buffer (/root/.nvm/v0.8.15/lib/node_modules/npm/node_modules/request/main.js:610:14)
18 error at IncomingMessage.EventEmitter.emit (events.js:126:20)
18 error at IncomingMessage._emitEnd (http.js:366:10)
18 error at HTTPParser.parserOnMessageComplete [as onMessageComplete] (http.js:149:23)
18 error at Socket.socketOnData [as ondata] (http.js:1367:20)
19 error If you need help, you may report this log at:
19 error
19 error or email it to:
19 error

Visiting that URL in a web browser gave me


{"error":"insecure_rewrite_rule","reason":"too many ../.. segments"}

This is because secure rewrites are enabled! Looking at my CouchDB config, this occurred in the default.ini:


secure_rewrites = true

so in the [httpd] section of the local.ini file I set it to false. In your face, security model!


secure_rewrites = false

Then I restarted CouchDB, the world was put to rights and the error went away.
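You can also check (and flip) the setting over CouchDB 1.x's HTTP config API instead of editing the file, which saves a restart; a sketch against the default port:

```shell
# Read the current value
curl http://localhost:5984/_config/httpd/secure_rewrites
# prints the current value, e.g. "true"

# Set it to false at runtime (no restart needed)
curl -X PUT http://localhost:5984/_config/httpd/secure_rewrites -d '"false"'
```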

Niche Site Duel 2 Participation

Pat Flynn over at Smart Passive Income has just announced the launch of his Niche Site Duel 2 project, and as I kind of called him out a few months ago in my first (stalled) Income Report (causing a WordPress 'pingback' and his mate Blake to pop in and say hi 😀), rather than be a massive cynic I thought I'd give them another dofollow backlink and join in with his new project!

Pat Flynn
Pat Flynn – How can you resist that smile? And he’s holding a baby!

But Why??

OK, enough jokes. I'm sure Pat's lovely, and I've actually got mad respect for the results he got with the Insanity workout (OMG so much link juice…). I'm doing this because I've tried, and failed, to monetise idimmu.net. I don't really mind; I enjoy writing about burgers and logging the dumb stuff I do at work so I don't forget about it. This site was never a project to make money. It was started, quite literally, to be an online memory replacement service due to my inadequate brain, and more often than not I do actually Google myself in order to remember how to fix iptables or set up LDAP!

So, sure, I like the idea of some passive income, and I like to do things with other people, so rather than strike out on my own I'm going to join in with Pat's challenge! Although I'm gutted I'm late to the party, so I can't be part of his Mastermind Learning Group (TM).

WTF Are You Talking About?

OK, so here's the thing. The name of the game is this:

  • Pick a keyword
  • Create a niche site around it
  • ..
  • Profit

It reminds me of the underpants gnomes.

Pat's mentioned a selection process here for his keyword, which I've followed!

So, What Is My Keyword?

I’m not telling you that, I’m also not sending it to Pat 😀

Pat’s considering using best minivan as his keyword, which is currently ranking this in the #1 spot for me!

However, I will tell you this: I've actually picked 2 keywords, because I'm actually going to make 2 sites at the same time! Both keywords are in different niches and are totally awesome. I've registered 2 domain names and I've created a BlueHost account to host them!

As proof of work I'm going to mention some hashes in a random order: 2 of them are for the 2 domains I've bought and 2 of them are for my keywords! Random, different salts have been used for each string, just in case someone wants to hire the entirety of Amazon's cloud service to brute force them!


878e65fb1dad718717e2fa345ca185254b1935cd
d5984a21ff106df0822e20a018f27fe338eefb94
97b8d9c2bb660a47a2c9b130e10ed642f15975ab
3a9eac3d7446edd8251e43d39a59d38995ac7534

When the time comes, the domain names and the keywords might be revealed 😉

To everyone taking part, good luck!

How To Create An NPM Repository Mirror

We use Node.js a LOT, which means we do npm install a LOT. And npm is pretty terrible, with horrible dependency handling, so we can end up requesting hundreds of dependent modules with its recursive pattern, e.g. for just one of our projects we can end up with paths like


./node_modules/bcrypt/node_modules/nodeunit/node_modules/should/node_modules/mocha/node_modules/glob/node_modules/minimatch/node_modules/sigmund/node_modules/tap/node_modules

[root@hmon workspace]# find . -name node_modules | wc -l
2103

That’s 2103 node_modules directories, for an application we’ve written that has only 22 dependencies configured for it!


[root@hmon workspace]# find . -name mocha | wc -l
59

There are 59 instances of the mocha module in the dependency chain. How is that for terrible reuse of code? Why can't npm be nice like every other language out there, e.g. Perl (hi CPAN), PHP, Ruby (hi gems!) and Python?

npm does cache locally, but it kind of sucks.

Anyway, rant over. We want to create a mirror of the npm repository to mitigate periods of npm outages (occasionally it does have them) and hopefully speed things up a little bit, so here's how I did it!

CouchDB

All the NPM data is stored in CouchDB. I'm doing this on CentOS, so I'm going to use yum to install couchdb:


[root@hmon etc]# yum install couchdb
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: centos.mirror.linuxwerk.com
* epel: mirrors.n-ix.net
* extras: centos.mirror.linuxwerk.com
* passenger: passenger.stealthymonkeys.com
* rpmforge: mirror1.hs-esslingen.de
* rpmforge-extras: mirror1.hs-esslingen.de
* rpmforge-testing: mirror1.hs-esslingen.de
* updates: mirror.optimate-server.de
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package couchdb.x86_64 0:1.2.1-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================================================================================================================================
Package Arch Version Repository Size
=================================================================================================================================================================================================================
Installing:
couchdb x86_64 1.2.1-1 drum 1.1 M

Transaction Summary
=================================================================================================================================================================================================================
Install 1 Package(s)

Total download size: 1.1 M
Installed size: 3.0 M
Is this ok [y/N]: y
Downloading Packages:
couchdb-1.2.1-1.x86_64.rpm | 1.1 MB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : couchdb-1.2.1-1.x86_64 1/1
Verifying : couchdb-1.2.1-1.x86_64 1/1

Installed:
couchdb.x86_64 0:1.2.1-1

Complete!

Simples! The next step is to start it, confirm it's listening on a port, and test that it works!


[root@hmon etc]# /etc/init.d/couchdb start
Starting database server couchdb
[root@hmon etc]# ps aux | grep couch
root 9790 0.0 0.0 106188 1532 pts/1 S 14:41 0:00 /bin/sh -e /opt/netdev/erlang/bin/couchdb -a /etc/couchdb/default.ini -a /etc/couchdb/local.ini -b -r 0 -p /var/run/couchdb/couchdb.pid -o couchdb.stdout -e couchdb.stderr -R
root 9800 0.0 0.0 106188 760 pts/1 S 14:41 0:00 /bin/sh -e /opt/netdev/erlang/bin/couchdb -a /etc/couchdb/default.ini -a /etc/couchdb/local.ini -b -r 0 -p /var/run/couchdb/couchdb.pid -o couchdb.stdout -e couchdb.stderr -R
root 9801 0.7 0.1 666732 18576 pts/1 Sl 14:41 0:00 /usr/lib64/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib64/erlang -progname erl -- -home /root -- -noshell -noinput -os_mon start_memsup false start_cpu_sup false disk_space_check_interval 1 disk_almost_full_threshold 1 -sasl errlog_type error -couch_ini /etc/couchdb/default.ini /etc/couchdb/local.ini /etc/couchdb/default.ini /etc/couchdb/local.ini -s couch -pidfile /var/run/couchdb/couchdb.pid -heart
root 9834 0.0 0.0 103236 872 pts/1 S+ 14:42 0:00 grep couch
root 26078 0.0 0.0 173292 1720 ? S May07 0:00 sudo -u drum setsid node /opt/netdev/drum-collab-provisioning/server/server-couch.js
drum 26079 0.0 0.4 999360 70064 ? Ssl May07 18:09 node /opt/netdev/drum-collab-provisioning/server/server-couch.js
[root@hmon etc]# netstat -lpn | grep 9801
tcp 0 0 127.0.0.1:5984 0.0.0.0:* LISTEN 9801/beam.smp

At the moment, in its default configuration, it's only listening on 127.0.0.1, so we want to fix that!

We also want to ensure that secure_rewrites is disabled, else NPM will spit out loads of errors and not work!

In /etc/couchdb/local.ini we can override the default configuration, so we want to change


[httpd]
;port = 5984
;bind_address = 127.0.0.1

to


[httpd]
;port = 5984
bind_address = 0.0.0.0
secure_rewrites = false

and then restart couchdb with /etc/init.d/couchdb restart! Now we can see couchdb listening on all interfaces, and we can test it with curl:


[root@hmon couchdb]# netstat -lpn | grep 5984
tcp 0 0 0.0.0.0:5984 0.0.0.0:* LISTEN 10011/beam.smp
[root@hmon couchdb]# curl http://localhost:5984
{"couchdb":"Welcome","version":"1.2.1"}

Yay, we’re alive!!!

Setting Up CouchDB Replication

Now we need to tell CouchDB to replicate from the NPM master in a continuous fashion, so that as the NPM master updates, so does our CouchDB instance!


[root@hmon couchdb]# curl -X POST http://127.0.0.1:5984/_replicate -d '{"source":"http://isaacs.iriscouch.com/registry/", "target":"registry", "continuous":true, "create_target":true}' -H "Content-Type: application/json"
{"ok":true,"_local_id":"d0818878c462afa6791440ab08348394+continuous+create_target"}

And we're off! You can interrogate how the replication is doing by visiting the server with a web browser at http://hostname:5984/_utils/; it should look a little like this:

couchdb-npm-mirror

Eventually it will stop growing, I promise 😉 As of writing it’s just shy of 50GB
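If you'd rather check progress from the command line than Futon, the _active_tasks endpoint lists running replications along with their progress:

```shell
# Each continuous replication shows up as a task with progress/checkpoint info
curl http://localhost:5984/_active_tasks
```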


Configuring NPM To Use Your Mirror

First we need to install some random npmjs stuff in to our couch database


git clone git://github.com/isaacs/npmjs.org.git
cd npmjs.org
sudo npm install -g couchapp
npm install couchapp
npm install semver
couchapp push registry/app.js http://localhost:5984/registry
couchapp push www/app.js http://localhost:5984/registry

Which looks a little bit like this


[root@hmon ~]# git clone git://github.com/isaacs/npmjs.org.git
Initialized empty Git repository in /root/npmjs.org/.git/
remote: Counting objects: 1291, done.
remote: Compressing objects: 100% (742/742), done.
remote: Total 1291 (delta 609), reused 1200 (delta 531)
Receiving objects: 100% (1291/1291), 649.70 KiB | 353 KiB/s, done.
Resolving deltas: 100% (609/609), done.
[root@hmon ~]# cd npmjs.org
[root@hmon npmjs.org]# npm install -g couchapp
npm http GET https://registry.npmjs.org/couchapp
npm http 304 https://registry.npmjs.org/couchapp
npm http GET https://registry.npmjs.org/watch
npm http GET https://registry.npmjs.org/request
npm http 304 https://registry.npmjs.org/request
npm http 304 https://registry.npmjs.org/watch
npm http GET https://registry.npmjs.org/qs
npm http GET https://registry.npmjs.org/json-stringify-safe
npm http GET https://registry.npmjs.org/forever-agent
npm http GET https://registry.npmjs.org/tunnel-agent
npm http GET https://registry.npmjs.org/http-signature
npm http GET https://registry.npmjs.org/hawk
npm http GET https://registry.npmjs.org/aws-sign
npm http GET https://registry.npmjs.org/oauth-sign
npm http GET https://registry.npmjs.org/cookie-jar
npm http GET https://registry.npmjs.org/node-uuid
npm http GET https://registry.npmjs.org/mime
npm http GET https://registry.npmjs.org/form-data/0.0.8
npm http 200 https://registry.npmjs.org/json-stringify-safe
npm http GET https://registry.npmjs.org/json-stringify-safe/-/json-stringify-safe-4.0.0.tgz
npm http 200 https://registry.npmjs.org/forever-agent
npm http GET https://registry.npmjs.org/forever-agent/-/forever-agent-0.5.0.tgz
npm http 200 https://registry.npmjs.org/http-signature
npm http 200 https://registry.npmjs.org/tunnel-agent
npm http GET https://registry.npmjs.org/http-signature/-/http-signature-0.9.11.tgz
npm http GET https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.3.0.tgz
npm http 200 https://registry.npmjs.org/qs
npm http GET https://registry.npmjs.org/qs/-/qs-0.6.5.tgz
npm http 200 https://registry.npmjs.org/aws-sign
npm http GET https://registry.npmjs.org/aws-sign/-/aws-sign-0.3.0.tgz
npm http 200 https://registry.npmjs.org/oauth-sign
npm http GET https://registry.npmjs.org/oauth-sign/-/oauth-sign-0.3.0.tgz
npm http 200 https://registry.npmjs.org/cookie-jar
npm http GET https://registry.npmjs.org/cookie-jar/-/cookie-jar-0.3.0.tgz
npm http 200 https://registry.npmjs.org/node-uuid
npm http GET https://registry.npmjs.org/node-uuid/-/node-uuid-1.4.0.tgz
npm http 200 https://registry.npmjs.org/json-stringify-safe/-/json-stringify-safe-4.0.0.tgz
npm http 200 https://registry.npmjs.org/forever-agent/-/forever-agent-0.5.0.tgz
npm http 200 https://registry.npmjs.org/mime
npm http GET https://registry.npmjs.org/mime/-/mime-1.2.9.tgz
npm http 200 https://registry.npmjs.org/form-data/0.0.8
npm http GET https://registry.npmjs.org/form-data/-/form-data-0.0.8.tgz
npm http 200 https://registry.npmjs.org/http-signature/-/http-signature-0.9.11.tgz
npm http 200 https://registry.npmjs.org/qs/-/qs-0.6.5.tgz
npm http 200 https://registry.npmjs.org/tunnel-agent/-/tunnel-agent-0.3.0.tgz
npm http 200 https://registry.npmjs.org/aws-sign/-/aws-sign-0.3.0.tgz
npm http 200 https://registry.npmjs.org/cookie-jar/-/cookie-jar-0.3.0.tgz
npm http 200 https://registry.npmjs.org/oauth-sign/-/oauth-sign-0.3.0.tgz
npm http 200 https://registry.npmjs.org/node-uuid/-/node-uuid-1.4.0.tgz
npm http 200 https://registry.npmjs.org/mime/-/mime-1.2.9.tgz
npm http 200 https://registry.npmjs.org/form-data/-/form-data-0.0.8.tgz
npm http 200 https://registry.npmjs.org/hawk
npm http GET https://registry.npmjs.org/hawk/-/hawk-0.13.1.tgz
npm http 200 https://registry.npmjs.org/hawk/-/hawk-0.13.1.tgz
npm http GET https://registry.npmjs.org/assert-plus/0.1.2
npm http GET https://registry.npmjs.org/asn1/0.1.11
npm http GET https://registry.npmjs.org/ctype/0.5.2
npm http GET https://registry.npmjs.org/hoek
npm http GET https://registry.npmjs.org/boom
npm http GET https://registry.npmjs.org/cryptiles
npm http GET https://registry.npmjs.org/sntp
npm http GET https://registry.npmjs.org/combined-stream
npm http GET https://registry.npmjs.org/async
npm http 200 https://registry.npmjs.org/ctype/0.5.2
npm http 200 https://registry.npmjs.org/assert-plus/0.1.2
npm http 200 https://registry.npmjs.org/asn1/0.1.11
npm http 200 https://registry.npmjs.org/boom
npm http GET https://registry.npmjs.org/ctype/-/ctype-0.5.2.tgz
npm http GET https://registry.npmjs.org/boom/-/boom-0.4.2.tgz
npm http GET https://registry.npmjs.org/assert-plus/-/assert-plus-0.1.2.tgz
npm http GET https://registry.npmjs.org/asn1/-/asn1-0.1.11.tgz
npm http 200 https://registry.npmjs.org/cryptiles
npm http 200 https://registry.npmjs.org/combined-stream
npm http GET https://registry.npmjs.org/cryptiles/-/cryptiles-0.2.1.tgz
npm http GET https://registry.npmjs.org/combined-stream/-/combined-stream-0.0.4.tgz
npm http 200 https://registry.npmjs.org/sntp
npm http 200 https://registry.npmjs.org/ctype/-/ctype-0.5.2.tgz
npm http 200 https://registry.npmjs.org/boom/-/boom-0.4.2.tgz
npm http GET https://registry.npmjs.org/sntp/-/sntp-0.2.4.tgz
npm http 200 https://registry.npmjs.org/assert-plus/-/assert-plus-0.1.2.tgz
npm http 200 https://registry.npmjs.org/asn1/-/asn1-0.1.11.tgz
npm http 200 https://registry.npmjs.org/cryptiles/-/cryptiles-0.2.1.tgz
npm http 200 https://registry.npmjs.org/combined-stream/-/combined-stream-0.0.4.tgz
npm http 200 https://registry.npmjs.org/sntp/-/sntp-0.2.4.tgz
npm http 200 https://registry.npmjs.org/hoek
npm http GET https://registry.npmjs.org/hoek/-/hoek-0.8.5.tgz
npm http 200 https://registry.npmjs.org/async
npm http 200 https://registry.npmjs.org/hoek/-/hoek-0.8.5.tgz
npm http GET https://registry.npmjs.org/async/-/async-0.2.9.tgz
npm http GET https://registry.npmjs.org/hoek
npm http 200 https://registry.npmjs.org/async/-/async-0.2.9.tgz
npm http GET https://registry.npmjs.org/delayed-stream/0.0.5
npm http 304 https://registry.npmjs.org/hoek
npm http GET https://registry.npmjs.org/hoek/-/hoek-0.9.1.tgz
npm http 200 https://registry.npmjs.org/delayed-stream/0.0.5
npm http GET https://registry.npmjs.org/delayed-stream/-/delayed-stream-0.0.5.tgz
npm http 200 https://registry.npmjs.org/hoek/-/hoek-0.9.1.tgz
npm http 200 https://registry.npmjs.org/delayed-stream/-/delayed-stream-0.0.5.tgz
/root/.nvm/v0.8.15/bin/couchapp -> /root/.nvm/v0.8.15/lib/node_modules/couchapp/bin.js
couchapp@0.9.1 /root/.nvm/v0.8.15/lib/node_modules/couchapp
├── watch@0.7.0
└── request@2.21.0 (json-stringify-safe@4.0.0, forever-agent@0.5.0, aws-sign@0.3.0, qs@0.6.5, tunnel-agent@0.3.0, oauth-sign@0.3.0, cookie-jar@0.3.0, node-uuid@1.4.0, mime@1.2.9, http-signature@0.9.11, hawk@0.13.1, form-data@0.0.8)
[root@hmon npmjs.org]# npm install couchapp
npm http GET https://registry.npmjs.org/couchapp
npm http 304 https://registry.npmjs.org/couchapp
npm http GET https://registry.npmjs.org/watch
npm http GET https://registry.npmjs.org/request
npm http 304 https://registry.npmjs.org/request
npm http 304 https://registry.npmjs.org/watch
npm http GET https://registry.npmjs.org/qs
npm http GET https://registry.npmjs.org/json-stringify-safe
npm http GET https://registry.npmjs.org/forever-agent
npm http GET https://registry.npmjs.org/tunnel-agent
npm http GET https://registry.npmjs.org/http-signature
npm http GET https://registry.npmjs.org/hawk
npm http GET https://registry.npmjs.org/aws-sign
npm http GET https://registry.npmjs.org/oauth-sign
npm http GET https://registry.npmjs.org/cookie-jar
npm http GET https://registry.npmjs.org/node-uuid
npm http GET https://registry.npmjs.org/mime
npm http GET https://registry.npmjs.org/form-data/0.0.8
npm http 304 https://registry.npmjs.org/json-stringify-safe
npm http 304 https://registry.npmjs.org/qs
npm http 304 https://registry.npmjs.org/forever-agent
npm http 304 https://registry.npmjs.org/tunnel-agent
npm http 304 https://registry.npmjs.org/http-signature
npm http 304 https://registry.npmjs.org/hawk
npm http 304 https://registry.npmjs.org/aws-sign
npm http 304 https://registry.npmjs.org/oauth-sign
npm http 304 https://registry.npmjs.org/cookie-jar
npm http 304 https://registry.npmjs.org/node-uuid
npm http 304 https://registry.npmjs.org/mime
npm http 304 https://registry.npmjs.org/form-data/0.0.8
npm http GET https://registry.npmjs.org/assert-plus/0.1.2
npm http GET https://registry.npmjs.org/asn1/0.1.11
npm http GET https://registry.npmjs.org/ctype/0.5.2
npm http GET https://registry.npmjs.org/boom
npm http GET https://registry.npmjs.org/cryptiles
npm http GET https://registry.npmjs.org/hoek
npm http GET https://registry.npmjs.org/sntp
npm http GET https://registry.npmjs.org/combined-stream
npm http GET https://registry.npmjs.org/async
npm http 304 https://registry.npmjs.org/ctype/0.5.2
npm http 304 https://registry.npmjs.org/asn1/0.1.11
npm http 304 https://registry.npmjs.org/assert-plus/0.1.2
npm http 304 https://registry.npmjs.org/boom
npm http 304 https://registry.npmjs.org/cryptiles
npm http 304 https://registry.npmjs.org/hoek
npm http 304 https://registry.npmjs.org/sntp
npm http 304 https://registry.npmjs.org/combined-stream
npm http 304 https://registry.npmjs.org/async
npm http GET https://registry.npmjs.org/hoek
npm http GET https://registry.npmjs.org/delayed-stream/0.0.5
npm http 304 https://registry.npmjs.org/hoek
npm http 304 https://registry.npmjs.org/delayed-stream/0.0.5
couchapp@0.9.1 node_modules/couchapp
├── watch@0.7.0
└── request@2.21.0 (json-stringify-safe@4.0.0, forever-agent@0.5.0, aws-sign@0.3.0, qs@0.6.5, tunnel-agent@0.3.0, oauth-sign@0.3.0, cookie-jar@0.3.0, node-uuid@1.4.0, mime@1.2.9, http-signature@0.9.11, hawk@0.13.1, form-data@0.0.8)
[root@hmon npmjs.org]# npm install semver
npm http GET https://registry.npmjs.org/semver
npm http 200 https://registry.npmjs.org/semver
npm http GET https://registry.npmjs.org/semver/-/semver-1.0.14.tgz
npm http 200 https://registry.npmjs.org/semver/-/semver-1.0.14.tgz
semver@1.0.14 node_modules/semver
[root@hmon npmjs.org]# couchapp push registry/app.js http://localhost:5984/registry
Preparing.
Serializing.
PUT http://localhost:5984/registry/_design/scratch
Finished push. 1-8b4b7cde241179296b34d437d6fcbec3
[root@hmon npmjs.org]# couchapp push www/app.js http://localhost:5984/registry
Preparing.
Serializing.
PUT http://localhost:5984/registry/_design/ui
Finished push. 1-7d950267677c7a20ec944e4c8385af2f

OK, in theory we should now have a completely working npm mirror repository!

Testing Your NPM Repository Mirror

Testing the npm repo mirror is easy enough: first we’ll make a request to the official repo, then compare the results to those from our own server!


[root@hmon npmjs.org]# npm search cakes
NAME DESCRIPTION AUTHOR DATE KEYWORDS
mocha-cakes bdd stories add-on for mocha test framework with cucumber given/then/when syntax. =quangv 2012-06-03 15:30 mocha bdd stories cucumber test testing gherkin acceptance customer functional end-user
npm http GET https://registry.npmjs.org/-/all/since?stale=update_after&startkey=1371737099041
npm http 200 https://registry.npmjs.org/-/all/since?stale=update_after&startkey=1371737099041

OK, the official repo has something called mocha-cakes in it. Sounds delicious!


[root@hmon couchdb]# npm search cakes
npm http GET http://localhost:5984/registry/_design/app/_rewrite/-/all/since?stale=update_after&startkey=1371802019107
npm http 200 http://localhost:5984/registry/_design/app/_rewrite/-/all/since?stale=update_after&startkey=1371802019107
NAME DESCRIPTION AUTHOR DATE KEYWORDS
mocha-cakes bdd stories add-on for mocha test framework with cucumber given/then/when syntax. =quangv 2012-06-03 15:30 mocha bdd stories cucumber test testing gherkin acceptance customer functional end-user

And so does ours! It’s still syncing, so I’m going to forgive that 404.
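A more thorough check than eyeballing `npm search` is to fetch the same package document from both registries and compare the version lists. Here’s a minimal Python sketch of the idea; the `fetch_doc` helper and the hand-made sample documents below are my own illustration, not part of the mirror setup:

```python
import json
from urllib.request import urlopen

def same_versions(doc_a, doc_b):
    """True when two registry documents list the same package versions."""
    return set(doc_a.get("versions", {})) == set(doc_b.get("versions", {}))

def fetch_doc(registry, name):
    """Fetch a package document from a registry root URL, e.g.
    fetch_doc("http://localhost:5984/registry/_design/app/_rewrite", "mocha-cakes")."""
    with urlopen("%s/%s" % (registry, name)) as resp:
        return json.load(resp)

# Offline illustration with hand-made documents; in practice you'd fetch the
# same package from https://registry.npmjs.org and from the local mirror.
official = {"name": "mocha-cakes", "versions": {"0.1.0": {}, "0.2.0": {}}}
mirror = {"name": "mocha-cakes", "versions": {"0.1.0": {}, "0.2.0": {}}}
print(same_versions(official, mirror))  # True once the mirror has caught up
```

If the version sets differ, the mirror most likely just hasn’t replicated that package yet.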

Telling NPM To Use Our NPM Repository Mirror

We can either configure ~/.npmrc with


registry = http://localhost:5984/registry/_design/app/_rewrite

or set it on the command line with


npm config set registry http://localhost:5984/registry/_design/app/_rewrite

Bam, a dozen commands later and now you too have your own working npm repository 😀

Volusion Review

Whether you are looking to open your first e-commerce store or to get a better deal on your existing one, the e-commerce arena is a minefield that needs to be navigated carefully. Businesses often end up paying much more for hosted e-commerce solutions than they originally expected, because most e-commerce platforms are not completely upfront about their pricing model. Problems range from fixed long-term contracts, hidden transaction fees and tiered pricing to frequent network issues and unpleasant customer support. Accidentally choosing the wrong shopping cart can be an expensive mistake, so take the time to make sure you get it right.

Volusion is one of the most popular SaaS (software as a service) e-commerce solutions, letting you quickly set up a store and sell your products online. With ready-to-go templates, very low prices, and website, store and hosting all in one, Volusion offers the tools you need to customize your store and grow your business. They also offer a completely free trial so you can try before you purchase.

SIGN UP FOR YOUR FREE VOLUSION ACCOUNT

Volusion Review Summary

Volusion is a complete e-commerce shopping cart platform that offers hosting, an online store and a website with many different themes and options, no setup costs, no hidden transaction fees (very important!) and one of the lowest monthly prices in the industry.

I’m giving Volusion top marks as my experience with them has been phenomenal: 9/10.

Volusion Features

Volusion is a complete e-commerce solution, including a shopping cart and website, that helps you quickly set up an online store to sell your products. You can customize your store, choose from over 120 free templates, organize your products, accept credit card payments and track orders.

  • Add-to-cart keeps customers on the product page, reducing abandoned carts
  • The deal of the day feature will help promote your products and increase sales
  • Built-in social sharing to 25 different networks will help customers spread your products virally
  • Create coupons and discounts to entice new users and bring back old ones
  • Customers can review products for real-time feedback
  • Unlimited product options
  • Volusion’s SmartMatch technology keeps track of your stock status, tracking unlimited combinations of product options
  • The mobile-optimised website helps you reach, and sell to, more customers
  • The built-in Customer Relationship Management tool helps you support your customers easily and efficiently
  • Quick order processing lets you view and approve orders in moments and get real-time performance data on your business
  • Volusion provide free 24/7 customer support
  • Also sell items on eBay, Twitter and Facebook
  • The product comparison tool lets you show customers multiple product details side-by-side
  • Built-in emails and newsletters
  • Over 120 free, great looking templates
  • Showcase products with vZoom so shoppers can zoom into product images


Monthly Fees

Volusion has some of the lowest monthly fees among all popular e-commerce shopping carts. Starting at $15 a month, the Mini Plan includes up to 100 products, a Facebook store, 1GB of data transfer, social tools and a mobile store. Subscribing to the slightly more expensive plans, such as Bronze at $35 a month, also gives you more features, increasing the product range to 1,000 and doubling the data transfer to 2GB, along with adding Abandoned Cart Reports, customer ratings and reviews, and newsletters. The $65 a month Silver plan gives you 2,500 different products, phone orders, the ability to import and export data into and out of the Volusion system, and a fantastic CRM (Customer Relationship Management) tool.

At $125 a month, the Gold plan increases both the number of products and the data transfer, and also offers improved customer service from Volusion. The extra features include the Deal of the Day tool, API access, MyRewards, eBay integration, batch order processing and your very own account manager.

For the big boys there’s also the Platinum plan at $195 a month, which offers unlimited products!

Volusion Setup Costs

One of the fantastic things about Volusion is that they do not charge any setup fees! Their admin interface is extremely easy to use and they include a tutorial which completely guides you from start to finish, from choosing your store’s template and design to adding your first product! Their software is user-friendly and fairly intuitive.

Platform Customization

With the Volusion platform, you can add extra features to your store via the Volusion Exchange, which lets you add new features quickly and easily.

Volusion Template Options

Volusion has over 120 different templates to choose from, across different industries and colours, so you’re sure to find one you like, and the majority are free! Some premium templates on other platforms, like BigCommerce, can cost between $500 and $5,000, which is more than a year’s worth of subscription! Volusion’s main focus is product presentation: they offer a fantastic user experience, with features such as product comparison and vZoom, in order to generate higher conversion rates than the competition.

Transaction Fees

Volusion charges NO TRANSACTION FEES!!!!! OK, the all caps might be over the top, but I want to stress this: on every Volusion plan, from the cheapest to the most expensive, there are no transaction fees at all, which will save you a TON of money compared to the competition. As well as offering their own credit card processing service, they also support many other credit card processors if you already have one from a previous online store or brick and mortar premises!

Discount Coupons For Everyone

With all plans, Volusion lets you generate and configure coupons and discount codes for your customers, with tons of different options including multiple-purchase discounts, free shipping, 10%-off coupon codes and so on.

Mobile Store Front

One of the other perks of Volusion is that all of their plans include a mobile storefront, giving you a much greater reach and a larger potential customer base! Optimization is not available for every single mobile device yet; currently iPad users will just see the normal storefront, but that’s OK as they’re basically computers 😉 As well as a mobile version, Volusion can also put your products on Facebook with their Social Store service.

Volusion Review Summary

Volusion’s service is professional and affordable, making it a fantastic option for business owners of any size. Their range of free templates is impressive and there are no hidden costs. Volusion is a great overall choice for an e-commerce shopping cart, and is already used by many large companies, e.g. 3M, National Geographic and the Chicago Tribune.

SIGN UP FOR YOUR FREE VOLUSION ACCOUNT
