October 2015

Hage Yaapa
MongoDB's 2.5 GB limitation
Just as I was about to do a test project using MongoDB, I was informed by my host Webfaction that data on MongoDB would be limited to ~2.5 GB. It is not a limitation set by Webfaction; rather, it is a limitation of the 32-bit version of MongoDB. What the hell?!
The MongoDB guys explain the reason thus:
"By not supporting more than 2gb on 32-bit, we’ve been able to keep our code much simpler and cleaner. This greatly reduces the number of bugs, and reduces the time that we need to release a 1.0 product. The world is moving toward all 64-bit very quickly. Right now there aren’t too many people for whom 64-bit is a problem, and in the long term, we think this will be a non-issue."
Whatever the reason, I really wish MongoDB would remove this limitation on the 32-bit version; I don't think Webfaction is going to upgrade ALL its servers to 64-bit anytime soon. So the conclusion is, MongoDB is definitely not an option if you are looking to develop a data-intensive website - on a 32-bit platform. So sad.
  1. MongoDB 32bit limitation

Hage Yaapa

MySQL Table is marked as crashed and should be repaired

Your website stops working and you see a spine-chilling error message "MySQL table is marked as crashed and should be repaired". What now? Well, you need to repair it.
There are two approaches to repairing a crashed table. I hope it is not bad news for you, but they work only for the MyISAM engine. If you are using InnoDB, consider restoring the table from a backup (at the cost of losing some data). The most common reason for InnoDB tables crashing is lack of disk space - fix that!
How long does it take to repair a crashed MySQL table?
Before you jump the gun on the repair, you might want to know how long it might take to repair the table. A table of a few KB takes a few seconds; a few MB, a few minutes; a few GB, hours; and for really huge multi-GB tables, it might take days to weeks! It also depends a lot on the available RAM and processor power.
Do you have a backup of the table? For a HUGE table that has crashed, it just might be a better option to restore the table from a backup - unless every last record is crucially important. Do you really want to run a week-long repair process? Think about it. And always back up your database.
Repairing the table using the MySQL console
If you have shell access, connect to the MySQL server and do the following:
>use my_database;
>repair table my_crashed_table;
This method is recommended for repairing crashed tables of anything from a few KBs to a few GBs, depending on the RAM and processor power available to you.
Repairing the table using PHPMyAdmin
Use this method only if you don't have access to the shell. Normally it is recommended for small tables only, say a few KB to a couple of MB. Anything more than that and you'll end up frustrated, and potentially angry. Do the following:
Log on to PHPMyAdmin > select the affected database > select the affected table from the right pane > from the With selected menu, select "Repair table". The crashed table should be repaired in one quick stroke, if the table size is a few KBs to a few MBs, and the engine is MyISAM.
What causes MySQL tables to crash?
There can be multiple causes of a MySQL table crashing. The number one cause is running out of disk space. If you are anticipating a potentially HUUUUGE amount of data in your database, you had better make sure you have the required disk space in advance.
Other potential causes of MySQL table crashes are problems with the operating system, power failures, hardware problems, unexpected termination of the MySQL server, and corruption of data by external programs.
Lessons learnt
  1. Back up your database. Back up your database. Back up your database.
  2. Plan your architecture with the future in mind. Think ahead.
  3. Don't try to repair a 700 GB table if you are on a shared hosting.

Hage Yaapa
I saw a rather shocking tweet from @newsycombinator today saying "Don't use MongoDB". The article is hosted on pastebin.com and contains no author information. I am reposting the article on the website, just in case the one on pastebin disappears.
Don't use MongoDB
I've kept quiet for awhile for various political reasons, but I now
feel a kind of social responsibility to deter people from banking
their business on MongoDB.
Our team did serious load on MongoDB on a large (10s of millions
of users, high profile company) userbase, expecting, from early good
experiences, that the long-term scalability benefits touted by 10gen
would pan out. We were wrong, and this rant serves to deter you
from believing those benefits and making the same mistake
we did. If one person avoid the trap, it will have been
worth writing. Hopefully, many more do.
Note that, in our experiences with 10gen, they were nearly always
helpful and cordial, and often extremely so. But at the same
time, that cannot be reason alone to suppress information about
the failings of their product.
Why this matters
Databases must be right, or as-right-as-possible, b/c database
mistakes are so much more severe than almost every other variation
of mistake. Not only does it have the largest impact on uptime,
performance, expense, and value (the inherent value of the data),
but data has *inertia*. Migrating TBs of data on-the-fly is
a massive undertaking compared to changing drcses or fixing the
average logic error in your code. Recovering TBs of data while
down, limited by what spindles can do for you, is a helpless feeling.
Databases are also complex systems that are effectively black
boxes to the end developer. By adopting a database system,
you place absolute trust in their ability to do the right thing
with your data to keep it consistent and available.
Why is MongoDB popular?
To be fair, it must be acknowledged that MongoDB is popular,
and that there are valid reasons for its popularity.
* It is remarkably easy to get running
* Schema-free models that map to JSON-like structures
have great appeal to developers (they fit our brains),
and a developer is almost always the individual who
makes the platform decisions when a project is in
its infancy
* Maturity and robustness, track record, tested real-world
use cases, etc, are typically more important to sysadmin
types or operations specialists, who often inherit the
platform long after the initial decisions are made
* Its single-system, low concurrency read performance benchmarks
are impressive, and for the inexperienced evaluator, this
is often The Most Important Thing
Now, if you're writing a toy site, or a prototype, something
where developer productivity trumps all other considerations,
it basically doesn't matter *what* you use. Use whatever
gets the job done.
But if you're intending to really run a large scale system
on Mongo, one that a business might depend on, simply put: don't.
Why not?
**1. MongoDB issues writes in unsafe ways *by default* in order to
win benchmarks**
If you don't issue getLastError(), MongoDB doesn't wait for any
confirmation from the database that the command was processed.
This introduces at least two classes of problems:
* In a concurrent environment (connection pools, etc), you may
have a subsequent read fail after a write has "finished";
there is no barrier condition to know at what point the
database will recognize a write commitment
* Any unknown number of save operations can be dropped on the floor
due to queueing in various places, things outstanding in the TCP
buffer, etc, when your connection drops or the db were to be KILL'd or
segfault, hardware crash, you name it
**2. MongoDB can lose data in many startling ways**
Here is a list of ways we personally experienced records go missing:
1. They just disappeared sometimes. Cause unknown.
2. Recovery on corrupt database was not successful,
pre transaction log.
3. Replication between master and slave had *gaps* in the oplogs,
causing slaves to be missing records the master had. Yes,
there is no checksum, and yes, the replication status showed
the slaves as current
4. Replication just stops sometimes, without error. Monitor
your replication status!
**3. MongoDB requires a global write lock to issue any write**
Under a write-heavy load, this will kill you. If you run a blog,
you maybe don't care b/c your R:W ratio is so high.
**4. MongoDB's sharding doesn't work that well under load**
Adding a shard under heavy load is a nightmare.
Mongo either moves chunks between shards so quickly it DOSes
the production traffic, or refuses to move chunks altogether.
This pretty much makes it a non-starter for high-traffic
sites with heavy write volume.
**5. mongos is unreliable**
The mongod/config server/mongos architecture is actually pretty
reasonable and clever. Unfortunately, mongos is complete
garbage. Under load, it crashed anywhere from every few hours
to every few days. Restart supervision didn't always help b/c
sometimes it would throw some assertion that would bail out a
critical thread, but the process would stay running.
It got so bad the only usable way we found to run mongos was
to run haproxy in front of dozens of mongos instances, and
to have a job that slowly rotated through them and killed them
to keep fresh/live ones in the pool. No joke.
**6. MongoDB actually once deleted the entire dataset**
MongoDB, 1.6, in replica set configuration, would sometimes
determine the wrong node (often an empty node) was the freshest
copy of the data available. It would then DELETE ALL THE DATA
ON THE REPLICA (which may have been the 700GB of good data)
AND REPLICATE THE EMPTY SET. The database should never never
never do this. Faced with a situation like that, the database
should throw an error and make the admin disambiguate by
wiping/resetting data, or forcing the correct configuration.
NEVER DELETE ALL THE DATA. (This was a bad day.)
They fixed this in 1.8, thank god.
**7. Things were shipped that should have never been shipped**
Things with known, embarrassing bugs that could cause data
problems were in "stable" releases--and often we weren't told
about these issues until after they bit us, and then only b/c
we had a super duper crazy platinum support contract with 10gen.
The response was to send us a hot patch that they were
calling an RC internally, and have us run that on our data.
**8. Replication was lackluster on busy servers**
Replication would often, again, either DOS the master, or
replicate so slowly that it would take far too long and
the oplog would be exhausted (even with a 50G oplog).
We had a busy, large dataset that we simply could
not replicate b/c of this dynamic. It was a harrowing month
or two of finger crossing before we got it onto a different
database system.
**But, the real problem:**
You might object, my information is out of date; they've
fixed these problems or intend to fix them in the next version;
problem X can be mitigated by optional practice Y.
Unfortunately, it doesn't matter.
The real problem is that so many of these problems existed
in the first place.
Database developers must be held to a higher standard than
your average developer. Namely, your priority list should
typically be something like:
1. Don't lose data, be very deterministic with data
2. Employ practices to stay available
3. Multi-node scalability
4. Minimize latency at 99% and 95%
5. Raw req/s per resource
10gen's order seems to be, #5, then everything else in some
order. #1 ain't in the top 3.
These failings, and the implied priorities of the company,
indicate a basic cultural problem, irrespective of whatever
problems exist in any single release: a lack of the requisite
discipline to design database systems businesses should bet on.
Please take this warning seriously.

Dom the Conservative
A group of Muslim migrants had started a soccer game, when a toddler attempted to join in the recreation. However, one player didn’t take too kindly to the interruption, so he decided to teach the little boy a lesson he’d never forget.
A toddler was rushed to the hospital with serious injuries after a Muslim migrant became enraged with the boy for interrupting a soccer game at a refugee center in Suhl, Germany on Thursday, Breitbart reports.
The child had entered the gymnasium in which the game was taking place, looking for his older brothers. Excited by the festivities, the 4-year-old ran onto the makeshift field and kicked the soccer ball, which prompted a migrant to attack the helpless child.
The migrant took the game ball and began beating the child “several times in the face” with it until a supervisor intervened, according to Focus.de.
However, the furious migrant wasn’t finished inflicting his wrath. He tracked down the boy after the game, picked up a large rock, and proceeded to stone the boy. Had it not been for an asylum worker stepping in, the boy might have been beaten to death.
The boy is being treated at a local hospital, and is said to be suffering from swelling and “massive bruising.”
The migrant remains unidentified, which is no surprise since the progressive movement seeks to hide any threat to their agenda.
Refugee centers are a hotbed for injury, rape, and pedophilia, and their leftist volunteer workers are quick to cover up these serious crimes.
Mad World News previously reported on a 3-year-old girl who was grabbed in a Swedish refugee center, taken to a secluded area, and raped by a Muslim refugee. Her mother immediately phoned police, but when officials showed up to question leftist asylum workers, they refused to cooperate.
The Swedish Migration Board attempted to conceal the rape and the rapist’s identity. They moved the pedophile to another building, but were ultimately unable to thwart his arrest.
According to another report, leftists tried to convince a rape victim to stay silent in order to refrain from further tarnishing their open borders policy. For over a month, a 30-year-old woman hid her gang-rape by Muslim migrants after fellow left-wing activists told her the truth could damage their Utopian dream.
The leftist dream is turning European countries like Sweden into rape capitals of the world. Their agenda to incorporate “multiculturalism,” however, only works when the migrating cultures respect other cultures.
Islamists see these open borders as an opportunity to infiltrate, subjugate, and over-populate. With their aggressive behavior, high birth rate, and undying zeal to incorporate Sharia law, the West is in danger of falling to stealth jihad now more than ever.
H/T [The Religion Of Peace]
Photo Credit [Breitbart, CM-Life]

Hage Yaapa
The winning alternative to RequireJS and Browserify
Is there a way to enable the module loading capability of Node.js in the browser? How wonderful it would be if you could write JavaScript modules the Node.js way and load them in the browser.
There have been many attempts at doing this. Notable among them are RequireJS and Browserify. Both get things done one way or the other, but are unnecessarily complicated and a pain to use.
If you type "browserify" or "requirejs" into Google Search, you can easily see people are actively looking for their alternatives. Why are they looking for alternatives? Your guess is correct - because RequireJS and Browserify are not up to the mark.
Enter Component.js, by T.J. Holowaychuk (blog post introducing Component).
Component.js solves the module loading problem in the browser in an intuitive, straightforward manner. By the end of this tutorial, you will come to the conclusion that, when it comes to loading modules in the browser, Component.js is the winner!
Component.js does much more than loading JavaScript modules in the browser[1], but in this post, I will focus on its module loading capability on the client side, and show you how it is done using an example.
First off, install Component.js:
$ npm install -g component
Now, remember Component.js is not tied to Node.js or Express.js. It is just a Node.js module that enables module loading in the browser - on any HTML file. It generates a file named build.js in a directory called build (the directory and the file name can be customized), which endows your JavaScript with the module loading magic.
To begin the exercise, let's create a directory named component-tutorial:
$ mkdir component-tutorial
$ cd component-tutorial
Create an HTML file called app.html with this content:

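The original HTML listing was lost from this copy of the post, so here is a minimal app.html consistent with the rest of the tutorial (the alert call and exact markup are my assumptions; the build/build.js path is the default mentioned earlier):

```html
<!DOCTYPE html>
<html>
  <head>
    <script src="build/build.js"></script>
  </head>
  <body>
    <script>
      var randomer = require('my-randomer');
      alert(randomer());
    </script>
  </body>
</html>
```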
Look at that var randomer = require('my-randomer'). Yes, it will work as expected. Isn't that wonderful?
Time to create our my-randomer module. It will be a very simple module, which just returns a random number. To keep our workspace neat, let's create the component in a directory named my-components.
$ component create my-components/my-randomer -l
name: my-randomer
description: random number generator
does this component have js? y
does this component have css? 
does this component have html? 
create : my-components/my-randomer
create : my-components/my-randomer/index.js
create : my-components/my-randomer/Makefile
create : my-components/my-randomer/.gitignore
create : my-components/my-randomer/component.json
Notice how we specified the -l option to make it a local module. Not doing so would have required us to create a GitHub-style repository for our module.
Since our component is just a JavaScript module and does not include any HTML or CSS, we press y only for "does this component have js?".
The contents of the component directory:
index.js - is the module file; we implement the functionality of the module here.
Makefile - enables us to alternatively build the component using the make utility.
.gitignore - is a list of files and directories that should be ignored by Git.
component.json - is the component's manifest file.
Here is the code for our module. Let's keep it simple:
module.exports = function() {
    return Math.random();
};
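Stripped of the Component.js plumbing, this is plain CommonJS, so the function body can be sanity-checked directly in Node (a throwaway check, not one of the tutorial's files):

```javascript
// Same function body as my-randomer's index.js.
var randomer = function () {
  return Math.random();
};

// Math.random() always yields a number in [0, 1).
var n = randomer();
console.log(typeof n === 'number' && n >= 0 && n < 1); // true
```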
Our module is in place now. Time to build the build.js file, and make it available to app.html.
The component build command is used to build the components file (build.js, in our case).
Component.js will try to build in any directory with a component.json file. That's another way of saying, "you need a component.json file to build the components file".
Create a component.json file in the component-tutorial directory.
$ vi component.json
File: component.json
    "name": "app",
    "version": "1.0.0",
    "paths": ["my-components"],
    "local": ["my-randomer"]
You can read about all the details of component.json in the Component.js documentation. In the meantime:
paths - is a list of directories where your local components can be found.
local - is a list of local components that should be included in build.js.
With a component.json file in the component-tutorial directory, we are all set to run the build command. Let's go!
$ component build
The command will return with no output in the console, but it will create the build directory and the build.js file in it.
Now open the app.html in the browser. You can see the module in action!
There are lots of open source modules which are Component.js components; you can find them listed in the Component.js wiki on GitHub. A component.json file is all that's required to make any Node.js / CommonJS module a valid Component.js component.
Our example was for a local component created in the local file system. Next, let's see how we can use a component from GitHub. Let's include a component called capitalize in our existing app. Install the component:
$ component install yields/capitalize
 install : yields/capitalize@master
 fetch : yields/capitalize:index.js
 complete : yields/capitalize
Open component.json, and you will find that the component installation process has modified component.json to add the additional component dependency.
 "name": "app",
 "version": "1.0.0",
 "paths": [
 "local": [
 "dependencies": {
 "yields/capitalize": "*"
Rebuild the components in the directory to update the build.js file[2].
$ component build
Now that we have updated the build file, let's update the app.html file to include yields-capitalize in it:

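The updated HTML listing was also lost here; assuming the app.html sketched earlier, adding yields-capitalize would look something like this (the alert calls and the sample string are my assumptions):

```html
<!DOCTYPE html>
<html>
  <head>
    <script src="build/build.js"></script>
  </head>
  <body>
    <script>
      var randomer = require('my-randomer');
      var capitalize = require('yields-capitalize');
      alert(randomer());
      alert(capitalize('hello world'));
    </script>
  </body>
</html>
```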
Open app.html in the browser to see yields-capitalize also in an alerting action.
You can specify the build output directory and file name using the -o and -n options respectively. Here is an example:
$ component build -o public/components -n built.js
There is much more to Component.js than loading JavaScript modules in the browser. I will try to cover it all in the coming days. Component.js is pretty new and not many people know about it yet; help spread the word and tell your friends about its ability to load JavaScript modules in the browser. If anyone is looking for Browserify or RequireJS alternatives, refer them to Component.js and this tutorial.

Hage Yaapa

Using the Neo4j REST API

After starting the Neo4j server, load the HTTP console from the Neo4j web interface at http://localhost:7474. The HTTP console uses the Neo4j REST API to interact with the database. Even though you can use the HTTP console for manually interacting with the database, it is best used for prototyping the REST calls your app would be making to the database. Unless Neo4j provides bindings for your language (Java, Python, Ruby), you will most probably be using the REST API to talk to the database.
For your reference, the Neo4j documentation is located at http://docs.neo4j.org/chunked/stable/.
Let's see how some of the common operations on Neo4j nodes and relationships can be performed using the REST API.
To create a node
POST http://localhost:7474/db/data/node
==> 201 Created
==> {
==> "outgoing_relationships" : "http://localhost:7474/db/data/node/26/relationships/out",
==> "data" : {
==> },
==> "traverse" : "http://localhost:7474/db/data/node/26/traverse/{returnType}",
==> "all_typed_relationships" : "http://localhost:7474/db/data/node/26/relationships/all/{-list|&|types}",
==> "property" : "http://localhost:7474/db/data/node/26/properties/{key}",
==> "self" : "http://localhost:7474/db/data/node/26",
==> "properties" : "http://localhost:7474/db/data/node/26/properties",
==> "outgoing_typed_relationships" : "http://localhost:7474/db/data/node/26/relationships/out/{-list|&|types}",
==> "incoming_relationships" : "http://localhost:7474/db/data/node/26/relationships/in",
==> "extensions" : {
==> },
==> "create_relationship" : "http://localhost:7474/db/data/node/26/relationships",
==> "paged_traverse" : "http://localhost:7474/db/data/node/26/paged/traverse/{returnType}{?pageSize,leaseTime}",
==> "all_relationships" : "http://localhost:7474/db/data/node/26/relationships/all",
==> "incoming_typed_relationships" : "http://localhost:7474/db/data/node/26/relationships/in/{-list|&|types}"
==> }
The above command creates an empty node, with no properties, except for a reference id to itself. In the above example the node id is 26. The REST API uses the HTTP protocol; the "201 Created" message you see is an HTTP status code.
You can also specify the properties of a node as it's being created by passing an additional JSON object.
POST http://localhost:7474/db/data/node {"name":"Archie"}
==> 201 Created
==> {
==> "outgoing_relationships" : "http://localhost:7474/db/data/node/27/relationships/out",
==> "data" : {
==> "name" : "Archie"
==> },
==> "traverse" : "http://localhost:7474/db/data/node/27/traverse/{returnType}",
==> "all_typed_relationships" : "http://localhost:7474/db/data/node/27/relationships/all/{-list|&|types}",
==> "property" : "http://localhost:7474/db/data/node/27/properties/{key}",
==> "self" : "http://localhost:7474/db/data/node/27",
==> "properties" : "http://localhost:7474/db/data/node/27/properties",
==> "outgoing_typed_relationships" : "http://localhost:7474/db/data/node/27/relationships/out/{-list|&|types}",
==> "incoming_relationships" : "http://localhost:7474/db/data/node/27/relationships/in",
==> "extensions" : {
==> },
==> "create_relationship" : "http://localhost:7474/db/data/node/27/relationships",
==> "paged_traverse" : "http://localhost:7474/db/data/node/27/paged/traverse/{returnType}{?pageSize,leaseTime}",
==> "all_relationships" : "http://localhost:7474/db/data/node/27/relationships/all",
==> "incoming_typed_relationships" : "http://localhost:7474/db/data/node/27/relationships/in/{-list|&|types}"
==> }
Make sure the JSON object adheres to the JSON specification, or you will get an invalid JSON data error. For example, {name:"Archie"} is invalid because property names must also be quoted according to the JSON spec. Also, per the spec, you should use the double quote character (") for quoting strings, not the single quote (').
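These rules are easy to verify with JavaScript's own JSON parser; the snippet below is only an illustration of the spec behaviour described above.

```javascript
// Valid JSON: property names and string values in double quotes.
var valid = JSON.parse('{"name": "Archie"}');

// Unquoted property name: rejected by the JSON spec.
var unquotedKeyFails = false;
try {
  JSON.parse('{name: "Archie"}');
} catch (e) {
  unquotedKeyFails = true;
}

// Single-quoted strings: also rejected.
var singleQuotesFail = false;
try {
  JSON.parse("{'name': 'Archie'}");
} catch (e) {
  singleQuotesFail = true;
}
```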
If you look at the return object of the above command, you can see that it contains the URLs to the node (self) and its properties, among other information related to the node.
To read a node
Once you have the node id, you can get a node this way:
GET http://localhost:7474/db/data/node/27
==> 200 OK
==> {
==> "outgoing_relationships" : "http://localhost:7474/db/data/node/27/relationships/out",
==> "data" : {
==> "name" : "Archie"
==> },
==> "traverse" : "http://localhost:7474/db/data/node/27/traverse/{returnType}",
==> "all_typed_relationships" : "http://localhost:7474/db/data/node/27/relationships/all/{-list|&|types}",
==> "property" : "http://localhost:7474/db/data/node/27/properties/{key}",
==> "self" : "http://localhost:7474/db/data/node/27",
==> "properties" : "http://localhost:7474/db/data/node/27/properties",
==> "outgoing_typed_relationships" : "http://localhost:7474/db/data/node/27/relationships/out/{-list|&|types}",
==> "incoming_relationships" : "http://localhost:7474/db/data/node/27/relationships/in",
==> "extensions" : {
==> },
==> "create_relationship" : "http://localhost:7474/db/data/node/27/relationships",
==> "paged_traverse" : "http://localhost:7474/db/data/node/27/paged/traverse/{returnType}{?pageSize,leaseTime}",
==> "all_relationships" : "http://localhost:7474/db/data/node/27/relationships/all",
==> "incoming_typed_relationships" : "http://localhost:7474/db/data/node/27/relationships/in/{-list|&|types}"
==> }
Trying to get a non-existent node will result in an HTTP 404 error:
GET http://localhost:7474/db/data/node/10303371
==> 404 Not Found
==> {
==> "message" : "Cannot find node with id [10303371] in database.",
==> "exception" : "org.neo4j.server.rest.web.NodeNotFoundException: Cannot find node with id [10303371] in database.",
==> "stacktrace" : [ "org.neo4j.server.rest.web.DatabaseActions.node(DatabaseActions.java:112)", "org.neo4j.server.rest.web.DatabaseActions.getNode(DatabaseActions.java:223)", "org.neo4j.server.rest.web.RestfulGraphDatabase.getNode(RestfulGraphDatabase.java:202)", "sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)", "sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)", "sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)", "java.lang.reflect.Method.invoke(Method.java:597)", "com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)", "com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)", "com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)", "com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)", "com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)", "com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)", "com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)", "com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)", "com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)", "com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)", "com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)", "com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)", "com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)", 
"com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)", "com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)", "javax.servlet.http.HttpServlet.service(HttpServlet.java:820)", "org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)", "org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)", "org.neo4j.server.web.LimitRequestTimeFilter.doFilter(LimitRequestTimeFilter.java:64)", "org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)", "org.neo4j.server.statistic.StatisticFilter.doFilter(StatisticFilter.java:62)", "org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)", "org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)", "org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)", "org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)", "org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)", "org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)", "org.mortbay.jetty.Server.handle(Server.java:326)", "org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)", "org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:926)", "org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)", "org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)", "org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)", "org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)", "org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)" ]
==> }
To delete a node
Deleting a node is pretty straightforward, just issue an HTTP DELETE request to the node's URL:
DELETE http://localhost:7474/db/data/node/26
==> 204 No Content
Note: nodes with relationships cannot be deleted. To demonstrate that, let's create another node:
POST http://localhost:7474/db/data/node {"name":"Veronica"}
and create a relationship:
POST http://localhost:7474/db/data/node/27/relationships {"to" : "http://localhost:7474/db/data/node/28", "type" : "LOVES"}
==> 201 Created
==> {
==> "start" : "http://localhost:7474/db/data/node/27",
==> "data" : {
==> },
==> "self" : "http://localhost:7474/db/data/relationship/1",
==> "property" : "http://localhost:7474/db/data/relationship/1/properties/{key}",
==> "properties" : "http://localhost:7474/db/data/relationship/1/properties",
==> "type" : "LOVES",
==> "extensions" : {
==> },
==> "end" : "http://localhost:7474/db/data/node/28"
==> }
Note that each relationship we create has its own id too. The relationship id is required for referring to, updating, and deleting the relationship. Now that we have created an outgoing relationship between nodes 27 and 28, let's try deleting node 27:
DELETE http://localhost:7474/db/data/node/27
==> 409 Conflict
==> {
==> "message" : "The node with id 27 cannot be deleted. Check that the node is orphaned before deletion.",
==> "exception" : "org.neo4j.server.rest.web.OperationFailureException: The node with id 27 cannot be deleted. Check that the node is orphaned before deletion.",
==> "stacktrace" : [ "org.neo4j.server.rest.web.DatabaseActions.deleteNode(DatabaseActions.java:244)", "org.neo4j.server.rest.web.RestfulGraphDatabase.deleteNode(RestfulGraphDatabase.java:216)", "sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)", "sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)", "sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)", "java.lang.reflect.Method.invoke(Method.java:597)", "com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)", "com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205)", "com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)", "com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)", "com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)", "com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)", "com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)", "com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)", "com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)", "com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)", "com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)", "com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)", "com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)", "com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)", 
"com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)", "javax.servlet.http.HttpServlet.service(HttpServlet.java:820)", "org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)", "org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)", "org.neo4j.server.web.LimitRequestTimeFilter.doFilter(LimitRequestTimeFilter.java:64)", "org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)", "org.neo4j.server.statistic.StatisticFilter.doFilter(StatisticFilter.java:62)", "org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)", "org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)", "org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)", "org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)", "org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)", "org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)", "org.mortbay.jetty.Server.handle(Server.java:326)", "org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)", "org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:926)", "org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)", "org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)", "org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)", "org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)", "org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)" ]
==> }
There it is. It failed to delete. We need to delete all the relationships on node 27 before we can delete it. Here is how we delete the relationship we created:
DELETE http://localhost:7474/db/data/relationship/1
==> 204 No Content
With the relationship deleted, let's try deleting the node once more:
DELETE http://localhost:7474/db/data/node/27
==> 204 No Content
This time it's gone. Deleted.
It may sound strange, but there is no single command to delete all the nodes in a graph. You have to walk all the nodes, delete their relationships, and then delete the nodes themselves.
To continue with the examples, we'll need Archie. So let's create a new node and give it the name Archie.
POST http://localhost:7474/db/data/node {"name":"Archie"}
==> 201 Created
==> {
==> "outgoing_relationships" : "http://localhost:7474/db/data/node/29/relationships/out",
==> "data" : {
==> "name" : "Archie"
==> },
==> "traverse" : "http://localhost:7474/db/data/node/29/traverse/{returnType}",
==> "all_typed_relationships" : "http://localhost:7474/db/data/node/29/relationships/all/{-list|&|types}",
==> "property" : "http://localhost:7474/db/data/node/29/properties/{key}",
==> "self" : "http://localhost:7474/db/data/node/29",
==> "properties" : "http://localhost:7474/db/data/node/29/properties",
==> "outgoing_typed_relationships" : "http://localhost:7474/db/data/node/29/relationships/out/{-list|&|types}",
==> "incoming_relationships" : "http://localhost:7474/db/data/node/29/relationships/in",
==> "extensions" : {
==> },
==> "create_relationship" : "http://localhost:7474/db/data/node/29/relationships",
==> "paged_traverse" : "http://localhost:7474/db/data/node/29/paged/traverse/{returnType}{?pageSize,leaseTime}",
==> "all_relationships" : "http://localhost:7474/db/data/node/29/relationships/all",
==> "incoming_typed_relationships" : "http://localhost:7474/db/data/node/29/relationships/in/{-list|&|types}"
==> }
From here on, node id 29 is Archie.
To add properties to a node
Let's give Archie an age:
PUT http://localhost:7474/db/data/node/29/properties/age 17
==> 204 No Content
Don't be alarmed by "204 No Content": it means the operation was successful, but the server did not return any content. Anyway, let's confirm that the property was actually created.
GET http://localhost:7474/db/data/node/29/properties
==> 200 OK
==> {
==> "age" : 17,
==> "name" : "Archie"
==> }
There it is, node 29 now has a new property called age with the value of 17.
What if you wanted to set a whole bunch of properties at one go? Here is how you do that:
PUT http://localhost:7474/db/data/node/28/properties {"name":"Veronica", "age":17, "hobbies":["shopping", "boys"]}
Note that a property value cannot be null or a JSON object. These two commands will fail:
PUT http://localhost:7474/db/data/node/28/properties {"name":"Veronica", "age":17, "hobbies":null}
PUT http://localhost:7474/db/data/node/28/properties {"name":"Veronica", "age":17, "hobbies":{"primary":"shopping", "secondary":"boys"}}
But these will work:
PUT http://localhost:7474/db/data/node/28/properties {"name":"Veronica", "age":17, "hobbies":""}
"" is an empty string, but it is not null.
PUT http://localhost:7474/db/data/node/28/properties {"name":"Veronica", "age":17, "hobbies":"{\"primary\":\"shopping\", \"secondary\":\"boys\"}"}
This time the hobbies JSON object has been stringified, making it a valid Neo4j property value.
It would have been good to be able to use a JSON object as a property value too, but as of this writing we cannot. The only valid data types for property values are numbers (integers and floats), strings, booleans, and arrays of these.
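Since objects can't be stored directly, the usual workaround is to stringify on the way in and parse on the way out. A minimal sketch in plain JavaScript, using the hobbies object from the example above (the HTTP request around it is up to your client):

```javascript
// Storing a nested object as a Neo4j property means stringifying it
// before the PUT and parsing it after the GET.
const hobbies = { primary: "shopping", secondary: "boys" };

// What you would send as the property value in the PUT request body:
const stored = JSON.stringify(hobbies);

// What you would do with the value after a GET on the property:
const restored = JSON.parse(stored);

// `stored` is a plain string, which is a valid property value;
// `restored` gives you the object back, e.g. restored.primary
```

The cost of this trick is that the object's fields are opaque to Neo4j, so you can't traverse or filter on them.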
To update properties of a node
Editing an existing property is simply a matter of setting the property with a new value.
PUT http://localhost:7474/db/data/node/29/properties/name "Archibald"
==> 204 No Content
Confirm the edit:
GET http://localhost:7474/db/data/node/29/properties/
==> 200 OK
==> {
==> "age" : 17,
==> "name" : "Archibald"
==> }
Note how we quoted "Archibald". Any string value needs to be quoted; numbers need not be.
Existing properties can also be updated using another method:
PUT http://localhost:7474/db/data/node/29/properties {"name":"Archie"}
GET http://localhost:7474/db/data/node/29/properties/
==> 200 OK
==> {
==> "name" : "Archie"
==> }
We have lost the age property! This method treats the accompanying JSON data as the complete new set of properties for the node. Since we excluded the age property, it was dropped. For updating individual properties, use the first method.
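If you want the convenience of a bulk PUT without losing untouched keys, you can merge on the client side first: GET the current properties, overlay your changes, and PUT the combined object back. A sketch of the merge step in plain JavaScript (the GET and PUT around it depend on whichever HTTP client you use):

```javascript
// Merge new values over the existing properties so a bulk PUT
// does not silently drop keys that were not mentioned.
function mergeProperties(existing, updates) {
  return Object.assign({}, existing, updates);
}

const current = { age: 17, name: "Archibald" };  // from GET .../node/29/properties
const merged = mergeProperties(current, { name: "Archie" });
// merged is { age: 17, name: "Archie" }; PUT this back to .../node/29/properties
```

Note this is not atomic: another client could change the node between your GET and your PUT.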
To delete properties of a node
To delete a specific property:
DELETE http://localhost:7474/db/data/node/28/properties/hobbies
To delete the whole set of properties for a node:
DELETE http://localhost:7474/db/data/node/28/properties
Even though the examples were for nodes, the same commands and parameters apply to relationship properties too. For example, to add a property to a relationship:
PUT http://localhost:7474/db/data/relationship/10/properties/year 1941
The above adds a property called year with the value 1941 to relationship id 10.
To get all the relationship types
GET http://localhost:7474/db/data/relationship/types
==> 200 OK
==> ["KNOWS","RELATED_TO","HATES","Hates","LOVES"]
Note that relationship types are case-sensitive (that's why "HATES" and "Hates" appear as separate types above), and they persist forever: once you create a relationship type, you can't delete it. That sounds strange, but that's the way it is as of this writing.
To get a better understanding of the relationships API and examples, let's create a graph of the students' relationship dynamics at a high school. Assume we have created seven nodes in a fresh new Neo4j database, let's add properties to those nodes:
PUT http://localhost:7474/db/data/node/1/properties {"name":"Archie"}
PUT http://localhost:7474/db/data/node/2/properties {"name":"Betty"}
PUT http://localhost:7474/db/data/node/3/properties {"name":"Veronica"}
PUT http://localhost:7474/db/data/node/4/properties {"name":"Jughead"}
PUT http://localhost:7474/db/data/node/5/properties {"name":"Reggie"}
PUT http://localhost:7474/db/data/node/6/properties {"name":"Ethel"}
PUT http://localhost:7474/db/data/node/7/properties {"name":"Food"}
To create a relationship
To create a relationship, we make a POST request to a node's relationship path with JSON data containing two mandatory properties: "to" and "type".
For example, let's create a relationship from node 1 to node 2, with the relationship type (label) of "LOVES".
POST http://localhost:7474/db/data/node/1/relationships {"to" : "http://localhost:7474/db/data/node/2", "type" : "LOVES"}
==> 201 Created
==> {
==> "start" : "http://localhost:7474/db/data/node/1",
==> "data" : {
==> },
==> "self" : "http://localhost:7474/db/data/relationship/3",
==> "property" : "http://localhost:7474/db/data/relationship/3/properties/{key}",
==> "properties" : "http://localhost:7474/db/data/relationship/3/properties",
==> "type" : "LOVES",
==> "extensions" : {
==> },
==> "end" : "http://localhost:7474/db/data/node/2"
==> }
An id of 3 is assigned to the relationship we just created.
Similarly, let's create the rest of the high school relationship dynamics:
POST http://localhost:7474/db/data/node/1/relationships {"to" : "http://localhost:7474/db/data/node/3", "type" : "LOVES"}
POST http://localhost:7474/db/data/node/2/relationships {"to" : "http://localhost:7474/db/data/node/1", "type" : "LOVES"}
POST http://localhost:7474/db/data/node/3/relationships {"to" : "http://localhost:7474/db/data/node/1", "type" : "LOVES"}
POST http://localhost:7474/db/data/node/3/relationships {"to" : "http://localhost:7474/db/data/node/5", "type" : "LOVES"}
POST http://localhost:7474/db/data/node/4/relationships {"to" : "http://localhost:7474/db/data/node/7", "type" : "LOVES"}
POST http://localhost:7474/db/data/node/5/relationships {"to" : "http://localhost:7474/db/data/node/3", "type" : "LOVES"}
POST http://localhost:7474/db/data/node/6/relationships {"to" : "http://localhost:7474/db/data/node/4", "type" : "LOVES"}
POST http://localhost:7474/db/data/node/1/relationships {"to" : "http://localhost:7474/db/data/node/4", "type" : "FRIENDS"}
POST http://localhost:7474/db/data/node/4/relationships {"to" : "http://localhost:7474/db/data/node/1", "type" : "FRIENDS"}
POST http://localhost:7474/db/data/node/2/relationships {"to" : "http://localhost:7474/db/data/node/3", "type" : "FRIENDS"}
POST http://localhost:7474/db/data/node/3/relationships {"to" : "http://localhost:7474/db/data/node/2", "type" : "FRIENDS"}
Note that relationships are not "unique". By that I mean, if you were to run this command again:
POST http://localhost:7474/db/data/node/1/relationships {"to" : "http://localhost:7474/db/data/node/2", "type" : "LOVES"}
another relationship would be created with the same "to" and "type" values; you can create unlimited "duplicates", so be careful. I am not sure whether this is a good feature or not, but that's the way Neo4j works at the moment.
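If duplicates would be a problem for your data model, one option is to check before you POST: GET the node's outgoing relationships and only create the new one if no match exists. The check itself is simple; in this plain JavaScript sketch, `rels` stands for the parsed JSON array that a GET on .../relationships/out returns, where each entry carries "end" and "type" fields as shown in the responses above:

```javascript
// Return true if a relationship with this target URL and type already exists.
function hasRelationship(rels, to, type) {
  return rels.some(function (r) {
    return r.end === to && r.type === type;
  });
}

var rels = [
  { end: "http://localhost:7474/db/data/node/2", type: "LOVES" }
];

hasRelationship(rels, "http://localhost:7474/db/data/node/2", "LOVES"); // true
hasRelationship(rels, "http://localhost:7474/db/data/node/3", "LOVES"); // false
```

As with any check-then-act pattern over HTTP, this is not race-free; it merely avoids the common accidental duplicate.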
To get the details of a relationship
For the details of a relationship, make a GET request to the relationship's path:
GET http://localhost:7474/db/data/relationship/3
==> 200 OK
==> {
==> "start" : "http://localhost:7474/db/data/node/1",
==> "data" : {
==> "intensity" : "low",
==> "since" : "day one"
==> },
==> "self" : "http://localhost:7474/db/data/relationship/3",
==> "property" : "http://localhost:7474/db/data/relationship/3/properties/{key}",
==> "properties" : "http://localhost:7474/db/data/relationship/3/properties",
==> "type" : "LOVES",
==> "extensions" : {
==> },
==> "end" : "http://localhost:7474/db/data/node/2"
==> }
To add properties to a relationship
Relationships, like nodes, can have properties too. Here is how you add a property to a relationship:
PUT http://localhost:7474/db/data/relationship/3/properties/intensity "low"
If you want to add multiple properties:
PUT http://localhost:7474/db/data/relationship/3/properties {"intensity":"low", "since":"day one"}
You can test the results of the two commands above this way:
GET http://localhost:7474/db/data/relationship/3
To create a relationship with properties
Relationship properties need not be added after creating the relationship; they can be set as the relationship is being created. Here is an example:
POST http://localhost:7474/db/data/node/1/relationships {"to" : "http://localhost:7474/db/data/node/5", "type" : "RIVALS", "data":{"intensity":"high", "since":"day one"}}
Creating, reading, updating, and deleting relationship properties works the same way as for node properties; refer to the operations on node properties in the earlier examples.
To delete a relationship
Deleting a relationship is easy. Just call the DELETE command on the relationship path. For example:
DELETE http://localhost:7474/db/data/relationship/3
To see all the relationships on a node
GET http://localhost:7474/db/data/node/1/relationships/all
To see all the incoming relationships to a node
GET http://localhost:7474/db/data/node/3/relationships/in
To see all the outgoing relationships from a node
GET http://localhost:7474/db/data/node/4/relationships/out
To see typed relationships
Append the relationship type to the end of the all, in, or out query to filter the results accordingly. Look at these examples:
GET http://localhost:7474/db/data/node/1/relationships/all/LOVES
GET http://localhost:7474/db/data/node/4/relationships/out/LOVES
GET http://localhost:7474/db/data/node/4/relationships/in/LOVES
You can specify more than one relationship type:
GET http://localhost:7474/db/data/node/4/relationships/out/LOVES&FRIENDS
Important: make sure to escape the "&" character if you are calling the URL directly, or it will be interpreted as a query-string separator. If you are using an open source Neo4j REST client, the "&" escaping is probably taken care of by the wrapper and may not even be visible to you.
In the above examples we have seen how nodes, relationships, and properties can be created, read, updated, and deleted from the Neo4j HTTP console. But the question still remains: "How do I actually use the REST API in Java, JavaScript, Node.js, Python, Rails, PHP, .NET, etc.?"
Since the REST API is just HTTP requests, all you have to do is make the appropriate HTTP request with the required data. That is relatively easy, and in practice it is even easier, because well-written open source libraries already exist for most popular development platforms. Find them at http://docs.neo4j.org/chunked/snapshot/tutorials-rest.html.
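If you'd rather not pull in a library, the calls above are easy to assemble by hand. A minimal sketch in Node.js: the function only builds the request description (the hostname, port, and /db/data prefix are the defaults used throughout this tutorial); you would pass `options` to http.request() and write `body` to the request stream.

```javascript
// Build the pieces of a Neo4j REST call. Nothing Neo4j-specific is
// needed beyond the JSON body and the Content-Type/Accept headers.
function neo4jRequest(method, path, data) {
  var body = data === undefined ? null : JSON.stringify(data);
  return {
    options: {
      hostname: "localhost",
      port: 7474,
      path: "/db/data" + path,
      method: method,
      headers: { "Content-Type": "application/json", "Accept": "application/json" }
    },
    body: body
  };
}

// The "create Archie" example from earlier, as a request description:
var req = neo4jRequest("POST", "/node", { name: "Archie" });
// req.options goes to require("http").request(); write req.body to the stream.
```

A DELETE or GET call is the same shape with no `data` argument.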
While there is a REST client for both Java and Python, you might also want to check out Neo4j embedded for Java and Python.
Even though I'd like to cover the whole REST API in this tutorial, I will have to stop here for the sake of reading comfort and content organization. There are other, more advanced topics under the REST API, namely Indexes, Cypher Queries, Built-in Algorithms, and Batch Operations. If I were to include everything here, I might as well write a whole book on the REST API. I will be covering those topics in the coming days; in the meantime, refer to the official REST API docs at http://docs.neo4j.org/chunked/milestone/rest-api.html.

How to implement pagination in MongoDB using $slice
Another technique for implementing pagination in MongoDB involves the $push and $slice operators. In this method, we store the documents in an array and use $slice to accomplish what skip()-and-limit() does, but without the overhead associated with skip().
We will be using a single root document in a collection with two fields: (i) an array to store the sub-documents, and (ii) a numerical key to store the size of the array.
// Clear the collection of any previous data and create the root document
db.companies.insert({items:[], count:0})

// Add the sub-documents to the root document
db.companies.update({}, {$push:{items:'Google'}})
db.companies.update({}, {$set:{count:1}})
db.companies.update({}, {$push:{items:'Facebook'}})
db.companies.update({}, {$set:{count:2}})
db.companies.update({}, {$push:{items:'Apple'}})
db.companies.update({}, {$set:{count:3}})
db.companies.update({}, {$push:{items:'Microsoft'}})
db.companies.update({}, {$set:{count:4}})
db.companies.update({}, {$push:{items:'Oracle'}})
db.companies.update({}, {$set:{count:5}})
db.companies.update({}, {$push:{items:'IBM'}})
db.companies.update({}, {$set:{count:6}})
db.companies.update({}, {$push:{items:'Yahoo'}})
db.companies.update({}, {$set:{count:7}})
db.companies.update({}, {$push:{items:'HP'}})
db.companies.update({}, {$set:{count:8}})
Now observe how the $slice operator works.
db.companies.find({}, {items:{$slice:[0, 3]}})
db.companies.find({}, {items:{$slice:[3, 3]}})
From the above commands you can see, you already have pagination in place. It just needs to be made dynamic, which is accomplished thus:
var skip = NUMBER_OF_ITEMS * (PAGE_NUMBER - 1)
db.companies.find({}, {items:{$slice:[skip, NUMBER_OF_ITEMS]}})
NUMBER_OF_ITEMS is the number of items to be shown on a page
PAGE_NUMBER is the current page number
For creating the pagination navigation links, use the count field to get the number of items and the number of pages.
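The skip arithmetic and the projection can be wrapped in a small helper. A sketch in plain JavaScript: the function only builds the projection document, which you would then pass to find() in the shell or through your driver.

```javascript
// Build the find() projection for page `pageNumber` (1-based) with
// `perPage` items per page, using $slice instead of skip()/limit().
function slicePage(perPage, pageNumber) {
  var skip = perPage * (pageNumber - 1);
  return { items: { $slice: [skip, perPage] } };
}

// Page 2 with 3 items per page:
var projection = slicePage(3, 2);
// projection is { items: { $slice: [3, 3] } }
// usage: db.companies.find({}, projection)
```

The total page count is Math.ceil(count / perPage), using the count field maintained alongside the array.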
This approach has its drawbacks, though:
  1. Items are no longer root documents
  2. You need to maintain a count key
  3. Some of the data structure's clarity and logic is lost
  4. The whole array must live in a single document, so MongoDB's 16 MB document size limit applies

Sean Brown A Swedish journalist was attempting to make a movie about a highly hostile Muslim “no-go” zone. Everything was going as planned, until the residents decided to give her an Islamic “welcome” while shooting the film.
Ironically, as Valentina Xhaferi was attempting to document hostility in the area, she herself came under attack by a mob of angry Muslims throwing stones at her while she was filming. Xhaferi was trying to investigate reports about police being pelted with rocks while on patrol in the Stockholm district of Tensta, which has a foreign-born population of over 70 percent, but things didn’t exactly go as planned, according to Breitbart.
Valentina Xhaferi
“They thought we crossed the limit and that we were standing on their land,” Xhaferi said.
She and a cameraman traveled to the area last week to film their report, and they were approached by a man who demanded to know why they were filming. The upset man then walked away, only to return with a mob of other men, all armed with rocks.
“Then he became very, very angry and said he’ll get stones and show us what stoning is. When I saw that he was armed with a stone I just wanted to get out of there,” said Xhaferi.
At that point, three more men approached Xhaferi and her cameraman and demanded to know what they were doing, all while the camera was rolling. They then shouted insults at the pair before kicking the camera to the ground and pouring coffee on the cameraman before running off.
Xhaferi said it was “impossible to calm them down,” and she just tried to get herself and her cameraman out of there before something horrible happened. But before they could flee, they were given an Islamic welcome that would make Muhammad proud.
“I become very anxious and had a feeling that the situation was going to explode,” she recalled. “That’s when the guy threw a stone at us.”
She ended up filming her report – a week later while under protection from a police escort. The fact she needed security points to how bad the situation has become.
“I do not take it personally, but I think it’s really bloody bad not to be able to make a recording in a public place,” she said. “That we must be protected by police.”
Sweden has seen quite the flood of Muslim migrants recently, and there are reportedly 55 similar areas in Sweden as a result. What’s worse is the current migrant crisis is only compounding the situation, which will undoubtedly lead to even more areas popping up.
Europe as a whole should be terrified of what their leaders are allowing to happen, since we know such “no-go zones” aren’t limited to just Sweden. The current flood of migrants over the borders is undoubtedly changing the cultural landscape of these countries, and could very well lead to their demise in the near future if it’s not brought under control, and fast. Just ask this pretty white reporter who found herself the victim of a stoning for absolutely no reason other than she was around Muslims.

Question: What is the equivalent of typing ls to list folders and files in Linux in a Windows command prompt?

Answer: Type DIR to show the folders and files in the command prompt. DIR is the MS-DOS equivalent of the Unix ls command, listing the files and folders in the current directory. Below is a large list of Linux terminal commands and their Windows equivalents.
To get help on a Windows command, use the /? option, for example date /?.
Windows command     Unix command     Notes
arp                 arp
assign              ln               Create a file link.
assign              ln -s            On Unix, a directory may not have multiple hard links, so a symbolic link must be created with ln -s.
assoc               file
at                  at
attrib              chmod            Sets attributes/permissions on files and directories.
cd                  cd               On Windows, cd alone prints the current directory; on Unix, cd alone returns the user to his home directory.
cd                  pwd              On Windows, cd alone prints the current directory.
chkdsk              fsck             Checks the filesystem and repairs corruption on hard drives.
cls                 clear            Clears the terminal screen.
copy                cp
date                date             date on Unix prints the current date and time. date and time on Windows print the date and time respectively, and prompt for a new value.
del                 rm
deltree             rm -r            Recursively deletes an entire directory tree.
dir                 ls               "dir" also works on some versions of Unix.
doskey /h (or F7)   history          The Unix history is part of the Bash shell.
edit                vi               edit brings up a simple text editor on Windows. On Unix, set the EDITOR environment variable to your preferred editor.
exit                exit             On Unix, pressing Ctrl-D also logs the user out of the shell.
explorer            nautilus         explorer brings up the file browser on Windows.
fc                  diff
find                grep
ftp                 ftp
help                man              "help" by itself prints all the commands.
hostname            hostname
ipconfig /all       ifconfig -a      The /all option also shows the MAC address of the Windows PC.
mem                 top              Shows system status.
mkdir               mkdir
more                more
move                mv
net session         w
net statistics      uptime
nslookup            nslookup
ping                ping
print               lpr              Sends a file to a printer.
shutdown -r         shutdown -r      Reboots the machine.
regedit             edit /etc/*      The Unix equivalent of the Windows registry is the set of files under /etc and /usr/local/etc, edited with a text editor rather than a special-purpose program.
rmdir               rmdir
rmdir /s            rm -r            Windows has a y/n prompt. To get the prompt on Unix, use rm -i; the i means "interactive".
set                 env              set on Windows prints all environment variables. For an individual variable, set NAME is the same as echo $NAME on Unix.
set Path            echo $PATH       Prints the value of the PATH environment variable.
shutdown            shutdown         Without an option, the Windows version prints a help message.
shutdown -s         shutdown -h      Also needs the -f option on Windows if logged in remotely.
sort                sort
start               &                On Unix, to start a job in the background, use command &. On Windows, the equivalent is start command.
systeminfo          uname -a
tasklist            ps               tasklist is not available on some versions of Windows.
title               (none)           On Unix, changing the title of the terminal window is possible but complicated; search for "change title xterm".
tracert             traceroute
tree                find / ls -R     On Windows, use tree | find "string".
type                cat
ver                 uname -a
xcopy               cp -R            Recursively copies a directory tree.

Step by step tutorial on how to setup your Node.js project in Eclipse IDE

This tutorial shows you how to set up a professional web application project using Node.js and the Express framework in the Eclipse IDE.


First, download and install Node.js on your machine if you haven’t already.
Then install the Express framework from the Node.js command line:
$ npm install -g express
Now download the Eclipse IDE for Java EE Developers, the Juno package. Any other Eclipse package will do, but I recommend the Juno EE package as it comes with many web development packages.
Eclipse IDE

Install Node Eclipse

Update: Nodeclipse has a new Eclipse-based IDE called Enide, prepackaged with all the necessary software.
Download Enide from http://www.nodeclipse.org/enide/studio/2014/ and come back here for the rest of the tutorial to learn how to use it :)
1. Drag and drop the install button into a running Eclipse (menu area) to install Nodeclipse. Or, from the Eclipse menu, click Help > Install New Software

Eclipse Help Menu
2. Enter the update site URL into the "Work with" text box: http://marketplace.eclipse.org/marketplace-client-intro?mpc_install=1520853
UPDATE: for the latest version, read this page: http://www.nodeclipse.org/updates/
Uncheck "Contact all update sites during install to find required software" to make the installation quicker.
You should see the centre box filled with a list of plugins; the first three are essential and the rest are optional. Select the ones you want and click Next.
3. Review the features and accept the licence.
4. You will be asked if you would like to restart Eclipse; click Restart Now.
5. After Eclipse restarts, switch to the Node perspective: menu > Window > Open Perspective > Other, and select Node.

Creating a new Node Project

From the Eclipse menu select: File > New > Node Express Project
Choose a name and location for your project and select a template for your HTML files. You have two options: the Jade template or the EJS template. I recommend EJS, but it is totally up to you. If you don’t have experience with these templates, read more here:
EJS Template                 Jade Template
Select your template and click Finish.
You should see Eclipse start to add the required libraries, with the progress shown in the Console panel, as well as a nicely structured Node.js app under the Project Explorer panel.
Node.js App in Eclipse
Node.js App in Eclipse
In order to run the project as localhost we need to add a run configuration.
From the Eclipse menu select: Run > Run Configurations
Run Configurations
Select Node Application from the list on the left and click the 'new configuration' icon.
The settings appear on the right side. Click Search and search for app.js, which is our Node application in this project, then click Run.
On the console you should see a message like: Express server listening on port 3000
Now you can see your web app running on localhost port 3000. Congratulations, you have your Node.js app running at: http://localhost:3000
Node Express App

Explaining Project Structure

If you are familiar with a Model View Controller (MVC) project structure, this is very similar to it.
Node Project Structure
  • Looking at the Project Explorer panel, from the top we have the JavaScript Resources folder. This is where the JavaScript libraries Eclipse includes by default are located.
  • Then we have the public folder. This is where publicly accessible files, such as CSS and client-side JavaScript files, are located.
  • The next folder is 'routes'. This is where you implement your routes and their functionality.
  • Next is the 'views' folder, where all the view files are located. Depending on the template system you chose when creating the project, these are .ejs or .jade files.
  • Then we have the core Node.js application logic inside the app.js file. This is the file we run in order to run the application.
  • Finally, we have the package.json file, the standard Node.js file for managing all the packages and dependencies.
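To make the 'routes' folder concrete, here is a sketch of what a generated route handler amounts to. The shape follows Express conventions of that era (a module exporting handler functions that app.js wires up with something like app.get('/', routes.index)); the fake res object below is purely for illustration, standing in for the real response object Express would pass in.

```javascript
// A route module exports handler functions; app.js maps URLs to them.
var routes = {
  index: function (req, res) {
    // Render the "index" view (index.ejs or index.jade) with some locals.
    res.render("index", { title: "Express" });
  }
};

// Exercise the handler without a server, using a tiny fake `res`
// that just records what the handler asked it to render:
var rendered;
routes.index({}, {
  render: function (view, locals) {
    rendered = { view: view, locals: locals };
  }
});
// rendered is { view: "index", locals: { title: "Express" } }
```

This separation is what makes the structure MVC-like: the route decides which view to render and with what data, and the view file in 'views' handles the HTML.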
I hope this was useful. In the next tutorial I will show you how to use this project to build a more complex Node.js web app that is responsive on all devices, using the famous Twitter Bootstrap.
Any questions? Please leave a comment :)

