Update libhdfs engine documentation and options
Signed-off-by: Jens Axboe <axboe@fb.com>
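
For reference, a write-phase job of the kind the updated text describes might
look like the sketch below. The name-node host and port are placeholders, not
values taken from this patch; replace them with your own deployment's:

```
[global]
runtime=300

[hdfs-populate]
; hypothetical name-node host,port - replace with your own
filename=namenode.example.com,9000
ioengine=libhdfs
; rw=write creates the files that later read/randread jobs pick from
rw=write
bs=256k
```

As the documentation notes, libhdfs typically also needs environment
variables set before fio runs, for example JAVA_HOME and a CLASSPATH
containing the Hadoop jars (commonly built with `hadoop classpath --glob`);
the exact setup depends on the Hadoop installation.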
diff --git a/HOWTO b/HOWTO
index d728353..a0b89c8 100644
--- a/HOWTO
+++ b/HOWTO
@@ -694,7 +694,21 @@
having to go through FUSE. This ioengine
defines engine specific options.
- hdfs Read and write through Hadoop (HDFS).
+ libhdfs Read and write through Hadoop (HDFS).
+ The 'filename' option is used to specify the
+ host and port of the HDFS name-node to
+ connect to. This engine interprets offsets
+ somewhat differently: since files in HDFS
+ cannot be modified once created, random
+ writes are not possible. To imitate this,
+ the libhdfs engine expects a set of small
+ files to be created on HDFS, and it will
+ randomly pick one of those files based on
+ the offset generated by the fio backend
+ (see the example job file for how to create
+ such files, using the rw=write option).
+ Note that you may need to set environment
+ variables for hdfs/libhdfs to work properly.
external Prefix to specify loading an external
IO engine object file. Append the engine
diff --git a/examples/libhdfs.fio b/examples/libhdfs.fio
new file mode 100644
index 0000000..d5c0ba6
--- /dev/null
+++ b/examples/libhdfs.fio
@@ -0,0 +1,8 @@
+[global]
+runtime=300
+
+[hdfs]
+filename=dfs-perftest-base.dfs-perftest-base,9000
+ioengine=libhdfs
+rw=read
+bs=256k
diff --git a/fio.1 b/fio.1
index b5ff3cc..c61948b 100644
--- a/fio.1
+++ b/fio.1
@@ -613,8 +613,16 @@
having to go through FUSE. This ioengine defines engine specific
options.
.TP
-.B hdfs
-Read and write through Hadoop (HDFS)
+.B libhdfs
+Read and write through Hadoop (HDFS). The \fBfilename\fR option is used to
+specify the host and port of the HDFS name-node to connect to. This engine
+interprets offsets somewhat differently: since files in HDFS cannot be
+modified once created, random writes are not possible. To imitate this, the
+libhdfs engine expects a set of small files to be created on HDFS, and it
+will randomly pick one of those files based on the offset generated by the
+fio backend (see the example job file for how to create such files, using
+the rw=write option). Note that you may need to set environment variables
+for hdfs/libhdfs to work properly.
.RE
.P
.RE
diff --git a/options.c b/options.c
index 484efc1..3acfdc8 100644
--- a/options.c
+++ b/options.c
@@ -672,7 +672,7 @@
}
td->o.numa_memnodes = strdup(nodelist);
numa_free_nodemask(verify_bitmask);
-
+
break;
case MPOL_LOCAL:
case MPOL_DEFAULT:
@@ -1542,7 +1542,7 @@
},
#endif
#ifdef CONFIG_LIBHDFS
- { .ival = "hdfs",
+ { .ival = "libhdfs",
.help = "Hadoop Distributed Filesystem (HDFS) engine"
},
#endif