Commit
Push from FB repo (hive99-r11750-01282011)
jgray committed May 25, 2011
1 parent e8ca5fa commit bbfed86
Show file tree
Hide file tree
Showing 246 changed files with 21,458 additions and 7,000 deletions.
71 changes: 70 additions & 1 deletion FB-CHANGES.txt
@@ -4,6 +4,9 @@ the patches applied from issues referenced in CHANGES.txt.

Release 0.20.3 + FB - Unreleased.


MAPREDUCE-2218 schedule additional tasks when killactions are dispatched
MAPREDUCE-2162 handle stddev > mean
MAPREDUCE-2141 Add an "extra data" field to Task for use by Mesos
MAPREDUCE-2118 optimize getJobSetupAndCleanupTasks (by removing global lock - r9768)
MAPREDUCE-2157 taskLauncher threads in TT can die because of unexpected interrupts
@@ -123,5 +126,71 @@ Release 0.20.3 + FB - Unreleased.
HDFS-1458 Improve checkpoint performance by avoiding unnecessary
image downloads.
HADOOP-7001 Allow run-time configuration of configured nodes.
HADOOP-7049 Fixed TestReconfiguration.
HADOOP-7049 Fixed TestReconfiguration.
MAPREDUCE-1752 HarFileSystem.getFileBlockLocations()
HDFS-1524 Image loader should make sure to read every byte in
image file
HDFS-1526 Dfs client name for a map/reduce task should have some
randomness
HADOOP-7060 A more elegant FileSystem#listCorruptFileBlocks API
HDFS-1533 A more elegant FileSystem#listCorruptFileBlocks API
MAPREDUCE-2215 A more elegant FileSystem#listCorruptFileBlocks API
HDFS-1477 Make NameNode Reconfigurable.
MAPREDUCE-2198 Allow FairScheduler to control the number of slots on each
TaskTracker
HDFS-1537 Add a metric for tracking the number of reported
corrupt replicas
HDFS-1536 Improve HDFS WebUI
HDFS-1540 Make Datanode handle errors from namenode.register call elegantly
HADOOP-6148 Implement a pure Java CRC32 calculator
HADOOP-6166 Improve PureJavaCrc32
MAPREDUCE-782 Use PureJavaCrc32 in mapreduce spills
HDFS-1550 NPE when listing a file with no location
HDFS-1553 Hftp file read should retry a different datanode if the
chosen best datanode fails to connect to NameNode
HDFS-1558 Optimize startFileInternal to do less calls to FSDir
HDFS-1508 Ability to do savenamespace without being in safemode
MAPREDUCE-2239 BlockPlacementPolicyRaid should call getBlockLocations
only when necessary
HDFS-1539 Prevent data loss when a cluster suffers a power loss
MAPREDUCE-2240 DistBlockFixer could sleep indefinitely.
HADOOP-4885 Do not try to restore failed replicas of Name Node storage (at checkpoint time)
HDFS-1541 Not marking datanodes dead when namenode in safemode
HADOOP-7088 JMX Bean that exposes version and build information
HADOOP-6609 UTF8 Fixed deadlock in RPC by replacing shared static
DataOutputBuffer in the UTF8 class with a thread local variable.
MAPREDUCE-2245 Failure metrics for block fixer.
HADOOP-6904 A baby step towards inter-version RPC communications.
HDFS-1335 HDFS side of HADOOP-6904.
MAPREDUCE-2263 MAP/REDUCE side of HADOOP-6904.
HDFS-1509 Resync discarded directories in fs.name.dir during saveNamespace command
MAPREDUCE-2275 RaidNode should monitor violations of block placement
MAPREDUCE-2248 DistributedRaidFileSystem unraids only the corrupt block.
HDFS-1578 First step towards data transfer protocol compatibility:
a new RPC for fetching data transfer protocol version
MAPREDUCE-1818 Generalize dist raid scheduler options.
MAPREDUCE-2274 Generalize block fixer scheduler options.
MAPREDUCE-2279 Improper byte -> int conversion in DistributedRaidFileSystem
HDFS-1577 Fall back to a random datanode when bestNode fails.
MAPREDUCE-2292 Provide a shell interface for querying the status of FairScheduler
MAPREDUCE-2267 RAID code can now read block streams in parallel.
MAPREDUCE-2302 Add static factory methods in GaloisField
HDFS-270 Datanode upgrade should process dfs.data.dirs in parallel.
MAPREDUCE-2312 Better error handling in RaidShell.
MAPREDUCE-2313 Raid code closes open streams.
HDFS-1443 Batch the calls in DataStorage to FileUtil.createHardLink()
HADOOP-6833 IPC leaks call parameters when exceptions thrown.
HDFS-1622 Update TestDFSUpgradeFromImage to test batch hardlink improvement.
HDFS-1614 Provide an option to saveNamespace to save namespace uncompressed.
MAPREDUCE-2320 RAID DistBlockFixer should limit pending jobs.
HDFS-1627 Fix NPE in SNN.
MAPREDUCE-2329 RAID BlockFixer should exclude tmp files.
MAPREDUCE-2347 RAID blockfixer should check file blocks after the file is
fixed
MAPREDUCE-2352 RAID blockfixer can use a heuristic to find unfixable
files
MAPREDUCE-2368 RAID DFS should handle zero-length files.
HDFS-1775 FSNamesystem.readLock should be held while doing getContentSummary
HDFS-1776 Bug in concat code.
HDFS-1780 Make saving the fsimage on startup configurable
HDFS-1803 Display the progress of the fsimage loading
5 changes: 2 additions & 3 deletions NOTICE.txt
@@ -1,7 +1,6 @@
This product includes software developed by The Apache Software
Foundation (http://www.apache.org/).

This product includes software developed by Yahoo! Inc.,
powering the largest Hadoop clusters in the Universe!
(http://developer.yahoo.com/hadoop).
This product includes software developed by Facebook.
(http://github.com/facebook).

18 changes: 18 additions & 0 deletions bin/hadoop
@@ -192,6 +192,12 @@ unset IFS
if [ "$COMMAND" = "namenode" ] ; then
CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode'
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_GC_LOG_OPTS $HADOOP_NAMENODE_OPTS"
elif [ "$COMMAND" = "avatarshell" ] ; then
CLASS='org.apache.hadoop.hdfs.AvatarShell'
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_GC_LOG_OPTS $HADOOP_CLIENT_OPTS"
elif [ "$COMMAND" = "avatarzk" ] ; then
CLASS='org.apache.hadoop.hdfs.AvatarZKShell'
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_GC_LOG_OPTS $HADOOP_CLIENT_OPTS"
elif [ "$COMMAND" = "avatarnode" ] ; then
CLASS='org.apache.hadoop.hdfs.server.namenode.AvatarNode'
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_GC_LOG_OPTS $HADOOP_NAMENODE_OPTS"
@@ -233,9 +239,15 @@ elif [ "$COMMAND" = "jmxget" ] ; then
elif [ "$COMMAND" = "jobtracker" ] ; then
CLASS=org.apache.hadoop.mapred.JobTracker
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_GC_LOG_OPTS $HADOOP_JOBTRACKER_OPTS"
if [ -n "$HADOOP_INSTANCE" ] ; then
CMDLINE_OPTS="-instance $HADOOP_INSTANCE $CMDLINE_OPTS"
fi
elif [ "$COMMAND" = "tasktracker" ] ; then
CLASS=org.apache.hadoop.mapred.TaskTracker
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_GC_LOG_OPTS $HADOOP_TASKTRACKER_OPTS"
if [ -n "$HADOOP_INSTANCE" ] ; then
CMDLINE_OPTS="-instance $HADOOP_INSTANCE $CMDLINE_OPTS"
fi
elif [ "$COMMAND" = "job" ] ; then
CLASS=org.apache.hadoop.mapred.JobClient
elif [ "$COMMAND" = "queue" ] ; then
@@ -262,6 +274,12 @@ elif [ "$COMMAND" = "archive" ] ; then
elif [ "$COMMAND" = "sampler" ] ; then
CLASS=org.apache.hadoop.mapred.lib.InputSampler
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
elif [ "$COMMAND" = "hourglass" ] ; then
CLASS=org.apache.hadoop.mapred.HourGlass
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
elif [ "$COMMAND" = "fairscheduler" ] ; then
CLASS=org.apache.hadoop.mapred.FairSchedulerShell
HADOOP_OPTS="$HADOOP_OPTS $HADOOP_CLIENT_OPTS"
else
CLASS=$COMMAND
fi
17 changes: 17 additions & 0 deletions bin/hadoop-config.sh
@@ -66,3 +66,20 @@ then
export HADOOP_SLAVES="${HADOOP_CONF_DIR}/$slavesfile"
fi
fi

# check to see if the instance is given
if [ $# -gt 1 ]
then
if [ "--instance" = "$1" ]
then
shift
instance=$1
if [ "$instance" != "0" ] && [ "$instance" != "1" ]
then
echo "Instance must be 0 or 1" >&2
exit 1
fi
shift
export HADOOP_INSTANCE=$instance
fi
fi
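The `--instance` parsing added to bin/hadoop-config.sh above can be exercised standalone. The sketch below wraps it in a function (`parse_instance` is an illustrative name, not part of the patch) and uses the POSIX-correct two-test form of the validation, since `&&` is not valid inside a single `[ ]`:

```shell
#!/bin/sh
# Sketch of the --instance handling added to bin/hadoop-config.sh.
# parse_instance is a hypothetical wrapper for illustration only.
parse_instance() {
  if [ $# -gt 1 ]; then
    if [ "--instance" = "$1" ]; then
      shift
      instance=$1
      # POSIX [ ] does not support &&; join two tests at the shell level.
      if [ "$instance" != "0" ] && [ "$instance" != "1" ]; then
        echo "Instance must be 0 or 1" >&2
        return 1
      fi
      shift
      export HADOOP_INSTANCE=$instance
    fi
  fi
}

parse_instance --instance 1 start jobtracker && echo "HADOOP_INSTANCE=$HADOOP_INSTANCE"
```

With `HADOOP_INSTANCE` exported, the jobtracker/tasktracker branches in bin/hadoop can then prepend `-instance $HADOOP_INSTANCE` to `CMDLINE_OPTS`, which is how two daemons of the same kind coexist on one host.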
8 changes: 6 additions & 2 deletions bin/hadoop-daemon.sh
@@ -28,7 +28,7 @@
# HADOOP_NICENESS The scheduling priority for daemons. Defaults to 0.
##

usage="Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] (start|stop) <hadoop-command> <args...>"
usage="Usage: hadoop-daemon.sh [--config <conf-dir>] [--hosts hostlistfile] [--instance <0|1>] (start|stop) <hadoop-command> <args...>"

# if no args specified, show usage
if [ $# -le 1 ]; then
@@ -85,7 +85,11 @@ if [ "$HADOOP_PID_DIR" = "" ]; then
fi

if [ "$HADOOP_IDENT_STRING" = "" ]; then
export HADOOP_IDENT_STRING="$USER"
ident_string=$USER
if [ -n "$HADOOP_INSTANCE" ]; then
ident_string="${ident_string}-${HADOOP_INSTANCE}"
fi
export HADOOP_IDENT_STRING=$ident_string
fi

# some variables
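The `HADOOP_IDENT_STRING` derivation added to bin/hadoop-daemon.sh above is small enough to sketch on its own; `build_ident_string` is an illustrative helper, not part of the patch:

```shell
#!/bin/sh
# Sketch of the ident-string logic from bin/hadoop-daemon.sh: append
# "-<instance>" so two daemons of the same kind on one host get
# distinct pid-file and log names.
build_ident_string() {
  ident_string=$USER
  if [ -n "$HADOOP_INSTANCE" ]; then
    ident_string="${ident_string}-${HADOOP_INSTANCE}"
  fi
  echo "$ident_string"
}

USER=hadoop
HADOOP_INSTANCE=1
build_ident_string   # prints "hadoop-1"
```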
96 changes: 44 additions & 52 deletions build.xml
@@ -365,21 +365,6 @@
</target>

<target name="compile-mapred-classes" depends="compile-core-classes">
<!-- Compile Java files (excluding JSPs) checking warnings -->
<javac
encoding="${build.encoding}"
srcdir="${mapred.src.dir}"
includes="org/apache/hadoop/**/*.java"
destdir="${build.classes}"
debug="${javac.debug}"
optimize="${javac.optimize}"
target="${javac.version}"
source="${javac.version}"
deprecation="${javac.deprecation}">
<compilerarg line="${javac.args} ${javac.args.warnings}" />
<classpath refid="classpath"/>
</javac>

<jsp-compile
uriroot="${src.webapps}/task"
outputdir="${build.src}"
Expand All @@ -394,14 +379,10 @@
webxml="${build.webapps}/job/WEB-INF/web.xml">
</jsp-compile>

<subant target="compile">
<property name="version" value="${version}"/>
<fileset file="${contrib.dir}/fairscheduler/build.xml"/>
</subant>

<!-- Compile Java files (excluding JSPs) checking warnings -->
<javac
encoding="${build.encoding}"
srcdir="${build.src}"
srcdir="${mapred.src.dir};${build.src}"
includes="org/apache/hadoop/**/*.java"
destdir="${build.classes}"
debug="${javac.debug}"
@@ -410,10 +391,7 @@
source="${javac.version}"
deprecation="${javac.deprecation}">
<compilerarg line="${javac.args} ${javac.args.warnings}" />
<classpath>
<path location="${build.dir}/contrib/fairscheduler/classes"/>
<path refid="classpath"/>
</classpath>
<classpath refid="classpath"/>
</javac>

<copy todir="${build.classes}">
@@ -494,45 +472,50 @@
<mkdir dir="${build.native}/src/org/apache/hadoop/io/compress/zlib"/>
<mkdir dir="${build.native}/src/org/apache/hadoop/io/compress/lzma"/>

<javah
classpath="${build.classes}"
destdir="${build.native}/src/org/apache/hadoop/io/compress/zlib"
<javah
classpath="${build.classes}"
destdir="${build.native}/src/org/apache/hadoop/io/compress/zlib"
force="yes"
verbose="yes"
>
<class name="org.apache.hadoop.io.compress.zlib.ZlibCompressor" />
verbose="yes"
>
<class name="org.apache.hadoop.io.compress.zlib.ZlibCompressor" />
<class name="org.apache.hadoop.io.compress.zlib.ZlibDecompressor" />
</javah>
</javah>

<javah
classpath="${build.classes}"
destdir="${build.native}/src/org/apache/hadoop/io/compress/lzma"
<javah
classpath="${build.classes}"
destdir="${build.native}/src/org/apache/hadoop/io/compress/lzma"
force="yes"
verbose="yes"
>
<class name="org.apache.hadoop.io.compress.lzma.LzmaCompressor" />
verbose="yes"
>
<class name="org.apache.hadoop.io.compress.lzma.LzmaCompressor" />
<class name="org.apache.hadoop.io.compress.lzma.LzmaDecompressor" />
</javah>

<exec dir="${build.native}" executable="sh" failonerror="true">
<env key="OS_NAME" value="${os.name}"/>
<env key="OS_ARCH" value="${os.arch}"/>
<env key="JVM_DATA_MODEL" value="${sun.arch.data.model}"/>
<env key="HADOOP_NATIVE_SRCDIR" value="${native.src.dir}"/>
<arg line="${native.src.dir}/configure LDFLAGS='-L${basedir}/nativelib/lzma' CPPFLAGS='-I${basedir}/nativelib/lzma'"/>
</javah>

<exec dir="${build.native}" executable="sh" failonerror="true">
<env key="OS_NAME" value="${os.name}"/>
<env key="OS_ARCH" value="${os.arch}"/>
<env key="JVM_DATA_MODEL" value="${sun.arch.data.model}"/>
<env key="HADOOP_NATIVE_SRCDIR" value="${native.src.dir}"/>
<arg line="${native.src.dir}/configure LDFLAGS='-L${basedir}/nativelib/lzma' CPPFLAGS='-I${basedir}/nativelib/lzma'"/>
</exec>

<exec dir="${build.native}" executable="${make.cmd}" failonerror="true">
<env key="OS_NAME" value="${os.name}"/>
<env key="OS_ARCH" value="${os.arch}"/>
<env key="JVM_DATA_MODEL" value="${sun.arch.data.model}"/>
<env key="HADOOP_NATIVE_SRCDIR" value="${native.src.dir}"/>
<env key="JVM_DATA_MODEL" value="${sun.arch.data.model}"/>
<env key="HADOOP_NATIVE_SRCDIR" value="${native.src.dir}"/>
</exec>

<exec dir="${build.native}" executable="sh" failonerror="true">
<arg line="${build.native}/libtool --mode=install cp ${build.native}/lib/libhadoop.la ${build.native}/lib"/>
<exec dir="${build.native}" executable="sh" failonerror="true">
<arg line="${build.native}/libtool --mode=install cp ${build.native}/lib/libhadoop.la ${build.native}/lib"/>
</exec>
<copy file="${basedir}/nativelib/lzma/liblzma.so" todir="${build.native}/lib"/>
<delete>
<fileset dir="${build.native}/lib" includes="liblzma.so*"/>
</delete>
<copy file="${basedir}/nativelib/lzma/liblzma.so" tofile="${build.native}/lib/liblzma.so"/>
<copy file="${basedir}/nativelib/lzma/liblzma.so" tofile="${build.native}/lib/liblzma.so.0"/>
<copy file="${basedir}/nativelib/lzma/liblzma.so" tofile="${build.native}/lib/liblzma.so.5"/>
</target>
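The liblzma install step above first deletes any stale `liblzma.so*` copies and then installs the same shared object under its plain, `.0`, and `.5` soversion names. A minimal shell sketch of that pattern, with illustrative paths (the real build uses Ant `<delete>` and `<copy>` tasks):

```shell
#!/bin/sh
# Illustrative sketch of the liblzma install step from build.xml:
# remove stale copies, then install one .so under several soversion
# names so the dynamic linker finds it whichever SONAME is requested.
src=nativelib/lzma/liblzma.so
dest=build/native/lib

mkdir -p "$dest" "$(dirname "$src")"
: > "$src"                      # stand-in for the real shared object

rm -f "$dest"/liblzma.so*       # equivalent of the <delete> fileset
for name in liblzma.so liblzma.so.0 liblzma.so.5; do
  cp "$src" "$dest/$name"       # equivalent of the three <copy> tasks
done
ls "$dest"
```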

<target name="compile-core"
@@ -722,7 +705,7 @@
<copy file="${test.src.dir}/org/apache/hadoop/mapred/sharedTest1/sharedTest.txt" tofile="${test.cache.data}/sharedTest1/sharedTest2.txt"/>
<copy file="${test.src.dir}/org/apache/hadoop/mapred/sharedTest2/sharedTest.txt" todir="${test.cache.data}/sharedTest2"/>
<copy file="${test.src.dir}/org/apache/hadoop/mapred/sharedTest1/sharedTest.zip" todir="${test.cache.data}/sharedTest1"/>
<copy file="${test.src.dir}/org/apache/hadoop/hdfs/hadoop-14-dfs-dir.tgz" todir="${test.cache.data}"/>
<copy file="${test.src.dir}/org/apache/hadoop/hdfs/hadoop-26-dfs-dir.tgz" todir="${test.cache.data}"/>
<copy file="${test.src.dir}/org/apache/hadoop/hdfs/hadoop-dfs-dir.txt" todir="${test.cache.data}"/>
<copy file="${test.src.dir}/org/apache/hadoop/cli/testConf.xml" todir="${test.cache.data}"/>
<copy file="${test.src.dir}/org/apache/hadoop/cli/clitest_data/data15bytes" todir="${test.cache.data}"/>
@@ -784,6 +767,7 @@
value="${build.native}/lib:${lib.dir}/native/${build.platform}"/>
<sysproperty key="install.c++.examples" value="${install.c++.examples}"/>
<sysproperty key="user.home" value="${test.user.home}"/>
<env key="LD_LIBRARY_PATH" value="${build.native}/lib${path.separator}${env.LD_LIBRARY_PATH}"/>
<!-- set io.compression.codec.lzo.class in the child jvm only if it is set -->
<syspropertyset dynamic="no">
<propertyref name="io.compression.codec.lzo.class"/>
@@ -1297,6 +1281,14 @@
<delete dir="${build.dir}"/>
<delete dir="${docs.src}/build"/>
<delete file="${jdiff.xml.dir}/hadoop_${version}.xml"/>
<delete file="${conf.dir}/hdfs-site.xml"/>
<delete file="${conf.dir}/core-site.xml"/>
<delete file="${conf.dir}/hadoop-policy.xml"/>
<delete file="${conf.dir}/capacity-scheduler.xml"/>
<delete file="${conf.dir}/mapred-site.xml"/>
<delete file="${conf.dir}/mapred-queue-acls.xml"/>
<delete file="${conf.dir}/slaves"/>
<delete file="${conf.dir}/masters"/>
</target>

<!-- ================================================================== -->
4 changes: 2 additions & 2 deletions ivy/libraries.properties
@@ -46,9 +46,9 @@ jasper.version=5.5.12
jsp.version=2.1
jsp-api.version=5.5.12
jets3t.version=0.6.1
jetty.version=6.1.25
jetty.version=6.1.26
jetty.jsp.version=6.1.14
jetty-util.version=6.1.25
jetty-util.version=6.1.26
junit.version=4.5
jdiff.version=1.0.9
json.version=1.0
Binary file modified nativelib/lzma/liblzma.so