hadoop-xdfs-1.0.zip

Source code

https://github.com/xinwuyun/XidianFileSystem

Environment

  • OS: Windows
  • JDK: 1.7 (only tested with 1.7)
  • Hadoop: 2.7.3

Usage

  1. Unzip the archive.
  2. Put the jar into hadoop-2.7.3\share\hadoop\hdfs\lib.
  3. Edit hadoop-2.7.3/etc/hadoop/core-site.xml and register the file system implementation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.xidian.impl</name>
    <value>org.apache.hadoop.fs.xd.XdFileSystem</value>
  </property>
</configuration>
```
  4. Test from the command line.

:::warning
Make sure the referenced files exist before testing.<br />The path format is `xidian://` + `path`. The `path` must not include a volume name (C:, D:, etc.), and segments must be separated with forward slashes, not `\`.
:::

```shell
hadoop fs -ls xidian:///test
hadoop fs -mkdir xidian:///test/new_folder
hadoop fs -cp xidian:///test/test.txt xidian:///test/data
hadoop fs -copyFromLocal file:///tmp/back.txt xidian:///test/data
hadoop fs -cat xidian:///test/test.txt
```
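The path rule in the warning above (drop the volume name, use forward slashes) can be sketched as a small helper. This is a hypothetical illustration, not part of the project; the function name `to_xidian_uri` is made up here:

```python
from pathlib import PureWindowsPath

def to_xidian_uri(win_path: str) -> str:
    """Turn a Windows path like 'C:\\test\\data' into 'xidian:///test/data'.

    Hypothetical helper: strips the drive/volume anchor (the part the
    warning says must be omitted) and joins the rest with forward slashes.
    """
    p = PureWindowsPath(win_path)
    # parts[0] is the anchor ('C:\\') when the path is absolute; drop it.
    parts = p.parts[1:] if p.anchor else p.parts
    return "xidian:///" + "/".join(parts)

print(to_xidian_uri(r"C:\test\data"))       # xidian:///test/data
print(to_xidian_uri(r"D:\test\test.txt"))   # xidian:///test/test.txt
```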

My directory
(screenshot: image.png)
Run result
(screenshot: image.png)

MapReduce

PS C:\hadoop\hadoop-2.7.3\share\hadoop\mapreduce> hadoop jar hadoop-mapreduce-examples.jar wordcount xidian:///test/data xidian:///test/output1
22/05/16 23:10:12 INFO xd.XdFileSystem: *** Using Xd file system ***
22/05/16 23:10:12 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
22/05/16 23:10:12 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
22/05/16 23:10:12 INFO input.FileInputFormat: Total input paths to process : 3
22/05/16 23:10:13 INFO mapreduce.JobSubmitter: number of splits:3
22/05/16 23:10:13 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1418680143_0001
22/05/16 23:10:13 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
22/05/16 23:10:13 INFO mapreduce.Job: Running job: job_local1418680143_0001
22/05/16 23:10:13 INFO mapred.LocalJobRunner: OutputCommitter set in config null
22/05/16 23:10:13 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
22/05/16 23:10:13 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
22/05/16 23:10:13 INFO mapred.LocalJobRunner: Waiting for map tasks
22/05/16 23:10:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1418680143_0001_m_000000_0
22/05/16 23:10:13 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
22/05/16 23:10:13 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
22/05/16 23:10:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@2d6ff346
22/05/16 23:10:13 INFO mapred.MapTask: Processing split: xidian:/test/data/test.txt:0+37
22/05/16 23:10:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
22/05/16 23:10:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
22/05/16 23:10:13 INFO mapred.MapTask: soft limit at 83886080
22/05/16 23:10:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
22/05/16 23:10:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
22/05/16 23:10:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
22/05/16 23:10:13 INFO mapred.LocalJobRunner:
22/05/16 23:10:13 INFO mapred.MapTask: Starting flush of map output
22/05/16 23:10:13 INFO mapred.MapTask: Spilling map output
22/05/16 23:10:13 INFO mapred.MapTask: bufstart = 0; bufend = 57; bufvoid = 104857600
22/05/16 23:10:13 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214376(104857504); length = 21/6553600
22/05/16 23:10:13 INFO mapred.MapTask: Finished spill 0
22/05/16 23:10:13 INFO mapred.Task: Task:attempt_local1418680143_0001_m_000000_0 is done. And is in the process of committing
22/05/16 23:10:13 INFO mapred.LocalJobRunner: map
22/05/16 23:10:13 INFO mapred.Task: Task 'attempt_local1418680143_0001_m_000000_0' done.
22/05/16 23:10:13 INFO mapred.LocalJobRunner: Finishing task: attempt_local1418680143_0001_m_000000_0
22/05/16 23:10:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1418680143_0001_m_000001_0
22/05/16 23:10:13 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
22/05/16 23:10:13 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
22/05/16 23:10:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@78cd3e08
22/05/16 23:10:13 INFO mapred.MapTask: Processing split: xidian:/test/data/back.txt:0+33
22/05/16 23:10:13 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
22/05/16 23:10:13 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
22/05/16 23:10:13 INFO mapred.MapTask: soft limit at 83886080
22/05/16 23:10:13 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
22/05/16 23:10:13 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
22/05/16 23:10:13 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
22/05/16 23:10:13 INFO mapred.LocalJobRunner:
22/05/16 23:10:13 INFO mapred.MapTask: Starting flush of map output
22/05/16 23:10:13 INFO mapred.MapTask: Spilling map output
22/05/16 23:10:13 INFO mapred.MapTask: bufstart = 0; bufend = 50; bufvoid = 104857600
22/05/16 23:10:13 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214380(104857520); length = 17/6553600
22/05/16 23:10:13 INFO mapred.MapTask: Finished spill 0
22/05/16 23:10:13 INFO mapred.Task: Task:attempt_local1418680143_0001_m_000001_0 is done. And is in the process of committing
22/05/16 23:10:13 INFO mapred.LocalJobRunner: map
22/05/16 23:10:13 INFO mapred.Task: Task 'attempt_local1418680143_0001_m_000001_0' done.
22/05/16 23:10:13 INFO mapred.LocalJobRunner: Finishing task: attempt_local1418680143_0001_m_000001_0
22/05/16 23:10:13 INFO mapred.LocalJobRunner: Starting task: attempt_local1418680143_0001_m_000002_0
22/05/16 23:10:13 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
22/05/16 23:10:13 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
22/05/16 23:10:13 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@53a46b20
22/05/16 23:10:13 INFO mapred.MapTask: Processing split: xidian:/test/data/test1.txt:0+25
22/05/16 23:10:14 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
22/05/16 23:10:14 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
22/05/16 23:10:14 INFO mapred.MapTask: soft limit at 83886080
22/05/16 23:10:14 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
22/05/16 23:10:14 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
22/05/16 23:10:14 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
22/05/16 23:10:14 INFO mapred.LocalJobRunner:
22/05/16 23:10:14 INFO mapred.MapTask: Starting flush of map output
22/05/16 23:10:14 INFO mapred.MapTask: Spilling map output
22/05/16 23:10:14 INFO mapred.MapTask: bufstart = 0; bufend = 39; bufvoid = 104857600
22/05/16 23:10:14 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214384(104857536); length = 13/6553600
22/05/16 23:10:14 INFO mapred.MapTask: Finished spill 0
22/05/16 23:10:14 INFO mapred.Task: Task:attempt_local1418680143_0001_m_000002_0 is done. And is in the process of committing
22/05/16 23:10:14 INFO mapred.LocalJobRunner: map
22/05/16 23:10:14 INFO mapred.Task: Task 'attempt_local1418680143_0001_m_000002_0' done.
22/05/16 23:10:14 INFO mapred.LocalJobRunner: Finishing task: attempt_local1418680143_0001_m_000002_0
22/05/16 23:10:14 INFO mapred.LocalJobRunner: map task executor complete.
22/05/16 23:10:14 INFO mapred.LocalJobRunner: Waiting for reduce tasks
22/05/16 23:10:14 INFO mapred.LocalJobRunner: Starting task: attempt_local1418680143_0001_r_000000_0
22/05/16 23:10:14 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
22/05/16 23:10:14 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
22/05/16 23:10:14 INFO mapred.Task:  Using ResourceCalculatorProcessTree : org.apache.hadoop.yarn.util.WindowsBasedProcessTree@167f3561
22/05/16 23:10:14 INFO mapred.ReduceTask: Using ShuffleConsumerPlugin: org.apache.hadoop.mapreduce.task.reduce.Shuffle@6546662d
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: MergerManager: memoryLimit=333971456, maxSingleShuffleLimit=83492864, mergeThreshold=220421168, ioSortFactor=10, memToMemMergeOutputsThreshold=10
22/05/16 23:10:14 INFO reduce.EventFetcher: attempt_local1418680143_0001_r_000000_0 Thread started: EventFetcher for fetching Map Completion Events
22/05/16 23:10:14 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1418680143_0001_m_000002_0 decomp: 39 len: 43 to MEMORY
22/05/16 23:10:14 INFO reduce.InMemoryMapOutput: Read 39 bytes from map-output for attempt_local1418680143_0001_m_000002_0
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 39, inMemoryMapOutputs.size() -> 1, commitMemory -> 0, usedMemory ->39
22/05/16 23:10:14 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1418680143_0001_m_000001_0 decomp: 52 len: 56 to MEMORY
22/05/16 23:10:14 INFO reduce.InMemoryMapOutput: Read 52 bytes from map-output for attempt_local1418680143_0001_m_000001_0
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 52, inMemoryMapOutputs.size() -> 2, commitMemory -> 39, usedMemory ->91
22/05/16 23:10:14 INFO reduce.LocalFetcher: localfetcher#1 about to shuffle output of map attempt_local1418680143_0001_m_000000_0 decomp: 48 len: 52 to MEMORY
22/05/16 23:10:14 INFO reduce.InMemoryMapOutput: Read 48 bytes from map-output for attempt_local1418680143_0001_m_000000_0
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: closeInMemoryFile -> map-output of size: 48, inMemoryMapOutputs.size() -> 3, commitMemory -> 91, usedMemory ->139
22/05/16 23:10:14 INFO reduce.EventFetcher: EventFetcher is interrupted.. Returning
22/05/16 23:10:14 INFO mapred.LocalJobRunner: 3 / 3 copied.
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: finalMerge called with 3 in-memory map-outputs and 0 on-disk map-outputs
22/05/16 23:10:14 INFO mapred.Merger: Merging 3 sorted segments
22/05/16 23:10:14 INFO mapred.Merger: Down to the last merge-pass, with 3 segments left of total size: 121 bytes
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: Merged 3 segments, 139 bytes to disk to satisfy reduce memory limit
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: Merging 1 files, 139 bytes from disk
22/05/16 23:10:14 INFO reduce.MergeManagerImpl: Merging 0 segments, 0 bytes from memory into reduce
22/05/16 23:10:14 INFO mapred.Merger: Merging 1 sorted segments
22/05/16 23:10:14 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 129 bytes
22/05/16 23:10:14 INFO mapred.LocalJobRunner: 3 / 3 copied.
22/05/16 23:10:14 INFO Configuration.deprecation: mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
22/05/16 23:10:14 INFO mapred.Task: Task:attempt_local1418680143_0001_r_000000_0 is done. And is in the process of committing
22/05/16 23:10:14 INFO mapred.LocalJobRunner: 3 / 3 copied.
22/05/16 23:10:14 INFO mapred.Task: Task attempt_local1418680143_0001_r_000000_0 is allowed to commit now
22/05/16 23:10:14 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1418680143_0001_r_000000_0' to xidian:/test/output1/_temporary/0/task_local1418680143_0001_r_000000
22/05/16 23:10:14 INFO mapred.LocalJobRunner: reduce > reduce
22/05/16 23:10:14 INFO mapred.Task: Task 'attempt_local1418680143_0001_r_000000_0' done.
22/05/16 23:10:14 INFO mapred.LocalJobRunner: Finishing task: attempt_local1418680143_0001_r_000000_0
22/05/16 23:10:14 INFO mapred.LocalJobRunner: reduce task executor complete.
22/05/16 23:10:14 INFO mapreduce.Job: Job job_local1418680143_0001 running in uber mode : false
22/05/16 23:10:14 INFO mapreduce.Job:  map 100% reduce 100%
22/05/16 23:10:14 INFO mapreduce.Job: Job job_local1418680143_0001 completed successfully
22/05/16 23:10:14 INFO mapreduce.Job: Counters: 35
        File System Counters
                FILE: Number of bytes read=1186519
                FILE: Number of bytes written=2338765
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                XIDIAN: Number of bytes read=0
                XIDIAN: Number of bytes written=43
                XIDIAN: Number of read operations=0
                XIDIAN: Number of large read operations=0
                XIDIAN: Number of write operations=0
        Map-Reduce Framework
                Map input records=15
                Map output records=15
                Map output bytes=146
                Map output materialized bytes=151
                Input split bytes=274
                Combine input records=15
                Combine output records=11
                Reduce input groups=5
                Reduce shuffle bytes=151
                Reduce input records=11
                Reduce output records=5
                Spilled Records=22
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=0
                Total committed heap usage (bytes)=1507328000
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=43
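The wordcount job run above can be illustrated, independently of Hadoop, with a minimal map/group/reduce sketch. This is a hypothetical stand-in for the `hadoop-mapreduce-examples.jar` wordcount, not the actual Java implementation:

```python
from itertools import chain

def mapper(line):
    # Emit (word, 1) pairs, like WordCount's tokenizing mapper.
    return [(word, 1) for word in line.split()]

def reducer(word, counts):
    # Sum the counts for one word, like WordCount's summing reducer.
    return word, sum(counts)

def wordcount(lines):
    # Map phase: tokenize every input line.
    pairs = chain.from_iterable(mapper(line) for line in lines)
    # Shuffle phase: group values by key.
    grouped = {}
    for word, count in pairs:
        grouped.setdefault(word, []).append(count)
    # Reduce phase: one reducer call per key.
    return dict(reducer(w, cs) for w, cs in grouped.items())

print(wordcount(["hello world", "hello xdfs"]))
# {'hello': 2, 'world': 1, 'xdfs': 1}
```

In the real job the three input files under `xidian:///test/data` each become a split (hence "number of splits:3" in the log), and the reducer's output is what lands in `xidian:///test/output1`.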