
Scavenger - PoCC's Reference Burst Miner User Guide





1. Introduction

The Scavenger is the reference Burstcoin (or Burst) cross-platform mining software, endorsed by the PoC Consortium (hereinafter referred to as "PoCC") and created to further improve and optimize Burst mining.

This document accommodates Scavenger version 1.7+, which is released under the GNU General Public License v3.0.

Distinctive features of the Scavenger are:

  • Multi-account solo and pool mining support
  • Direct IO usage which bypasses the OS caching layer
  • Support for AVX512F, AVX2, AVX and SSE
  • Support for x86 (32-bit and 64-bit), ARM and AArch64
  • "HDD wake up" feature which prevents HDDs from entering "Sleep" mode
  • Support for GPU mining (as of version 1.2)
  • GPU DMA memory transfers support (async)
  • Support for parallel CPU and GPU mining (multi-GPU support is not available)
  • Usability improvements (reading and processing speed indicators)
  • System benchmark
  • Overlapping plot files indication
  • HTTPS support

To Contents

2. Requirements

Scavenger runs on, and has been tested on, 64-bit operating systems: Windows, Linux, Unix and macOS.

To Contents

3. Prerequisites

In order to mine Burst, users must have an active Burst account and stored plot files. Additionally, the reward recipient has to be set prior to the start of Burst mining, regardless of whether the user will mine in a pool or solo mine. As it is outside the scope of this document to provide further details on the steps that precede mining, users are advised to familiarize themselves with the concept of Burst mining and the preparation steps through the mining section of the Burst Wiki.

To Contents

4. Installation

Scavenger is available for download at the dedicated Scavenger page of the PoC-Consortium GitHub repository. The page offers several compiled versions of the application. Some builds are discontinued or might not be available.

  • scavenger-x.y.z-x86_64-apple-darwin-cpu-gpu.tar.gz - for macOS systems with CPU+GPU support
  • scavenger-x.y.z-x86_64-apple-darwin-cpu-only.tar.gz - for macOS systems with CPU support
  • scavenger-x.y.z-x86_64-pc-windows-msvc-cpu-gpu.zip - for Windows systems with CPU+GPU support
  • scavenger-x.y.z-x86_64-pc-windows-msvc-cpu-only.zip - for Windows systems with CPU support
  • scavenger-x.y.z-x86_64-unknown-linux-gnu-cpu-gpu.tar.gz - for Linux systems with CPU+GPU support
  • scavenger-x.y.z-x86_64-unknown-linux-gnu-cpu-only.tar.gz - for Linux systems with CPU support
  • scavenger-x.y.z-arm-unknown-linux-gnueabihf-cpu-only.tar.gz - e.g. for Raspberry Pi 1 (32bit)
  • scavenger-x.y.z-armv7-unknown-linux-gnueabihf-cpu-only.tar.gz - e.g. for Raspberry Pi 2,3,4 (32bit)
  • scavenger-x.y.z-armv7-unknown-linux-gnueabihf-cpu-gpu.tar.gz - e.g. for Odroid XU4 (32bit)
  • scavenger-x.y.z-aarch64-unknown-linux-android-cpu-only.tar.gz - e.g. for modern Android phones, Pi 4 (64bit)

The application source code is also available for download and can be used after it has been compiled, which is described in chapter 8 of this user guide.

After downloading the desired version of the Scavenger, users should unzip the archive. The resulting folder will contain the executable file and the configuration file ("config.yaml"), together with the "test_data" folder.

Users should note that the executable file and the configuration file have to be stored in the same folder for the Scavenger to work out-of-the-box. Alternatively, the configuration file can be stored in a different location and specified via the --config CLI option.

After the archive has been unzipped, the installation is complete.

To Contents

5. Configuration

Before starting to mine Burst, the configuration file has to be edited. The Scavenger configuration file is given in YAML format, a human-readable data serialization language (for more information about YAML, refer to this article). It can be edited with any text editor (such as Notepad or Notepad++). The default config.yaml covers the settings described in the following subsections.
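An abridged sketch of the file, assembled from the parameters and default values documented below (the exact contents and ordering may differ between Scavenger versions):

# abridged config.yaml sketch - values are the defaults/examples documented in this guide
account_id_to_secret_phrase:
  60282355196851764065: 'glad suffer red during single bear shut slam hill death papi although'

plot_dirs:
  - 'C:\first\windows\plot\dir'

url: 'http://0-100pool.burstcoin.ro:8880'

hdd_reader_thread_count: 0
hdd_use_direct_io: true
wakeup_after: 240

cpu_threads: 0
cpu_worker_task_count: 4
cpu_nonces_per_cache: 65536
cpu_thread_pinning: false

gpu_threads: 0
gpu_platform: 0
gpu_device: 0
gpu_worker_task_count: 0
gpu_nonces_per_cache: 262144
gpu_mem_mapping: false
gpu_async: false

target_deadline: 4294967295
get_mining_info_interval: 3000
timeout: 5000
send_proxy_details: false

console_log_level: 'Info'
logfile_log_level: 'Warn'
logfile_max_count: 10
logfile_max_size: 20
console_log_pattern: '{({d(%H:%M:%S)} [{l}]):16.16} {m}{n}'
logfile_log_pattern: '{({d(%Y-%m-%d %H:%M:%S)} [{l}]):26.26} {m}{n}'

show_progress: true
show_drive_stats: false
benchmark_only: 'disabled'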

Passphrase And Account ID Settings For Solo Mining

Passphrase and account numeric ID setting is mandatory for solo mining. As mentioned above, prior to mining, it is necessary to complete the reward recipient assignment for solo mining as well as for pool mining.

account_id_to_secret_phrase:
  60282355196851764065: 'glad suffer red during single bear shut slam hill death papi although' # define accounts and passphrases for solo mining

As of Scavenger version 1.2, multi-account mining is supported. If the user wishes to solo mine to several accounts, an account_id_to_secret_phrase entry in the format shown above should be provided for each of the accounts used to create the plot files, as in the sketch below.
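A minimal sketch of a two-account solo-mining configuration; the second account ID and passphrase are placeholders to be replaced with the user's own values:

account_id_to_secret_phrase:
  60282355196851764065: 'glad suffer red during single bear shut slam hill death papi although'
  1234567890123456789: 'passphrase of the second solo-mining account' # placeholder values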

Passphrase And Account ID Settings For Pool Mining

Pool mining doesn't require account ID and passphrase settings, so the entire line can be deleted from the configuration file or turned into a comment by placing a hash sign (#) at the beginning of the line. This is possible because the miner reads the numeric account ID from the plot files. As with solo mining, mining can be done for multiple accounts by using plot files plotted with different account IDs. If, for example, a multi-account pool-mining operation is being configured, the relevant accounts will be read from the plot files configured in the next step.
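For example, the line would simply be commented out:

# account_id_to_secret_phrase: ... # not needed for pool mining; account IDs are read from the plot files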

To Contents

Paths To Folders With Plot Files

Paths to plot files are provided in system-native format:

  • For Windows systems the path format should be provided as shown below - adapted to the user's file structure:

plot_dirs:
  - 'C:\first\windows\plot\dir'
  - 'C:\second\windows\plot\dir'

  • Linux file system format:

plot_dirs:
  - '/first/linux/plot/dir'
  - '/second/linux/plot/dir'

The miner will use all plot files within the configured folder(s). Note that if plot files are stored on more than one physical or logical drive, or in more than one folder, each location has to be listed with a leading hyphen, as in the examples above.

To Contents

Pool Or Wallet URL Setting

The URL parameter in the configuration file should point to the pool site or to the wallet, for solo mining:

Wallet (solo mining):

url: 'http://wallet.dev.burst-test.net:6876'

Pool:

url: 'http://0-100pool.burstcoin.ro:8880'

This address is the source from which the mining information is read and to which the deadlines are submitted. Users can get the relevant address on the site of the pool which they want to mine in.

Note that multi-pool mining is NOT supported, so only one pool address should be provided.
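Since the Scavenger supports HTTPS (see the feature list in the Introduction), a pool or wallet offering TLS can also be configured with an https address; the host and port below are placeholders only:

url: 'https://pool.example.com:8443' # placeholder HTTPS address - use the address published by the pool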

To Contents

Mining Settings

The following group of settings affects the resources used for performing the mining related operations and calculations:

  • HDD Reading Settings

The number of disks read in parallel:

hdd_reader_thread_count: 0 # default 0 (=number of disks)

As shown above, the default value is set to 0. Users are advised to set this value to the number of hard disks where plot files are stored.
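For example, for a rig with plot files spread over three physical hard disks, a minimal sketch would be:

hdd_reader_thread_count: 3 # one reader thread per plot disk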

As mentioned in the Introduction, Scavenger supports the usage of directIO - a setting that enables or disables operating system caching.

hdd_use_direct_io: true # default true

The default value is "true" - i.e. operating system caching layer will be bypassed by the Scavenger with this setting.

If HDD wake-up is available, this setting instructs the Scavenger on how often, after submitting the deadline to the pool/wallet, to execute the "HDD wake-up" script. The setting is given in seconds, with a default value 240.

wakeup_after: 240 # default 240s

If the HDD wake-up script should not be executed, insert 0.
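For example, to disable the HDD wake-up entirely:

wakeup_after: 0 # 0 = do not execute the HDD wake-up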

  • CPU Usage Settings

Number of active CPU threads for hashing operations:

cpu_threads: 0 # default 0 (=auto: number of logical cpu cores)

Number of CPU workers to use for calculating deadlines:

cpu_worker_task_count: 4 # default 4 (0=GPU only)

As shown above, the default setting is 4. If only the GPU is to be used for deadline calculation, the number of CPU worker tasks should be set to 0. For optimal performance, match the sum of all worker tasks (CPU + GPU) to the number of hard disks. The amount of RAM used for nonce caching is set with the following configuration line, which refers to the number of nonces read at once for processing.

cpu_nonces_per_cache: 65536 # default 65536

The default value is 65536. The RAM size (in bytes) used for caching nonces is calculated as: nonces_per_cache * worker_task_count * 2 * 64. With the default values (65536 nonces and 4 CPU worker tasks), this amounts to 65536 * 4 * 2 * 64 bytes = 32 MiB.

The following setting, when set to "true", pins the worker threads to CPU cores (thread affinity).

cpu_thread_pinning: false # default false

The default setting is "false".

  • GPU Settings

With the introduction of GPU support for deadline calculation, a new set of configuration parameters has been added to the Scavenger config file. Before enabling GPU mining, ensure that the latest available GPU drivers are installed on the system.

To list the GPUs available on the system, users can run Scavenger with the CLI option -o; the Scavenger will then display the available GPU platforms and devices. It will also hint at the ideal multiplier for the gpu_nonces_per_cache setting. An example is shown in the image below:

Available GPUs displayed

In order to use the GPU, the following settings have to be provided to the configuration file:

Number of GPU threads to use:

gpu_threads: 0 # default 0 (=GPU off)

Lower values generally perform better: for integrated GPUs try 1, for dedicated GPUs 1-4. The GPU platform and device numbers are self-explanatory and can be obtained as described above.

gpu_platform: 0 # default 0

gpu_device: 0 # default 0

The GPU worker task count refers to the number of worker tasks used for GPU processing. If no GPU is available on the system, or it shouldn't be used, users should leave the default setting of 0.

gpu_worker_task_count: 0 # default 0 (0=CPU only)

The GPU nonces per cache setting can improve miner performance on some systems. It is practically impossible to give a general recommendation for this setting, as the performance depends on the CPU settings and the overall system capability.

gpu_nonces_per_cache: 262144 # default 262144

Users can change this setting, observe how it affects performance, and adapt both this and the CPU nonces per cache setting until they reach the desired improvement.

gpu_mem_mapping: false # default false

The option shown above enables memory mapping. Enabling it can result in zero-copy buffers for CPU-integrated GPUs, while on some GPUs it will speed up the data transfer. This feature should be disabled for dedicated GPUs.

gpu_async: false # default false

The option shown above enables asynchronous data transfer and hashing. Enabling it can improve performance for dedicated GPUs. This feature should be disabled for integrated GPUs.
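Putting the GPU parameters together, a minimal sketch for a single dedicated GPU, assuming the platform and device numbers reported by the -o option and two hard disks with plot files (all values illustrative and to be verified by benchmarking):

cpu_worker_task_count: 0 # GPU-only deadline calculation in this sketch
gpu_threads: 2 # dedicated GPU: try values between 1 and 4
gpu_platform: 0 # as reported by the -o option
gpu_device: 0 # as reported by the -o option
gpu_worker_task_count: 2 # CPU + GPU worker tasks together match the two plot disks
gpu_nonces_per_cache: 262144 # adjust and observe the effect on performance
gpu_mem_mapping: false # keep disabled for dedicated GPUs
gpu_async: true # can improve performance for dedicated GPUs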

  • Deadline setting

The target deadline is a parameter based on the total size of the plot files. The majority of pools provide a target deadline calculator on their websites, so users are advised to calculate their target deadline and insert the value into the configuration file.

target_deadline: 4294967295 # default u32::MAX

Note that most pools define a maximum deadline that can be meaningfully submitted to the pool for historic share calculation. With small plot sizes, it might happen that the target deadline given by the calculator exceeds the pool deadline. In such cases, users should use the pool's maximum deadline, as any value that exceeds it won't be accepted by the pool.
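For example, if a pool's maximum accepted deadline were one year, the value (in seconds) would be set as follows; the number is purely illustrative and should be taken from the pool's own documentation:

target_deadline: 31536000 # hypothetical pool maximum of one year, in seconds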

  • "Get mining info" interval

As the name states, this setting instructs the miner how frequently to request mining information from the wallet. The value is given in milliseconds. The default value is 3000, but in order to avoid skipping "quick" blocks, the setting can be reduced to around 1000 ms.

get_mining_info_interval: 3000 # default 3000ms

  • Timeout for pool/wallet requests

The setting instructs the miner for how long to wait for responses from the pool or wallet before proceeding with the next request. The value is given in milliseconds, with a default of 5000.

timeout: 5000 # default 5000ms

  • Enable/disable sending proxy details

send_proxy_details: false # default false

To Contents

Console And Logging Settings

The console settings refer to what will be displayed in the console screen after the mining starts, while logging settings define what information and in which format will be written to log files.

console_log_level: 'Info' # default Info, Options (Off, Error, Warn, Info, Debug, Trace)

logfile_log_level: 'Warn' # default Warn, Options (Off, Error, Warn, Info, Debug, Trace)

The two lines shown above, available in the Scavenger configuration file, configure the level of information displayed in console and log files, respectively. As the comments show, there are different levels of information that can be written to console or log files.

"Off" - no information will be displayed/logged. User are strongly advised against using this setting.

"Error" - only information shown in console or log will be when an error occurs.

"Warn" - warning and error information display/logging.

"Info" - shows/writes information on regular processes, including warnings and errors.

"Debug" - includes errors, warnings, regular information on processes and additional information relevant for debugging. Note that this setting will increase significantly the size of log files, and provides information of very little use to users with no development related skills.

"Trace" - Includes all above listed information with additional information regarding application process execution. Same as the "Debug" level - it significantly increases the size of log files and includes information relevant for development and optimization.

As a general recommendation, for users who wish to mine Burst without getting into the application structure and its working cycles, the "Info" level should provide enough information to monitor the application and the mining process. In case of repeated errors unrelated to configuration or pool/wallet accessibility, the "Debug" setting can be turned on to provide enough information to the supporting developers.

  • Log files size and number

As mining is a process that, once started, tends to last for long periods of time, and in order to prevent the accumulation of unneeded log files (once the round has successfully passed, the logged information has no real purpose for the miner), the following settings allow the user to control the amount of space that will be used for storing log files.

logfile_max_count: 10 # maximum number of log files to keep

logfile_max_size: 20 # maximum size per logfile in MiB

The first line defines the number of log files that will be kept in the "log" folder, while the maximum size setting controls the size of each log file. The above example will keep 10 files of 20 MiB each, i.e. at most 200 MiB of logs. Once the tenth file reaches the configured size, the first log file will be overwritten.

To Contents

Performance Analysis And Benchmark Settings

Scavenger allows users to configure additional settings that show the system's performance. The show progress setting, when enabled (the default), shows the progress and speed of processing.

show_progress: true # default true

When enabled, users can see the amount of data that has been read and that remains to be read, the percentage completed, the average reading speed since the start of the round and the estimated time until the reading is completed.

Note that the progress bar doesn't show the full physical size of the plot files, but the size that is actually read in a round - only one of the 4096 scoops of each nonce is read per block, so: Total Size [TiB] = Size scanned per round [GiB] * 4

The drive stats option will show the speed of reading disks. To enable it, change the value of the parameter to "true".

show_drive_stats: false # default false

The benchmark option tests the system's capabilities.
When the default "disabled" is changed to "I/O", the Scavenger will display the maximum HDD reading speed (no hashing is performed for this test). If the value is set to "XPU", the maximum hashing speed will be shown.

benchmark_only: 'disabled' # default disabled, options (disabled, I/O, XPU)

For optimal performance in cases where both CPU and GPU mining are enabled, users are strongly advised to first run a benchmark and base the productive configuration on its results. One way to complete the benchmark is the following sequence (a possible resulting configuration is sketched after the list):

  • optimize the CPU by setting the hdd_reader_thread_count parameter to the number of hard drives with plot files,
  • proceed by setting cpu_worker_task_count to values between hdd_reader_thread_count and the number of CPU cores and observe how performance depends on the setting,
  • optimize the GPU (in a GPU-only scenario) by setting gpu_worker_task_count to values between 1 and hdd_reader_thread_count,
  • experiment with gpu_nonces_per_cache and cpu_nonces_per_cache depending on available resources,
  • combine the CPU and GPU results in order to find the configuration with optimal performance.
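A sketch of where such a benchmark run might end up for a hypothetical system with six plot disks, an 8-core CPU and one dedicated GPU (all numbers are illustrative, not recommendations):

hdd_reader_thread_count: 6 # one reader per plot disk
cpu_worker_task_count: 4 # found by benchmarking values between 1 and the core count
gpu_worker_task_count: 2 # CPU + GPU worker tasks together match the six disks
cpu_nonces_per_cache: 65536 # tune together with the GPU cache size
gpu_nonces_per_cache: 262144 # tune together with the CPU cache size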

Log Patterns

The Scavenger allows users to define log patterns. For the regular user, the low-noise log patterns will provide enough information to monitor the miner's operation and get notified about errors and warnings.

# Low noise log patterns

console_log_pattern: '{({d(%H:%M:%S)} [{l}]):16.16} {m}{n}'

logfile_log_pattern: '{({d(%Y-%m-%d %H:%M:%S)} [{l}]):26.26} {m}{n}'

The above settings will display the following information:

Console: time in 24 hour format (hour:minute:second), information level (as described in "Console and logging settings" section) e.g. [INFO], [WARN] and the event and its value (e.g. new block: height_525002, scoop=791).

Log file: date, time in 24 hour format (hour:minute:second), information level (as described in "Console and logging settings" section) e.g. [INFO], [WARN] and the event and its value (e.g. new block: height_525002, scoop=791).

Users can also configure more detailed log patterns using configuration shown below:

# More detailed log patterns

#console_log_pattern: '{d(%H:%M:%S.%3f%z)} [{h({l}):<5}] [{T}] [{t}] - {M}:{m}{n}'

#logfile_log_pattern: '{d(%Y-%m-%dT%H:%M:%S.%3f%z)} [{h({l}):<5}] [{T}]-[{t}] [{f}:{L}] - {M}:{m}{n}'

However, users should note that if, for example, two console or two log file logging patterns are provided in the configuration file, the miner won't start.

After the configuration has been set, the users have to save the config.yaml file to the folder where the Scavenger executable is stored.

Note that if the config.yaml file is changed while the Scavenger is running, the miner has to be restarted for the new settings to take effect.

Since it is possible to introduce errors by accidentally adding or deleting characters while editing the config file, leading to corrupt YAML syntax, users can check the config file for syntax errors using, for example, an online YAML syntax checker.

To Contents

6. Mining

Once the preparation steps have been completed and the miner has been configured, users start the Scavenger executable and mining begins.
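On Linux or macOS, for example, the miner can be started from the folder containing the executable and config.yaml (on Windows, by running scavenger.exe); the folder path below is a placeholder:

cd /path/to/scavenger # placeholder: the folder extracted from the downloaded archive
./scavenger # uses the config.yaml found in the current folder
./scavenger --config /other/location/config.yaml # optional: point to a configuration file stored elsewhere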

This is what the user will see in the miner console, if the configuration is correct and the pool is online:

Miner console

The first line shows the CPU instruction set. Next, the miner reads the plot files from the configured paths. If a path defined in the config file is not found, the miner will indicate that in the console. The size of the plot files is given in TiB (tebibytes; for more information on the 1024-based multiples of the byte, refer to this article). After the plot files have been read, the miner requests mining information from the wallet and calculates the deadlines. Found deadlines are submitted to the pool or, in solo mining, to the wallet. Once all plot files have been read and all found deadlines submitted to the pool/wallet, the miner declares the round finished and displays the duration of the process in milliseconds. Reading times can be optimized by adjusting the settings described in the "Mining Settings" section of the previous chapter. The miner then waits for the new round to start by requesting mining information at the interval set in the config.yaml file.

To Contents

7. Troubleshooting

A number of common errors are described in this chapter, with instructions on how to resolve them or how to seek further help.

  1. "Error getting mining info" usually implies to problems with internet connection or unavailability of the pool/wallet. In most cases, this error will appear in the console and then the miner will proceed its operation, which implies there was a short interval of unavailability. In case the miner console shows this error repeatedly, without proceeding with operations, check the internet connection, the ISP's connection to the internet (traceroute) or the site of the configured pool (or alternatively, the availability of the wallet for solo miners).

  2. "Account's reward recipient doesn't match the pool's" points to wrongly configured pool/wallet information (users should make sure the information in the configuration file is correct by checking the pool site, including the port number) or the reward recipient transaction has not yet been completed. Note that after the reward recipient assignment transaction has been issued, it takes 4 blocks for it to take effect.

  3. "Submitted on wrong height" indicates that the deadline was submitted too late - the block the deadline was for has already been forged and announced on the network, which has moved to the next block. This may occur with "quick" blocks, or when deadlines are submitted via internet connection with high latency. Additionally, it happens when the mining information request is not sent to the wallet often enough, so users might want to set the value to a lower one.

  4. "Reader:error reading chunk from ..." is an error occurring when a configured plot file destination isn't found. This can happen due to HDD malfunction (or connecting cable malfunction). If this error is observed, users are advised to check the disk for which the error is reported and run relevant diagnostics.

In case users observe errors or encounter other problems or bugs/unexpected behavior of the Scavenger miner, they're encouraged to seek support in any of the Burst community channels.

If the observed problem is proven or suspected to be a bug, issue reports can be submitted to the Scavenger GitHub repository issues page.

To Contents

8. Compiling

Users who prefer to compile the application from its source code can do so by downloading the source code from the PoCC Scavenger release page. In order to compile the code, the latest version of Rust is required. To download and install Rust, visit the Rust Programming Language page and follow the instructions provided on it. After setting up the environment, proceed by executing the following cargo commands (note that OpenCL support is optional; to build or run with OpenCL support, "--features=opencl" has to be appended to the commands):

# build debug and run directly

e.g. cargo run --features=simd # for a CPU version with SIMD support

# build debug (unoptimized)

e.g. cargo build --features=neon # for an ARM CPU version with NEON support

# build release (optimized)

e.g. cargo build --release --features=opencl,simd # for a CPU/GPU version

To test the compiled application using the data provided in "test_data" folder, execute:

# test

cargo test [--features={opencl,simd,neon}]

Binaries are located in target/debug or target/release depending on optimization.


To Contents