Miner Setup

In the following guide, we cover all the steps necessary to set up an Arweave mining node.

Requirements

  1. A Unix OS (Debian 11 preferred)

Suggested Hardware

There are several million transactions on the Arweave chain, so the more of this data you can store and read quickly, the more competitive your miner will be.

  1. 32GB RAM

  2. ~64TB of SSD storage available

  3. Intel i5 / AMD FX or better; in the cloud, 4+ vCPUs (typically Intel Xeon CPUs) should be more than enough.

Install the Miner

Download the .tar.gz archive for your OS from the releases page.

Extract the contents of the archive. It's recommended to unpack it inside a dedicated directory. You can always move this directory around, but the miner may not work if you move only some of the files. By default, the weave data is stored in this directory as well, but we recommend overriding the location with the data_dir command-line argument.
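For example, a minimal sketch of unpacking and launching with an overridden data directory (the archive name and /mnt/arweave-data are placeholders; adjust them to your release and storage layout):

mkdir -p ~/arweave
tar -xzf arweave-2.x.y.linux-amd64.tar.gz -C ~/arweave
cd ~/arweave
./bin/start data_dir /mnt/arweave-data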

If your OS/platform architecture is not in the list, check the source code repository README for how to build the miner from source.

tip

It is also possible to set up an Arweave mining environment on Windows using the ‘Windows Subsystem for Linux’ (WSL) or a virtual machine environment.
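For instance, on a recent Windows 10/11 build, the WSL command line can install a Debian environment in one step (assuming your build supports the --install flag):

wsl --install -d Debian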

Preparation: File Descriptors Limit

The number of available file descriptors affects the rate at which your node can process data. As the default limit assigned to user processes on most operating systems is usually low, we recommend increasing it.

You can check the current limit by executing ulimit -n.

On Linux, to set a bigger global limit, open /etc/sysctl.conf and add the following line:

fs.file-max=100000000

Execute sysctl -p to make the changes take effect.

You may also need to set a proper limit for the particular user. To set a user-level limit, open /etc/security/limits.conf and add the following line:

<your OS user>         soft    nofile  10000000

Open a new terminal session and run ulimit -n to make sure the changes took effect and the limit was increased. You can also change the limit for the current session via ulimit -n 10000000.

If the above does not work, set

DefaultLimitNOFILE=10000000

in both /etc/systemd/user.conf and /etc/systemd/system.conf.
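A minimal sketch of applying this change (assumes sudo access; a reboot also works):

echo "DefaultLimitNOFILE=10000000" | sudo tee -a /etc/systemd/user.conf /etc/systemd/system.conf
sudo systemctl daemon-reexec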

Running the Miner

Now you’re ready to start the mining process by using the following command from the Arweave directory:

./bin/start mine mining_addr YOUR-MINING-ADDRESS peer 188.166.200.45 peer 188.166.192.169 peer 163.47.11.64 peer 139.59.51.59 peer 138.197.232.192

{% hint style="warning" %} Please replace YOUR-MINING-ADDRESS with the address of the wallet you would like to credit when you find a block! {% endhint %}

If you would like to see a log of your miner’s activity, you can run ./bin/logs -f in the Arweave directory in a different terminal.

The mining console should eventually look like this:

[Stage 1/3] Starting to hash
Miner spora rate: 1545 h/s, recall bytes computed/s: 3129, MiB/s read: 386, the round lasted 145 seconds.
[Stage 1/3] Starting to hash
Skipping hashrate report, the round lasted less than 10 seconds.
[Stage 1/3] Starting to hash
Miner spora rate: 1545 h/s, recall bytes computed/s: 3182, MiB/s read: 386, the round lasted 135 seconds.
[Stage 1/3] Starting to hash
Miner spora rate: 1637 h/s, recall bytes computed/s: 3292, MiB/s read: 409, the round lasted 245 seconds.
[Stage 1/3] Starting to hash

When you mine a block, the console shows:

[Stage 2/3] Produced candidate block ... and dispatched to network.

Approximately 20 minutes later, you should see:

[Stage 3/3] Your block ... was accepted by the network

Note that occasionally your block won't be confirmed (the chain chooses a different fork).

To stop the miner, run ./bin/stop or kill the OS process (kill -sigterm <pid> or pkill <name>). Sending a SIGKILL (kill -9) is not recommended.

Tuning the Miner

Optimizing the Miner's SPoRA Rate

The three crucial factors determining your miner's efficiency are disk throughput (GiB/s), the amount of synchronized data, and processor power. We recommend that you have 32 GiB of RAM, while the minimum requirement is 8 GiB.

The node reports its hashrate in the console (Miner spora rate: 1546 h/s) and in the logs (miner_sporas_per_second). Note that it is 0 when you start the miner without data and slowly increases as more data is synchronized. After the number stabilizes, you can input it into the mining calculator generously created by the community member @tiamat here to see the expected return.

To estimate the hashrate in advance, you would need to know or measure your CPU's performance, the disk throughput, and the amount of disk space you will allocate for mining.

To benchmark the CPU, you can run the packaged RandomX benchmark: ./bin/randomx-benchmark --mine --init 32 --threads 32 --jit --largePages. Replace 32 with the number of your CPU threads. Note that reducing the number of threads might improve the outcome. Do not specify --largePages if you have not configured huge memory pages yet. For reference, a 32-thread AMD Ryzen 3950X does about 10000 h/s, a 32-thread AMD EPYC 7502P about 24000 h/s, a 12-thread Intel Xeon E-2276G about 2500 h/s, and a 2-thread Intel Xeon E5-2650 machine in the cloud about 600 h/s.

If you do not know the throughput of your disk, run hdparm -t /dev/sda, replacing /dev/sda with the disk name from df -h. To be competitive, consider a fast NVMe SSD capable of several GiB per second or more.

Finally, to see the upper hashrate limit of a setup, run ./bin/hashrate-upper-limit 2500 1 3, where 2500 is the RandomX hashrate, 1 is the number of GiB the disk reads per second, and 3 is the reciprocal of the replicated share of the weave (3 means you store a third). For example, a 12-core Intel Xeon with a 1 GiB/s SSD storing a third of the weave is capped at 540 h/s. In practice, the performance is usually about 0.7 - 0.9 of the upper limit.

Changing the mining configuration

We made our best effort to choose reasonable defaults; however, changing some of the following parameters may improve the efficiency of your miner: stage_one_hashing_threads (between 1 and the number of CPU threads), stage_two_hashing_threads, io_threads, randomx_bulk_hashing_iterations. For example,

./bin/start stage_one_hashing_threads 32 stage_two_hashing_threads 32 io_threads 50 randomx_bulk_hashing_iterations 64 data_dir /your/dir mine sync_jobs 80 mining_addr YOUR-MINING-ADDRESS peer 188.166.200.45 peer 188.166.192.169 peer 163.47.11.64 peer 139.59.51.59 peer 138.197.232.192

recall bytes computed/s should be roughly equal to Miner spora rate divided by your share of the weave. If it is not, consider increasing io_threads and decreasing stage_one_hashing_threads. You can learn the share of the weave the node has synced to date by dividing the size of the chunk_storage folder (du -sh /path/to/data/dir/chunk_storage) by the total weave size. Increasing randomx_bulk_hashing_iterations to 128 or more might make a big difference on powerful machines.
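As a worked example with hypothetical numbers: if du -sh reports 20 TiB in chunk_storage and the total weave is about 60 TiB, your share is roughly 1/3, so with a Miner spora rate of 1500 h/s you should see recall bytes computed/s of roughly 1500 / (1/3) = 4500. A significantly lower value suggests disk reads are the bottleneck.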

Syncing the weave

The Arweave miner does not mine without data. For every new block, in order to mine it, numerous random chunks of the past data need to be read and checked. It takes time to download data from the peers, so do not expect mining to be very intensive after the first launch. For example, if you have 10% of the total weave size, you are mining at 10% efficiency of a similar setup with the entire dataset. Note that it is not required to download the complete dataset. If you only have 1 TiB of space for the chunk_storage and rocksdb folders, the node will fill it up, and your miner may nevertheless be competitive, assuming the disk and the processor are sufficiently performant.

To speed up bootstrapping, use a higher value for the sync_jobs configuration parameter (the default is 2), like this:

./bin/start mine sync_jobs 80 mining_addr YOUR-MINING-ADDRESS peer 188.166.200.45 peer 188.166.192.169 peer 163.47.11.64 peer 139.59.51.59 peer 138.197.232.192

You can set sync_jobs back to 2 after the historical data is synced. Turn mining off (do not set the mine flag) to further speed up syncing.
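For example, a sync-only invocation (a sketch; the same flags as above with mine omitted) might look like this:

./bin/start sync_jobs 80 data_dir /your/dir peer 188.166.200.45 peer 188.166.192.169 peer 163.47.11.64 peer 139.59.51.59 peer 138.197.232.192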

Configuring large memory pages

To get an additional performance boost, consider configuring huge memory pages in your OS.

On Ubuntu, to see the current values, execute cat /proc/meminfo | grep HugePages. To set a value, run sudo sysctl -w vm.nr_hugepages=1000. To make the configuration survive reboots, create /etc/sysctl.d/local.conf and put vm.nr_hugepages=1000 there.
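A minimal sketch of the persistence step (assumes sudo access; sysctl --system reloads all configuration files):

echo "vm.nr_hugepages=1000" | sudo tee /etc/sysctl.d/local.conf
sudo sysctl --system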

The output of cat /proc/meminfo | grep HugePages should then look like this:

AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 1000
HugePages_Free: 1000
HugePages_Rsvd: 0
HugePages_Surp: 0

If it does not, or if there is an "erl_drv_rwlock_destroy" error on startup, reboot the machine.

Finally, tell the miner it can use large pages by specifying enable randomx_large_pages on startup:

./bin/start mine enable randomx_large_pages mining_addr YOUR-MINING-ADDRESS peer 188.166.200.45 peer 188.166.192.169 peer 163.47.11.64 peer 139.59.51.59 peer 138.197.232.192

Using Multiple Disks

The simplest approach is to store everything on a single disk. Skip this section if you are fine with that. However, you may store metadata that is not used in mining on a cheaper and slower medium, e.g., an HDD.

Mount the slower device to the data directory first, then the fast devices to the chunk_storage and rocksdb folders (mounting the parent after its subdirectories would hide them):

sudo mount /dev/hdd1 /your/dir
sudo mount /dev/nvme1n1 /your/dir/chunk_storage
sudo mount /dev/nvme1n2 /your/dir/rocksdb

The output of df -h should look like:

/dev/hdd1 5720650792 344328088 5087947920 7% /your/dir
/dev/nvme1n1 104857600 2097152 102760448 2% /your/dir/chunk_storage
/dev/nvme1n2 104857600 2097152 102760448 2% /your/dir/rocksdb

Replace /dev/nvme1n1, /dev/nvme1n2, /dev/hdd1 with the filesystems you have, replace /your/dir with the directory you specify on startup:

./bin/start data_dir /your/dir mine sync_jobs 80 mining_addr YOUR-MINING-ADDRESS peer 188.166.200.45 peer 188.166.192.169 peer 163.47.11.64 peer 139.59.51.59 peer 138.197.232.192
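To make the mounts persist across reboots, you can add them to /etc/fstab; a sketch, assuming ext4 filesystems (adjust devices, paths, and options to your setup):

/dev/hdd1     /your/dir                ext4  defaults  0  2
/dev/nvme1n1  /your/dir/chunk_storage  ext4  defaults  0  2
/dev/nvme1n2  /your/dir/rocksdb        ext4  defaults  0  2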

Troubleshooting

Make sure your node is accessible on the Internet

An important part of the mining process is discovering blocks mined by other miners. Your node needs to be accessible from anywhere on the Internet so that your peers can connect with you and share their blocks.

To check if your node is publicly accessible, browse to http://[Your Internet IP]:1984. You can obtain your public IP here, or by running curl ifconfig.me/ip. If you specified a different port when starting the miner, replace "1984" anywhere in these instructions with your port. If you cannot access the node, you need to set up TCP port forwarding so that incoming HTTP requests to your Internet IP address on port 1984 are forwarded to the selected port on your mining machine. For more details on how to set up port forwarding, consult your ISP or cloud provider.
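For example, a quick reachability check from a machine outside your network (YOUR-PUBLIC-IP is a placeholder):

curl http://YOUR-PUBLIC-IP:1984

A reachable node responds over HTTP (typically with a JSON summary of its state); a timeout suggests the port is not forwarded correctly.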

If the node is not accessible on the Internet, the miner functions but is significantly less efficient.

Copying data to another machine

If you want to bootstrap another miner on a different machine, you can copy the downloaded data over from the first miner to bring it up to speed faster. Please follow these steps:

  1. Stop the first Arweave miner, and ensure the second miner is also not running.
  2. Copy the entire data_dir folder to the new machine. Note that the chunk_storage folder contains sparse files, so copying them the usual way would take a long time and inflate the destination folder. To copy this folder, use rsync with the -aS flags or archive it via tar -Scf before copying (see the sketch after this list). You can optionally copy only the data_sync_state and chunk_storage_index files and the rocksdb/ar_data_sync_db, rocksdb/ar_data_sync_chunk_db, and chunk_storage folders; these contain all the data required for mining. However, unless one of the two nodes stores the full weave, letting them sync data themselves would increase mining efficiency in the long run. You can set a high value for the sync_jobs configuration parameter to bootstrap the node faster.
  3. Start both miners.
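A sketch of the copy step from item 2 (hostnames and paths are hypothetical):

# Option 1: rsync, preserving sparse files
rsync -aS /your/dir/ user@second-machine:/your/dir/

# Option 2: archive with sparse-file support, then transfer the tarball
tar -Scf arweave-data.tar -C /your/dir .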