Node Provider Networking Guide
This guide provides an overview of the networking requirements and walks Node Providers through racking their servers with functioning networking.
Configuring networks is not trivial. You should be familiar with IP networking, network equipment, and network cabling.
Resources to learn about networking:
- CCNA Study Materials
- Kevin Wallace YouTube Training Videos
DFINITY does not provide support for network configuration.
If you hire technical assistance, keep decentralization and security in mind. Use a local technician you personally know and carefully monitor their work.
Requirements
To join your servers to the Internet Computer (IC) you will need:
- 10G Network equipment
- Rackspace in a data center
- Internet connection
- Bandwidth
  - ~300Mbps per node
  - The ingress/egress ratio is currently 1:1. We expect egress (serving responses to client queries) to increase faster than ingress in the future.
  - This should guide how many servers to deploy and the appropriate ISP connection speed. E.g. a 1Gbps connection will support up to 3 IC nodes (see the sizing sketch after this list).
- One IPv6 /64 subnet - each node gets multiple IPv6 addresses
- One IPv4 address for every 4 nodes in a given data center per node provider (IPv4 addresses cannot be shared between node providers). See Appendix 1 for a table.
- All IP addresses are assigned statically and automatically by IC-OS
  - This is configured in the IC-OS Installation Runbook
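As a sizing aid, here is a minimal Python sketch of the arithmetic above. The 300Mbps-per-node figure and the one-IPv4-address-per-four-nodes rule come from this guide; the function names are illustrative:

```python
import math

NODE_BANDWIDTH_MBPS = 300  # ~300Mbps per node, per the requirements above

def max_nodes(link_mbps: int) -> int:
    """How many IC nodes a single ISP link can support."""
    return link_mbps // NODE_BANDWIDTH_MBPS

def ipv4_addresses_needed(nodes: int) -> int:
    """One IPv4 address per four nodes, rounded up (see Appendix 1)."""
    return math.ceil(nodes / 4)

print(max_nodes(1000))           # 3 -> a 1Gbps link supports up to 3 nodes
print(ipv4_addresses_needed(5))  # 2 -> matches the Appendix 1 table
```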
Network Cabling
When racking and stacking your servers, ensure that at least one 10G network port on each server is connected to the 10G switch. SFP+ and Ethernet are supported.
For example, on a Supermicro 1U server the 10G ports are grouped together on the rear panel; port placement varies by vendor.
Connect the 10G switch to the ISP endpoint - this could be the Top of Rack (ToR) switch or another box.
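Once everything is cabled, one way to confirm that each NIC actually negotiated a 10G link is to read the speed Linux exposes under /sys/class/net. A minimal sketch, assuming a Linux host with readable sysfs (interface names vary by machine):

```python
from pathlib import Path

# Linux reports the negotiated link speed of each interface in Mb/s.
for iface in sorted(Path("/sys/class/net").iterdir()):
    try:
        speed = int((iface / "speed").read_text().strip())
    except (OSError, ValueError):
        continue  # interface is down, virtual, or does not report a speed
    status = "OK (10G)" if speed >= 10_000 else "below 10G"
    print(f"{iface.name}: {speed} Mb/s - {status}")
```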
Network Configuration
Node machines require:
- The ability to acquire a public static IPv6 address on a /64 subnet
- An IPv6 gateway to communicate with other nodes on the broad internet
- Unfiltered internet access
One of every four nodes requires:
- The ability to acquire a public static IPv4 address
- An IPv4 gateway to communicate with other nodes on the broad internet
- Unfiltered internet access
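One quick way to sanity-check the IPv6 requirement from a machine in the rack is to ask the kernel which source address it would pick to reach a public IPv6 host; a global address from your /64, rather than a link-local one, suggests routing is in place. A minimal Python sketch (the Google public DNS address is just one example of a reachable IPv6 endpoint):

```python
import socket

def outbound_ipv6_address() -> str:
    """Return the source IPv6 address the host would use toward the internet."""
    # A UDP connect() sends no packets; it only selects a route and source address.
    with socket.socket(socket.AF_INET6, socket.SOCK_DGRAM) as s:
        s.connect(("2001:4860:4860::8888", 53))
        return s.getsockname()[0]

addr = outbound_ipv6_address()
assert not addr.startswith("fe80"), "only a link-local address: no global IPv6 route"
print(f"Outbound IPv6 source address: {addr}")
```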
There are many ways to configure the network, and some details depend on the ISP and data center. Here are some Example Network Configuration Scenarios.
See the Node Provider Networking Troubleshooting Guide for help.
BMC Setup Recommendations
What’s a BMC?
The Baseboard Management Controller (BMC) grants control of the underlying server hardware.
BMCs have notoriously poor security. Vendors may name their implementations differently (Dell -> iDRAC, HPE -> iLO, etc.).
Recommendations
Change the password
BMCs usually ship with a common default password. Log in via crash cart, KVM, or the web interface and change it to something strong.
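If you need a strong replacement, here is a minimal sketch using Python's standard secrets module. It sticks to letters and digits, since some BMC firmwares reject special characters or long passwords:

```python
import secrets
import string

# 20 characters from a 62-symbol alphabet is roughly 119 bits of entropy.
ALPHABET = string.ascii_letters + string.digits
password = "".join(secrets.choice(ALPHABET) for _ in range(20))
print(password)
```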
No broad internet access
It is highly recommended that you do not expose your BMC to the broad internet. This is a basic precaution against attackers.
Options:
- Don’t connect the BMC to the internet at all.
  - Maintenance or node recovery will require physical access in this case.
  - Any BMC activity happens via SSH on the host (unreliable with many mainboard vendors) or via crash cart.
- Connect the BMC to a separate dumb switch, and connect that switch to a Rack Mounted Unit (RMU).
- Connect the BMC to a managed switch and put it on a separate VLAN.
This can get complicated; explaining how to set it up is outside the scope of this document.
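Whichever option you pick, it is worth verifying from a host outside the data center that the BMC cannot be reached. A minimal sketch that probes the usual management TCP ports; the address is a placeholder, and note that IPMI's UDP port 623 needs a separate UDP-capable scan:

```python
import socket

BMC_HOST = "203.0.113.10"  # placeholder: the public address the BMC would have if exposed
PORTS = {22: "SSH", 80: "HTTP", 443: "HTTPS"}

for port, name in PORTS.items():
    try:
        with socket.create_connection((BMC_HOST, port), timeout=3):
            print(f"WARNING: {name} (port {port}) is reachable - the BMC may be exposed")
    except OSError:
        print(f"OK: {name} (port {port}) is not reachable")
```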
Resources:
- StackExchange - Best practice for accessing management port of firewall
- Supermicro Guidance
- Unicom Guidance
What NOT to do
Don’t use external firewalls, packet filters, or rate limiters
Don’t block or interfere with any traffic to the node machines. This can disrupt node machine functionality. Occasionally ports are opened for incoming (and outgoing) connections when new versions of node software are deployed.
What about network security?
IC-OS strictly manages its own software firewalls and rate limiters and is designed with security as a primary principle.
Don't configure the switch to use LACP bonding
This feature is on the roadmap for investigation, but IC nodes do not support LACP bonding at the moment. Configuring it on the switch may cause problems with the nodes.
How DFINITY manages its servers
See the reference DFINITY data center runbook.
Final Checklist
- Did you deploy a 10G switch?
- Is at least one 10G port on each server plugged into the 10G switch?
- Do you have one IPv6 /64 prefix allocated from your ISP?
- Have you allocated at least one IPv4 address for every four nodes?
- Does each node have ~300Mbps of bandwidth?
- Is your BMC inaccessible from the broad internet?
References
- Gen2 Network Requirements - more detailed, possibly out of date.
Appendix 1: Number of IPv4 Addresses Required
| # Nodes | # IPv4 Addresses |
|---------|------------------|
| 1 | 1 |
| 2 | 1 |
| 3 | 1 |
| 4 | 1 |
| 5 | 2 |
| 6 | 2 |
| 7 | 2 |
| 8 | 2 |
| 9 | 3 |
| 10 | 3 |
| 11 | 3 |
| 12 | 3 |
| 13 | 4 |
| 14 | 4 |
| 15 | 4 |
| 16 | 4 |
| 17 | 5 |
| 18 | 5 |
| 19 | 5 |
| 20 | 5 |
| 21 | 6 |
| 22 | 6 |
| 23 | 6 |
| 24 | 6 |
| 25 | 7 |
| 26 | 7 |
| 27 | 7 |
| 28 | 7 |