Node Provider Networking Guide

This guide gives an overview of the networking requirements and walks Node Providers through setting their servers up in a rack with functioning networking.

Configuring networks is not trivial. You should be familiar with IP networking, network equipment and network cabling.

Resources to learn about networking:

DFINITY does not provide support for network configuration.

If you hire technical assistance, keep decentralization and security in mind. Use a local technician you personally know and carefully monitor their work.

Requirements

To join your servers to the Internet Computer (IC) you will need:

  • 10G Network equipment
    • SFP+ or Ethernet
    • Switch(es)
    • Cabling
    • Quantity determined by number of nodes deployed
  • Rackspace in a data center
  • Internet connection
    • Bandwidth
      • ~300Mbps per node
      • Ingress/egress ratio is currently 1:1. We expect egress (serving responses to client queries) to increase faster than ingress in the future.
      • This should guide how many servers you deploy and the ISP connection speed you order
      • E.g. a 1Gbps connection will support up to 3 IC nodes (see the sizing sketch after this list).
    • One IPv6 /64 subnet - each node gets multiple IPv6 addresses
    • One IPv4 address for every 4 nodes in a given data center per node provider (IPv4 addresses cannot be shared between node providers). See Appendix 1 for table.
    • All IP addresses are assigned statically and automatically by IC-OS
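
As a rough sizing aid, here is a minimal sketch in Python. It simply restates the figures above (~300Mbps per node, one IPv4 address per group of four nodes, a single /64 prefix); the function name is illustrative and not part of any IC tooling:

  import math

  MBPS_PER_NODE = 300  # approximate bandwidth per node, per the requirements above

  def estimate_requirements(node_count: int) -> dict:
      """Rough capacity planning for one node provider in one data center."""
      return {
          "bandwidth_mbps": node_count * MBPS_PER_NODE,
          # One IPv4 address per started group of 4 nodes (see Appendix 1)
          "ipv4_addresses": math.ceil(node_count / 4),
          "ipv6_prefixes": 1,  # a single /64 covers all nodes
      }

  # Example: 3 nodes need ~900 Mbps (fits a 1 Gbps uplink) and one IPv4 address
  print(estimate_requirements(3))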

Network Cabling

When racking and stacking your servers, ensure that at least one 10G network port on each server is connected to the 10G switch. SFP+ and Ethernet are supported.

[Image: Supermicro 1124US-TNRP 1U server rear photo diagram]

For example, on a Supermicro 1U server, the 10G ports are clustered together as seen above. Port placement differs between vendors.

Connect the 10G switch to the ISP endpoint - this could be the Top of Rack (ToR) switch or another device.

Network Configuration

Node machines require:

  • The ability to acquire a public static IPv6 address on a /64 subnet
  • An IPv6 gateway to communicate with other nodes on the broad internet
  • Unfiltered internet access


One of every four nodes requires:

  • The ability to acquire a public static IPv4 address
  • An IPv4 gateway to communicate with other nodes on the broad internet
  • Unfiltered internet access


There are many ways to configure the network, and some details depend on the ISP and data center. Here are some Example Network Configuration Scenarios.
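
As a quick scripted check of the requirements above, the sketch below attempts outbound IPv6 and IPv4 TCP connections from the machine it runs on. It is only a rough test of unfiltered outbound connectivity; the destination host and port are examples, not anything IC-specific:

  import socket

  def can_connect(host: str, port: int, family: socket.AddressFamily) -> bool:
      """Return True if a TCP connection over the given address family succeeds."""
      try:
          addr = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)[0][4]
          with socket.socket(family, socket.SOCK_STREAM) as s:
              s.settimeout(5)
              s.connect(addr)
          return True
      except OSError:
          return False

  # Example dual-stack destination; substitute any host you trust.
  print("IPv6 OK:", can_connect("google.com", 443, socket.AF_INET6))
  print("IPv4 OK:", can_connect("google.com", 443, socket.AF_INET))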

See the Node Provider Networking Troubleshooting Guide for help.

BMC Setup Recommendations

What’s a BMC?

The Baseboard Management Controller (BMC) grants control of the underlying server hardware.

BMCs have notoriously poor security. Vendors name their implementations differently (Dell -> iDRAC, HPE -> iLO, etc.).

Recommendations

Change the password

BMCs usually ship with a common default password. Log in via crash cart, KVM, or the web interface and change it to something strong.
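
If the host OS has an in-band IPMI interface, the password can also be changed from the host with ipmitool. Here is a hedged sketch, assuming ipmitool is installed and the IPMI kernel drivers are loaded; user ID 2 is a common admin account but varies by vendor, so check the user list first:

  import getpass
  import subprocess

  # List the BMC user accounts on channel 1 to find the right user ID (vendor-dependent).
  subprocess.run(["ipmitool", "user", "list", "1"], check=True)

  # Set a new password for the chosen user ID (2 is common, but verify with the list above).
  new_password = getpass.getpass("New BMC password: ")
  subprocess.run(["ipmitool", "user", "set", "password", "2", new_password], check=True)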

No broad internet access

It is highly recommended that you do not expose your BMC to the broad internet. This is a precaution against attackers.

Options:

  • Don’t connect the BMC to the internet.
    • Maintenance or node recovery will require physical access in this case.
    • Any BMC activities occur via SSH on the host (unreliable on many mainboard vendors) or via crash cart.
  • Connect the BMC to a separate dumb switch, and the dumb switch connects to a Rack Mounted Unit (RMU).
  • Connect the BMC to a managed switch, and create a separate VLAN

This can get complicated; explaining how to set it up is outside the scope of this document.
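
One simple way to verify isolation is a reachability check run from a machine outside your data center network: every port in the sketch below should time out or be refused if the BMC is properly isolated. The address and port list are examples only:

  import socket

  BMC_PUBLIC_IP = "203.0.113.10"  # example address; use the address your BMC would have if exposed
  PORTS = [80, 443, 623, 5900]    # HTTP, HTTPS, IPMI-over-LAN, remote console (typical BMC ports)

  for port in PORTS:
      try:
          with socket.create_connection((BMC_PUBLIC_IP, port), timeout=5):
              print(f"WARNING: port {port} is reachable from the internet")
      except OSError:
          print(f"port {port} not reachable (good)")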

Resources:

What NOT to do

Don’t use external firewalls, packet filters, or rate limiters

Don’t block or interfere with any traffic to the node machines; doing so can disrupt node functionality. Ports are occasionally opened for incoming (and outgoing) connections when new versions of the node software are deployed.

What about network security?

IC-OS strictly manages its own software firewalls and rate limiters and is designed with security as a primary principle.

Don't configure the switch to use LACP bonding

IC nodes do not currently support LACP bonding; the feature is on the roadmap for investigation. Configuring it on the switch may cause problems with nodes.

How DFINITY manages its servers

See the reference DFINITY data center runbook.

Final Checklist

  • Did you deploy a 10G switch?
  • Is at least one 10G port on each server plugged into the 10G switch?
  • Do you have one IPv6 /64 prefix allocated from your ISP?
  • Do you have at least one IPv4 address for every four nodes allocated?
  • Does each node have ~300Mbps bandwidth?
  • Is your BMC inaccessible from the broad internet?

Appendix 1: Number of IPv4 Addresses Required

# Nodes     # IPv4 Addresses
1 to 4      1
5 to 8      2
9 to 12     3
13 to 16    4
17 to 20    5
21 to 24    6
25 to 28    7
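
The table follows a simple rule: one IPv4 address per started group of four nodes. A small Python check (function name illustrative) that reproduces the rows above:

  import math

  def ipv4_addresses_needed(nodes: int) -> int:
      """One IPv4 address per started group of four nodes."""
      return math.ceil(nodes / 4)

  # Reproduce the table above
  for low in range(1, 29, 4):
      print(f"{low} to {low + 3}: {ipv4_addresses_needed(low + 3)}")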