== Troubleshooting ==

See the [[Node Provider Troubleshooting]] guide for info on troubleshooting failed onboardings, unhealthy nodes, networking, and more.

== Submitting NNS proposals ==
As a part of being a Node Provider, you will likely have to submit NNS proposals after onboarding nodes. The following page describes some of these proposals: [[Node Provider NNS proposals]].
  
=== Adjusting the node allowance in a Data Center ===

To adjust the node allowance for an existing node operator record, use the <code>propose-to-update-node-operator-config</code> subcommand of the <code>ic-admin</code> tool. You should typically not add a new node operator record if you just want to add more nodes to the ''existing'' DC.

Here's a step-by-step guide on how to do this:

1. '''Gather Necessary Information''': Ensure you have the following details:

# <code>NODE_PROVIDER_ID</code>: The principal ID of the node provider under which the node operator record is registered.
# <code>NODE_OPERATOR_ID</code>: The principal ID of the node operator whose allowance you want to change.
# <code>NEURON_ID</code>: The ID of the neuron that will propose this change.
# <code>CURRENTLY_REMAINING_NODE_ALLOWANCE</code>: The number of nodes that the node operator is currently allowed to add to the network without submitting a proposal.
# <code>NEW_NODE_ALLOWANCE</code>: The new number of nodes that the node operator is allowed to add.

Items 1, 2, and 3 should be in your records, and should be the same principals (IDs) used to onboard nodes ''in the given DC''. Item 4 can be obtained from the registry with <code>ic-admin</code>:

 $ ic-admin --nns-url <nowiki>https://ic0.app</nowiki> get-node-operator $NODE_OPERATOR_ID

For example:

 $ ic-admin --nns-url <nowiki>https://ic0.app</nowiki> get-node-operator yl63e-n74ks-fnefm-einyj-kwqot-7nkim-g5rq4-ctn3h-3ee6h-24fe4-uqe
 Fetching the most recent value for key: node_operator_record_yl63e-n74ks-fnefm-einyj-kwqot-7nkim-g5rq4-ctn3h-3ee6h-24fe4-uqe
 Most recent version is 35791. Value:
 NodeOperator { node_operator_principal_id: yl63e-n74ks-fnefm-einyj-kwqot-7nkim-g5rq4-ctn3h-3ee6h-24fe4-uqe, node_allowance: 0, node_provider_principal_id: niw4y-easue-l3qvz-sozsi-tfkvb-cxcx6-pzslg-5dqld-ooudp-hsuui-xae, dc_id: "mu1", rewardable_nodes: {"type0": 0, "type1": 28}, ipv6: None }

In the above example, the <code>CURRENTLY_REMAINING_NODE_ALLOWANCE</code> is 0. So if you want to add 5 more nodes with the same node operator (i.e. in the same DC), you should use <code>NEW_NODE_ALLOWANCE=5</code>. However, if the <code>CURRENTLY_REMAINING_NODE_ALLOWANCE</code> had value 2, you would only need 3 more nodes on top of your currently remaining allowance (2+3=5), so you should use <code>NEW_NODE_ALLOWANCE=3</code> in the proposal.
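If you want to extract just the allowance from that output in a script, a plain <code>grep</code> over the text shown above is enough. This is only a convenience sketch, and it assumes the <code>NodeOperator { ... }</code> output keeps the format printed above.

 # Print only the node_allowance field from the registry record
 $ ic-admin --nns-url <nowiki>https://ic0.app</nowiki> get-node-operator $NODE_OPERATOR_ID | grep -o 'node_allowance: [0-9]*'
 node_allowance: 0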
 
  
2. '''Prepare the Command''': Construct the <code>ic-admin</code> command using the gathered information. Here's an example template:

 $ NEURON_ID=XXXXXXXXXXXXXXXXXXXX
 $ NODE_PROVIDER_PRINCIPAL=xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxx
 $ NODE_OPERATOR_PRINCIPAL=xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxxxx-xxx
 $ NODE_PROVIDER_NAME="My Company"
 $ DC_ID=xxx
 $ NEW_NODE_ALLOWANCE=5
 $ FORUM_POST_URL=<nowiki>https://forum.dfinity.org/...</nowiki>

 $ ./ic-admin \
     --nns-url <nowiki>https://ic0.app</nowiki> \
     -s ~/.config/dfx/identity/node-provider-hotkey/identity.pem \
     propose-to-update-node-operator-config \
     --node-provider-id $NODE_PROVIDER_PRINCIPAL \
     --node-operator-id $NODE_OPERATOR_PRINCIPAL \
     --summary "Node provider '$NODE_PROVIDER_NAME' is adjusting the node allowance to $NEW_NODE_ALLOWANCE for nodes in the $DC_ID data center. Link to the forum post: $FORUM_POST_URL" \
     --proposer $NEURON_ID \
     $NEW_NODE_ALLOWANCE

Replace all placeholder variables above with the actual values before submitting the proposal.

3. '''Dry Run (strongly recommended)''': To preview the proposal without actually submitting it, add the <code>--dry-run</code> flag to the above command. This is useful for checking the proposal payload and ensuring everything is correct before the actual submission.
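For example, reusing the placeholder variables from step 2 (a sketch only; depending on your <code>ic-admin</code> version, <code>--dry-run</code> may instead need to be passed before the subcommand):

 $ ./ic-admin \
     --nns-url <nowiki>https://ic0.app</nowiki> \
     -s ~/.config/dfx/identity/node-provider-hotkey/identity.pem \
     propose-to-update-node-operator-config \
     --node-provider-id $NODE_PROVIDER_PRINCIPAL \
     --node-operator-id $NODE_OPERATOR_PRINCIPAL \
     --summary "..." \
     --proposer $NEURON_ID \
     --dry-run \
     $NEW_NODE_ALLOWANCE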
4. '''Execute the Command''': Once you are sure about the command and the details, execute it in your terminal. This will submit a proposal to update the node allowance in the node operator's configuration.

5. '''Monitor and Voting''': After submitting the proposal, it will go through a voting process by the governance system. You should monitor this to see if the proposal gets accepted or rejected.

6. '''Verification (Post-Approval)''': If the proposal is approved, you may want to verify that the node allowance has been updated as expected. This might involve querying the node operator's record with <code>get-node-operator</code> as described above.

Note that the exact command and options will vary based on your specific configuration and requirements. Make sure to replace placeholders with actual values relevant to your setup.
To see all available options, you can run:

 $ ic-admin --nns-url <nowiki>https://ic0.app</nowiki> propose-to-update-node-operator-config --help

== Monitoring ==

You are expected to regularly monitor the health of your nodes. Node health status is available on the public dashboard. Example: [https://dashboard.internetcomputer.org/node/235hh-hmjhq-dejel-3q5oi-pdz66-dygbp-yi2sy-zmuiq-rj7r7-65hue-wae node status].

You can also view your node's [https://internetcomputer.org/docs/current/references/node-providers/node-metrics#manually-obtaining-metrics public health metrics] and monitor it with the [https://internetcomputer.org/docs/current/references/node-providers/node-metrics IC observability stack].
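If you prefer to poll a node's health from a script rather than the dashboard UI, something along the lines of the sketch below can work. Note that the <code>ic-api.internetcomputer.org</code> endpoint shown here is an assumption based on the public dashboard rather than a documented contract, so verify the URL and the response fields before relying on it; <code>jq</code> is only used for pretty-printing.

 # Hypothetical check of a node's status via the public dashboard API (verify the endpoint first)
 $ NODE_ID=235hh-hmjhq-dejel-3q5oi-pdz66-dygbp-yi2sy-zmuiq-rj7r7-65hue-wae
 $ curl -s <nowiki>https://ic-api.internetcomputer.org/api/v3/nodes/$NODE_ID</nowiki> | jq .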
=== Community Tools and Resources ===

Several node providers have generously shared tools to facilitate monitoring node health. These tools can provide notifications in case of node issues.

==== Aviate Labs Node Monitor ====

* '''Turnkey Solution''': Receive email alerts for unhealthy nodes.
* '''Link''': [https://www.aviatelabs.co/node-monitor AviateLabs Node Monitor]

==== DIY Node Monitoring ====

* '''GitHub Repository''': Run your own node monitoring system.
* '''Link''': [https://github.com/aviate-labs/node-monitor Aviate Labs GitHub]

==== Prometheus Exporter for Node Status ====

* '''GitHub Repository''': A tool for exporting node status to a Prometheus-compatible format.
* '''Link''': [https://github.com/virtualhive/ic-node-status-prometheus-exporter IC Node Status Prometheus Exporter]

== Common maintenance tasks ==

* [[Removing a Node From the Registry]]
* [[Adding additional node machines to existing Node Allowance]]
* [[Updating your node's IPv4 and domain name]]
* [[Changing IPv6 addresses of nodes]]
* [[Moving a node from one DC to another]]
* [[iDRAC access and TSR logs]]
* [[Checking node CPU and memory speed]]
* For changing your Node Provider or DC principal, please refer to [[Node Provider NNS proposals]]
* [[Updating Firmware]]
== Permitted tools ==

For security and confidentiality reasons, other tools are not allowed to run on the same machine in parallel with the replica. If you need to troubleshoot an issue, it is recommended to either boot the machine from a USB drive with a live Linux distribution (e.g. [https://ubuntu.com/tutorials/try-ubuntu-before-you-install#3-boot-from-usb-flash-drive Ubuntu]) or to debug from an auxiliary machine in the same rack over which you have complete control, as described in [[Troubleshooting Unhealthy Nodes#Setting Up an Auxiliary Machine for Network Diagnostics|Unhealthy Nodes#Setting Up an Auxiliary Machine for Network Diagnostics]].
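For instance, a live Ubuntu USB stick can be written from another Linux machine with <code>dd</code>. This is a generic sketch: the ISO filename and the target device <code>/dev/sdX</code> are placeholders, and <code>dd</code> overwrites the target device, so double-check it with <code>lsblk</code> first.

 # Identify the USB stick (e.g. /dev/sdX); make absolutely sure it is the right device
 $ lsblk
 # Write the live ISO to the stick (destructive for the target device)
 $ sudo dd if=ubuntu-22.04-live.iso of=/dev/sdX bs=4M status=progress conv=fsync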
  
== Scheduled data center outages ==

When your data center notifies you of a scheduled outage, you must:

* Notify DFINITY on the [[Node Provider Matrix channel]]
* Make sure your nodes return to one of the healthy statuses when the outage is resolved:
** Active in Subnet - The node is healthy and actively functioning within a subnet.
** Awaiting Subnet - The node is operational and prepared to join a subnet when necessary.
* If a node is degraded at first, give it some time in case it needs to catch up, but make sure that it does return to one of the two healthy statuses.

== Node rewards based on useful work ==

The Internet Computer protocol can tolerate up to 1/3 of nodes misbehaving. There is an ongoing effort to automatically issue node rewards based on useful work, and to automatically reduce node remuneration in case nodes misbehave. This will provide a financial incentive for honest behavior. Please follow the forum and the Matrix channel to stay informed about these activities.

In the meantime, the recommendation is to prepare by making sure that your nodes are online and healthy at all times; otherwise you risk penalties even before the automatic node rewards based on useful work become active.
 
  
== Subnet recovery ==

In case subnet recovery is needed, we may have to reach out to you for assistance. Please make sure you closely follow activities in the Matrix channel, and enable notifications on new messages, especially direct mentions.

== General best practices ==

# Keep a separate machine in the same rack with appropriate tools for network diagnostics and troubleshooting
# Engage with the node provider community for support and to share effective troubleshooting techniques

=== Setting Up an Auxiliary Machine for Network Diagnostics ===

Robust Internet connectivity is essential. Without access to internal node logs and metrics, troubleshooting requires alternative strategies, including the use of an auxiliary machine within the same rack. Here's a brief outline for setting up an auxiliary machine in the same rack while following security best practices (see the diagnostics sketch after this list):

# Hardware Setup:
#* Choose a server with sufficient resources to run diagnostic tools without impacting its performance. There is no need to follow the gen1/gen2 hardware requirements for this server (since it will not be joining the IC network), but make sure it is performant enough to run network tests.
#* Ensure that physical security measures are in place to prevent unauthorized access.
# Operating System and Software:
#* Install a secure operating system, like a minimal installation of Linux (we prefer Ubuntu 22.04), which reduces the attack surface.
#* Keep the system updated with the latest security patches and firmware updates.
# Network Configuration:
#* Configure the machine with an IPv6 address in the same range as the IC nodes for accurate testing.
#* Set up a restrictive firewall on the machine to allow ''only the necessary'' inbound and outbound traffic. Consider allowing Internet access for this machine only during troubleshooting sessions and keeping the machine behind a VPN at other times.
# Diagnostic Tools:
#* Install network diagnostic tools such as <code>ping</code>, <code>traceroute</code>, <code>nmap</code>, <code>tcpdump</code>, and <code>iperf</code>.
#* Configure monitoring tools to simulate node activities and track responsiveness.
# Security Measures:
#* Use strong, unique passwords for all accounts and change them regularly; preferably, do not use passwords at all and use key-based access instead.
#* Implement key-based SSH authentication and disable root login over SSH.
#* Regularly review logs for any unusual activities that might indicate a security breach.
# Maintenance and Updates:
#* Regularly update all software to the latest versions.
#* Periodically test your network diagnostic tools to ensure they are functioning as expected.
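The exact checks depend on your setup, but a minimal diagnostic session from the auxiliary machine might look like the sketch below. The address <code>2001:db8::1</code> is a documentation placeholder standing in for one of your node's IPv6 addresses, <code>eth0</code> is a placeholder interface name, port 443 is used here only as an example of a node-facing port, and the bandwidth test assumes <code>iperf3</code> with a server running on a second machine you control.

 # Basic reachability and path checks towards a node (placeholder address)
 $ ping -6 -c 5 2001:db8::1
 $ traceroute -6 2001:db8::1

 # Check whether a node-facing port (e.g. HTTPS) is reachable
 $ nmap -6 -p 443 2001:db8::1

 # Capture traffic to/from the node while reproducing an issue
 $ sudo tcpdump -i eth0 -n host 2001:db8::1 -w capture.pcap

 # Rough bandwidth test against another machine you control running "iperf3 -s"
 $ iperf3 -6 -c 2001:db8::2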
  
== Peer-support and bug reports / resolution: Node Provider Matrix Channel ==

Node Providers are encouraged to join the dedicated [[Node Provider Matrix channel]]. This platform can be used for discussing maintenance-related queries, sharing insights, reporting issues, and searching for previous resolutions.

Please consult the Matrix channel for troubleshooting issues '''<u>only after consulting the [[Node Provider Troubleshooting]] guide</u>'''.

'''Communication Guidelines on the Matrix Channel'''
 
  
As a Node Provider, ensure your notifications are enabled to receive new messages promptly. Your input or intervention might be crucial, especially in urgent situations.

It is recommended to add the node provider name to your alias (handle) on the communication platform, to facilitate communication and enable others to quickly and easily mention you.

When reporting an issue, include screenshots of the node status from the public dashboard for reference and troubleshooting, and follow [[Unhealthy Nodes]] and [[Node Provider Troubleshooting]] first.