<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.internetcomputer.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sb</id>
	<title>Internet Computer Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.internetcomputer.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sb"/>
	<link rel="alternate" type="text/html" href="https://wiki.internetcomputer.org/wiki/Special:Contributions/Sb"/>
	<updated>2026-04-09T15:39:36Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.17</generator>
	<entry>
		<id>https://wiki.internetcomputer.org/w/index.php?title=IC_Smart_Contract_Memory&amp;diff=6916</id>
		<title>IC Smart Contract Memory</title>
		<link rel="alternate" type="text/html" href="https://wiki.internetcomputer.org/w/index.php?title=IC_Smart_Contract_Memory&amp;diff=6916"/>
		<updated>2023-12-18T18:26:35Z</updated>

		<summary type="html">&lt;p&gt;Sb: Correct number of GiB of stable storage&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==Overall Architecture==&lt;br /&gt;
[[File:Screen Shot 2022-12-01 at 14.41.33.png|512px|thumb|Figure 1. The two memories that can be accessed by the canister smart contracts.]]&lt;br /&gt;
Canister smart contracts running on the Internet Computer (IC) store data just like most other programs would. To this end, the IC offers developers two types of memory where data can be stored, as depicted in Figure 1. The first is the regular &#039;&#039;&#039;heap memory&#039;&#039;&#039;, exposed as the WebAssembly virtual machine heap. It should be used as scratch, temporary memory, since it is cleared on every canister upgrade. The second type of memory is the &#039;&#039;&#039;stable memory&#039;&#039;&#039;, a larger memory used for permanent data storage.&lt;br /&gt;
&lt;br /&gt;
==Orthogonal Persistence==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;rust&amp;quot;&amp;gt;&lt;br /&gt;
use ic_cdk_macros::{query, update};&lt;br /&gt;
use std::{cell::RefCell, collections::HashMap};&lt;br /&gt;
&lt;br /&gt;
thread_local! {&lt;br /&gt;
    static STORE: RefCell&amp;lt;HashMap&amp;lt;String, u64&amp;gt;&amp;gt; = RefCell::default();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#[update]&lt;br /&gt;
fn insert(key: String, value: u64) {&lt;br /&gt;
    STORE.with(|store| store.borrow_mut().insert(key, value));&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#[query]&lt;br /&gt;
fn lookup(key: String) -&amp;gt; u64 {&lt;br /&gt;
    STORE.with(|store| *store.borrow().get(&amp;amp;key).unwrap_or(&amp;amp;0))&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The IC offers orthogonal persistence, the illusion that a program runs forever: the heap of each canister is automatically preserved and restored the next time the canister is called. To make this possible, the execution environment must efficiently determine which memory pages were dirtied during message execution, so that the modified pages can be tracked and periodically persisted to disk. The listing above shows an example key-value store that illustrates how easy orthogonal persistence is to use. The key-value store is backed by a plain Rust HashMap stored on the heap by means of a thread-local variable. A RefCell provides interior mutability; the example would also work without it, but mutating the thread-local variable would then be unsafe, as the Rust compiler could not guarantee exclusive access to it.&lt;br /&gt;
&lt;br /&gt;
==Heap Memory==&lt;br /&gt;
Canisters running on the IC are programmed in either Rust or Motoko and compiled down to WebAssembly (Wasm). All variables and data structures defined in these higher-level languages are stored in the Wasm heap, and all accesses to them are translated into Wasm memory operations (e.g., load, store, copy, grow). The Wasm heap is a 4GiB, 32-bit address space that backs the Wasm program. Because data structure layouts and the Wasm (and high-level language) compilers can change, the heap should not be used as permanent memory but rather as (faster) scratch, temporary memory: during a canister upgrade, the heap layout (i.e., data structure layouts) might change, which could leave the canister in an unusable state. In between upgrades, however, the heap memory is persisted thanks to orthogonal persistence.&lt;br /&gt;
&lt;br /&gt;
==Stable Memory==&lt;br /&gt;
Next to the heap memory, canister developers can make use of stable memory. This is an additional, 64-bit addressable memory, currently 96GiB in size, with plans to increase it further in the future. Programs written in either Rust or Motoko must use stable memory explicitly through an API that offers primitives for copying memory back and forth between the Wasm heap and stable memory. An alternative to using this lower-level API directly is the stable structures API, which offers developers a collection of Rust data structures (e.g., B-trees) that operate directly in stable memory. Besides accessing stable memory through stable data structures, developers often use it to persist heap state across canister upgrades: heap memory (or individual data structures) is serialized and saved to stable memory before the upgrade, and the opposite operations (copying back and deserializing) are applied once the upgrade is done.&lt;br /&gt;
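The copy primitives of the low-level API can be sketched as follows. This is a minimal, illustrative model that replaces the real system API with an in-memory byte array; the type and function names are assumptions of this sketch, not the actual API surface.&lt;br /&gt;

```rust
// Illustrative model of the low-level stable memory API: a growable,
// page-granular byte array with explicit copy primitives. On the IC,
// canisters would call the real API instead; all names and sizes here
// are assumptions of the sketch.
const WASM_PAGE_SIZE: usize = 65536; // stable memory grows in 64KiB pages

struct StableMemory {
    data: Vec<u8>,
}

impl StableMemory {
    fn new() -> Self {
        StableMemory { data: Vec::new() }
    }

    /// Grow stable memory by `new_pages` pages; returns the previous size in pages.
    fn grow(&mut self, new_pages: usize) -> usize {
        let old_pages = self.data.len() / WASM_PAGE_SIZE;
        self.data.resize(self.data.len() + new_pages * WASM_PAGE_SIZE, 0);
        old_pages
    }

    /// Copy bytes from the heap into stable memory at `offset`.
    fn write(&mut self, offset: usize, src: &[u8]) {
        self.data[offset..offset + src.len()].copy_from_slice(src);
    }

    /// Copy bytes from stable memory at `offset` back onto the heap.
    fn read(&self, offset: usize, dst: &mut [u8]) {
        dst.copy_from_slice(&self.data[offset..offset + dst.len()]);
    }
}
```

The upgrade pattern described above would serialize heap data, `write` it to stable memory in the pre-upgrade hook, and `read` and deserialize it in the post-upgrade hook.&lt;br /&gt;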
&lt;br /&gt;
==Behind the scenes: Implementation==&lt;br /&gt;
To serve memory contents to canister smart contracts, the IC software stack is designed as follows. Every N (consensus) rounds, the canister state (heap, stable memory, and other data structures) is checkpointed to disk; the resulting on-disk state is called a checkpoint file. Right after a checkpoint, all of a canister&#039;s memory resides in the checkpoint file, so any memory the canister requests is served from there. Memory modifications (i.e., dirtied pages, in operating-systems terms) are saved in a data structure called the heap delta. The following paragraphs describe how this design enables orthogonal persistence.&lt;br /&gt;
&lt;br /&gt;
[[File:Screen_Shot_2022-12-01_at_14.30.58.png|512px|thumb|Figure 2. The memory faulting architecture, which encompasses the checkpoint file and the heap delta.]]&lt;br /&gt;
&lt;br /&gt;
Any implementation of orthogonal persistence has to solve two problems: (1) how to map the persisted memory into the Wasm memory; and (2) how to keep track of all modifications of the Wasm memory so that they can be persisted later. Page protection is used to solve both problems. The entire address space of the Wasm memory is divided into 4KiB pages, and all pages are initially marked as inaccessible using the page protection flags of the operating system.&lt;br /&gt;
&lt;br /&gt;
The first memory access triggers a page fault, pauses the execution, and invokes a signal handler. The signal handler then fetches the corresponding page from persisted memory and marks the page as read-only. Subsequent read accesses to that page will succeed without any help from the signal handler. The first write access will trigger another page fault, however, and allow the signal handler to remember the page as modified and mark the page as readable and writable. All subsequent accesses to that page (both r/w) will succeed without invoking the signal handler.&lt;br /&gt;
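The page-state transitions described above can be sketched as a small state machine. This is an illustrative model only: the real implementation flips operating-system page protection flags inside a signal handler, while this sketch merely tracks the states and the set of dirty pages; all names are hypothetical.&lt;br /&gt;

```rust
// Model of the per-page protection states described above. The real
// implementation changes page protection flags (e.g., via mprotect) inside
// a signal handler; this sketch only tracks the resulting transitions.
#[derive(Clone, Copy)]
enum PageState {
    Inaccessible, // initial state: any access faults
    ReadOnly,     // after first read: reads succeed, writes fault
    ReadWrite,    // after first write: all accesses succeed
}

struct PageTracker {
    states: Vec<PageState>,
    dirty: Vec<usize>, // pages remembered as modified
}

impl PageTracker {
    fn new(num_pages: usize) -> Self {
        PageTracker {
            states: vec![PageState::Inaccessible; num_pages],
            dirty: Vec::new(),
        }
    }

    /// Simulate an access; returns true if it would invoke the signal handler.
    fn access(&mut self, page: usize, is_write: bool) -> bool {
        match (self.states[page], is_write) {
            (PageState::Inaccessible, false) => {
                // First read: fetch the page from persisted memory, mark read-only.
                self.states[page] = PageState::ReadOnly;
                true
            }
            (PageState::Inaccessible, true) | (PageState::ReadOnly, true) => {
                // First write: remember the page as dirty, mark it read-write.
                self.dirty.push(page);
                self.states[page] = PageState::ReadWrite;
                true
            }
            _ => false, // no fault, no handler invocation
        }
    }
}
```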
&lt;br /&gt;
Invoking a signal handler and changing page protection flags are expensive operations. Messages that read or write large chunks of memory cause a storm of such operations, degrading performance of the whole system. This can cause severe slowdowns under heavy load.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Versioning: Heap Delta and Checkpoint Files==&lt;br /&gt;
&lt;br /&gt;
A canister executes update messages sequentially, one by one. Queries, in contrast, can run concurrently with each other and with update messages. This support for concurrent execution makes the memory implementation much more challenging. Imagine that a canister is executing an update message at (blockchain) block height H; at the same time, a long-running query that started earlier, at block height H-K, may still be executing. The same canister can therefore have multiple versions of its memory active at the same time, which is what enables the parallel execution of queries and update calls.&lt;br /&gt;
&lt;br /&gt;
A naive solution to this problem would be to copy the entire memory after each update message, but that would be slow and use too much storage. Our implementation therefore takes a different route: it keeps track of the modified memory pages in a persistent tree data structure called Heap Delta, based on Fast Mergeable Integer Maps. At a regular interval (i.e., every N rounds), a checkpoint event commits the modified pages into the checkpoint file, after cloning the file to preserve its previous version. Figure 2 shows how the Wasm memory is constructed from Heap Delta and the checkpoint file.&lt;br /&gt;
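The way a page of Wasm memory is resolved from Heap Delta and the checkpoint file can be sketched as follows. The real heap delta is a persistent (immutable) integer map so that several versions of the memory can coexist cheaply; a plain HashMap is used here purely for illustration, and all names are hypothetical.&lt;br /&gt;

```rust
use std::collections::HashMap;

const PAGE_SIZE: usize = 4096;

// Sketch of how a page of Wasm memory is resolved: pages dirtied since the
// last checkpoint live in the heap delta, everything else is served from
// the checkpoint file.
struct CanisterMemory {
    checkpoint: Vec<[u8; PAGE_SIZE]>,            // pages in the checkpoint file
    heap_delta: HashMap<usize, [u8; PAGE_SIZE]>, // pages dirtied since the checkpoint
}

impl CanisterMemory {
    /// Resolve a page: prefer the heap delta, fall back to the checkpoint file.
    fn get_page(&self, index: usize) -> &[u8; PAGE_SIZE] {
        self.heap_delta.get(&index).unwrap_or(&self.checkpoint[index])
    }

    /// At a checkpoint event, commit the modified pages into the checkpoint
    /// (the real system first clones the file to preserve the old version).
    fn commit_checkpoint(&mut self) {
        for (index, page) in self.heap_delta.drain() {
            self.checkpoint[index] = page;
        }
    }
}
```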
&lt;br /&gt;
===Memory-related performance optimizations===&lt;br /&gt;
&#039;&#039;&#039;Optimization 1:&#039;&#039;&#039; Memory mapping the checkpoint file pages.&lt;br /&gt;
This reduces memory usage by sharing pages between multiple calls executing concurrently. It also improves performance by avoiding page copying on read accesses. The number of signal handler invocations, however, remains the same as before, so the issue of signal storms is still open after this optimization.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Optimization 2:&#039;&#039;&#039; Page Tracking in Queries&lt;br /&gt;
All pages dirtied by a query are discarded after execution, which means the signal handler does not have to keep track of modified pages for query calls. Unlike update calls, queries therefore use a fast path that marks pages as readable and writable on the first access. This low-hanging-fruit optimization made queries 1.5x-2x faster on average.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Optimization 3:&#039;&#039;&#039; Amortized Prefetching of Pages&lt;br /&gt;
The idea behind the most impactful optimization is simple: to reduce the number of page faults, more work is needed per signal handler invocation. Instead of fetching a single page at a time, the signal handler tries to speculatively prefetch pages. The right balance is required here because prefetching too many pages may degrade performance of small messages that access only a few pages. The optimization computes the largest contiguous range of accessed pages immediately preceding the current page. It uses the size of the range as a hint for prefetching more pages. This way the cost of prefetching is amortized by previously accessed pages. As a result, the optimization reduces the number of page faults in memory intensive messages by an order of magnitude.&lt;br /&gt;
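The range-based hint can be sketched as follows. This is a simplified model of the heuristic described above; the exact prefetch policy of the real implementation may differ, and the function name is an assumption.&lt;br /&gt;

```rust
// Sketch of the prefetching heuristic: on a page fault, the number of pages
// to prefetch grows with the length of the contiguous range of already
// accessed pages immediately preceding the faulting page, so the prefetch
// cost is amortized over previously accessed pages.
fn prefetch_hint(accessed: &[bool], faulting_page: usize) -> usize {
    // Measure the contiguous run of accessed pages directly before the fault.
    let mut run = 0;
    while run < faulting_page && accessed[faulting_page - 1 - run] {
        run += 1;
    }
    // Always fetch the faulting page itself; speculate proportionally to
    // the run length. A cold (isolated) access fetches a single page, so
    // small messages touching few pages are not penalized.
    1 + run
}
```

A sequential scan thus doubles the amount of memory materialized per fault as the run grows, reducing the number of page faults in memory-intensive messages.&lt;br /&gt;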
&lt;br /&gt;
A downside of this approach is that, instead of relying on write-access tracking via signal handlers, the content of each prefetched page needs to be compared with its previous content after message execution to determine whether the page was modified.&lt;br /&gt;
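That comparison step can be sketched as follows: since speculatively prefetched pages are writable without faulting, there is no write fault to observe, and modified pages must be found by diffing against a snapshot. Names here are illustrative.&lt;br /&gt;

```rust
const PAGE_SIZE: usize = 4096;

// Sketch of comparison-based dirty-page detection: after message execution,
// each prefetched page is compared against a snapshot of its previous
// content; only pages whose bytes actually changed are reported as dirty.
fn modified_pages(before: &[[u8; PAGE_SIZE]], after: &[[u8; PAGE_SIZE]]) -> Vec<usize> {
    before
        .iter()
        .zip(after.iter())
        .enumerate()
        .filter(|(_, (b, a))| b != a)
        .map(|(i, _)| i)
        .collect()
}
```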
&lt;br /&gt;
These optimizations bring substantial benefits for the performance of the memory faulting component of the execution environment. The optimizations allow the IC to improve its throughput for memory-intensive workloads.&lt;br /&gt;
&lt;br /&gt;
==See Also==&lt;br /&gt;
* &#039;&#039;&#039;The Internet Computer project website (hosted on the IC): [https://internetcomputer.org/ internetcomputer.org]&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Sb</name></author>
	</entry>
	<entry>
		<id>https://wiki.internetcomputer.org/w/index.php?title=Node_Provider_Machine_Hardware_Guide&amp;diff=5147</id>
		<title>Node Provider Machine Hardware Guide</title>
		<link rel="alternate" type="text/html" href="https://wiki.internetcomputer.org/w/index.php?title=Node_Provider_Machine_Hardware_Guide&amp;diff=5147"/>
		<updated>2023-04-24T16:50:49Z</updated>

		<summary type="html">&lt;p&gt;Sb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What are the Hardware Requirements for Node Machines? ==&lt;br /&gt;
Node providers operate one or more node machines that run in the IC network. The Gen1 hardware requirements were used by node providers to set up node machines for the Genesis launch.&lt;br /&gt;
&lt;br /&gt;
The Gen2 hardware requirements have been defined for the further growth of the IC network. The specifications for the Gen2 node machines are generic (instead of vendor-specific) and support VM memory encryption and attestation, which will be needed by future features of the IC.&lt;br /&gt;
&lt;br /&gt;
Below are the up-to-date specifications for both the Gen2 node machines and Gen1 node machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Gen 2 Node Machine ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Generic specification Gen2 Node Machine ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| Dual Socket AMD EPYC 7313 Milan 16C/32T 3.0 GHz, 32K/512K/128M&lt;br /&gt;
optionally 7343, 7373, 73F3&lt;br /&gt;
|-&lt;br /&gt;
| 16x 32GB RDIMM, 3200MT/s, Dual Rank&lt;br /&gt;
|-&lt;br /&gt;
| 5x 6.4TB NVMe Mixed Mode (DWPD &amp;gt;= 3)&lt;br /&gt;
|-&lt;br /&gt;
| Dual Port 10G SFP or BASE-T&lt;br /&gt;
|-&lt;br /&gt;
| TPM 2.0&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Note the NVMe drives should be recognized by Linux as NVMe (i.e., show up as `/dev/nvme*` devices). SATA backplanes or any other hardware which prevents this should not be used.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Validated Configurations ===&lt;br /&gt;
DFINITY has [https://forum.dfinity.org/t/draft-motion-proposal-new-hardware-specification-and-remuneration-for-ic-nodes/14202/14?u=garym validated] the following Gen2 hardware configurations.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
==== Validated configuration: Dell ====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| 2 || AMD EPYC 7343 3.2GHz, 16C/32T, 128M Cache (190W) &lt;br /&gt;
|-&lt;br /&gt;
| 16 || 32GB RDIMM, 3200MT/s, Dual Rank 16Gb BASE x8&lt;br /&gt;
|-&lt;br /&gt;
| 5 || 6.4TB Enterprise NVMe Mixed Use AG Drive U.2 Gen4 with carrier&lt;br /&gt;
|-&lt;br /&gt;
| 1 || PowerEdge R6525 Motherboard, with 2 x 1Gb Onboard LOM (BCM5720)MLK V2&lt;br /&gt;
|-&lt;br /&gt;
| 2 || Dual, Hot-plug, Redundant Power Supply (1+1) 1100W, Mixed Mode Titanium&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Intel X710 Dual Port 10GbE SFP+, OCP NIC 3.0&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Trusted Platform Module 2.0 V3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Validated configuration: ASUS ====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| 2 || AMD EPYC 7313 (3,00 GHz, 16-Core, 128 MB)&lt;br /&gt;
|-&lt;br /&gt;
| 16 || 32GB ECC Reg ATP DDR4 3200 RAM &lt;br /&gt;
|-&lt;br /&gt;
| 5 || 6.4 TB NVMe Kioxia SSD 3D-NAND TLC U.3 (Kioxia CM6-V)&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Asus Mainboard KMPP-D32 Series (without OCP 3.0, without Pike)&lt;br /&gt;
|-&lt;br /&gt;
| 2 || 1600 Watt redundant PSU&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Broadcom 25 Gigabit P225P SFP28 Dual Port Network Card&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
==== Validated configurations: Supermicro &amp;amp; Gigabyte ====&lt;br /&gt;
Validation is being re-run on Supermicro and Gigabyte machines which match the spec. This section will be updated when those results are ready.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Gen 1 Node Machine ==&lt;br /&gt;
=== Node Machine Type 1 - Dell ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| 1 || R6525&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Chassis - Supports Up to 10 NVMe drives, 12 drives total&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Dual 1 GB on Motherboard&lt;br /&gt;
|-&lt;br /&gt;
| 3 || Low Profile PCIe Slots&lt;br /&gt;
|-&lt;br /&gt;
| - || 3 Year Basic NBD Support&lt;br /&gt;
|-&lt;br /&gt;
| - || iDrac Enterprise&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Dual Port 10GbE Base-T Adapter Broadcom, PCIe Low Profile&lt;br /&gt;
|-&lt;br /&gt;
| 10 || 3.2TB NVMe, Mixed Use, 2.5&amp;quot; with Carrier&lt;br /&gt;
|-&lt;br /&gt;
| 16 || 32GB RDIMM (3200MT/s)&lt;br /&gt;
|-&lt;br /&gt;
| 2 || AMD 7302 3GHz, 16C/32T, 128M, 155W, 3200&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Single Power Supply (800W)&lt;br /&gt;
|-&lt;br /&gt;
| 1 || C13-C14, 3M, 125V 15A Power Cord&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Node Machine Type 1 - SuperMicro ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| 1 || AS-1023US-TR4&lt;br /&gt;
|-&lt;br /&gt;
| 2 || Rome 7302 DP/UP 16C/32T 3.0&lt;br /&gt;
|-&lt;br /&gt;
| 16 || 32GB DDR4-3200 2Rx4 ECC REG DIMM&lt;br /&gt;
|-&lt;br /&gt;
| 5 || Samsung PM983 3.2TB NVMe PCIE/SATA Hybrid M.2 &amp;amp; 1 PCIE&lt;br /&gt;
|-&lt;br /&gt;
| 2 || 800W Power Supply&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Std LP 2-port 10G RJ45, Intel x540&lt;br /&gt;
|-&lt;br /&gt;
| 5 || Micron 5300 PRO 7.4TB, SATA, 2.5&amp;quot;, 3D TLC, .6DWPD (with Caddie)&lt;br /&gt;
|-&lt;br /&gt;
| 1 || C13/C14 13A Power Cord&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
=== Node Machine Type 2 - Dell ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| 1 || R6515&lt;br /&gt;
|-&lt;br /&gt;
| 1 || 3.5&amp;quot; Chassis with up to 4 Hot-Plug Hard Drives and OS RAID&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Dual 1 Gb on Motherboard&lt;br /&gt;
|-&lt;br /&gt;
| 3 || Low Profile PCIe Slots&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Standard Fan&lt;br /&gt;
|-&lt;br /&gt;
| - || 3 Year Basic NBD Support&lt;br /&gt;
|-&lt;br /&gt;
| - || iDrac Enterprise&lt;br /&gt;
|-&lt;br /&gt;
| 2 || Dual Port 10GbE Base-T Adapter Broadcom, PCIe Low Profile&lt;br /&gt;
|-&lt;br /&gt;
| 2 || 480GB SSD SATA Mix Use 6Gbps 512 2.5in Hot-Plug AG Drive, 3.5in&lt;br /&gt;
|-&lt;br /&gt;
| 4 || 8GB RDIMM, 3200 MT/s, Single Rank&lt;br /&gt;
|-&lt;br /&gt;
| 1 || AMD EPYC 7232P 3.10GHz, 8C/16T, 64M Cache (120W) DDR4-3200&lt;br /&gt;
|-&lt;br /&gt;
| 1 || Dual Hot-Plug Redundant Power Supply (1+1), 550W&lt;br /&gt;
|-&lt;br /&gt;
| 2 || Jumper Cord - C13/C14, .6M, 250V, 13A&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sb</name></author>
	</entry>
	<entry>
		<id>https://wiki.internetcomputer.org/w/index.php?title=Internet_Computer_performance&amp;diff=425</id>
		<title>Internet Computer performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.internetcomputer.org/w/index.php?title=Internet_Computer_performance&amp;diff=425"/>
		<updated>2021-11-10T16:42:26Z</updated>

		<summary type="html">&lt;p&gt;Sb: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This post describes the DFINITY Foundation&#039;s performance evaluation of the Internet Computer. We will periodically update the numbers on this page to reflect performance improvements realized over time.&lt;br /&gt;
&lt;br /&gt;
The scalability of the Internet Computer comes from partitioning the IC into subnetworks, aka subnets. Subnets process update calls from ingress messages independently of other subnets. The IC can scale up by adding more subnets, at the cost of more network traffic (as applications then potentially need to communicate across subnets). In its current form, the IC should be able to scale out to hundreds of subnets.&lt;br /&gt;
&lt;br /&gt;
Query calls are read-only calls that are processed locally on each node. Scalability comes from adding more nodes, either to an existing subnet (at the cost of making consensus, and thus update calls, more expensive) or as a new subnet.&lt;br /&gt;
&lt;br /&gt;
== Test setup ==&lt;br /&gt;
&lt;br /&gt;
We run all of our experiments concurrently against all subnets other than the NNS and some of the most utilized application subnets, to avoid disturbing active IC users.&lt;br /&gt;
We send load against those subnets directly, without going through the boundary nodes. Boundary nodes apply additional rate limiting that is currently set slightly more conservatively than what the IC can handle, so running against them is unsuitable for performance evaluation.&lt;br /&gt;
We target all nodes in every subnet concurrently, much as the boundary nodes would if we used them.&lt;br /&gt;
&lt;br /&gt;
We have installed one counter canister in every subnet. This counter canister is essentially a no-op canister: it only maintains a counter, which can be queried via a query call and incremented via an update call. The counter value does not use orthogonal persistence, so the overhead for the execution layer of the IC is minimal. Stressing the counter canister can thus be seen as a way to determine the system overhead, or baseline performance.&lt;br /&gt;
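The counter canister can be sketched as follows. This is a hypothetical reconstruction in the style of the Rust key-value example elsewhere on this wiki, not the actual benchmark code; on the IC, the two functions would carry the #[query] and #[update] attributes.&lt;br /&gt;

```rust
use std::cell::Cell;

// Minimal model of the benchmark's counter canister: one counter that can
// be read (a query call) or incremented (an update call). Function names
// are illustrative; on the IC, get() would be a #[query] and increment()
// an #[update], as in the key-value store example.
thread_local! {
    static COUNTER: Cell<u64> = Cell::new(0);
}

// Query: read-only, processed locally on each node without consensus.
fn get() -> u64 {
    COUNTER.with(|c| c.get())
}

// Update: goes through consensus and mutates the canister state.
fn increment() -> u64 {
    COUNTER.with(|c| {
        c.set(c.get() + 1);
        c.get()
    })
}
```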
&lt;br /&gt;
&lt;br /&gt;
== Measurements ==&lt;br /&gt;
=== Update calls ===&lt;br /&gt;
&lt;br /&gt;
The Internet Computer can currently sustain more than &#039;&#039;&#039;11,000 updates/second&#039;&#039;&#039; for a period of four minutes, with peaks over &#039;&#039;&#039;11,500 updates/second&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
The update calls we have been measuring here are triggered from Ingress messages sent from outside the IC.&lt;br /&gt;
&lt;br /&gt;
[[File:update-call-performance.png|1024px|Update Call Performance]]&lt;br /&gt;
&lt;br /&gt;
=== Query calls ===&lt;br /&gt;
Arguably more important are query calls, since they account for more than 90% of the traffic we observe on the IC.&lt;br /&gt;
&lt;br /&gt;
[[File:query-call-performance.png|1024px|Query Call Performance]]&lt;br /&gt;
&lt;br /&gt;
The Internet Computer can currently process up to &#039;&#039;&#039;250,000 queries per second.&#039;&#039;&#039;&lt;br /&gt;
During our experiments, we increase the load step by step and run each load level for a period of 5 minutes.&lt;br /&gt;
&lt;br /&gt;
== Conclusion and next steps ==&lt;br /&gt;
&lt;br /&gt;
The Internet Computer today already shows impressive performance. On top of that, it should be possible to further scale out the IC by:&lt;br /&gt;
&lt;br /&gt;
* More subnets: This will immediately increase the query and update throughput. While adding subnets might eventually lead to other scalability problems, the IC in its current shape should be able to support hundreds of subnets.&lt;br /&gt;
* Performance improvements: Performance can also be improved through better single-machine, network, and consensus tuning. Increasing performance by at least an order of magnitude should be possible.&lt;/div&gt;</summary>
		<author><name>Sb</name></author>
	</entry>
</feed>