IC execution layer

=== Overview ===
The execution layer is in charge of executing canister smart contracts. The Internet Computer works in rounds, which are triggered by the consensus layer agreeing on a block of messages. During each round, the replicated state of the subnetwork is processed, and execution ends once a certain instruction limit has been reached.
 
 
 
At the start of a round, the messages in a block are placed into input queues for their respective destination canisters. Messages directed to the subnetwork (these are messages to the management canister)  are placed into a subnetwork input queue. The scheduler then orders these messages for execution.
 
 
 
To optimize throughput, the scheduler uses canister-level scheduling. When a canister is scheduled for execution, an available CPU core is allocated, and the messages in the canister's input queue are executed one by one until all messages have been processed. The scheduler then selects the next canister for execution, continuing this process until the instruction round limit is reached, or no more canisters can be scheduled. Each canister executes in isolation, enabling parallel processing.
 
 
 
To cover resource usage costs, each canister holds a local balance of tokens called cycles. The execution environment monitors resource usage and deducts the corresponding cycles from the canister's balance.
 
 
 
=== Scheduler ===
 
The main requirements of the scheduler are that (1) it must be deterministic, (2) it should distribute workloads fairly among canisters, and (3) it should optimize for throughput over latency.
 
 
 
To ensure responsiveness under heavy load, canisters have the option of paying upfront for a compute allocation. Since canisters are single-threaded, a compute allocation is a fraction of one CPU core, expressed in percentage points. Only part of a subnet's compute capacity can be allocated, which ensures progress for canisters with zero compute allocation, i.e., best-effort canisters. Fairness is defined as guaranteeing canister compute allocations (i.e., a backlogged canister with compute allocation A executes at least A full rounds out of every 100) and evenly distributing the remaining capacity ("free compute") across all canisters.

Given a deterministic state machine with N CPU cores (and N × 100 compute capacity), the scheduler selects (at least) N canisters to execute a full round: a round in which a canister either exhausts the instruction limit or completes the execution of all of its enqueued messages. The scheduling algorithm uses credits accumulated across rounds as priority: at the beginning of every round, each canister is credited an amount equal to its compute allocation plus a uniform share of the free compute; canisters in the priority queue are assigned round-robin to CPU cores (each of the first N canisters is scheduled first on a CPU core); and 100 credits are debited from each canister that executes a full round.
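
The following is a minimal sketch of this credit-based selection, assuming simplified illustrative types (the <code>Canister</code> class, its fields, and the tie-breaking rule are assumptions made for the example, not the actual replica implementation):

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Canister:
    id: int
    compute_allocation: int = 0   # fraction of one CPU core, in percentage points
    credits: float = 0.0          # accumulated scheduling priority

def schedule_round(canisters, num_cores):
    """Pick the canisters that get a full execution round, using accumulated credits as priority."""
    # Only part of the subnet's capacity may be allocated, so some free compute always remains.
    free_compute = num_cores * 100 - sum(c.compute_allocation for c in canisters)
    free_share = free_compute / len(canisters)

    # Credit each canister its compute allocation plus a uniform share of the free compute.
    for c in canisters:
        c.credits += c.compute_allocation + free_share

    # Highest accumulated credits first; ties broken deterministically (here: by canister id).
    priority_queue = sorted(canisters, key=lambda c: (-c.credits, c.id))

    # The first num_cores canisters are each scheduled on a CPU core for a full round.
    selected = priority_queue[:num_cores]
    for c in selected:
        c.credits -= 100  # a full round costs 100 credits
    return selected
</syntaxhighlight>

In this model, a backlogged canister with compute allocation A gains at least A credits per round and loses 100 per executed round, so over time it is selected for roughly A out of every 100 rounds, which is the fairness guarantee stated above.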
 
 
 
=== Message execution ===
 
 
 
==== Single canister message execution  ====
 
For security and reliability, each canister is executed within a sandboxed environment that isolates it from other canisters and from the rest of the data and computation taking place on the node. To execute an individual message, the scheduler brings up the sandbox process hosting the canister and executes the canister on the provided message. If the sandbox does not yet exist, it compiles the canister, creates the sandbox, which contains a Wasm runtime, and loads the compiled canister code. The message is then executed by invoking the runtime. Each message execution can lead to new messages to other canisters, memory pages of the canister's state being modified, and/or a response being generated. If the response corresponds to an ingress message, then the response is written to the ingress history, from where it can be read by the sender of the ingress message. This information, together with the number of instructions consumed, is returned to the execution environment for bookkeeping purposes.
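
As a rough illustration, the information returned by a single message execution can be summarized as a record along the following lines (the field names are illustrative, not the replica's actual interface):

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExecutionResult:
    """Illustrative summary of what one message execution hands back to the execution environment."""
    outgoing_calls: list          # new inter-canister messages produced during execution
    dirty_pages: set              # indices of the canister state's memory pages that were modified
    response: Optional[bytes]     # response payload, if the message was answered
    instructions_used: int        # instructions consumed, used for cycle accounting
</syntaxhighlight>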
 
 
 
Bookkeeping associated with a request is maintained in a call context associated with that request. This is a data structure which keeps track of information related to the call: among other things, where it originated and whether it has been answered. Calls to other canisters are recorded in the call context and placed in the output queues of the canister. If a response is generated, the call context records that the call has been answered; if it is a response to an ingress message, it is placed in the ingress history, where it is stored for some fixed amount of time.
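
A call context can be pictured roughly as the following structure (illustrative field names, not the replica's actual data structure):

<syntaxhighlight lang="python">
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallContext:
    """Illustrative per-request bookkeeping."""
    origin: str                       # where the call originated: an ingress message or another canister
    responded: bool = False           # whether the call has been answered
    outstanding_calls: int = 0        # calls to other canisters made while handling this request
    response: Optional[bytes] = None  # the response, once one has been produced
</syntaxhighlight>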
 
 
 
Executing a response is similar to executing a request: the difference is that the response is executed within the call context of the request that triggered it. The other steps are as above.
 
 
 
==== Heartbeat and timer ====
 
Canisters have the ability to schedule tasks at regular intervals by setting up heartbeat or timer methods. Internally, this is implemented by placing messages in a canister-specific task queue. When a canister is scheduled for execution, the scheduler selects, in a round-robin fashion, whether it should execute a task or a message.
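
A minimal sketch of this alternation, assuming a simplified canister with just a task queue and an input queue (illustrative names):

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class CanisterQueues:
    task_queue: list = field(default_factory=list)   # heartbeat / timer tasks
    input_queue: list = field(default_factory=list)  # ordinary canister messages
    prefer_task: bool = False                        # flips on every pick, giving round-robin

def next_work_item(q: CanisterQueues):
    """Pick the next item to execute, alternating between tasks and messages."""
    first, second = (q.task_queue, q.input_queue) if q.prefer_task else (q.input_queue, q.task_queue)
    q.prefer_task = not q.prefer_task
    if first:
        return first.pop(0)
    if second:
        return second.pop(0)
    return None  # nothing to execute
</syntaxhighlight>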
 
 
 
==== Management canister messages ====
 
Several types of operations on the Internet Computer are sent to and executed by the so-called management canister. This is not a canister per se: the messages sent to the management canister are directed to the relevant subnetwork (see below for some examples) and are intercepted by the execution environment, which triggers their execution.
 
 
 
Technically, this is implemented via two subnetwork-specific queues, one for input messages and another for output messages. Messages in these queues are prioritized for execution at the beginning of each round.
 
 
 
===== Managing canisters =====
 
For some of these messages, the execution is entirely confined to the execution environment. This is the case for messages that manage canisters (e.g., messages for creating, starting, or stopping canisters, or for updating their settings). When issued, these messages are routed to the subnetwork hosting the corresponding canister (for canister creation, the correct subnetwork is decided out of band) and included in the subnetwork queue. At the beginning of each round, the scheduler selects a number of these messages and invokes the execution environment on them sequentially. The execution results in either global replicated state changes (e.g., when a new canister is created) or changes local to some canister (e.g., installing code or updating the canister settings). A response, e.g., indicating the status of the execution, is produced and routed back to the issuer of the canister management message.
 
 
 
===== ECDSA messages =====
 
Other management canister messages, like those for ECDSA operations, are executed by involving other components of the Internet Computer software stack. Such messages are routed to a special subnetwork (the subnetwork that holds, in a threshold manner, the ECDSA master key) and enqueued on that subnetwork's queue. When scheduled for execution, they are picked up by the execution environment, which in turn involves the consensus layer; consensus directs the replicas in the subnetwork to generate signature shares, gathers the shares, and constructs the relevant signature, which is then placed in the subnetwork's output queue.
 
 
 
===== Bitcoin messages =====
 
These messages are addressed to the Bitcoin canister; they are rerouted to the subnetwork hosting the Bitcoin canister and enqueued in the input queue of that canister. The response is routed directly back to the canister that made the request.
 
 
 
===== Randomness generation =====
 
Canisters can send requests to the management canister to obtain random bits from the IC. To answer these requests, the execution environment uses random bits that the IC generates every round via a digital signature scheme. Specifically, every round each subnetwork on the IC produces a BLS signature. Although deterministically generated, the bits of the signature are unpredictable until the signature is produced. The entropy in this signature is extracted and used as a seed for a pseudorandom generator; the resulting pseudorandom bits answer the randomness requests issued by canisters in the previous round.
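
A minimal sketch of how one round's signature could be expanded deterministically into answers for the pending requests (the SHA-256 based expansion below is an illustrative stand-in for the replica's actual pseudorandom generator):

<syntaxhighlight lang="python">
import hashlib

def derive_random_bits(round_signature: bytes, pending_request_ids: list) -> dict:
    """Deterministically expand one round's BLS signature into per-request random blobs (illustrative)."""
    seed = hashlib.sha256(round_signature).digest()
    answers = {}
    for i, request_id in enumerate(pending_request_ids):
        # Every replica computes exactly the same bytes, so the state stays replicated,
        # yet the bits were unpredictable before the signature was produced.
        answers[request_id] = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
    return answers
</syntaxhighlight>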
 
 
 
=== Execution features and bounds ===
 
 
 
==== Fast-tracked executions ====
 
Messages between canisters that live on different subnetworks require at least two sequential rounds of consensus: one round that includes the messages in blocks on the destination subnetwork, and one that includes the replies in blocks on the source subnetwork.
 
 
 
The execution environment implements fast-tracking for messages between canisters that are hosted by the same subnetwork. A new canister message targeted at a canister in the local subnet is queued up directly in the input queue of the target canister and scheduled in the same round or an upcoming round. This message does not need to go through consensus, since the generation and enqueuing of the new message is completely deterministic and thus happens in exactly the same way on all the nodes of the subnet. If the instruction limit of the round permits, the message may even be executed in the same round.
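
The routing decision can be sketched roughly as follows (illustrative names; <code>xnet_stream</code> stands in for the stream of messages destined to other subnets):

<syntaxhighlight lang="python">
def route_outgoing_message(message, local_canisters, local_input_queues, xnet_stream):
    """Deliver a new canister-to-canister message, fast-tracking it if the target is local (illustrative)."""
    if message.receiver in local_canisters:
        # Same subnet: enqueue directly into the target canister's input queue.
        # This is deterministic on every node, so no additional round of consensus is needed.
        local_input_queues[message.receiver].append(message)
    else:
        # Different subnet: the message must be certified and included in a block
        # by consensus on the destination subnet before it can be executed there.
        xnet_stream.append(message)
</syntaxhighlight>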
 
 
 
==== Deterministic time slicing ====
 
While the execution of a round is bounded by a fixed number of instructions, the execution of any individual call can span multiple rounds. Technically, this is implemented by pausing the execution in each round once the round's instructions have been used up, and picking up the execution from where it left off whenever the canister is scheduled next (typically in the next round). To prevent a canister from taking over a CPU core, there is a bound on the maximum number of instructions any single call to a canister can use. If a canister attempts to execute more instructions than permitted, the runtime stops the execution; the canister state is rolled back and the cycles charged for execution are forfeited.
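
A rough sketch of deterministic time slicing, assuming a hypothetical <code>execution</code> object whose <code>run(budget)</code> pauses when the budget is exhausted and whose <code>rollback()</code> discards the call's changes:

<syntaxhighlight lang="python">
def run_with_slicing(execution, round_limit, call_limit):
    """Illustrative deterministic time slicing: a long call is executed in per-round slices."""
    used_total = 0
    while True:
        # In each slice the call may use at most one round's worth of instructions,
        # and never more than call_limit instructions in total.
        budget = min(round_limit, call_limit - used_total)
        finished, used = execution.run(budget)   # hypothetical: pauses once the budget is used up
        used_total += used
        if finished:
            return used_total                    # call completed; its changes can be committed
        if used_total >= call_limit:
            execution.rollback()                 # hypothetical: state rolled back, charged cycles forfeited
            raise RuntimeError("per-call instruction limit exceeded")
        # Otherwise the execution remains paused; each further loop iteration represents
        # the next round in which the canister is scheduled.
</syntaxhighlight>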
 
 
 
==== Heap deltas bound ====
 
The execution environment also imposes a limit on the number of heap pages that a canister can change during a single round. This bound is soft, in that if a canister goes above the limit, the result of the execution is still committed to the state.  However, future executions of the canister are skipped until the amortized number of heap changes across the rounds when the canister is scheduled dips below the limit.
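
A sketch of this soft bound, modeled as a debt that accumulates when executions change more pages than a per-round budget (the name <code>heap_delta_limit</code> is illustrative):

<syntaxhighlight lang="python">
def update_heap_delta_debt(debt, pages_changed, heap_delta_limit):
    """Amortized accounting of heap changes (illustrative).

    Each round in which the canister is scheduled grants a budget of heap_delta_limit page changes;
    anything above the budget accumulates as debt, and the canister's executions are skipped
    while the debt is positive (a skipped round still pays the debt down by the per-round budget).
    """
    return max(0, debt + pages_changed - heap_delta_limit)
</syntaxhighlight>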
 
 
 
=== Cycle Charging ===
 
During execution, canisters consume resources such as CPU, network bandwidth, and memory, which are paid for with cycles. Each canister holds a local cycles account to pay for its resource usage on the Internet Computer. It is important to note that the costs of resources scale with the replication factor of the subnet, which is the number of nodes powering the subnet.
 
 
 
''Execution Charging'': When it is installed, the Wasm code of a canister is instrumented with code that counts the instructions executed for smart contract messages. This allows the exact amount of cycles to charge for each executed message to be computed deterministically.
 
 
 
Right before a message is scheduled for execution, an amount of cycles that corresponds to running a maximal number of instructions is withdrawn from the cycle balance of the canister. Execution then proceeds, and the instrumentation returns the total number of instructions executed. If this goes above the maximum number of instructions permitted, the canister traps and the cycles withdrawn are forfeited. If the canister executes fewer instructions than the maximum available, the cycles corresponding to the unused instructions are returned to the canister balance.
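
A rough sketch of this prepay-and-refund scheme (the pricing below is illustrative; actual costs also depend on the subnet's replication factor):

<syntaxhighlight lang="python">
def charge_for_execution(balance, max_instructions, cycles_per_instruction, run_message):
    """Withdraw cycles for the worst case up front, then refund the unused part (illustrative)."""
    prepaid = max_instructions * cycles_per_instruction
    if balance < prepaid:
        raise RuntimeError("insufficient cycles to start execution")
    balance -= prepaid                            # withdrawn before the message is executed

    try:
        executed = run_message(max_instructions)  # hypothetical runner; raises if the limit is exceeded
    except Exception:
        return balance                            # trap: the withdrawn cycles are forfeited

    refund = (max_instructions - executed) * cycles_per_instruction
    return balance + refund                       # unused instructions are refunded
</syntaxhighlight>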
 
 
 
''Call Charging'': Canisters are charged for the bandwidth consumed when sending messages to other canisters. The cost of message transmission is proportional to the size of the message that is sent, and is therefore capped, since messages on the Internet Computer are themselves capped in size.
 
 
 
When a canister makes a call to another canister, the execution environment deducts cycles from the caller's account to cover both the cost of the outgoing call and that of the potential reply that the callee will send. Since the size of the reply is not known a priori, the cycles deducted cover a maximal-size reply, with the excess cycles returned to the calling canister if the reply is shorter.
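
A sketch of the call-charging logic under the same illustrative pricing assumptions (a flat per-byte cost stands in for the actual fee schedule):

<syntaxhighlight lang="python">
def charge_for_call(balance, request_size, max_response_size, cycles_per_byte):
    """Prepay for the outgoing request and for a maximal-size reply (illustrative)."""
    cost = (request_size + max_response_size) * cycles_per_byte
    if balance < cost:
        raise RuntimeError("insufficient cycles to send the call")
    return balance - cost

def refund_on_response(balance, actual_response_size, max_response_size, cycles_per_byte):
    """When the reply arrives, refund the cycles reserved for reply bytes that were not used."""
    return balance + (max_response_size - actual_response_size) * cycles_per_byte
</syntaxhighlight>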
 
 
 
''Memory Charging'': The memory used by a canister, including its Wasm code and state, is also charged for with cycles. The execution environment tracks the memory usage across multiple rounds and charges periodically for this usage.
 
 
 
''Freezing threshold'': To prevent a canister from suddenly running out of cycles and losing all of its stored data, the Internet Computer allows the canister owner to define a so-called freezing threshold. When the cycle account of a canister dips below this threshold, the canister stops performing any new computation, that is, it rejects all incoming requests. It still executes replies (whenever a request is made, cycles for the execution of the corresponding reply are set aside) and only pays for memory consumption.
 
 
 
Furthermore, cycles are not withdrawn for execution or for message transmission if the withdrawal would take the cycle account below the freezing threshold. In this case, the canister simply does not attempt the corresponding action.
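
A minimal sketch of this check, applied before any cycle withdrawal (illustrative):

<syntaxhighlight lang="python">
def try_withdraw(balance, amount, freezing_threshold):
    """Withdraw cycles only if doing so keeps the balance above the freezing threshold (illustrative)."""
    if balance - amount < freezing_threshold:
        return balance, False   # frozen: the corresponding action is not attempted at all
    return balance - amount, True
</syntaxhighlight>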
 
