Security Audit
January 31, 2023
Version 1.0.0
Presented by 0xMacro
This document includes the results of the security audit for Connext's smart contract code as found in the section titled ‘Source Code’. The security audit was performed by the Macro security team from November 21, 2022 to December 30, 2022.
The purpose of this audit is to review the source code of certain Connext Solidity contracts, and provide feedback on the design, architecture, and quality of the source code with an emphasis on validating the correctness and security of the software in its entirety.
Disclaimer: While Macro’s review is comprehensive and has surfaced some changes that should be made to the source code, this audit should not solely be relied upon for security, as no single audit is guaranteed to catch all possible bugs.
The following is an aggregation of issues found by the Macro Audit team:
Severity | Count | Acknowledged | Won't Do | Addressed |
---|---|---|---|---|
Critical | 1 | - | - | 1 |
High | 4 | - | - | 4 |
Medium | 4 | 1 | - | 3 |
Low | 5 | 4 | - | 1 |
Code Quality | 9 | 2 | - | 7 |
Gas Optimization | 5 | 3 | - | 2 |
Connext was quick to respond to these issues.
Our understanding of the specification was based on the following sources:
The following source code was reviewed during the audit:
5bb508f6ac4ad1572cca74ef297c2f57260e700c
Specifically, we audited the messaging layer contracts within this repository:
./packages/deployments/contracts/contracts/messaging/*
Contract | SHA256 |
---|---|
messaging/MerkleTreeManager.sol | |
messaging/RootManager.sol | |
messaging/WatcherClient.sol | |
messaging/WatcherManager.sol | |
messaging/connectors/ConnectorManager.sol | |
messaging/connectors/Connector.sol | |
messaging/connectors/GasCap.sol | |
messaging/connectors/HubConnector.sol | |
messaging/connectors/SpokeConnector.sol | |
messaging/interfaces/IConnectorManager.sol | |
messaging/interfaces/IConnector.sol | |
messaging/interfaces/IHubConnector.sol | |
messaging/interfaces/IMessageRecipient.sol | |
messaging/interfaces/IOutbox.sol | |
messaging/interfaces/IRootManager.sol | |
messaging/libraries/DomainIndexer.sol | |
messaging/libraries/MerkleLib.sol | |
messaging/libraries/Message.sol | |
messaging/libraries/Queue.sol | |
messaging/libraries/RateLimited.sol | |
messaging/connectors/arbitrum/ArbitrumHubConnector.sol | |
messaging/connectors/arbitrum/ArbitrumSpokeConnector.sol | |
messaging/connectors/gnosis/GnosisBase.sol | |
messaging/connectors/gnosis/GnosisHubConnector.sol | |
messaging/connectors/gnosis/GnosisSpokeConnector.sol | |
messaging/connectors/mainnet/MainnetSpokeConnector.sol | |
messaging/connectors/multichain/BaseMultichain.sol | |
messaging/connectors/multichain/MultichainHubConnector.sol | |
messaging/connectors/multichain/MultichainSpokeConnector.sol | |
messaging/connectors/optimism/BaseOptimism.sol | |
messaging/connectors/optimism/OptimismHubConnector.sol | |
messaging/connectors/optimism/OptimismSpokeConnector.sol | |
messaging/connectors/polygon/PolygonHubConnector.sol | |
messaging/connectors/polygon/PolygonSpokeConnector.sol | |
messaging/connectors/zksync/ZkSyncHubConnector.sol | |
messaging/connectors/zksync/ZkSyncSpokeConnector.sol | |
messaging/interfaces/ambs/GnosisAmb.sol | |
messaging/interfaces/ambs/Multichain.sol | |
messaging/connectors/optimism/lib/BytesUtils.sol | |
messaging/connectors/optimism/lib/MerkleTrie.sol | |
messaging/connectors/optimism/lib/OVMCodec.sol | |
messaging/connectors/optimism/lib/PredeployAddresses.sol | |
messaging/connectors/optimism/lib/RLPReader.sol | |
messaging/connectors/optimism/lib/SecureMerkleTrie.sol | |
messaging/connectors/polygon/lib/ExitPayloadReader.sol | |
messaging/connectors/polygon/lib/MerklePatriciaProof.sol | |
messaging/connectors/polygon/lib/Merkle.sol | |
messaging/connectors/polygon/lib/RLPReader.sol | |
messaging/connectors/polygon/tunnel/FxBaseChildTunnel.sol | |
messaging/connectors/polygon/tunnel/FxBaseRootTunnel.sol | |
messaging/interfaces/ambs/arbitrum/ArbitrumL2Amb.sol | |
messaging/interfaces/ambs/arbitrum/IArbitrumInbox.sol | |
messaging/interfaces/ambs/arbitrum/IArbitrumOutbox.sol | |
messaging/interfaces/ambs/arbitrum/IArbitrumRollup.sol | |
messaging/interfaces/ambs/optimism/IStateCommitmentChain.sol | |
messaging/interfaces/ambs/optimism/OptimismAmb.sol | |
Note: This document contains an audit solely of the Solidity contracts listed above. Specifically, the audit pertains only to the contracts themselves, and does not pertain to any other programs or scripts, including deployment scripts.
Click on an issue to jump to it, or scroll down to see them all.
- RelayerProxyHub.propagate() would allow anyone to drain the RelayerProxyHub contract
- removeDomain implementation
- SpokeConnector swap allows replay of messages
- SpokeConnector swap can result in duplicate messages
- proveAndProcess() is susceptible to griefing
- PROCESS_GAS guard insufficient for a batch of messages
- Check Effect Interaction not followed for send() of spoke connector
- removedCount in dequeueVerified
- dequeueVerified
We quantify issues in three parts:
This third part – the severity level – is a summary of how much consideration the client should give to fixing the issue. We assign severity according to the table of guidelines below:
Severity | Description |
---|---|
(C-x) Critical | We recommend the client must fix the issue, no matter what, because not fixing would mean significant funds/assets WILL be lost. |
(H-x) High | We recommend the client must address the issue, no matter what, because not fixing would be very bad, or some funds/assets will be lost, or the code's behavior is against the provided spec. |
(M-x) Medium | We recommend the client seriously consider fixing the issue, as the implications of not fixing it are severe enough to impact the project significantly, albeit not in an existential manner. |
(L-x) Low | The risk is small, unlikely, or may not be relevant to the project in a meaningful way. Whether or not the project wants to develop a fix is up to its goals and needs. |
(Q-x) Code Quality | The issue identified does not pose any obvious risk, but fixing it could improve overall code quality, on-chain composability, developer ergonomics, or even certain aspects of protocol design. |
(I-x) Informational | Warnings and things to keep in mind when operating the protocol. No immediate action required. |
(G-x) Gas Optimizations | The presented optimization suggestion would save an amount of gas significant enough, in our opinion, to be worth the development cost of implementing it. |
RelayerProxyHub.propagate() would allow anyone to drain the RelayerProxyHub contract.
In the RelayerProxyHub contract, propagate() is an external function without any access control. Hence, anyone can call it with an arbitrary relayer fee and a valid root as input and withdraw all assets from the contract via transferRelayerFee().
Remediations to Consider:
Consider restricting access to the relayers only.
In the RootManager contract, the propagate() function is responsible for distributing aggregate roots to all connectors for the corresponding domains. The implementation of this function does not revert if an error is raised during interaction with a specific connector. This prevents a single inaccessible chain from causing a complete bridge halt, and was introduced to fix an issue reported in a previous Spearbit audit.
When propagate() is called, a set of initial validation checks is performed. If these checks succeed, the lastPropagatedRoot value is set. In the remainder of propagate(), even if interaction with all connectors fails, the function will not revert, and the lastPropagatedRoot value will remain set. This means that follow-up invocations of propagate() will not be possible until _aggregateRoot is updated. This is indeed the original intention of this code.
validateConnectors(_connectors);
uint256 _numDomains = _connectors.length;
// Sanity check: fees and encodedData lengths matches connectors length.
require(_fees.length == _numDomains && _encodedData.length == _numDomains, "invalid lengths");
// Dequeue verified roots from the queue and insert into the tree.
(bytes32 _aggregateRoot, uint256 _count) = dequeue();
// Sanity check: make sure we are not propagating a redundant aggregate root.
require(_aggregateRoot != lastPropagatedRoot, "redundant root");
lastPropagatedRoot = _aggregateRoot;
However, since the propagate() function can be invoked by anyone, as it doesn't have any access control limits, a malicious actor can invoke this function with inappropriate arguments, for example by providing encoded data of unexpected length. This will cause most if not all connectors to revert when invoked. For example, processing will fail for:
- _sendMessage() when encodedData.length is not 0
- when encodedData.length is not 32
- _sendMessage() when encodedData.length is not 0
- _getGasFromEncoded() when encodedData.length is not 32
- _sendMessage() when encodedData.length is not 96
As a result of the malicious actor's actions, aggregate roots won't be propagated to destination domains. The system may recover if a new root is added and propagate() is called with proper inputs. However, an attacker can front-run these calls with their own invocation with adverse inputs.
Remediations to Consider:
- Remove the sanity check present at L201 of RootManager. This would keep propagate() permissionless, but there could be duplicate RootPropagated events.
require(_aggregateRoot != lastPropagatedRoot, "redundant root");
- Restrict access to the propagate() function to relayers only.
We implemented an alternative fix to track the lastPropagatedRoot by domain; that way you could call the same function multiple times if a single domain failed. The spoke connectors will also revert if a duplicate aggregate root is received, so this seems a cleaner fix.
removeDomain implementation
RootManager inherits from DomainIndexer, which allows watchers to remove connectors through RootManager.removeConnector > DomainIndexer.removeDomain. There are three data structures associated with this method: domains[], connectors[], and domainToIndexPlusOne. domainToIndexPlusOne maps a domain to its index in the array. These three data structures need to be updated correctly and always kept in sync for the domain management functionality to work properly.
However, the current removeDomain() implementation is incorrect when the domain to be removed is not the last one. The removeDomain function swaps the last element of the domains[] and connectors[] arrays with the to-be-removed element and pops the last element. The domainToIndexPlusOne mapping is also cleared for the to-be-removed element, but the update to domainToIndexPlusOne for the swapped last connector is missing. As a result, the swapped domain will have an incorrect index, which will cause incorrect system behavior.
Due to the incorrect index for the swapped domain, it will not be possible for the corresponding connector to invoke the RootManager.aggregate() function. When invoked, aggregate() will fail in the onlyConnector(_domain) modifier, as getConnectorForDomain() will revert due to an unexpected connector address or an index out-of-bounds error. Therefore, message propagation originating from this particular domain will be halted.
To recover from the data corruption, a contract upgrade is necessary. The current contract functionality does not provide means to reset the data structures to a valid state. For example, a subsequent call to removeConnector() for the now-corrupted domain may have an unexpected effect, removing a connector for a totally different domain due to an incorrect domainToIndexPlusOne record, further exacerbating the data corruption. Also, a subsequent call to addConnector() will fail since the domain already exists.
In summary, watchers may accidentally or intentionally halt root propagation for one or more domains due to the incorrect removeDomain() implementation, and the current system does not provide the owner any means to recover from this issue without redeploying the RootManager contract.
Remediations to Consider:
Consider updating the domainToIndexPlusOne mapping to reflect the new position of the swapped last element.
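The suggested remediation can be sketched as follows. This is an illustrative, hypothetical implementation; names follow DomainIndexer, and any logic not shown in the report is elided:

```solidity
// Sketch of the fix: after swapping the last element into the removed slot,
// the index mapping for the *moved* domain must also be re-pointed.
function removeDomain(uint32 _domain) internal {
  uint256 index = domainToIndexPlusOne[_domain] - 1;
  uint256 last = domains.length - 1;
  if (index != last) {
    uint32 movedDomain = domains[last];
    domains[index] = movedDomain;
    connectors[index] = connectors[last];
    // This update is the one missing in the audited code:
    domainToIndexPlusOne[movedDomain] = index + 1;
  }
  domains.pop();
  connectors.pop();
  delete domainToIndexPlusOne[_domain];
}
```

With this change, a subsequent lookup for the moved domain resolves to its new slot, so aggregate() and removeConnector() continue to work for all remaining domains.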
To propagate a root from Mainnet to the Gnosis chain, anyone can invoke the propagate() function on the RootManager with the appropriate value in encodedData. This variable encodes the maximum gas to be used to process this message at the destination (the Gnosis chain). propagate() triggers HubConnector.sendMessage(), which relies on GnosisHubConnector._sendMessage() to deliver a message to the Gnosis AMB by calling requireToPassMessage().
GnosisAmb(AMB).requireToPassMessage(
mirrorConnector,
abi.encodeWithSelector(Connector.processMessage.selector, _data),
_getGasFromEncoded(_encodedData)
);
On the other hand, to send a new origin root from the Gnosis chain to Mainnet, anyone can invoke the send() function on the corresponding GnosisSpokeConnector with the appropriate value in _encodedData. As in the previous case, this variable defines the maximum gas to be used to process the message at the destination (Mainnet). The send() function relies on GnosisSpokeConnector._sendMessage() to deliver a message to the Gnosis AMB by calling requireToPassMessage().
The only validation related to encodedData in the Connext contracts mentioned above is that encodedData.length is 32 bytes, so that it can be decoded as the uint256 the Gnosis AMB expects, and that the provided gas is less than the gasCap. However, within the Gnosis AMB itself, more precisely MessageDelivery._sendMessage(), there is an additional validation that this value must be at least the constant MIN_GAS_PER_CALL, which is 100.
function requireToPassMessage(address _contract, bytes memory _data, uint256 _gas) public returns (bytes32) {
return _sendMessage(_contract, _data, _gas, SEND_TO_ORACLE_DRIVEN_LANE);
}
function _sendMessage(address _contract, bytes memory _data, uint256 _gas, uint256 _dataType)
internal
returns (bytes32)
{
// it is not allowed to pass messages while other messages are processed
// if other is not explicitly configured
require(messageId() == bytes32(0) || allowReentrantRequests());
require(_gas >= MIN_GAS_PER_CALL && _gas <= maxGasPerTx());
...
If a message successfully posted from either side to the Gnosis AMB bridge has its gas set to a value insufficient to guarantee proper processing, that message won't be properly processed on the destination.
Due to the above, an attacker may invoke RootManager.propagate() on Mainnet or GnosisSpokeConnector.send() on the Gnosis chain and provide an intentionally insufficient max gas in encodedData, e.g. a value of 100. This amount of gas is not enough for roots to be processed at the destination, but it is enough to pass validation at the Gnosis AMB. As a result, the attacker may halt processing on one or both sides of the chain. On the origin, the state will be updated correctly, but on the destination, there will be no updates.
Remediations to Consider
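One possible remediation, sketched here as an assumption rather than Connext's actual fix, is to enforce a configurable floor on the caller-supplied gas in addition to the existing gasCap ceiling, rejecting values (such as 100) that pass the AMB's MIN_GAS_PER_CALL check but cannot cover processing on the destination. The minGas variable below is hypothetical:

```solidity
// Hypothetical floor, set by the owner to a value known to be sufficient
// for processMessage() on the destination chain.
uint256 public minGas;

function _getGasFromEncoded(bytes memory _encodedData) internal view returns (uint256 _gas) {
  // Gnosis AMB expects a uint256 max-gas value.
  require(_encodedData.length == 32, "!data length");
  _gas = abi.decode(_encodedData, (uint256));
  // Reject values that would pass AMB validation but starve the destination call.
  require(_gas >= minGas && _gas <= gasCap, "!gas");
}
```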
SpokeConnector swap allows replay of messages
Connext enables replacing connectors without dropping messages by maintaining the Merkle tree in a contract separate from the actual connectors. However, if any spoke connector is swapped in the current code's state, one can replay an already executed message on the swapped spoke connector.
Spoke connectors record whether a message has been processed in the messages mapping, and don't allow it to be replayed, via calculateMessageRoot.
function process(bytes memory _message) internal returns (bool _success) {
...
messages[_messageHash] = MessageStatus.Processed;
...
}
function calculateMessageRoot(
bytes32 _messageHash,
bytes32[32] calldata _messagePath,
uint256 _messageIndex
) internal view returns (bytes32) {
// Ensure that the given message has not already been proven and processed.
require(messages[_messageHash] == MessageStatus.None, "!MessageStatus.None");
...
}
However, since this messages mapping resides only in the spoke connector contract, if it is swapped with a new one, the new one won't have the old state, allowing an already executed message to be processed again.
Remediation to Consider
- Store the messages mapping in a separate contract.
acknowledged — currently the liquidity layer is the only whitelisted sender, and there is protection against replays at that level, so this attack should not be possible. however, this is a requirement to fix as more consumers use the messaging layer.
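The acknowledged remediation could take the same shape as the existing MERKLE separation: keep processed-message state in a contract that survives connector swaps. The interface below is a hypothetical sketch, not Connext's implementation:

```solidity
// Hypothetical store for processed-message state, deployed once and shared
// across connector versions, analogous to how the Merkle tree already
// outlives individual SpokeConnector deployments.
interface IMessageStatusStore {
  function isProcessed(bytes32 _messageHash) external view returns (bool);
  // Restricted to the currently active spoke connector.
  function markProcessed(bytes32 _messageHash) external;
}
```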
In the RootManager contract, callers of the propagate() function submit assets necessary to cover fees for processing aggregateRoot propagation on destination chains.
However, if interaction with a particular connector fails for any reason, propagate() will still succeed, and the sent assets will remain in the RootManager contract.
Currently, there is no mechanism in the RootManager contract to withdraw these unspent assets, so they will remain stuck.
Remediations to Consider
Consider:
- a withdrawFunds() function
SpokeConnector swap can result in duplicate messages
The Connext upgradeability approach relies on swapping contracts, such as SpokeConnector instances, in order to update code logic without dropping messages. Most of the state is maintained in the Merkle tree in a contract separate from the actual connectors, which facilitates upgrades using this approach.
However, one important part of the state is handled by the SpokeConnector contract itself: the nonces mapping, which ensures that messages originating from that particular domain and directed towards a concrete destination chain are unique. The nonces variable is used in SpokeConnector's dispatch() function as part of the message wrapper. A message with a wrapper is then formatted and hashed, and the hash of the message is used as an identifier within the system.
// Get the next nonce for the destination domain, then increment it.
uint32 _nonce = nonces[_destinationDomain]++;
// Format the message into packed bytes.
bytes memory _message = Message.formatMessage(
DOMAIN,
TypeCasts.addressToBytes32(msg.sender),
_nonce,
_destinationDomain,
_recipientAddress,
_messageBody
);
// Insert the hashed message into the Merkle tree.
bytes32 _messageHash = keccak256(_message);
// Returns the root calculated after insertion of message, needed for events for
// watchers
(bytes32 _root, uint256 _count) = MERKLE.insert(_messageHash);
When a SpokeConnector is swapped, its state is reset, including the nonces mapping. As a result, it is possible that the same combination of parameters ends up in Message.formatMessage(). In this case, a new message with the same identifier will be inserted in the Merkle tree on the origin and propagated to the destination. However, on the destination it won't be possible to process it, as it would be considered a duplicate message.
This would be most problematic in cases where an external protocol is using Connext as a messaging bridge to perform defined actions on cross-chain destinations.
Remediation to Consider:
- Store the nonces mapping outside of the SpokeConnector contract.
acknowledged — currently the liquidity layer is the only whitelisted sender, and there is protection against duplicate hashes because of unique transfer ids (passed in the message body) at the liquidity layer. unique ids generated here:
A ZkSync 2.0 peculiarity of preserving msg.sender for L1-to-L2 calls introduces a need for additional functionality in ZkSyncSpokeConnector. When ZkSyncHubConnector initiates a call to the processMessage() function of ZkSyncSpokeConnector, the caller (msg.sender) on L2 will have the address of mirrorConnector.
Currently, the processMessage() function is protected by the onlyAMB modifier, which checks that msg.sender == AMB. This check will be satisfied only when AMB == mirrorConnector.
If the mirror connector contract address changes on L1, the owner of ZkSyncSpokeConnector can update the mirrorConnector variable on L2 by calling setMirrorConnector().
However, there is no corresponding functionality for the owner to update the value of the AMB variable. Therefore, if the mirror connector contract address changes on L1, it will not be possible to update ZkSyncSpokeConnector and have it operate properly.
Remediations to Consider:
- Consider overriding processMessage() in ZkSyncSpokeConnector and changing the access control guard to compare msg.sender with mirrorConnector instead of AMB.
Fixed this in a different way than suggested — made Connector.processMessage(...) (which holds the onlyAMB modifier) a virtual function, which is overridden in the ZKSyncSpokeConnector without the modifier. This means the _processMessage msg.sender check in the spoke connector is preserved, while the check that the AMB is the sender is dropped. Making Connector.processMessage virtual was done to fix arbitrum aliasing.
proveAndProcess() is susceptible to griefing
In the SpokeConnector contract, the proveAndProcess()
function, which is responsible for executing messages associated with propagated roots, can be called by anyone. This function is expected to be called by relayer infrastructure in normal circumstances, but it also allows permissionless access in order to reduce centralization risk. This function also supports handling a batch of messages for execution as a performance optimization.
However, when the relayer submits a batch to proveAndProcess(), an attacker or interested third party may front-run that transaction with a call to proveAndProcess() containing only the last message of the batch. That last message from the attacker's transaction will be successfully processed, but the batch transaction submitted by the relayer will fail at the end of message array processing.
Based on the provided information, there is no complex retry logic on the relayer execution side for a failed proveAndProcess() call. Therefore, an attacker may disrupt the processing of messages on the destination chain, cause unnecessary gas expenses for the relayer infrastructure, and reduce or practically remove the performance benefits of processing a batch of messages in a single call.
Remediations to Consider:
- Verify that messages submitted to proveAndProcess() are in an adequate state before processing any of them, or
- Restrict access to proveAndProcess()
acknowledged, but likely will not fix. fixing this issue requires offchain logic (which will not be enforceable as this is open to any relayer network), or instituting a relayer whitelist at the messaging layer. the worst case is the optimization of batch processing falls back to the simple case of processing a single message.
The GnosisHubConnector contract inherits functionality from HubConnector, including the sendMessage() function, which is payable.
function sendMessage(bytes memory _data, bytes memory _encodedData) external payable onlyRootManager {
_sendMessage(_data, _encodedData);
emit MessageSent(_data, _encodedData, msg.sender);
}
However, GnosisHubConnector doesn't have any use for assets sent to it, nor any means to withdraw assets incorrectly sent to it from the RootManager contract.
Remediations to Consider:
- Override sendMessage() in GnosisHubConnector and remove payable, or
acknowledged — this would require bad inputs to the fees argument in the propagate function. will fall back to the maxim of "don't send stupid things" instead of explicitly reverting here!
In the ZkSyncHubConnector contract, the _sendMessage() function forwards the fee amount of assets to the ZkSync AMB. The fee amount is determined based on msg.value and the configured gasCap on the ZkSyncHubConnector contract.
// In ZkSyncHubConnector
uint256 fee = _getGas(msg.value);
// In GasCap contract
function _getGas(uint256 _gas) internal view returns (uint256) {
if (_gas > gasCap) {
_gas = gasCap;
}
return _gas;
}
When msg.value is less than or equal to the configured gasCap, all assets will be forwarded to the ZkSync AMB. However, when msg.value is greater, only gasCap assets will be forwarded, while msg.value - gasCap assets will remain stored in ZkSyncHubConnector. Since there is currently no mechanism to withdraw this remaining amount, it will remain stuck in the contract.
The same issue is present in MultichainHubConnector, more precisely in the BaseMultichain contract's _sendMessage() function.
// Get the max fee supplied
uint256 supplied = _getGas(msg.value); // fee paid on origin chain, up to cap
Remediations to Consider:
ended up going with a simpler solution — allowing admin withdrawals on the connector directly. there is no incentive to use funds above the cap, and they are retrievable by the owner where implemented. this also mirrors existing spoke connector functionality
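The admin-withdrawal approach the team describes could look like the following sketch. The function and event names are hypothetical; the report states only that owner withdrawals were added, mirroring existing spoke connector functionality:

```solidity
// Hypothetical owner-only escape hatch for funds accumulated above the cap.
event FundsWithdrawn(address indexed to, uint256 amount);

function withdrawFunds(address _to) external onlyOwner {
  uint256 amount = address(this).balance;
  // OpenZeppelin's Address.sendValue forwards all gas and reverts on failure.
  Address.sendValue(payable(_to), amount);
  emit FundsWithdrawn(_to, amount);
}
```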
The MainnetSpokeConnector contract differs from other SpokeConnectors since it integrates both Hub and Spoke functionality within a single contract. As a result, some functions such as the processMessage()
function, are not needed, and even though they are present, they are not called since there is no AMB.
However, this means that the corresponding event emissions do not happen. For example, the MessageProcessed event in Connector.processMessage will never be emitted in the case of MainnetSpokeConnector. This behavior, inconsistent with how other Connectors function, may impact off-chain tools and systems relying on these events.
Remediations to Consider:
- Add the MessageProcessed event to the _sendMessage() function, and
- Move the MessageSent event emission in the sendMessage() top-level function to come before the _sendMessage() call, which will generate the corresponding MessageProcessed event.
Multichain connectors use anyExecute instead of Connector.processMessage for processing messages on both the hub and spoke side.
function anyExecute(bytes memory _data) external returns (bool success, bytes memory result) { // @audit what its calling
_processMessage(_data);
}
However, since the MessageProcessed event is not emitted here, this creates an inconsistency with respect to the other connectors.
Remediations to Consider:
- Add the MessageProcessed event to the anyExecute() function.
yeah i think we are okay with this! we use the more specific function, and processing the message where it is functionally "processed" seems logically consistent
Issue
The SpokeConnector, while making an external call on the destination, forwards only PROCESS_GAS for message execution and keeps RESERVE_GAS to be able to complete the execution.
The issue with PROCESS_GAS and RESERVE_GAS is that they are immutable. Hence, if the gas cost of some opcode changes, message processing that initially worked may fail, and since these two configuration values are immutable, it won't be possible to update them.
Dependence on hardcoded gas costs isn't good practice; they have changed in the past, and there is no guarantee they won't change in the future.
Informational
In the SpokeConnector constructor, the provided _processGas and _reserveGas values need to be larger than 850_000 and 15_000 respectively. These hardcoded values may not be appropriate for chains with different opcode gas prices, such as ZkSync 2.0 (for which gas price tables are not yet available).
Remediations to Consider:
- Make PROCESS_GAS and RESERVE_GAS mutable variables.
agree with acknowledge, the goal for connectors here is to keep operational costs as cheap as possible, meaning max immutables over mutables
PROCESS_GAS guard insufficient for a batch of messages
In the SpokeConnector contract, when the relayer submits a batch to proveAndProcess(), each message in the batch is processed individually in the underlying process() function. There, an external call is made with PROCESS_GAS forwarded to the callee. Before this call is performed, there is a guard which checks the remaining gas. This guard is effective when proveAndProcess() executes a batch of size 1, and it guarantees successful transaction completion independent of the status of the external call.
// A call running out of gas TYPICALLY errors the whole tx. We want to
// a) ensure the call has a sufficient amount of gas to make a
// meaningful state change.
// b) ensure that if the subcall runs out of gas, that the tx as a whole
// does not revert (i.e. we still mark the message processed)
// To do this, we require that we have enough gas to process
// and still return. We then delegate only the minimum processing gas.
require(gasleft() > PROCESS_GAS + RESERVE_GAS - 1, "!gas");
However, when the batch size is bigger than 1, to guarantee execution of proveAndProcess() for all messages without reverts, the caller needs to provide at least number of messages * (PROCESS_GAS + RESERVE_GAS - 1) gas.
Otherwise, the first message in the batch can consume all of the PROCESS_GAS provided and cause the processing of subsequent messages in the batch to fail, since not enough gas is left to satisfy the guard condition listed above. In that case, a revert will be raised in the process() function where this condition is checked, causing the whole proveAndProcess() transaction to fail.
Remediations to Consider:
- Verify in the proveAndProcess() function that the amount of gas provided for processing the whole batch of messages is appropriate.
The require is in the process function, which is called within a for loop in the top-level proveAndProcess. If there is insufficient gas for one of the messages, the whole call should revert. But yes, this does contradict the comment and safeguards in case the entire tx doesn't revert.
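The batch-level check suggested above could be sketched as follows. This is a hypothetical illustration mirroring the audited SpokeConnector names, not Connext's shipped code, and the proof parameters are elided:

```solidity
// Sketch: before processing a batch, verify the caller supplied enough gas
// for every message, not just the first, so later iterations cannot trip
// the per-message gasleft() guard and revert the whole transaction.
function proveAndProcess(bytes[] calldata _messages /*, proofs ... */) external {
  require(
    gasleft() > _messages.length * (PROCESS_GAS + RESERVE_GAS),
    "!batch gas"
  );
  for (uint256 i; i < _messages.length; i++) {
    // prove and process each message ...
  }
}
```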
In the GasCap contract, the gasCap variable is missing a visibility specifier. When visibility is not defined, it is internal by default, making it impossible to query externally. Other connectors have similar variables with public visibility, so for consistency consider making gasCap public as well.
In the ZkSyncHubConnector contract, the _gasCap() function and underlying gasCap variable represent the submission cost capacity, while in GnosisHubConnector the gasCap variable represents only one factor of the submission cost, more precisely the maximum amount of gas to be used. Consider introducing a new variable/function to indicate the different cap mechanics.
acknowledge — the consistent naming may not be suitable for all the ways its used but the mechanism can be shared (as it all provides some degree of protection from overpaying). would rather keep the logic in a single contract / place
The Mailbox contract in the ZkSync AMB is incomplete, and additional changes are to be expected. For example, the implementation of the l2TransactionBaseCost() function is marked as TODO, while comments in requestL2Transaction() seem to indicate that additional functionality will be added in the near future. When this happens, code on the Connext side which interacts with the ZkSync AMB will need to be reviewed and potentially updated.
In the SpokeConnector contract, the addSender() and removeSender() functions do not check whether the operation to be performed will result in a state change. Therefore, events like SenderAdded and SenderRemoved will be emitted even if no state change happened, which may confuse off-chain monitoring tools and services.
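A minimal sketch of the no-op guard (names follow SpokeConnector; the access control modifier is assumed from context):

```solidity
// Only mutate state and emit when the call actually changes something.
function addSender(address _sender) external onlyOwner {
  require(!allowlistedSenders[_sender], "already allowlisted");
  allowlistedSenders[_sender] = true;
  emit SenderAdded(_sender);
}

function removeSender(address _sender) external onlyOwner {
  require(allowlistedSenders[_sender], "!allowlisted");
  allowlistedSenders[_sender] = false;
  emit SenderRemoved(_sender);
}
```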
In MainnetSpokeConnector, processMessage() on Connector is not used, but the underlying implementation, MainnetSpokeConnector._processMessage(), does not revert.
In MultichainHubConnector and MultichainSpokeConnector, processMessage()
is not used. However, the underlying implementation _processMessage()
is used by the anyExecute()
function in the BaseMultichain contract. Consider overriding processMessage()
in MultichainHubConnector and MultichainSpokeConnector with the implementation that reverts if invoked.
In SpokeConnector, removePendingAggregateRoot()
represents a safety mechanism that can be used to protect from issues in the transport layer. However, it cannot be used as a fallback safety mechanism in the case when fraud detection has already failed on the mainnet. Therefore, document the capabilities and limitations of removePendingAggregateRoot so it is not possible to misunderstand the safety properties which it provides.
The allowlistedSenders public variable doesn't have a corresponding "@notice" natspec comment.

Some functions have natspec comments for their return values (e.g. send()), while others are missing return value descriptions (e.g. dispatch()).

The send() function has a natspec comment which is a copy of the natspec comment for a different function, outboundRoot().

The comment in receiveAggregateRoot() is confusing, since we are not able to distinguish between the two invocation paths mentioned in the comment.

Check-Effects-Interactions is not followed in send() of the SpokeConnector:
function send(bytes memory _encodedData) external payable whenNotPaused rateLimited {
  // check
  require(sentMessageRoots[root] == false, "root already sent");
  ......
  // interaction: external call to AMB
  _sendMessage(_data, _encodedData);
  // effect
  sentMessageRoots[root] = true;
  .........
}
Since Check-Effects-Interactions is not followed, one could reenter and send messages again; however, we couldn't find an incentive for anyone to do so, nor an AMB which would allow such reentrancy, so we are marking this as code quality.
Consider moving the effect before the interaction so there is no exposure going forward.
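Applying the recommendation, the function body (with elided statements omitted) would look like the following sketch, using the variable names from the excerpt above:

```solidity
function send(bytes memory _encodedData) external payable whenNotPaused rateLimited {
  bytes32 root = MERKLE.root();
  // check
  require(sentMessageRoots[root] == false, "root already sent");
  bytes memory _data = abi.encodePacked(root);
  // effect: record the root as sent BEFORE any external call
  sentMessageRoots[root] = true;
  // interaction: external call to the AMB
  _sendMessage(_data, _encodedData);
  emit MessageSent(_data, _encodedData, msg.sender);
}
```

With the effect first, a reentrant call into send() hits the require and reverts.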
ZkSync 2.0 preserves the msg.sender value for L1 to L2 calls. This applies to the ZkSyncHubConnector's call to processMessage() on ZkSyncSpokeConnector. As a result, the mirrorConnector and AMB variables must be set to the same value in ZkSyncSpokeConnector. Also due to this peculiarity, the onlyAMB modifier defined on the processMessage() function and the guard _verifySender(mirrorConnector) in the _processMessage() function perform the same check. Consider removing _verifySender(mirrorConnector) in ZkSyncSpokeConnector._processMessage(), as it is not needed.
In the MainnetSpokeConnector contract, the sendMessage() function checks that _encodedData.length is 0. The same check is present in the internal _sendMessage() function, which is called immediately afterwards and does most of the processing. Consider removing the check in the top-level sendMessage() function.
The Merkle tree count is a continuously increasing counter. A new uint256 storage variable on SpokeConnector that relies on the Merkle tree count can remove the need for the sentMessageRoots mapping used in the SpokeConnector send() function. Consider replacing it to reduce the contract's storage needs and the corresponding gas usage.
// instead of following
bytes32 root = MERKLE.root();
require(sentMessageRoots[root] == false, "root already sent");
bytes memory _data = abi.encodePacked(root);
_sendMessage(_data, _encodedData);
sentMessageRoots[root] = true;
emit MessageSent(_data, _encodedData, msg.sender);
// consider following
(bytes32 root, uint256 count) = MERKLE.rootAndCount();
require(count > lastSent, "root already sent");
bytes memory _data = abi.encodePacked(root);
_sendMessage(_data, _encodedData);
lastSent = count;
emit MessageSent(_data, _encodedData, msg.sender);
acknowledged — generally, this will not be reliable once we move the tracking of roots sent to within the merkle tree manager
removedCount in dequeueVerified

removedCount is only used to identify whether there are any removed items in the queue; hence, instead of incrementing it for each removed item, consider using a boolean identifier.
function dequeueVerified()
-------
  while (!(first > last)) {
    bytes32 item = queue.data[first];
    if (!queue.removed[item]) {
      -------
    } else {
      // The item was removed. We do NOT increment the index (we will re-use this position).
      unchecked {
        ++removedCount; // @audit a boolean identifier would suffice here
      }
    }
    ------
  }
  // Update the value for `first` in our queue object since we've dequeued a number of elements.
  queue.first = first;
  if (removedCount == 0) {
    return items;
  } else {
    -------
  }
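With a boolean flag, the relevant parts of the loop would look like the following sketch (elisions as in the excerpt above):

```solidity
bool anyRemoved;
while (!(first > last)) {
  bytes32 item = queue.data[first];
  if (!queue.removed[item]) {
    // ... verified-item handling ...
  } else {
    // A single flag is enough to know whether truncation is needed later.
    anyRemoved = true;
  }
  // ...
}
queue.first = first;
if (!anyRemoved) {
  return items;
}
// ... truncate trailing empty slots ...
```

This replaces a repeated counter increment with a single one-time store.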
dequeueVerified

Inside dequeueVerified, items is defined to be of the length of the queue. If some positions of items remain unfilled due to removed items, the items array is currently copied into a new array called amendedItems up to index.
if (removedCount == 0) {
  return items;
} else {
  // If some items were removed, there will be a number of trailing 0 values we need to truncate
  // from the array. Create a new array with all of the items up until these empty values.
  bytes32[] memory amendedItems = new bytes32[](index); // The last `index` is the new length.
  for (uint256 i; i < index; ) {
    amendedItems[i] = items[i];
    unchecked {
      ++i;
    }
  }
  return amendedItems;
}
This is an anti-pattern; even if only one item was removed, the whole array is copied.
Instead, consider changing the length of the items array in place through assembly: mstore(items, index).
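A sketch of the in-place truncation (shrinking an array's length word is safe here, since the trailing memory beyond the new length is simply ignored):

```solidity
if (removedCount != 0) {
  // Overwrite the array's length slot with the number of filled entries,
  // dropping the trailing zero values without copying anything.
  assembly {
    mstore(items, index)
  }
}
return items;
```

This removes the O(n) copy loop and the allocation of amendedItems entirely.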
Macro makes no warranties, either express, implied, statutory, or otherwise, with respect to the services or deliverables provided in this report, and Macro specifically disclaims all implied warranties of merchantability, fitness for a particular purpose, noninfringement and those arising from a course of dealing, usage or trade with respect thereto, and all such warranties are hereby excluded to the fullest extent permitted by law.
Macro will not be liable for any lost profits, business, contracts, revenue, goodwill, production, anticipated savings, loss of data, or costs of procurement of substitute goods or services or for any claim or demand by any other party. In no event will Macro be liable for consequential, incidental, special, indirect, or exemplary damages arising out of this agreement or any work statement, however caused and (to the fullest extent permitted by law) under any theory of liability (including negligence), even if Macro has been advised of the possibility of such damages.
The scope of this report and review is limited to a review of only the code presented by the Connext team and only the source code Macro notes as being within the scope of Macro’s review within this report. This report does not include an audit of the deployment scripts used to deploy the Solidity contracts in the repository corresponding to this audit. Specifically, for the avoidance of doubt, this report does not constitute investment advice, is not intended to be relied upon as investment advice, is not an endorsement of this project or team, and it is not a guarantee as to the absolute security of the project. In this report you may, through hypertext or other computer links, gain access to websites operated by persons other than Macro. Such hyperlinks are provided for your reference and convenience only, and are the exclusive responsibility of such websites’ owners. You agree that Macro is not responsible for the content or operation of such websites, and that Macro shall have no liability to you or any other person or entity for the use of third party websites. Macro assumes no responsibility for the use of third party software and shall have no liability whatsoever to any person or entity for the accuracy or completeness of any outcome generated by such software.