Q&A - Toshi Transfer Transaction

January 3, 2024

Toshi recently wrote an article about the transfer transaction plugin. He did a very good job, but had a few unresolved questions. Let's answer them!

Remote Interaction Validator

The RemoteInteractionValidator is a stateful validator that accepts an AddressInteractionNotification. Its purpose is to prevent remote accounts from being used in transactions other than account key link transactions.

Since the validator is stateful, it is run sequentially during processing of the BlockSyncConsumer. The notification that it validates is composed of three fields:

  • Source - The address of the account initiating the interaction, typically the transaction signer.
  • TransactionType - The type of transaction initiating the interaction.
  • ParticipantsByAddress - All other accounts participating in the interaction.
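For reference, the three fields above can be sketched as a simplified struct (hypothetical stand-in types; the real catapult notification uses catapult's Address, EntityType and range types):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical simplified stand-ins for catapult types
using Address = std::string;
using EntityType = uint16_t;

// Sketch of the three fields described above
struct AddressInteractionNotification {
    Address Source;                              // account initiating the interaction
    EntityType TransactionType;                  // type of the initiating transaction
    std::vector<Address> ParticipantsByAddress;  // all other participating accounts
};
```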

To help understand, let's see how this notification is raised by the TransferTransactionPlugin:

sub.notify(AddressInteractionNotification(context.SignerAddress, transaction.Type, { transaction.RecipientAddress }));

The transaction signer is the Source since that account is initiating a balance transfer from itself to another account. The other account - the recipient - is a participant in the transaction, so it is listed in ParticipantsByAddress. The transaction type is Entity_Type_Transfer since that is the type used for all transfer transactions (both top-level and embedded).

Now, let's look closer at the validator implementation. First, we check the transaction type:

if (model::AccountKeyLinkTransaction::Entity_Type == notification.TransactionType)
    return ValidationResult::Success;

If the transaction type initiating this interaction is an account key link transaction, the validator passes. Remote accounts can be participants in these transactions so that they can be linked or unlinked.

Next, we create a predicate to check if an address is associated with a remote account:

const auto& cache = context.Cache.sub<cache::AccountStateCache>();
const auto& addresses = notification.ParticipantsByAddress;
auto predicate = [&cache, &context](const auto& address) {
    return IsRemote(cache, GetResolvedKey(address, context.Resolvers));
};

GetResolvedKey accepts an unresolved address, applies any applicable address aliases and returns a resolved address. IsRemote looks up the corresponding account in the account state cache and returns true if the account is remote.

ℹ️ An account is considered remote if its type is either AccountType::Remote or AccountType::Remote_Unlinked.
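The remoteness check in the info note reduces to a two-value comparison. A minimal sketch, assuming a simplified AccountType enum in place of catapult's actual cache lookup:

```cpp
// Simplified stand-in for catapult's AccountType enum
enum class AccountType { Unlinked, Main, Remote, Remote_Unlinked };

// An account is considered remote if it has either remote type
bool IsRemote(AccountType accountType) {
    return AccountType::Remote == accountType || AccountType::Remote_Unlinked == accountType;
}
```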

Finally, we apply the predicate to all the participant addresses:

return std::any_of(addresses.cbegin(), addresses.cend(), predicate)
        ? Failure_AccountLink_Remote_Account_Participant_Prohibited
        : ValidationResult::Success;

If any are remote, validation fails and the transaction is rejected. Otherwise, validation passes and processing continues.
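Putting the pieces together, the rejection logic can be demonstrated with plain standard-library types (hypothetical names and result codes; the real validator operates on catapult's cache, resolver and ValidationResult types):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical result codes standing in for catapult's ValidationResult values
enum class Result { Success, Failure_Remote_Participant };

// Fails if any participant address satisfies the remoteness predicate
template <typename TIsRemotePredicate>
Result ValidateParticipants(const std::vector<std::string>& addresses, TIsRemotePredicate isRemote) {
    return std::any_of(addresses.cbegin(), addresses.cend(), isRemote)
            ? Result::Failure_Remote_Participant
            : Result::Success;
}
```

Notice that std::any_of short-circuits, so validation stops at the first remote participant found.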

Transfer Message Observer

The transfer message observer is used to identify harvest request messages. If it finds one, it extracts the message and writes it to a file-based message queue for further processing. This is partly a performance optimization: the correctness (or not) of a harvest request message is not defined by the Symbol protocol, so it is not part of consensus and can be processed later, outside of the BlockSyncConsumer.

Let's see how the observer is registered:

auto encryptionPrivateKeyPemFilename = config::GetNodePrivateKeyPemFilename(manager.userConfig().CertificateDirectory);
auto encryptionPublicKey = crypto::ReadPublicKeyFromPrivateKeyPemFile(encryptionPrivateKeyPemFilename);
auto recipient = model::PublicKeyToAddress(encryptionPublicKey, manager.config().Network.Identifier);
auto dataDirectory = config::CatapultDataDirectory(manager.userConfig().DataDirectory);
manager.addObserverHook([recipient, dataDirectory](auto& builder) {
    builder.add(observers::CreateTransferMessageObserver(0xE201735761802AFE, recipient, dataDirectory.dir("transfer_message")));
});

The observer is created with three arguments:

  1. A magic 64-bit marker (0xE201735761802AFE). Any message starting with this sequence of magic bytes will be processed.
  2. A recipient address derived from the node certificate. Remember, the node certificate is the bottom level of the two-level certificate chain used by Catapult nodes.
  3. A directory (transfer_message) that should be used as a file-based message queue.

Now, let's look closer at the observer implementation. First, we check if the transaction message starts with the desired magic bytes:

if (notification.MessageSize <= Marker_Size || marker != reinterpret_cast<const uint64_t&>(*notification.MessagePtr))
    return;

If it doesn't, processing is skipped.
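The marker comparison reinterprets the first eight message bytes as a uint64_t. The same check can be sketched in isolation (using std::memcpy instead of the reinterpret_cast above, which sidesteps alignment and aliasing concerns):

```cpp
#include <cstdint>
#include <cstring>

constexpr size_t Marker_Size = sizeof(uint64_t);

// Returns true if the message is longer than the marker and begins with it
bool HasMarker(const uint8_t* pMessage, size_t messageSize, uint64_t marker) {
    if (messageSize <= Marker_Size)
        return false;

    uint64_t messageMarker;
    std::memcpy(&messageMarker, pMessage, Marker_Size);
    return marker == messageMarker;
}
```

Note that the size check uses `<=`, so a message consisting of only the marker (and no payload) is also skipped.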

Next, we check if the transaction is being sent to the configured recipient:

if (recipient != context.Resolvers.resolve(notification.Recipient))
    return;

If it doesn't match, processing is skipped.

At this point we have a transfer message starting with the harvest request magic bytes sent to the node's account. The observer pushes a new file to the file-based message queue to indicate further processing is required. The file is composed of a short header followed by the message, excluding the magic bytes.

io::FileQueueWriter writer(directory.str());
io::Write8(writer, NotifyMode::Commit == context.Mode ? 0 : 1);
io::Write(writer, context.Height);
writer.write(notification.SenderPublicKey);
writer.write({ notification.MessagePtr + Marker_Size, notification.MessageSize - Marker_Size });
writer.flush();

ℹ️ This observer can be registered multiple times with different markers, if you want to implement something similar.

Since this observer is writing to a file queue, something must be reading from it. That is the UnlockedFileQueueConsumer, which is defined here. Notice that where it is called, it is passed the same directory (transfer_message) that was passed to the observer.
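The record format implied by the writer above can be parsed back on the consumer side. A hedged sketch, assuming the field widths suggested by the writes (1-byte mode flag, 8-byte little-endian height, 32-byte sender public key, remaining bytes as the marker-stripped message); the names here are illustrative, not catapult's:

```cpp
#include <array>
#include <cstdint>
#include <cstring>
#include <vector>

// Assumed layout: 1-byte mode flag, 8-byte height, 32-byte public key, payload
struct HarvestRequestRecord {
    uint8_t IsRollback;                       // 0 == commit, 1 == rollback
    uint64_t Height;
    std::array<uint8_t, 32> SenderPublicKey;
    std::vector<uint8_t> Message;             // payload with magic bytes already stripped
};

// Parses a raw record written by the observer (sketch; no error handling)
HarvestRequestRecord ParseRecord(const std::vector<uint8_t>& buffer) {
    HarvestRequestRecord record;
    record.IsRollback = buffer[0];
    std::memcpy(&record.Height, &buffer[1], sizeof(uint64_t));
    std::memcpy(record.SenderPublicKey.data(), &buffer[9], record.SenderPublicKey.size());
    record.Message.assign(buffer.begin() + 41, buffer.end());
    return record;
}
```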

Mosaic Ordering

When sending multiple mosaics within a transfer, Catapult requires them to be ordered by mosaic id. This requirement exists because, for consensus to be reached, every node must agree on the hash of each transaction.

Consider a transfer transaction that transfers 3 CAT and 4 DOG tokens. There are two possibilities for mosaic ordering:

  1. 3 CAT then 4 DOG
  2. 4 DOG then 3 CAT

Given the way hashes work, the two transactions above will have different hashes. In many databases (like MongoDB), the mosaic ordering will not be preserved without storing additional information. As a result, requesting the component parts of a transaction and recalculating the correct hash would be nondeterministic. In order to make this deterministic, we chose to require the mosaics to be specified in ascending order of mosaic id. While this puts an extra requirement on the SDKs, it reduces data storage and leads to better performance in Catapult.
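The ordering rule is cheap to enforce client-side. A sketch using std::is_sorted over a hypothetical (id, amount) mosaic representation (SDKs expose their own mosaic types):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical mosaic representation: (id, amount)
struct Mosaic {
    uint64_t MosaicId;
    uint64_t Amount;
};

// Transfers are valid only when mosaics appear in ascending id order
bool AreMosaicsOrdered(const std::vector<Mosaic>& mosaics) {
    return std::is_sorted(mosaics.cbegin(), mosaics.cend(), [](const auto& lhs, const auto& rhs) {
        return lhs.MosaicId < rhs.MosaicId;
    });
}
```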

ℹ️ In NEM, NIS automatically sorts the mosaics prior to calculating the hash. This could have also worked in Symbol, but we wanted to avoid the extra processing prior to calculating a hash as a performance optimization.