Ecosystem Explorer — Exploring Security Risks in AI Blockchain Projects

ChainLight
ChainLight Blog & Research
22 min read · Apr 25, 2024


1. AI with Blockchain — Research Scope

Artificial Intelligence is currently experiencing the fastest mass adoption among emerging technologies. ChatGPT, Claude, and Copilot have already been deeply integrated into our daily lives and work. In line with this trend, there have been numerous attempts to combine blockchain and AI.

The AI with Blockchain ecosystem is vast, but it can be roughly divided into two categories:

  • Projects that share revenue of AI applications and operate governance through blockchain tokenomics
  • Projects that decentralize existing centralized AI models and infrastructure

This article will cover both types of projects but will focus more on the latter. We will examine the problems with the existing centralized AI ecosystem, analyze how decentralized infrastructure aims to solve these issues, and explain how each project addresses potential security concerns that may arise during this process.

2. Motivations of Decentralized AI

Projects advocating for the decentralization of AI primarily point out the following problems with centralized AI systems:

  • Lack of Verifiability: For most AI services in common use today, users have no way to verify their honest operation. For example, users may believe they are querying a cutting-edge model when, in reality, a different model is serving requests on the backend due to traffic issues or other reasons. Moreover, even for services claiming to use open-source models, users generally have no way to independently verify that the deployed model is the same as the open-source release, since an AI model’s behavior is non-deterministic.
  • Centralized Data & Model Training: Commercial AI vendors typically do not fully disclose the sources and distribution of their training data. Models trained according to corporate intentions may therefore be biased in certain areas, and illegally collected data may be included in the training process. Web2-level audits can partially check this, but they cannot be fully trusted. Additionally, users’ query history or feedback may be reused for training without their consent. From the perspective of model training, data clearly has monetary value, and some projects aim to address this through revenue sharing with their tokens.
  • System Failure Issues: AI services operating in a Web2 environment may become unavailable to users due to server outages or systemic bugs.
  • Low Accessibility of Local AI Training: Privacy-sensitive data, such as medical records or client code, is increasingly used to train local models. However, along with model performance, hardware requirements such as computing power (CPU, GPU, …) and storage size are increasing exponentially. As a result, a growing number of users borrow computing power in decentralized environments.
  • Censorship Issue: With centralized AI services, even if the user’s query is properly delivered to the model, the response may be censored if it violates the service’s standards.

3. Building Blocks of AI on Blockchain

Various attempts have been made to decentralize AI to address these issues, and they have coalesced into distinct areas. Projects in the Decentralized AI ecosystem can be classified as follows. (*The categorization below is based on the author’s own separation according to project characteristics; opinions may differ.)

  • Decentralized Computation: Projects that decentralize computing power or storage to decentralize model training and operation.
  • Decentralized AI: Bittensor is a prime example, implementing a method where multiple models perform a single task, and the best result is determined through consensus. Decentralized AI projects aim to resolve issues such as censorship and single point of failure.
  • Middleware: Ritual is a representative example, delivering the results of off-chain AI models through the blockchain, similar to existing decentralized oracle networks. In this case, the purpose is to enable smart contracts to utilize AI models, but there are also projects like Phala Network that aim to provide data privacy using a trusted execution environment (TEE).
  • Application: Blockchain projects utilizing AI are emerging in various fields such as Agents, DeFi, and GameFi. Most of these execute AI off-chain and utilize blockchain mostly for user identification or revenue sharing through their tokenomics.

In this article, we will analyze the structure of representative projects in each layer, discuss potential problems that projects in each layer may face, and examine how the mentioned protocols address these issues.

4. Case Study — Decentralized AI

4.1. Bittensor

Bittensor, well-known for its TAO token, is a representative Decentralized AI project. Bittensor is designed so that multiple AI-operating entities compete for PoW-style token rewards to deliver the best results to users. Initially launched as a parachain on Polkadot, it pivoted to its own chain in March 2023.

Architecture

Bittensor consists of subnets with different purposes onboarded onto the main network, Subtensor, as illustrated in the image below.

Architecture of Bittensor (Source)

The components of each subnet are as follows:

  • Miner: An entity that operates an AI model and aims to mine as much TAO as possible by competing with other miners according to the incentive mechanism.
  • Validator: Receives and individually evaluates the work results of miners. To become a validator, TAO must be staked, and validators receive a share of rewards for validating. Evaluation in the subnet operates on a PoS basis, so the more TAO staked, the more weight a validator’s evaluation of miners carries.
  • Incentive Mechanism: Defined by the subnet operator; it specifies the roles of miners and validators as well as the evaluation methods: how to query miners for work, the miners’ objective, the format of miner return values, the evaluation criteria for validators, and the TAO reward distribution details. Various elements, such as external databases, can be included to evaluate miners’ work. A minimal sketch of such a mechanism follows this list.
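
To make the division of labor concrete, here is a minimal, hypothetical incentive mechanism in Python. All names are illustrative stand-ins (a real subnet is built on the bittensor library with task-specific scoring), but the shape — query miners, score outputs, normalize to weights — is the part every subnet must define.

```python
# Minimal sketch of a subnet incentive mechanism (all names hypothetical;
# real subnets use the bittensor library and subnet-specific logic).
import random

def query_miner(miner_id: int, prompt: str) -> str:
    """Stub: in a real subnet, this sends the prompt to the miner's model."""
    return f"answer-from-miner-{miner_id}"

def score_output(prompt: str, output: str) -> float:
    """Stub: subnet-defined evaluation, e.g. a reward model or reference check."""
    return random.random()

def validator_round(miner_ids: list[int], prompt: str) -> dict[int, float]:
    """One round: query every miner, score the outputs, normalize to weights."""
    raw = {m: score_output(prompt, query_miner(m, prompt)) for m in miner_ids}
    total = sum(raw.values()) or 1.0
    return {m: s / total for m, s in raw.items()}   # weights sum to 1

weights = validator_round([1, 2, 3], "Summarize block 123")
print(weights)   # submitted on-chain, then aggregated by Yuma Consensus
```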

Once validators complete the evaluation of each miner’s work according to the incentive mechanism, the results are delivered to the Bittensor mainnet, Subtensor, through the Bittensor API. Subtensor aggregates the evaluation results from validators and reaches a consensus on the amount of TAO to be allocated to miners and validators, which is called Yuma Consensus.

Yuma Consensus runs every 360 blocks (approximately 72 minutes) on Subtensor, waiting for the complete delivery of evaluation results from subnet validators. Validators submit evaluation results for miners every 100 to 200 blocks. The lowest-rated miner is expelled from the subnet each cycle, and new miner applications are accepted. The surviving miners strive to reduce errors and achieve higher scores based on the feedback received from Yuma Consensus. Bittensor claims the system is designed, from a game-theoretic perspective, so that each node honestly returns better results, since miners are rewarded exponentially by ranking. Subnet operators can manually change the incentive mechanism to steer miners.
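
The core aggregation idea can be illustrated with a simplified stake-weighted scheme. This is not the exact Yuma algorithm — only a sketch of its clipping intuition: each validator’s weight for a miner is clipped toward the stake-weighted consensus, so a low-stake outlier cannot unilaterally inflate a miner’s reward.

```python
# Simplified illustration of stake-weighted aggregation with majority
# clipping, in the spirit of Yuma Consensus (not the exact algorithm).

def stake_weighted_median(values, stakes):
    """Median of `values` where each validator's vote counts by its stake."""
    pairs = sorted(zip(values, stakes))
    half, acc = sum(stakes) / 2, 0.0
    for v, s in pairs:
        acc += s
        if acc >= half:
            return v
    return pairs[-1][0]

def consensus_weights(weight_matrix, stakes, kappa=0.5):
    """weight_matrix[v][m]: validator v's weight for miner m.
    Clip each validator's vote toward the stake-weighted consensus so an
    outlier (possibly colluding) validator cannot inflate a miner's share."""
    n_miners = len(weight_matrix[0])
    consensus = []
    for m in range(n_miners):
        col = [row[m] for row in weight_matrix]
        median = stake_weighted_median(col, stakes)
        clipped = [min(w, (1 + kappa) * median) for w in col]
        consensus.append(sum(w * s for w, s in zip(clipped, stakes)) / sum(stakes))
    total = sum(consensus) or 1.0
    return [c / total for c in consensus]

weights = consensus_weights([[0.7, 0.3], [0.6, 0.4], [0.0, 1.0]], stakes=[10, 10, 1])
print(weights)   # the low-stake outlier barely moves the result
```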

Security Analysis

  • Validator Collusion: Some validators may collude with miners, forming a group that conspires to give them high evaluation scores and fraudulently receive rewards. To prevent such cases, Yuma Consensus assigns a separate trust score to validators and gives low incentives to those outside the majority. However, this countermeasure assumes that most validators are honest. If validators split into several small colluding groups, or into two large groups gaming the mechanism for rewards, honest validators may end up receiving lower rewards. Furthermore, since a relatively small set of 64 validators, selected by staking amount, participates in consensus in a subnet, the possibility of collusion attacks increases.
  • Incentive Mechanism Implementation Issues: The incentive mechanism should be implemented to “appropriately” suppress malicious behavior by miners and validators, but this criterion is very ambiguous. There has already been a case where problems arose from an incorrect implementation of the incentive mechanism: miners cached validators’ responses and replayed them for a short period to derive the optimal result. Such attacks produce results contrary to the user’s original intent of receiving the best output and cause damage to the subnet operator. Ultimately, users get a degraded experience, which may lower the subnet’s reputation and lead to its expulsion.
  • Attacks on Miners: Depending on the task given to the subnet, some AI models, such as LLMs, may be subject to targeted attacks. Malicious users can attempt system compromise through prompt-level command injection against a miner’s environment, or trigger edge cases with inputs that produce very low accuracy for specific models. In the Bittensor ecosystem, the responsibility for such attacks lies with the miners, as targeted miners are highly likely to be deregistered from the subnet. Miners therefore need to be aware of and prepared for such attacks. Bittensor operates a separate subnet dedicated to defending against attacks on LLM applications.
  • Governance: Bittensor’s governance holds strong authority, such as registering and terminating subnets. However, the participants in Bittensor’s governance are quite centralized, and both proposal submitters and voters are whitelisted entities. The Triumvirate, the body of proposal submitters, consists of three executives from the Opentensor Foundation, and the Senate, the body of proposal voters, consists of 29 groups with disclosed addresses that receive TAO token delegations from users. That said, even if the Senate colludes, its impact is limited to rejecting submitted proposals, and it is nearly impossible for individuals to operate Bittensor’s subnets due to the high budget requirement.
  • Lack of Work Proof for Miners: What users expect from Bittensor is to receive the best results through competition among miners, but currently there is no way for users and validators to verify whether each miner actually used an AI model. This may be resolved in the future as research on Verifiable AI progresses, but that research remains premature in its current state.

In addition to Bittensor, other projects aiming to decentralize AI models include Zero1 Labs and Tau, but they have not yet released public products or detailed documentation.

5. Case Study — Decentralized Computation

Decentralization of AI includes not only the decentralization of the models themselves but also the decentralization of computational resources, which mainly proceeds in two directions:

  • Improving the accessibility of local model training: Decentralized computational resources allow individuals to easily access high-performance compute, making it easier for users to train their own local models and thereby mitigate the system-failure and censorship issues of centralized AI platforms.
  • Decentralization of data and enhanced censorship resistance: Through decentralized data marketplaces, individuals can obtain data that is otherwise difficult to collect, or uncensored data, to train local models. Individuals can then verify the bias of the training data and observe uncensored outputs. Although not covered in this article, Ocean Protocol is a representative example.

In addition, various projects are emerging to decentralize computational resources and training data for various purposes. Most of them trade computational resources deployed in a cloud environment through a P2P marketplace and upload proof of trades or computation to the blockchain. This section will briefly explain the structure of some representative projects and discuss the potential security risk factors that decentralized computational resource projects may have.

5.1. Akash Network

Akash Network is a P2P computational resource marketplace that allows individuals to trade their computational resources in the form of Docker container images on the marketplace. Akash Network consists of four layers.

  • Blockchain Layer: Similar to the Render Network, the blockchain layer of Akash Network primarily handles payments using AKT tokens and is also used for governance, among other things. The blockchain layer also communicates with providers for resource allocation and access control management.
  • Application Layer: Acts as an intermediary between buyers and sellers. When a buyer submits detailed specifications for computational resources, sellers bid on them in a reverse auction, as sketched after this list.
  • Provider Layer: This is the layer where computational resource sellers are located. Sellers communicate with the Akash blockchain through the Daemon provided by Akash and process resource allocation requests.
  • User Layer: This is the layer where computational resource buyers are located. Buyers make resource purchase requests to the Akash Network through a CLI environment or web app and deploy their applications.
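
For intuition, here is a toy reverse-auction selection in Python. It is illustrative only — Akash’s real order flow runs on-chain through deployment orders, bids, and leases — and the field names and the verified-provider flag are assumptions for the sketch.

```python
# Toy reverse auction in the style of Akash's order flow (illustrative
# only; the real flow runs on-chain via deployment orders, bids, leases).
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price_per_block: int   # hypothetical price unit for the sketch
    verified: bool         # e.g. an audited/verified-provider flag

def select_lease(bids: list[Bid], require_verified: bool = True) -> Bid:
    """Buyer-side selection: cheapest bid, optionally from verified providers only."""
    pool = [b for b in bids if b.verified] if require_verified else bids
    if not pool:
        raise ValueError("no acceptable bids")
    return min(pool, key=lambda b: b.price_per_block)

bids = [Bid("provA", 120, True), Bid("provB", 90, False), Bid("provC", 100, True)]
print(select_lease(bids))   # provC wins: cheapest among verified providers
```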

Apart from the blockchain layer, the structure is very similar to other P2P cloud platforms. However, Akash Network offers relatively competitive pricing compared to commercial cloud services such as AWS through the tokenomics of its blockchain layer.

Security Considerations

As the brief description shows, Akash Network uses blockchain for relatively simple tasks such as payments. Since the blockchain layer of Akash Network is also structured very similarly to other chains in the Cosmos ecosystem, it inherits all the considerations of existing blockchains, such as validation. One security incident related to Akash Network was the slashing of tokens delegated by individuals due to a failure in the Strangelove validator’s operations, but this is unrelated to vulnerabilities in the Akash service itself, so we omit the details.

Rather, from the user perspective, most of the important tasks occur in the Docker container environment provided by the seller, so the security considerations are closer to cloud security.

  • Malicious or Vulnerable Docker Images: If a Docker image provider distributes images with embedded malware or backdoors, users may have their critical data or system privileges compromised. Conversely, a provider may unintentionally ship a vulnerable version of an image, in which case an attacker may exploit the vulnerability to access or damage the provider’s hosting system. To mitigate such cases, Akash Network provides a feature that allows users to receive services only from verified providers, and users should be aware of it. If a scanning or verification process were performed on submitted Docker images at the Application Layer or elsewhere, the service could be made even more secure.
  • Authentication: If secure authentication does not occur between the seller and the buyer, access control issues may arise, allowing invalid users to access resources. Akash Network employs mutual TLS (mTLS) between buyer and seller to guard against such problems; a minimal setup is sketched after this list.
  • Malicious Provider: Akash Network does not impose economic constraints such as staking on computational resource providers. It is therefore difficult to directly penalize malicious providers who serve at their own convenience, and we found actual complaints from users during our research. The protocol could partially resolve this by operating a management system for individual services, assigning reputation scores to providers, and introducing a slashing mechanism on payments.
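
The mutual-authentication idea can be shown with Python’s standard ssl module. This is a generic mTLS sketch, not Akash’s actual implementation (which lives in its provider daemon); the certificate file names and host are hypothetical.

```python
# Minimal mTLS setup with Python's standard ssl module, illustrating
# mutual authentication (generic sketch; file names/host are hypothetical).
import socket
import ssl

# Provider (server) side: require and verify a client certificate.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.load_cert_chain(certfile="provider.pem", keyfile="provider.key")
server_ctx.verify_mode = ssl.CERT_REQUIRED              # reject unauthenticated buyers
server_ctx.load_verify_locations(cafile="buyers_ca.pem")

# Buyer (client) side: present a certificate and verify the provider's.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                        cafile="providers_ca.pem")
client_ctx.load_cert_chain(certfile="buyer.pem", keyfile="buyer.key")

with socket.create_connection(("provider.example", 8443)) as sock:
    with client_ctx.wrap_socket(sock, server_hostname="provider.example") as tls:
        tls.sendall(b"deployment manifest ...")   # both sides now authenticated
```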

In addition, in cases like Akash Network where resources are traded between individuals in the form of Docker containers, security elements must be updated periodically to guard against the various Web2 attack vectors that can arise during image building, such as supply chain attacks. The mediating protocol should likewise onboard images to the marketplace only after a verification process, for example the digest pinning sketched below.
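
As one concrete form such verification could take, the sketch below pins image digests to a vetted allow-list before deployment. The helper and digest values are hypothetical; a production marketplace would rather verify registry signatures (e.g., Docker Content Trust or cosign) than raw bytes.

```python
# Digest pinning: check that a pulled image matches a vetted allow-list
# before deployment (hypothetical helper; real marketplaces would verify
# registry signatures such as Docker Content Trust or cosign instead).
import hashlib

VERIFIED_IMAGE_DIGESTS = {
    # image tag -> sha256 recorded when the image was vetted (example value:
    # this is the digest of the stand-in bytes b"test" used below)
    "acme/inference:1.4.2":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_verified(image: str, image_bytes: bytes) -> bool:
    """Recompute the digest of the pulled image and compare to the allow-list."""
    return VERIFIED_IMAGE_DIGESTS.get(image) == hashlib.sha256(image_bytes).hexdigest()

print(is_verified("acme/inference:1.4.2", b"test"))   # True: digests match
```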

5.2. Render Network

Render Network is a project that operates a P2P GPU rendering marketplace through blockchain. Users of the Render Network can run OctaneRender, the GPU rendering software from OTOY (the company behind Render Network), on GPU resources purchased from the marketplace.

GPU resource sellers operate network nodes through software provided by Render Network and receive work assignments from the network. Rendering work proceeds once the resource seller accepts within a certain period after assignment, and the resource buyer (the rendering requester) can review the seller’s output and decide whether to pay. In Render Network, the blockchain primarily serves as payment rail and ledger, while key functions such as work assignment are performed by centralized services operated by Render Network. This structure is therefore closer to a centralized intermediary than to a blockchain-based marketplace.

Security Considerations

From a security perspective, the following aspects should be considered in the structure of Render Network:

  • Data Privacy: Artists and others delegating GPU work may be reluctant to expose their work to external computation agents or Render Network operators, and some tasks may require privacy-sensitive inputs. Render Network claims that all work products uploaded to the network are managed in encrypted form and that individual assets are stored only for a short period. However, providing a means of verifying this claim would enhance its reliability.
  • Malicious Customer: GPU rendering requires a significant amount of computation, and since no cost is incurred until the requester reviews and approves the output, the system is vulnerable to DoS attacks. To counter this, Render Network cancels the entire job and escalates to the support team if the output is rejected three times consecutively. However, countermeasures are still needed against a malicious user who creates a large number of ghost accounts and engages in trolling.
  • Malicious Provider: Malicious resource providers may deliver work of a lower standard than the specifications stated at registration, or arbitrarily delay delivery. To prevent this, Render Network has built a Reputation Scoring system for resource providers, significantly reducing the work allocated to GPU nodes with low reputation. It also lets resource buyers review work products and decide whether to pay, reducing buyer inconvenience. A reputation-weighted allocation sketch follows this list.
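
To illustrate how reputation can gate allocation, here is a toy feedback loop in Python. Render’s actual Reputation Scoring and tiering logic is not public in this form; the scoring constants below are arbitrary assumptions.

```python
# Toy reputation-weighted job allocation (illustrative; Render's actual
# Reputation Scoring logic is not public in this form).
import random

def pick_node(nodes: dict[str, float]) -> str:
    """Sample a node with probability proportional to its reputation score,
    so low-reputation nodes rarely receive work but can still recover."""
    names, scores = zip(*nodes.items())
    return random.choices(names, weights=scores, k=1)[0]

def settle_job(nodes: dict[str, float], node: str, accepted: bool) -> None:
    """Buyer review feeds back into reputation: reward accepts, punish rejects."""
    nodes[node] = max(0.01, nodes[node] * (1.10 if accepted else 0.50))

nodes = {"gpu-node-a": 1.0, "gpu-node-b": 1.0, "gpu-node-c": 1.0}
for _ in range(20):
    n = pick_node(nodes)
    settle_job(nodes, n, accepted=(n != "gpu-node-c"))  # c always delivers bad work
print(nodes)   # gpu-node-c's score collapses; it is effectively starved of jobs
```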

6. Case Study — Middleware

Projects classified as Middleware allow smart contracts to query off-chain AI models and use the results in contracts. Representative examples include Ritual’s Infernet SDK and Phala Network.

6.1. Ritual Infernet SDK

Infernet SDK enables smart contracts to use the inference results of AI models at the contract level. The structure of Infernet is as follows. The smart contract sends a query to the off-chain Infernet node through the Infernet SDK, and the result is returned to the smart contract in the form of an oracle.

Architecture of Ritual Infernet (Source)

A project using the Infernet SDK has the following components:

  • Subscription: Contains the context of the off-chain computation requested by the user to the Infernet node. Each subscription is identified by a unique ID and includes parameters such as the caller contract address, container ID for the computation to be performed, input data, duration, and gas. Each fulfilled Subscription is identified by a combination of Subscription ID, response time, and response node address.
  • Coordinator: Manages the interaction between smart contracts and Infernet nodes. It is responsible for creating and managing Subscriptions, managing the lifecycle of Infernet nodes, and registering them on the network.
  • Consumer: Requests the creation of Subscriptions from the Coordinator and receives responses through callback functions. Smart contracts that require off-chain computation must inherit the Consumer contract.
  • Infernet Node: An off-chain node that tracks the state of Subscriptions and delivers output values to smart contracts through the Coordinator. The output of the off-chain computation is returned in the format specified by the user in the Subscription. Each node is deployed as a Docker container image, and there is a container manager for managing Docker images.
  • In addition, there are Guardians, which verify the work of Infernet nodes and manage container access permissions; Job Orchestrators, which assign tasks to each node; and Metric Senders, which record node statistics in the Infernet database.

Security Considerations

The high-level structure of Infernet is broadly similar to other decentralized oracle networks, so vulnerabilities can occur in both the smart contracts and the Infernet nodes.

  • Misimplemented Coordinator: The Coordinator contract is the only gateway through which smart contracts communicate with off-chain entities, so a vulnerability in the Coordinator acts as a single point of failure. In use cases such as AI-assisted DeFi protocols, it would be critical if a vulnerability allowed an exploiter to arbitrarily replace or tamper with inference results. While Ritual recommends separate audits when custom functions are implemented instead of the standard Coordinator, we could not find public audit reports on the standard Infernet SDK either.
  • Misimplemented Consumer: The Infernet node calls the rawReceiveCompute function of the user-developed Consumer to deliver the result of the off-chain computation; this function can be customized by the project using the Infernet SDK. If the Infernet node’s transaction to the Consumer fails, the computation result is not delivered properly: the node may waste transaction fees, or the project built on Infernet may malfunction. To prepare for this, Ritual requires Infernet nodes to verify delivery through transaction simulation before sending the actual on-chain transaction (a simulate-before-send sketch follows this list). However, since simulation cannot fully guarantee on-chain success, Infernet nodes should also implement a fallback mechanism for failed transactions and handle reorgs.
  • Malicious Node: A malicious Infernet node may return incorrect results or censor computation results. The current structure of Infernet nodes does not have a mechanism to aggregate computation values from multiple nodes for consensus or adjust the workload of Infernet nodes through reputation scores.
  • Data Privacy: It is unclear whether input and output values are encrypted during interactions with Infernet nodes. Users should therefore avoid submitting sensitive data as inputs, and avoid consuming sensitive values returned from off-chain interactions.
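
The simulate-before-send pattern looks roughly like the following. All helpers here are hypothetical stubs — a real node would use an Ethereum client library (e.g., web3) for the dry run, broadcast, and confirmation tracking — but the control flow is the point: never spend gas on a callback that simulation says will revert, and treat even a mined transaction as tentative until it is a few blocks deep.

```python
# Simulate-before-send for an Infernet-style node (hypothetical stubs;
# a real node would use an Ethereum client library for these calls).
import time

def simulate(tx: dict) -> bool:
    """Stub: dry-run the Consumer callback (eth_call-style); True if it succeeds."""
    return True

def send(tx: dict) -> str:
    """Stub: broadcast the transaction and return its hash."""
    return "0xabc..."

def confirmed(tx_hash: str, depth: int = 6) -> bool:
    """Stub: True once the tx is `depth` blocks deep, reducing reorg risk."""
    return True

def deliver_output(tx: dict, retries: int = 3) -> bool:
    """Spend gas only when simulation passes; back off and retry on failure."""
    for attempt in range(retries):
        if not simulate(tx):
            return False                    # Consumer would revert: don't pay gas
        if confirmed(send(tx)):             # guard against shallow reorgs
            return True
        time.sleep(2 ** attempt)            # backoff, then re-simulate and retry
    return False                            # fallback path: flag for operator review
```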

6.2. Phala Network

Phala Network is a blockchain network that natively supports off-chain computation with hardware security. Smart contract computations are performed in a trusted execution environment through an off-chain relayer, preserving data privacy. Phala Network claims to have implemented an AI Agent Contract that preserves data privacy through this.

The components of Phala Network are as follows:

  • Worker: An off-chain server that provides computational resources to Phala Network.
  • Contract Cluster: A runtime execution environment for smart contracts, where each smart contract is assigned to a cluster and supported by computations from multiple Workers.
  • Phat Contract: Phala Network’s smart contracts. Phat Contracts pay PHA tokens when requesting computation from Workers and stake PHA tokens for this purpose.
  • Gatekeeper: The management entity of Workers operated by Phala Network. It performs the role of managing keys for entities involved in the network, such as Workers.

Trusted Execution Environment

To better explain the computation process of Phala Network, we briefly introduce the basics of the Trusted Execution Environment (TEE). TEEs are hardware security features that allow applications to perform computations without exposing their code and data to external entities, including the operating system itself. As Phala Network uses Intel SGX among TEEs, we will look at it in a bit more detail.

Intel SGX is a hardware security feature built into most modern Intel processors. SGX loads the code and data to be protected into a dedicated memory area within the processor called the Enclave Page Cache, and the data in this area is encrypted in real time by the processor so that the original data cannot be accessed from outside. SGX also communicates with Intel’s remote attestation server to issue a report proving the enclave’s integrity, allowing users to verify the enclave’s authenticity and establish a secure communication channel with it. As a result, when off-chain computation is delegated to an Intel SGX enclave, the computation server cannot arbitrarily modify or observe the data it sends and receives.

Architecture

Architecture of Phala Network (Source)

The image above illustrates Phala Network’s architecture. A client deploys a Phat Contract to Phala Network. Contract state on Phala Network is stored in encrypted form, so the chain itself plays the role of a message queue rather than a computation environment. When a user queries a contract to request a state transition, the contract is assigned a Worker in its cluster, and its state is delivered to the Worker’s enclave through a secure communication channel. The Worker decrypts the received state inside the enclave, performs the computation, and uploads the re-encrypted state to the blockchain; a minimal sketch of this round-trip follows. Since Workers are off-chain entities, it seems possible for them to run small AI models inside the enclave, and accordingly, Phala Network recently announced an AI Agent Contract that runs within this environment.
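
The encrypted round-trip can be sketched with ordinary symmetric encryption. Here Fernet from the cryptography package stands in for Phala’s actual key exchange and enclave sealing, and the state layout is invented for the example; the point is that the chain only ever stores ciphertext.

```python
# Minimal sketch of Phala-style encrypted state handling; Fernet stands in
# for the enclave's real key exchange and sealing (names are illustrative).
import json
from cryptography.fernet import Fernet

cluster_key = Fernet(Fernet.generate_key())   # in Phala, derived via the Gatekeepers

def chain_store(state: dict) -> bytes:
    """The chain only ever sees ciphertext: it is a message queue, not a computer."""
    return cluster_key.encrypt(json.dumps(state).encode())

def worker_transition(blob: bytes, query: dict) -> bytes:
    """Inside the enclave: decrypt, apply the state transition, re-encrypt."""
    state = json.loads(cluster_key.decrypt(blob))
    state["counter"] = state.get("counter", 0) + query["increment"]
    return cluster_key.encrypt(json.dumps(state).encode())

blob = chain_store({"counter": 0})
blob = worker_transition(blob, {"increment": 5})
print(json.loads(cluster_key.decrypt(blob)))   # {'counter': 5}
```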

Security Considerations

  • Key Management: As Phala Network relies on a trusted execution environment, the integrity and confidentiality of data are guaranteed under normal circumstances. However, Phala Network operates various keys to exchange encrypted data and manage off-chain entities, and if those keys are leaked, these guarantees may become meaningless. Phala Network summarizes the impact of key leakage as follows:

Worker Key — An attacker can decrypt all data delivered to the Worker, and can impersonate the Worker to modify the values delivered to the user. To address this, Phala Network compares the results derived from multiple workers and slashes the stake of outliers.

Contract Key — Attacker can decrypt the state value and all historical inputs of the corresponding contract.

Cluster Key — Attacker can decrypt the state values and all historical inputs of all contracts within the cluster.

Master Key — All historical data is exposed. In particular, the Master Key of Phala Network is shared with all gatekeepers, and if even one of the gatekeepers is attacked, the master key may be leaked. To address this, Phala Network is planning a master key management mechanism through MPC.

  • Limitations of TEE for AI: TEEs, especially SGX, have some limitations when it comes to operating AI models. First, the encrypted memory area supported by SGX is limited to 256MB, which may be unsuitable for computations over large models. Intel addresses the capacity issue in the later SGX2, but supporting hardware is currently hard to find. Additionally, since a TEE is a hardware security feature, it constrains the computation environment: Intel SGX requires Workers to select and operate supported hardware appropriately, and because SGX does not protect GPU computation, models must run CPU-only to guarantee integrity. Lastly, since SGX has suffered critical side-channel bugs nearly every year since its release, there is the burden of continuously tracking the latest attacks. Phala Network is aware of this burden: it verifies hardware/software safety through hardware verification when registering Workers, and its hierarchical key management system enables quick action on Worker misconduct through the Gatekeepers.

7. Case Study — Application

AI applications utilizing blockchain are emerging in various fields and can be classified as follows:

  • AI Agent: The most common type of project. Projects offering AI chatbots through LLMs or handling image generation tasks fall into this category. Most of the services we observed perform AI-user interaction off-chain and use blockchain for governance or revenue sharing through token issuance, so on-chain operations are not the main functionality in most cases.
  • AI DeFi: AI DeFi projects aim to capture higher yield farming opportunities compared to other DeFi platforms using AI. Users stake assets in the project’s vault, and the project generates profits by synthesizing or rebalancing assets using its AI model and shares the yield. However, in most cases, these projects require significant trust from users because there is a lack of means to verify whether the project’s judgment was made through an AI model or whether the AI model’s results were manipulated by an intermediary.
  • AI / Data Marketplace: These are projects that tokenize access rights to AI models or data and sell them on an on-chain marketplace. In the case of a P2P marketplace, there is a possibility that the seller may embed malicious code or backdoors in their assets or storage space, so transactions should only be made on assets deployed in a verified environment, and the integrity of the assets being sold should be verified. Additionally, there is a possibility that buyers may share purchased assets with others, damaging the market structure, so marketplace projects should design their access control structure with this in mind.
  • AI-assisted Security Tool: These projects detect exploits on the blockchain by using AI to flag abnormal transactions or state changes. Since publicly disclosing the existence of an exploit can cause significant market fluctuation, such a project needs a process for verifying authenticity before disclosing information.

8. AI with Blockchain: Limitations

8.1. Poor Motivation for Adopting Blockchain

Currently, most AI projects that claim to utilize blockchain lack clear motivation for using blockchain. Some projects use blockchain only for revenue sharing through tokens or for governance. I believe the ultimate reason for using blockchain in AI is to protect user data privacy and verify data bias due to centralization, but it is regrettable that only a few infrastructure layers or decentralized AI projects exist to support this. AI oracle projects like Ritual, which allow smart contracts to access AI models, are expected to be exemplary cases of AI combined with blockchain if proper monitoring of off-chain node computations is performed.

8.2. Lack of Proof for AI Computation

The poor explainability and verifiability of AI is one of the main reasons it is difficult to verify the integrity of off-chain computations. There is a lack of means to verify whether a computation was performed by a model with the specifications the user required, and indeed whether it was computed by an AI model at all. Even if the model’s normal operation during inference is guaranteed, the bias of the data used in training also matters for the reliability of decentralized AI. Therefore, for the implementation of fully decentralized AI, we believe additional measures are needed, such as uploading and disclosing proofs on the blockchain: a snapshot of the distribution of the collected data prior to training, traces of the model training process, and evidence that the verified trained model matches the version deployed to the actual off-chain node. A minimal commitment sketch follows.
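
As a lower bound on what such proofs could look like, the sketch below binds a pre-training data-distribution snapshot to the released model weights in a single digest that could be published on-chain. The record layout is an assumption for illustration; real schemes would need richer artifacts (training traces, attestations) than a bare hash.

```python
# Committing training artifacts: bind the data-distribution snapshot to the
# released weights in one digest (illustrative layout; only the 32-byte
# digest would be published on-chain, not the data itself).
import hashlib
import json

def training_commitment(dataset_stats: dict, weights: bytes) -> str:
    record = {
        "dataset_distribution": dataset_stats,                  # e.g. per-source shares
        "weights_sha256": hashlib.sha256(weights).hexdigest(),  # deployed model file
    }
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

# A verifier recomputes the digest from the disclosed snapshot and the model
# actually served by the off-chain node, then compares it to the on-chain value.
print(training_commitment({"web": 0.6, "code": 0.4}, b"\x00fake-weights"))
```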

9. Conclusion

In this article, we examined the structure of blockchain projects being developed for the decentralization of AI and discussed the security considerations they face. As mentioned above, the decentralization of AI through blockchain is at a very early stage, and no project yet appears to both ensure the integrity of computation results through verification logic and protect user data through complete decentralization. We believe a decentralized verification/relaying layer for computation results, or a more widely usable data privacy layer, will be needed in the future. For the former, some projects are working on verifiable AI or on a concept called “optimistic ML” that implements challenge processes for AI computation, much like optimistic rollups (a toy version of the challenge game follows). For the latter, some projects are attempting fully encrypted AI through homomorphic-encryption machine learning, but this too is at a very early stage and has many constraints on computation.
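
To make the optimistic-ML idea concrete, here is a toy challenge game. It is purely conceptual — real designs make inference deterministic and replay the disputed computation step by step on-chain, with bonds posted on both sides — but the accept-unless-challenged structure is the same.

```python
# Toy "optimistic ML" flow: a claimed result is accepted unless re-execution
# during a challenge window disagrees (conceptual sketch only).
def run_model(x: int) -> int:
    return x * 2                      # stand-in for a deterministic inference

def optimistic_settle(claimed: int, x: int, challenged: bool):
    if not challenged:
        return claimed, "accepted optimistically after the window expires"
    truth = run_model(x)              # a challenger forces re-execution
    if truth == claimed:
        return claimed, "challenge failed: challenger's bond slashed"
    return truth, "fraud proven: prover's bond slashed, result replaced"

print(optimistic_settle(claimed=10, x=5, challenged=True))   # honest prover
print(optimistic_settle(claimed=99, x=5, challenged=True))   # fraud caught
```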

We believe there is sufficient motivation to decentralize the centralized AI platforms, but decentralized AI platforms through blockchain have a complex attack surface that encompasses both Web2 and Web3. Projects where Web2-Web3 element interactions serve as the main function will require a strong background in both Web2 and Web3 for security audits. Above all, we would like to emphasize that proactively responding to new security issues triggered by the interaction between AI and blockchain will play an important role in enhancing the reliability of the decentralized AI ecosystem.

Writer’s Comment

As can be seen in the article, the current decentralized computing environment for AI through blockchain appears to be somewhat centralized. The initial thought during the research was that, in order to fully decentralize AI, the entire process of model training and inference should be recorded on the blockchain so that it can be verified by everyone. However, when considering efficiency, it was concluded that the ultimate direction might be to provide verifiability to AI. Recently, AI that performs privacy-native computation using homomorphic encryption is emerging, and although there are still many problems with practical use, it seems necessary to pay attention to such concepts. The area investigated this time is developing very rapidly, so I plan to write a follow-up article later. Please look forward to it!

c4lvin, Research Analyst at ChainLight


About ChainLight

Ecosystem Explorer is ChainLight’s report series introducing internal analysis of trending Web3 projects from a security perspective, written by our research analysts. With the mission of helping security researchers and developers collectively learn, grow, and contribute to making Web3 a safer place, we release our reports periodically, free of charge.

To receive the latest research and reports conducted by award-winning experts:

👉 Subscribe to our newsletter

👉 Follow @ChainLight_io

Established in 2016, ChainLight’s award-winning experts provide tailored security solutions to fortify your smart contract and help you thrive on the blockchain.
