On June 19, Ethereum block explorer and analytics platform Etherscan launched a new tool called “Code Reader” that uses artificial intelligence to retrieve and interpret the source code of a specific contract address. Given a user prompt, Code Reader generates a response via OpenAI’s large language model (LLM), providing insight into the contract’s source code files. Etherscan’s developers wrote:

“You need a valid OpenAI API key and sufficient OpenAI usage limits to use this tool. This tool does not store your API keys.”

Example uses of Code Reader include gaining deeper insight into contract code through AI-generated explanations, obtaining comprehensive lists of smart contract functions related to Ethereum data, and understanding how the underlying contract interacts with decentralized applications (dApps). “Once the contract files are loaded, you can select a specific source code file to read. Additionally, you can edit the source code directly in the user interface before sharing it with the AI,” the developers wrote.

Code Reader demo. Source: Etherscan
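Conceptually, a workflow like the one described could be stitched together from Etherscan’s public contract-source endpoint and OpenAI’s chat API. The sketch below is only an illustration of that general pattern, not Etherscan’s actual implementation; it assumes the legacy `openai` Python client, a user-supplied OpenAI key (as the tool requires), and placeholder API keys.

```python
import requests
import openai  # legacy 0.x-style client assumed for this sketch

ETHERSCAN_API_KEY = "YOUR_ETHERSCAN_KEY"  # placeholder
openai.api_key = "YOUR_OPENAI_KEY"        # Code Reader requires the user's own key

def fetch_contract_source(address: str) -> str:
    """Retrieve verified source code for a contract from Etherscan's public API."""
    resp = requests.get(
        "https://api.etherscan.io/api",
        params={
            "module": "contract",
            "action": "getsourcecode",
            "address": address,
            "apikey": ETHERSCAN_API_KEY,
        },
        timeout=30,
    )
    return resp.json()["result"][0]["SourceCode"]

def explain_contract(address: str, question: str) -> str:
    """Send the contract source plus a user prompt to an OpenAI chat model."""
    source = fetch_contract_source(address)
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You explain Solidity smart contracts."},
            {"role": "user", "content": f"{question}\n\n{source}"},
        ],
    )
    return completion.choices[0].message["content"]

# Example: ask how a contract's functions might interact with dApps
# print(explain_contract("0x...", "Summarize this contract's external functions."))
```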

Amid the AI boom, some experts have raised concerns about the viability of current AI models. According to a recent report published by Singaporean venture capital firm Foresight Ventures, “computing resources will be the next big battleground in the coming decade.” Despite growing demand for training large-scale artificial intelligence models on decentralized distributed computing networks, researchers say current prototypes face significant limitations, such as complex data synchronization, network optimization, and data privacy and security issues.

In one example, Foresight researchers noted that training a large model with 175 billion parameters, using single-precision floating-point representation, would require approximately 700 gigabytes of memory. However, distributed training requires these parameters to be frequently transferred and updated between computing nodes. With 100 computing nodes, each needing to update all parameters at every unit step, the model would require transferring 70 terabytes of data per second, far exceeding the capacity of most networks. The researchers summarized:
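Those figures follow from straightforward arithmetic, sketched below; the per-second framing assumes one full parameter synchronization per unit step and one step per second, which is an assumption for illustration rather than a claim from the report.

```python
# Back-of-the-envelope check of the bandwidth figures cited by Foresight Ventures.
params = 175e9          # 175 billion parameters
bytes_per_param = 4     # single-precision (FP32) floating point

model_size_gb = params * bytes_per_param / 1e9
print(f"Model size: {model_size_gb:.0f} GB")          # ~700 GB

nodes = 100             # computing nodes, each syncing all parameters per unit step
traffic_tb = nodes * model_size_gb / 1e3
print(f"Traffic per unit step: {traffic_tb:.0f} TB")  # ~70 TB
```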

“In most scenarios, small AI models are still a viable option and should not be overlooked too early in the tide of FOMO on large models.”