DWF Labs Talks Roadblocks and Solutions
The surge of investment in AI agents suggests that widespread automation could prove even more transformative than the Industrial Revolution. Like any technological innovation, AI agents will face problems during their development, and continuous improvement is critical to using them responsibly and realizing their full potential.
At Consensus Hong Kong, BeInCrypto interviewed Andrei Grachev, Managing Partner at DWF Labs, about the major challenges AI agents face in achieving mass adoption and what widespread use might look like.
Traditional technology fields and Web3 embrace AI
At this point, it is safe to say that the adoption of artificial intelligence (AI) is inevitable. Tech giants including Meta, Amazon, Alphabet, and Microsoft have announced plans to invest in AI and data centers in 2025.
During his first week in office, US President Trump announced Stargate, a new private joint venture focused on developing AI data centers. Led by OpenAI, SoftBank, and Oracle, the venture plans to build up to 20 large AI data centers across the United States.
The initial investment is estimated at $100 billion, and expansion plans could bring the total to $500 billion by 2029.
Web3 projects are making similar investments in AI. In December, DWF Labs, a leading crypto venture capital firm, launched a $20 million AI Agent Fund to accelerate innovation in autonomous AI technology.
Earlier this month, the NEAR Foundation, which supports the NEAR Protocol, announced its own $20 million fund focused on scaling the development of fully autonomous and verifiable agents built on NEAR technology.
“History shows that everything that can be automated will be automated, and certainly some business and everyday processes will be taken over by AI agents,” Grachev told BeInCrypto.
However, as AI development accelerates, its potential for abuse has become a growing concern.
Malicious use of AI agents
In Web3, AI agents have quickly gone mainstream. They offer a wide range of capabilities, from market analysis to automated crypto trading.
However, their growing integration also presents key challenges. AI abuse by malicious actors is a major concern, with scenarios ranging from simple phishing campaigns to sophisticated ransomware attacks.
Since late 2022, the widespread availability of generative AI has fundamentally changed content creation while also attracting malicious actors seeking to exploit the technology. This democratization of computing power has enhanced adversary capabilities and potentially lowered the barrier to entry for less sophisticated threat actors.
According to an Entrust report, AI-assisted digital document forgeries now exceed physical counterfeits, up 244% year-on-year in 2024. Meanwhile, deepfakes account for 40% of all biometric fraud.

“It has already been used for scams. It is used in video calls to impersonate people and misrepresent their voices,” Grachev said.
Examples of such exploitation have made headlines. Earlier this month, a finance worker at a multinational firm in Hong Kong was tricked into authorizing $25 million in payments to fraudsters using deepfake technology.
The worker joined a video call with people he believed were colleagues, including the company’s chief financial officer. Despite initial hesitation, the worker reportedly proceeded with the payments after the other participants looked and sounded authentic. It was later discovered that every participant on the call was a deepfake fabrication.
From early adoption to mainstream acceptance
Grachev believes this kind of malicious use is inevitable. He noted that technological development is often accompanied by initial errors, which diminish as the technology matures. Grachev offered two examples to prove his point: the early World Wide Web and the early days of Bitcoin.
“We should remember that the internet started with porn sites. It was like the first Bitcoin, which started with drug dealers and then improved,” he said.
Several reports support Grachev’s view. They suggest the adult entertainment industry played a crucial role in the early adoption and development of the internet. Beyond providing a consumer base, it pioneered technologies such as the VCR, video streaming, virtual reality, and various forms of online communication.
Porn acted as an onboarding tool. The adult entertainment industry has historically driven consumer adoption of new technologies.
Its early embrace of innovation, particularly when it successfully met its audience’s demands, often led to broader mainstream adoption.
“It starts with fun, but fun attracts people. Then you can build something on that audience,” Grachev said.
Over time, safeguards were established to limit the frequency and accessibility of adult entertainment. Regardless, it remains one of the many services the internet offers today.
Bitcoin’s journey from the darknet to disruption
The evolution of Bitcoin closely mirrors the internet’s earliest use cases. Its early adoption was strongly associated with darknet markets and illicit activities, including drug trafficking, fraud, and money laundering. Its pseudonymous nature and the ease of transferring funds globally made it attractive to criminals.
Although Bitcoin is still used in criminal activity, it has found many legitimate applications. The blockchain technology underlying cryptocurrencies provides solutions to real-world problems and disrupts traditional financial systems.
Though still nascent industries, cryptocurrency and blockchain applications will continue to evolve. According to Grachev, the same will happen with the gradual adoption of AI technology. For him, mistakes must be welcomed so that people can learn from them and adjust accordingly.
“We should always remember that fraud happens first, and then people start thinking about how to prevent it. Of course it happens, but it’s a normal process, it’s a learning curve,” Grachev said.
However, knowing that such situations will arise also raises the question of who should be held responsible.
Issues of responsibility
Identifying liability for harm caused by an AI agent’s actions is a complex legal and ethical issue. The question of how to hold AI accountable inevitably arises.
The complexity of AI systems makes determining liability for damages a challenge. Their “black box” nature, unpredictable behavior, and capacity for continuous learning make it difficult to apply conventional notions of fault when problems occur.
In addition, the involvement of multiple parties in AI development and deployment complicates assessments of responsibility, making it difficult to pinpoint who is at fault when an AI fails.
Responsibility could lie with the manufacturer for design or production defects, the software developer for code issues, or the user for failing to follow instructions, install updates, or maintain security.
“I think the whole thing is too new, and I think we should be able to learn from it. We should be able to stop certain AI agents if needed. But from my point of view, if there was no bad intention behind it, no one should be held responsible, because it’s something really new,” Grachev told BeInCrypto.
However, he argued, these situations must be handled carefully to avoid stifling ongoing innovation.
“If you blame entrepreneurs, it will kill innovation because people will be scared. But if something works in a bad way, it can eventually work in a good way. We need a way to stop it, learn, improve, and learn again,” Grachev added.
Still, the line remains razor thin, especially in more extreme cases.
Addressing trust issues for responsible AI adoption
When discussing the future of artificial intelligence, a widespread fear involves scenarios in which AI agents become more powerful than humans.
“There are a lot of movies about it. If we’re talking about, let’s say, police or government controls, or armies in some kind of war, automation is of course a huge fear. Some things could be automated to such a huge level that they could harm humans,” Grachev said.
When asked whether this could happen, Grachev said that, in theory, it could. In any case, he admitted that he cannot know what will happen in the future.
Such scenarios, however, point to a fundamental trust problem between humans and AI. Grachev said the best way to address it is to expose people to situations where AI actually helps them.
“AI can be hard to trust. That’s why it should start with something simple, because trust in an AI agent isn’t built when someone explains that it’s trustworthy. People should get used to using it. For example, if you’re talking about crypto, you can launch a meme coin, let’s say on Pump.fun, but why not launch it with a voice message? With an AI agent, you just send a voice message saying, ‘Please launch this and that,’ and it launches it. Then the next step (would be) to trust the agent with some more important decisions,” he said.
Ultimately, the road to widespread AI adoption will undoubtedly be marked by both significant progress and unforeseen challenges.
Balancing innovation with responsible implementation along the way will be crucial to shaping an AI future that benefits all of humanity.
Disclaimer
In adherence to the Trust Project guidelines, this feature article presents the opinions and perspectives of industry experts or individuals. BeInCrypto is committed to transparent reporting, but the views expressed in this article do not necessarily reflect those of BeInCrypto or its staff. Readers should verify information independently and consult a professional before making decisions based on this content. Please note that our Terms and Conditions, Privacy Policy, and Disclaimers have been updated.