Underlying Technology - Apta
A Comprehensive Exploration of the AI Technology Powering Hero.io
Hero.io employs a sophisticated architecture that combines the natural language capabilities of large language models (LLMs) with proprietary bespoke AI expert models, connected by an interaction layer that enhances the reliability and depth of insights.
Apta's pre-emptive agentic flow architecture combines the natural language abilities of LLMs with proprietary AI expert models that perform advanced reasoning and access live data streams for accurate, up-to-date answers. An interaction layer seamlessly connects these systems, allowing users to reach reliable, deep insights through simple language. Conventional conversational AI systems are reactive, initiating computational inference only when prompted by the user. While this works for simple tasks, complex analytics would introduce significant delays and degrade the user experience. Apta's framework is industry-leading because it is pre-emptive: it anticipates key user queries in the crypto and Web3 space and pre-computes complex intermediate results, delivering rapid responses to multi-step and advanced tasks.
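The sketch below illustrates the general idea of a pre-emptive flow, assuming a background scheduler that pre-computes intermediate results for anticipated query types before the user asks. The function and class names are illustrative, not Apta's actual implementation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical expensive analytics that a purely reactive system would
# only start once the user asks.
def compute_token_health_report(token: str) -> dict:
    time.sleep(2)  # stand-in for heavy multi-step analytics
    return {"token": token, "health_score": 0.87, "computed_at": time.time()}

class PreemptiveCache:
    """Pre-computes intermediaries for anticipated queries so that the
    user-facing answer only needs cheap, final-mile assembly."""

    def __init__(self, anticipated_tokens):
        self._results = {}
        self._pool = ThreadPoolExecutor(max_workers=4)
        # Kick off background computation before any user prompt arrives.
        for token in anticipated_tokens:
            self._results[token] = self._pool.submit(compute_token_health_report, token)

    def answer(self, token: str) -> dict:
        future = self._results.get(token)
        if future is not None:
            return future.result()                   # usually already finished
        return compute_token_health_report(token)    # fall back to the reactive path

cache = PreemptiveCache(anticipated_tokens=["BTC", "ETH", "SOL"])
# ... later, when the user actually asks:
print(cache.answer("ETH"))
```

In this pattern the latency of anticipated multi-step queries collapses to a cache lookup, while unanticipated queries still work through the slower reactive path.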
The technical foundation of Hero.io's platform is rooted in the PhD research of Apta's founding team at the University of Cambridge. To date, Apta's team has published more than 60 academic articles in AI and machine learning, including papers presented at prestigious conferences such as ICML (the International Conference on Machine Learning). Hero.io has formed a strategic joint venture with Apta to apply its AI platform to Web3 and crypto applications.
Apta specializes in integrating large language models (LLMs) with logic-based expert modules, delivering precise and context-aware insights across various industries. Its technology powers applications by analyzing diverse data sources in real time, ensuring unparalleled accuracy and reliability. Renowned for its cutting-edge advancements, Apta is committed to pushing the boundaries of AI, creating impactful and user-friendly solutions that drive innovation and excellence.
LLMs are good at understanding natural language and are designed to predict the next word in a probabilistic manner. They are not optimized to carry out logic-based or rules-based tasks. When a user asks an LLM to do a calculation, the model cannot actually do the maths; it simply outputs the most probable continuation of the text. This probabilistic nature makes such models unfit for applications that require highly accurate, up-to-date information, such as decentralized finance (DeFi) and cryptocurrencies in general.
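A minimal sketch of the workaround this implies: the calculation is handled by deterministic code, and the LLM is only asked to phrase a result it is handed. The parsing regex and the `llm_paraphrase` stand-in are illustrative assumptions, not Hero.io's actual pipeline.

```python
import re

def llm_paraphrase(text: str) -> str:
    # Stand-in for an LLM call: the model only verbalises a result,
    # it never computes it.
    return f"The computed result is {text}."

def answer_percentage_change(query: str) -> str:
    # Deterministic parsing and arithmetic instead of next-token prediction.
    old, new = (float(x) for x in re.findall(r"[-+]?\d*\.?\d+", query)[:2])
    change = (new - old) / old * 100.0
    return llm_paraphrase(f"{change:+.2f}%")

print(answer_percentage_change("ETH moved from 2400 to 2520, what is the change?"))
# -> The computed result is +5.00%.
```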
Hero.io is powered by a next-generation architecture that significantly advances the LLM paradigm by integrating LLMs with a collection of logic-based expert AI models.
Bespoke expert models developed by Apta are tailored to particular functions, such as assessing cryptocurrency health or analyzing price charts, ensuring high precision in task execution.
These individual models are built using highly specialized ML and deep learning architectures optimized for their specific tasks. This approach ensures that LLMs are not used for logic-based and rules-based tasks that they cannot execute accurately.
Instead of having one big generalist system, our system is a collection of individual AI experts, with the LLM acting as the expert communicator, explainer, and translator for the user.
By integrating these AI expert models with the LLM interface, we let users draw insights from data through the intuitive medium of natural language.
Specifically, Apta's intellectual property lies in sophisticated prompt routing, task decomposition, and advanced methods for efficient model interfacing. These methods ensure that a query formulated in natural language is correctly understood and that the relevant parts of it are routed to the appropriate AI expert models to produce an accurate, logical answer.
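The sketch below shows the general shape of such a routing layer, assuming a query is decomposed into sub-tasks, each sub-task is dispatched to an expert model, and the LLM then verbalises the combined outputs. All expert names, the keyword-based classifier, and the placeholder logic are hypothetical; Apta's actual routing and decomposition methods are proprietary.

```python
from typing import Callable, Dict, List

def token_health_expert(query: str) -> str:
    # Placeholder for a specialised model assessing token health.
    return "health: on-chain activity stable, holder concentration low"

def chart_analysis_expert(query: str) -> str:
    # Placeholder for a specialised chart-analysis model.
    return "chart: consolidating above the 50-day moving average"

EXPERTS: Dict[str, Callable[[str], str]] = {
    "token_health": token_health_expert,
    "chart_analysis": chart_analysis_expert,
}

def classify_subtasks(query: str) -> List[str]:
    # Stand-in for LLM-driven task decomposition; here a simple keyword heuristic.
    tasks = []
    if "health" in query.lower():
        tasks.append("token_health")
    if "chart" in query.lower() or "trend" in query.lower():
        tasks.append("chart_analysis")
    return tasks or ["token_health"]

def answer(query: str) -> str:
    findings = [EXPERTS[name](query) for name in classify_subtasks(query)]
    # Stand-in for the LLM composing a natural-language answer from expert outputs.
    return "Based on the expert models: " + "; ".join(findings)

print(answer("How is ETH's health and what does the chart trend look like?"))
```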
Retrieval-Augmented Generation (RAG) enhances LLM outputs by grounding them in information retrieved from reliable sources, ensuring the responses are not only relevant but also accurate, which is essential in the cryptocurrency space.
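A minimal RAG sketch follows, using a toy word-overlap score in place of a real retriever (embeddings, a vector store, and the actual LLM call are omitted). The corpus and scoring are illustrative assumptions, not Hero.io's retrieval stack.

```python
from typing import List

# A tiny trusted corpus standing in for Hero.io's curated sources.
DOCUMENTS = [
    "Uniswap v3 concentrates liquidity within custom price ranges.",
    "Proof-of-stake validators are slashed for signing conflicting blocks.",
    "Stablecoins aim to keep a 1:1 peg with a reference asset such as USD.",
]

def retrieve(query: str, docs: List[str]) -> str:
    # Toy relevance score: shared words (a real system would use embeddings).
    query_words = set(query.lower().split())
    return max(docs, key=lambda d: len(query_words & set(d.lower().split())))

def build_grounded_prompt(query: str) -> str:
    # The retrieved snippet is prepended so the LLM answers from evidence,
    # not from its parametric memory alone.
    context = retrieve(query, DOCUMENTS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_grounded_prompt("How does Uniswap v3 handle liquidity?"))
```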
In addition to public data, Hero.io leverages proprietary data sources collected over the last 1.5 years by a group of data researchers. These data sets are difficult to gather individually and contain alpha that yields unique insights, especially in the cryptocurrency domain.
Examples of data sources (a sketch of how such feeds might be normalized follows the list):
Market data: Coingecko, Coinmarketcap, Dextools.io, and similar providers
Social media dynamic data: X, YouTube, LinkedIn, Reddit, Medium
Static data sources: Whitepapers, articles, various reputable databases
On-chain data: Data from various chains
Proprietary data sets: Meticulously curated proprietary sets collected and owned by Hero.io
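The sketch below shows one plausible way feeds from several market-data providers could be normalized into a single record and cross-checked before an expert model consumes them. The field names for each provider are illustrative, not the providers' actual schemas.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MarketSnapshot:
    symbol: str
    price_usd: float
    volume_24h: float
    source: str

def normalize_coingecko(raw: dict) -> MarketSnapshot:
    # Illustrative field names, not the provider's exact schema.
    return MarketSnapshot(raw["symbol"].upper(), raw["usd"], raw["usd_24h_vol"], "coingecko")

def normalize_dextools(raw: dict) -> MarketSnapshot:
    return MarketSnapshot(raw["ticker"].upper(), raw["priceUsd"], raw["volume24h"], "dextools")

snapshots: List[MarketSnapshot] = [
    normalize_coingecko({"symbol": "eth", "usd": 2510.3, "usd_24h_vol": 1.2e10}),
    normalize_dextools({"ticker": "eth", "priceUsd": 2511.1, "volume24h": 1.19e10}),
]

# Cross-source consistency check before an expert model consumes the data.
prices = [s.price_usd for s in snapshots]
print(max(prices) - min(prices) < 0.01 * min(prices))  # True: sources agree within 1%
```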
Hero.io employs a multi-layer approach (a minimal end-to-end sketch follows the list):
Layer One - Core Integration - focuses on synthesizing information from various sources.
Layer Two - Insight Engine - uses expert logic modules to identify patterns and make predictions.
Layer Three - Dynamic Execution - moves towards automatic execution, enabling actions based on analyzed data.
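To make the layering concrete, here is a minimal sketch of one pass through the three layers, assuming each layer is a simple function that feeds the next. The names, thresholds, and the dry-run output are hypothetical placeholders, not Hero.io's production logic.

```python
from typing import List

def core_integration(sources: List[dict]) -> dict:
    # Layer One: merge raw feeds into a single normalized view (placeholder data).
    return {"symbol": "ETH", "price": 2510.0, "sentiment": 0.62, "tx_count_24h": 1_150_000}

def insight_engine(view: dict) -> dict:
    # Layer Two: expert logic turns the merged view into a signal (toy rule).
    bullish = view["sentiment"] > 0.6 and view["tx_count_24h"] > 1_000_000
    return {"signal": "accumulate" if bullish else "hold", "confidence": 0.7}

def dynamic_execution(insight: dict) -> str:
    # Layer Three: act on the insight (here, only a dry-run order string).
    return f"DRY RUN: would place '{insight['signal']}' order (confidence {insight['confidence']})"

print(dynamic_execution(insight_engine(core_integration(sources=[]))))
```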