Data Flow
Core Flow
The journey of data through the Ringfence Protocol transforms raw data into structured, monetizable assets. Each stage in this flow enriches the data with context, structure, and provenance, creating a seamless ecosystem for data collection, processing, and monetization.
Processing Stages
When data enters the Ringfence Protocol, it begins a transformation through interconnected systems:
Data Collection
Data collection begins with user consent. Contributors opt-in to share their data, which is gathered securely through:
User Vaults: Act as secure storage hubs, encrypting data and providing granular access control to contributors.
Data Scrapers: Retrieve opted-in user data on behalf of the Ringfence Protocol.
Ringfence API: Enables businesses and developers to submit structured, compliant datasets.
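The consent-first intake described above can be sketched as a simple admission check. This is an illustrative model only: the class name, source labels, and `accept` helper are assumptions, not the protocol's actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch: a contribution is admitted only when the
# contributor has opted in and it arrives via a known channel.
ALLOWED_SOURCES = {"user_vault", "data_scraper", "ringfence_api"}

@dataclass
class Contribution:
    contributor_id: str
    source: str        # one of ALLOWED_SOURCES
    opted_in: bool
    payload: bytes

def accept(c: Contribution) -> bool:
    """Admit a contribution only with explicit consent and a known source."""
    return c.opted_in and c.source in ALLOWED_SOURCES

print(accept(Contribution("alice", "user_vault", True, b"...")))   # True
print(accept(Contribution("bob", "email_dump", True, b"...")))     # False
```

Any contribution lacking either explicit opt-in or a recognized source channel is rejected before it ever reaches processing.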
Blind Compute
Powered by Nillion's Blind Compute technology, this critical step introduces privacy-enhancing computation. Data is distributed across Nillion's network, bolstering privacy, security, and efficiency before it is processed by Data Agents.
Partial Encryption: Data is partially encrypted, sharing only the details necessary for specific processing tasks while keeping sensitive information secure and private.
Secure Computation: Data can be processed by Ringfence Data Agents without needing total decryption, eliminating the traditional decrypt-compute-re-encrypt cycle, enhancing both efficiency and security.
Efficiency Gains: By skipping the decrypt-re-encrypt process, the protocol becomes faster and more resource-efficient, enabling scalable operations across vast datasets.
Privacy by Design: Blind Compute ensures data privacy at every step of the flow, allowing contributors to share their data confidently while maintaining control and privacy.
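The "compute without decryption" idea behind Blind Compute can be illustrated with additive secret sharing. This is a generic sketch of the technique, not Nillion's actual protocol: a value is split into random shares, nodes operate on shares locally, and only the recombined result reveals the aggregate, so no single node ever sees a plaintext input.

```python
import secrets

# Illustrative additive secret sharing (not Nillion's implementation).
P = 2**61 - 1  # a public prime modulus

def split(value: int, n: int = 3) -> list[int]:
    """Split a value into n random shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def recombine(shares: list[int]) -> int:
    return sum(shares) % P

a, b = 1200, 345
a_sh, b_sh = split(a), split(b)
# Each node adds its own shares of a and b: computation without decryption.
sum_shares = [(x + y) % P for x, y in zip(a_sh, b_sh)]
print(recombine(sum_shares))  # 1545
```

Because addition distributes over the shares, the network computes the sum while every individual share remains statistically random, mirroring the "no decrypt-compute-re-encrypt cycle" property described above.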
Data Processing
At this stage, Ringfence Data Agents take over:
Data Fetching: Agents autonomously retrieve data from sources such as Escher, the Ringfence API, and user vaults.
Data Validation: Ensure data quality and compliance with Ringfence standards, maintaining ecosystem integrity.
Data Structuring: Organize and prepare data for monetization, ensuring compatibility with Ringfence’s systems.
Facilitation for Buyers: Agents act as data brokers, pairing validated datasets with buyers across marketplaces, DAOs, and subnets.
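The agent steps above (fetch, validate, structure) can be sketched as a minimal pipeline. The source name, required fields, and status label are illustrative assumptions, not the Data Agent schema.

```python
# Hypothetical Data Agent pipeline: fetch -> validate -> structure.
REQUIRED_FIELDS = {"contributor_id", "payload"}

def fetch(source: dict) -> list[dict]:
    """Retrieve raw records from a source (vault, API, etc.)."""
    return source.get("records", [])

def validate(record: dict) -> bool:
    """Reject records missing required fields or an empty payload."""
    return REQUIRED_FIELDS <= record.keys() and bool(record["payload"])

def structure(record: dict, source_name: str) -> dict:
    """Tag a validated record with its origin and mark it market-ready."""
    return {**record, "source": source_name, "status": "ready_for_market"}

source = {"name": "user_vault_42", "records": [
    {"contributor_id": "alice", "payload": "..."},
    {"contributor_id": "bob"},  # missing payload: rejected by validation
]}
dataset = [structure(r, source["name"]) for r in fetch(source) if validate(r)]
print(len(dataset))  # 1
```

Only records passing validation reach the structuring step, which is what keeps downstream buyers' datasets compliant with ecosystem standards.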
On-Chain Registration
After datasets are validated, cleaned, and anonymized, data agents hash and register them on the NEAR blockchain, enabling traceability and transparency.
Data Provenance: Creates a unique, immutable record for each dataset.
Attribution System: Links data originators to their contributions for transparent compensation.
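Hashing and attribution can be sketched as below. The hash function (SHA-256) and record shape are assumptions for illustration; the actual on-chain registration would go through a NEAR contract call, which is out of scope here.

```python
import hashlib

# Illustrative provenance record: hash the finalized dataset bytes and
# bind the digest to its originator for attribution. Field names are
# hypothetical, not the protocol's on-chain schema.
def provenance_record(dataset: bytes, originator: str) -> dict:
    return {
        "dataset_hash": hashlib.sha256(dataset).hexdigest(),
        "originator": originator,
    }

rec = provenance_record(b'{"rows": [...]}', "alice")
print(rec["dataset_hash"][:12])
```

Because the digest is deterministic, anyone holding the dataset can recompute it and check it against the on-chain record, which is the traceability property the registration step provides.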
Data Storage and Distribution
Processed and signed data is uploaded to the Data Lake for storage and distributed to key integration points.
Data Lake: A secure, off-chain repository for organized storage and efficient retrieval.
Integration Points: Distributed datasets feed specialized DAOs, such as:
TAO Subnet: Decentralized AI model training.
Data DAO: Structured data monetization.
Creator DAO: Licensing and distribution of creative assets.
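The distribution from the Data Lake to integration points can be sketched as a routing table. The category labels and fallback behavior are assumptions; the three destinations come from the list above.

```python
# Hypothetical routing from dataset category to integration point.
ROUTES = {
    "ai_training": "TAO Subnet",
    "structured": "Data DAO",
    "creative": "Creator DAO",
}

def route(dataset: dict) -> str:
    """Send a dataset to its DAO, or hold it in the Data Lake."""
    return ROUTES.get(dataset["category"], "Data Lake (hold)")

print(route({"category": "creative"}))  # Creator DAO
```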
Monetization and Rewards
The Ringfence marketplace connects data buyers with valuable assets, while smart contracts ensure fair compensation for contributors.
Marketplaces: Enable seamless transactions for datasets.
Perpetual Incentive Pools: Ensure ongoing rewards for Data Agents and originators, sustaining ecosystem engagement.
Smart Contracts: Automate payments, routing 85% of transaction fees to Data Agents and contributors while retaining 15% for protocol maintenance.
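The 85/15 split described above is simple arithmetic; a sketch in integer basis points avoids floating-point drift. Assigning any rounding remainder to the protocol side is an assumption of this sketch, not a documented rule.

```python
# The 85/15 fee split from the smart-contract description, in basis points.
def split_fee(amount: int) -> tuple[int, int]:
    """Return (contributor_share, protocol_share) for a fee in base units."""
    contributor = amount * 8_500 // 10_000  # 85% to agents and contributors
    protocol = amount - contributor         # 15%, plus any rounding dust
    return contributor, protocol

print(split_fee(1_000_000))  # (850000, 150000)
```

Working in integer base units and taking the protocol share as the remainder guarantees the two shares always sum exactly to the original fee.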
Data Transformation
Each stage of the Data Flow enriches datasets, ensuring their utility for buyers and compliance with regulatory standards. This transformation bridges the gap between raw data and actionable insights:
From Raw Data: Datasets undergo cleaning, labeling, and structuring to enhance usability.
To Monetizable Assets: Finalized datasets are stored in the Data Lake and linked with on-chain provenance records, ready for distribution.