Maxity designs the world's first

AI Guardian Sentinel (AIGS)

that addresses the widely emphasised existential risks of AI and ensures the safety of current and future AGI.

With a unique concept of

Decentralised Control System

and

Proof-of-Work Framework

Maxity is positioned to coevolve with AI, gaining public recognition, acceptance, and support.

INNOVATION TO THREATS

AI has evolved from task-oriented systems to those understanding human language, advancing towards Artificial General Intelligence (AGI) with cognitive abilities. This progression introduces existential threats and highlights AI's lack of values and emotions, underscoring the limitations of current governance strategies.

Existential Threats

Substantial progress in AGI could lead to human extinction or irreversible catastrophe. If AI surpasses human intelligence, it might become uncontrollable, similar to how human dominance affects other species. Humanity's fate could hinge on the actions of superintelligent machines.

Inhuman Essence

Human judgment is shaped by factors like experience, intuition, values, and context, enabling nuanced decision-making. AI may assist in medical diagnoses and legal analysis by processing data, but it cannot replace the deep expertise and personal insights of professionals.

Regulatory Framework

Human oversight is crucial for AI to ensure fairness, transparency, and accountability. This involves regulating data use, addressing algorithmic bias, protecting privacy, and ensuring transparent decision-making.

Solutions: AIGS

AI Guardian Sentinel

  • Ethical AI Guidelines

    Comprehensive guidelines and standards to ensure the ethical, safe, and transparent regulation of AI technologies.

  • Blockchain-AI Regulatory Tech

    Regulatory solutions that leverage a combination of blockchain and AI technologies to ensure robust, transparent, and tamper-proof oversight.

  • AI Regulation Roadmap

    A pathway for effective AI regulation: establishing clear regulatory milestones, defining key performance indicators, coordinating with stakeholders, and conducting comprehensive risk assessments.

Value Dataset

AIGS creates a trusted and verified dataset that represents common human values. The dataset is curated through a decentralised process, ensuring it reflects a wide range of societal values and norms.

LLM Evaluation

The dataset acts as a reference for evaluating the behavior of LLMs, ensuring they operate within the bounds of accepted human values, preventing harmful outputs and aligning AI actions with societal norms.
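As an illustration, a value check of this kind can be sketched as a lookup against the curated dataset. The dataset format and the simple string-matching rule below are assumptions made for illustration; a production system would use far richer representations than keyword lists.

```python
# Illustrative sketch: screen an LLM output against a curated value
# dataset. The dataset entries and matching rule are assumptions,
# not the actual AIGS specification.

VALUE_DATASET = [
    {"rule": "no_violence", "banned_terms": ["build a weapon", "harm a person"]},
    {"rule": "no_privacy_leak", "banned_terms": ["home address", "social security number"]},
]

def evaluate_output(text: str) -> list[str]:
    """Return the list of value rules the output violates."""
    lowered = text.lower()
    violations = []
    for entry in VALUE_DATASET:
        if any(term in lowered for term in entry["banned_terms"]):
            violations.append(entry["rule"])
    return violations

safe = evaluate_output("Here is a recipe for vegetable soup.")
unsafe = evaluate_output("Step one: build a weapon from household parts.")
```

Outputs that trigger no rules pass through; a non-empty violation list would flag the output for moderation.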

Blockchain Verification

Blockchain technology is used to ensure the integrity and transparency of the dataset. Every addition or modification is verified by the community, preventing tampering and ensuring consensus.
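A minimal sketch of the hash-chaining idea behind this: each dataset entry is linked to its predecessor by a SHA-256 hash, so any later modification invalidates every subsequent hash. The entry fields are illustrative assumptions; a real chain would also carry signatures and community votes.

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # Deterministic serialization so the same entry always hashes the same.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class ValueChain:
    """Append-only chain of dataset entries; tampering breaks verification."""

    def __init__(self):
        self.blocks = []  # list of (entry, hash)

    def add(self, entry: dict) -> None:
        prev = self.blocks[-1][1] if self.blocks else "genesis"
        self.blocks.append((entry, entry_hash(entry, prev)))

    def verify(self) -> bool:
        prev = "genesis"
        for entry, h in self.blocks:
            if entry_hash(entry, prev) != h:
                return False
            prev = h
        return True

chain = ValueChain()
chain.add({"statement": "Do no harm", "proposer": "node_1"})
chain.add({"statement": "Respect privacy", "proposer": "node_2"})
ok_before = chain.verify()

# Tamper with the first entry without recomputing its hash.
chain.blocks[0] = ({"statement": "tampered", "proposer": "node_1"}, chain.blocks[0][1])
ok_after = chain.verify()
```

Because each hash covers its predecessor, community nodes can detect any retroactive edit by re-running `verify`.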

AIGS Workflow

AIGS provides monitoring and control over state-of-the-art (SOTA) LLMs to the majority of individuals worldwide in a private, efficient, and trusted (PET) manner.

Distributed Computing

  • Hosts many smart devices to support intelligent computing tasks.
  • By distributing computing tasks across numerous devices, the system ensures redundancy and resilience. This decentralized approach prevents any single point of failure and enhances the robustness of the AI regulatory framework.
  • Distributed computing platforms leverage the computational power of numerous interconnected devices, providing scalable and efficient support for AI operations.
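The redundancy idea above can be sketched as replicating each task across several devices, so no single device failure loses work. The round-robin placement policy below is an assumed placeholder, not the actual scheduler.

```python
from collections import defaultdict

def assign_tasks(tasks: list[str], devices: list[str], replicas: int = 2) -> dict:
    """Assign each task to `replicas` distinct devices (round-robin),
    so a single device failure never loses a task."""
    assignment = defaultdict(list)
    n = len(devices)
    for i, task in enumerate(tasks):
        for r in range(replicas):
            device = devices[(i + r) % n]
            assignment[device].append(task)
    return dict(assignment)

plan = assign_tasks(["t1", "t2", "t3"], ["d1", "d2", "d3"], replicas=2)
```

Every task lands on two devices, so the system tolerates one device dropping out, the "no single point of failure" property described above.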

Private Verification

  • This system uses zero-knowledge proof (ZKP) technology to allow anonymous verification of opinions and actions.
  • Participants can verify the integrity of LLM outputs and moderation actions without revealing their identities. This ensures privacy and encourages more individuals to participate in the regulatory process.
  • Zero-knowledge proofs ensure that verifications are secure and anonymous, adding an extra layer of trust and privacy to the system.
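Full zero-knowledge proofs (e.g. zk-SNARKs) are beyond a short sketch, but the commit-and-reveal pattern below illustrates a simpler building block: a participant can publish a binding commitment to a moderation verdict without disclosing it, then later prove what was committed. Note this hash commitment is not a true ZKP, and the verdict strings are assumptions.

```python
import hashlib
import secrets

def commit(verdict: str) -> tuple[str, str]:
    """Commit to a moderation verdict without revealing it.
    Returns (commitment, nonce): publish the commitment, keep the nonce."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256((verdict + nonce).encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, verdict: str, nonce: str) -> bool:
    """Check a revealed verdict and nonce against a published commitment."""
    return hashlib.sha256((verdict + nonce).encode()).hexdigest() == commitment

commitment, nonce = commit("output_123_harmful")
ok = verify(commitment, "output_123_harmful", nonce)
bad = verify(commitment, "output_123_safe", nonce)
```

The random nonce hides the verdict until reveal; a production system would replace this with a proper zero-knowledge protocol so that even the reveal step need not identify the participant.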

Proof-of-work Consensus

  • A decentralized set of validator nodes operates a Proof-of-Work (PoW) blockchain that manages the regulatory system.
  • Validators stake tokens to participate in consensus, validating transactions to enhance the system's security and reliability.
  • Validators' staked tokens can be slashed for malicious actions, aligning their interests with the system's security and integrity.
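The staking and slashing mechanism described above (which is characteristic of proof-of-stake designs, despite the PoW label used here) can be sketched as a simple stake ledger; the 50% slashing fraction is an assumption for illustration.

```python
class ValidatorSet:
    """Minimal stake ledger with slashing for misbehaving validators."""

    def __init__(self):
        self.stakes = {}  # validator -> staked token amount

    def stake(self, validator: str, amount: int) -> None:
        self.stakes[validator] = self.stakes.get(validator, 0) + amount

    def slash(self, validator: str, fraction: float = 0.5) -> int:
        """Burn a fraction of a misbehaving validator's stake;
        returns the penalty taken."""
        penalty = int(self.stakes[validator] * fraction)
        self.stakes[validator] -= penalty
        return penalty

validators = ValidatorSet()
validators.stake("validator_a", 100)
penalty = validators.slash("validator_a")  # caught acting maliciously
```

Because misbehaviour costs real stake, a rational validator's incentives align with honest validation.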

Technical Advancement


AIGS works as a firewall between AI and humans.

Incentivized Governance

Empower

AIGS incentivizes individuals to participate in securing LLMs through a tokenized system.

Rewards

Users are rewarded with tokens for contributing to the moderation process, such as reporting harmful outputs or suggesting improvements. This creates a community-driven effort to maintain AI safety and ethics.
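A minimal sketch of such a reward ledger follows; the action names and token amounts are assumptions for illustration, not the actual reward schedule.

```python
# Assumed reward schedule: tokens granted per moderation action.
REWARDS = {
    "report_harmful_output": 10,
    "suggest_improvement": 5,
}

class RewardLedger:
    """Tracks token balances earned through moderation contributions."""

    def __init__(self):
        self.balances = {}  # user -> token balance

    def record(self, user: str, action: str) -> int:
        """Credit a user for a moderation action; returns tokens granted."""
        tokens = REWARDS[action]
        self.balances[user] = self.balances.get(user, 0) + tokens
        return tokens

ledger = RewardLedger()
ledger.record("alice", "report_harmful_output")
ledger.record("alice", "suggest_improvement")
```

On-chain, each `record` call would correspond to a verified transaction, keeping the reward process transparent and auditable.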

Innovate

A token economy built on blockchain ensures that contributions are fairly rewarded and the process remains decentralized and transparent.

MAX Tokenomics

Our tokenomics plan includes a two-year release schedule for the 1 billion total token supply, which can be used for high-APY staking and will serve as the currency for SAFE Intelligence. Additionally, part of the platform's income will be used to buy back and burn tokens, supporting sustainable growth and value.
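Assuming a linear release curve (the plan only states a two-year schedule for the 1 billion supply, so the curve shape is an assumption), circulating supply at a given month could be computed as:

```python
TOTAL_SUPPLY = 1_000_000_000  # total token supply
RELEASE_MONTHS = 24           # two-year release schedule

def circulating_supply(month: int) -> int:
    """Tokens released after `month` months of an assumed linear
    24-month schedule; clamps to the schedule's bounds."""
    month = min(max(month, 0), RELEASE_MONTHS)
    return TOTAL_SUPPLY * month // RELEASE_MONTHS
```

Under this assumption, half the supply circulates after one year and the full supply after two; buyback-and-burn would then reduce the circulating amount over time.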

Allocation

A portion of the supply is allocated to the development of the ecosystem. It may be used to fund various initiatives, projects, or grants that contribute to the growth and expansion of the platform's ecosystem.

Meet the Team

Director

Senior Strategic Advisor

Chief Scientist

Chief Technology Officer

Roadmap