NFT Sales For Edge AI Development
Everything Not Prohibited Is Mandatory
There are several reasons to host AI locally, including:
- Privacy and data security: Conversations and data stay on the device, reducing the need to transmit sensitive information online.
- Offline access: AI capabilities can be used without an internet connection, making them accessible in remote or underserved areas.
- Customization: Models can be tailored to specific needs and preferences.
- Cost: Subscription fees associated with cloud-based AI services can be avoided.
- Personal AI chatbot: Local AI models with personalities can act as personal assistant chatbots.
As you may imagine, one can:
- Quickly test and integrate models into apps and prototypes, and avoid cloud lock-in
- Experiment with AI capabilities offline using one's own data
- Practice real-world AI skills without access to enterprise hardware
- Explore state-of-the-art AI with easy setup
All of this requires working with open-source software; proprietary, cloud-bound services will not allow one to do these things.
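To make the local-hosting point concrete, here is a minimal sketch of querying a locally hosted model. It assumes a locally running Ollama server at its default address (`http://localhost:11434`) and a pulled model name such as `llama3` — both are assumptions for illustration, not details from the text above. Only the standard library is used, and every request stays on the machine:

```python
import json
import urllib.request

# Default Ollama generation endpoint (assumed; the server listens on localhost only).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for a single, non-streaming generation request."""
    return {"model": model, "prompt": prompt, "stream": False}

def query_local_model(model: str, prompt: str) -> str:
    """Send the prompt to the locally hosted model and return its response text.

    Nothing leaves the device: the HTTP request goes to localhost.
    """
    body = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (requires a running Ollama instance and a pulled model,
# e.g. `ollama pull llama3`):
#   print(query_local_model("llama3", "Why host AI locally?"))
```

Because the endpoint is localhost, this works offline, costs nothing per query, and never transmits the prompt to a third party — the same properties listed above.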
August 30, 2024: OpenAI and Anthropic signed a groundbreaking agreement with the U.S. Artificial Intelligence Safety Institute to allow government access and testing of their AI models before public release.
The U.S. AI Safety Institute will have access to major new models from both companies before and after their public release.
This collaboration is a step toward AI regulation and safety efforts, with the U.S. government evaluating AI models’ capabilities and associated risks.
The institute will provide feedback to OpenAI and Anthropic on potential safety improvements that should be made.
These agreements come as AI companies face increasing regulatory scrutiny; California legislators passed a broad AI regulation bill.
But on October 1, 2024, California Governor Gavin Newsom vetoed S.B. 1047, a groundbreaking AI safety bill that would have imposed stricter regulations on Silicon Valley AI firms and the release of new models in the state.
Two of the most prominent AI companies in the world are granting the U.S. government access to their models before public release.
This could reshape how AI is developed, tested, and deployed worldwide, with major implications for innovation, safety, and international competition in the AI space.
Both the “responsible technology” crowd and law enforcement believe there ought to be a cop in every conversation… that we ought to accept — even expect — that our devices and software are built for surveillance and control from the ground up.
I cannot make a better argument for locally hosted AI than that.