Introduction
LI Innovation LYNX Cloud is a cloud solution designed for modern applications, data lakes, large-scale data processing, large language model (LLM) usage, AI image generation, and cloud storage. It provides secure and reliable storage for a wide range of use cases, including:
- Data persistence for cloud-native applications
- Static hosting for web content
- Streaming media storage
- Data lake storage for analytics and big data
- Storage for machine learning model artifacts, datasets, and batch-processing output
- Cloud drive and file sharing
- Large language model (LLM) text generation
- AI image generation
LYNX utilizes cloud-native technology to provide scalable, highly available, and durable storage. It integrates with AI and machine learning services to support innovative use cases. These integrations make LYNX a suitable platform for object storage and artificial intelligence research and usage, enabling organizations to easily train, deploy, and scale their AI applications. Additionally, LYNX's high throughput and low latency make it a suitable choice for processing and storing large AI datasets, such as images and text.
LYNX AI
LI Innovation LYNX AI is an AI application platform that integrates mainstream open-source large language models (LLMs) and image generation models through the Google AI API, Cloudflare AI Gateway, Cloudflare Workers AI, and Hugging Face. Any user can use it for AI chat, writing, code generation and editing, and image creation. The content it generates is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
Product access address: LI Innovation AI Platform
Aliases: LYNX AI, LI Innovation Cloud AI, LI Innovation AI Platform
Product language: English
Current status: Open Beta
LYNX Cloud Storage
LYNX Shared Cloud Storage is a highly responsive public cloud drive and object storage application with no upload/download speed limits. It connects to the LI Innovation Global Database for fast file hosting, static web page hosting, and other purposes. Powered by Cloudflare R2.
Product access address: LI Innovation Cloud Drive
Aliases: LI Innovation Global Database, LI Innovation Asia Pacific Database, LI Innovation Cloud Storage/Shared Cloud Storage, LYNX Cloud Storage, LI Innovation Cloud Drive, LYNX Drive
Product language: Chinese
Current status: Semi-public Alpha
Getting Started
Getting started with LI Innovation LYNX Cloud
Basics
Authorized Access
- AI Platform (LYNX AI): LYNX provides open access to its language models and image generation models for human users. You can access LI Innovation Cloud LYNX AI through the LI Innovation AI Platform; it is free and does not require registration.
- Cloud Storage (LI Innovation Global Database): LYNX Drive is currently in beta and is not yet open to the public. Internet users can read file content from LI Innovation Cloud Drive and the LI Innovation Global Database, but they do not have permission to modify or write. If you wish to use this service, please send an email to support@liiverse.com describing your intended use in detail to be added to the waiting list.
Setting up Cloud Storage
Prerequisites:
- Successfully requested and obtained a username and password
Steps:
On the home page, create a new folder named after your title (for best compatibility, use only lowercase English letters and numbers) and click OK. LYNX will prompt you to authenticate; enter your username and password.
Click your folder to enter it, then upload or edit your files.
Note:
Please only upload/edit files in your own folder, and do not edit or change the content of the home page or other people's folders.
If we update LYNX, or if you have not authenticated in the relevant browser for a long time, LYNX will ask you to re-authenticate the next time you try to modify storage. Please make sure you are authenticated before uploading files larger than 100 MB.
If you interrupt an ongoing file upload, the incomplete body of the file will be retained in the LI Innovation Global Database for one week.
Your uploaded files are accessible under the cdn.liiproject.org and drive.liiverse.com/raw paths. We recommend using cdn.liiproject.org as the access address for the object; a sketch of checking an object's public URL follows below.
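The paths above determine how an uploaded object is reached. As a minimal sketch (the folder and file names are hypothetical, and the exact URL layout is an assumption based on the paths named above), you could build and verify an object's public URL like this:

```ts
// Minimal sketch: build and verify the public URL of an uploaded object.
// "myfolder" and "report.pdf" are hypothetical; the exact path layout may differ.
const folder = "myfolder";
const file = "report.pdf";

// cdn.liiproject.org is the recommended access address; drive.liiverse.com/raw
// is the alternative path mentioned above.
const cdnUrl = `https://cdn.liiproject.org/${folder}/${file}`;
const rawUrl = `https://drive.liiverse.com/raw/${folder}/${file}`;

async function checkObject(url: string): Promise<void> {
  // HEAD request: confirm the object is reachable without downloading the body.
  const res = await fetch(url, { method: "HEAD" });
  console.log(url, res.status, res.headers.get("content-type"));
}

checkObject(cdnUrl).catch(console.error);
checkObject(rawUrl).catch(console.error);
```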
Uploading, Deleting and Downloading Data
Cloud Storage:
- Upload: Click the icon in the lower-right corner to start an upload or create a new folder.
- Delete: Click the three-dot icon to the right of the file and select Permanent Delete. The file will be permanently deleted from the LI Innovation Global Database.
- Download: Click the three-dot icon to the right of the file and select Download (some file formats can also be downloaded directly by opening the file link). The file will be downloaded to your local device. A programmatic download sketch follows after this list.
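For files whose link can be opened directly, the download can also be scripted. A minimal sketch, assuming Node 18+ (for the built-in fetch) and a hypothetical object URL:

```ts
import { writeFile } from "node:fs/promises";

// Minimal sketch: download a file by its public link and save it locally.
// The URL is hypothetical; substitute the link of your own object.
const url = "https://cdn.liiproject.org/myfolder/photo.png";

async function download(src: string, dest: string): Promise<void> {
  const res = await fetch(src);
  if (!res.ok) throw new Error(`Download failed: HTTP ${res.status}`);
  // Buffer the response body and write it to disk.
  const data = Buffer.from(await res.arrayBuffer());
  await writeFile(dest, data);
  console.log(`Saved ${data.length} bytes to ${dest}`);
}

download(url, "photo.png").catch(console.error);
```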
AI Image Generation:
- Download: In the LYNX AI platform, select an image generation model, enter an English prompt in the chat box, and send it. After the model generates the image, right-click it and choose Download or Save As. (A sketch of the underlying model call follows after this list.)
- Delete: Delete the conversation.
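The LYNX AI web UI handles the model call for you. For illustration only, a minimal sketch of invoking one of the listed image models directly through the Cloudflare Workers AI REST API might look like the following (the account ID and API token are placeholders for your own Cloudflare credentials, not LYNX credentials):

```ts
import { writeFile } from "node:fs/promises";

// Minimal sketch: call a listed image model via the Cloudflare Workers AI REST API.
// ACCOUNT_ID and API_TOKEN are placeholders for your own Cloudflare account.
const ACCOUNT_ID = "<your-account-id>";
const API_TOKEN = "<your-api-token>";
const MODEL = "@cf/bytedance/stable-diffusion-xl-lightning";

async function generateImage(prompt: string): Promise<void> {
  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai/run/${MODEL}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${API_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ prompt }),
    },
  );
  if (!res.ok) throw new Error(`Workers AI error: HTTP ${res.status}`);
  // Text-to-image models return the generated image as binary data.
  await writeFile("output.png", Buffer.from(await res.arrayBuffer()));
  console.log("Image written to output.png");
}

generateImage("a watercolor painting of a lynx in a pine forest").catch(console.error);
```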
AI Text Generation:
- Send image: After selecting a language model, use HTML code to insert the image URL in the chat area and send it; a snippet sketch follows after this list. (Please note that our language models currently cannot parse images.)
- Delete: Delete the conversation.
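A minimal sketch of the HTML snippet described above, built as a string so it can be pasted into the chat area (the image URL is hypothetical):

```ts
// Minimal sketch: build the HTML snippet to paste into the chat input.
// The image URL is hypothetical; per the note above, the language models
// cannot parse the image itself, they only receive the markup as text.
const imageUrl = "https://cdn.liiproject.org/myfolder/photo.png";
const snippet = `<img src="${imageUrl}" alt="shared image" width="512">`;

console.log(snippet); // copy this output into the chat area and send it
```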
AI Platform Settings
Model Prompt:
- In the LYNX AI platform, click the "Settings" icon in the lower-left corner of the sidebar and change the model's personality and identity in the "System Prompt" field using an English description.
Model Selection:
- In the LYNX AI platform, click the box above the chat input box to select the AI model to use. After selecting a model, click a blank area (or press a key) to close the selection panel, taking care not to click the buttons of other models.
- If you want to use OpenAI's ChatGPT, you need your own OpenAI API key; set it via the "Settings" icon.
Chat:
- Enter text in the chat input box to chat with the LLM or generate code, or use an English description to have the image generation model generate an image.
- The "History" button on the left side of the chat input box enables or disables the model history record. When enabled (the default, as on most AI chat platforms), the LLM can refer to all previous interactions in the chat when replying. A sketch of how the system prompt and history translate into a model request follows below.
AI Models
Language Models
Name | Developer (Operator) | Introduction |
---|---|---|
qwen1.5-14b-chat-awq | Alibaba (Cloudflare) | Qwen 1.5 is an improved version of Tongyi Qianwen, a large language model series independently developed by Alibaba Cloud. The AWQ variant uses an efficient, accurate, and fast low-bit quantization method, currently 4-bit. |
openchat-3.5-0106 | OpenChat (Cloudflare) | OpenChat is an innovative open-source language model library fine-tuned with C-RLFT - a strategy inspired by offline reinforcement learning. |
gemma-7b-it-lora | Google (Cloudflare) | Cloudflare's Gemma-7B instruction-tuned model for inference with LoRA adapters. Gemma is a family of lightweight, advanced open models from Google, built with the same research and technology that created the Gemini models. |
openhermes-2.5-mistral-7b-awq | Teknium (Cloudflare) | OpenHermes 2.5 Mistral 7B is an advanced Mistral Fine-tune, a continuation of the OpenHermes 2 model, trained on an additional code dataset. |
neural-chat-7b-v3-1-awq | TheBloke (Hugging Face) | This model is a 7B parameter LLM fine-tuned from mistralai/Mistral-7B-v0.1 on the open-source dataset Open-Orca/SlimOrca on an Intel Gaudi 2 processor. |
starling-lm-7b-beta | Nexusflow (Hugging Face) | Starling-LM-7B-beta is an open large language model (LLM) trained with Reinforcement Learning from AI Feedback (RLAIF). It is fine-tuned from Openchat-3.5-0106 using the Nexusflow/Starling-RM-34B reward model and the policy optimization method PPO (Proximal Policy Optimization). |
llama-3-8b-instruct | Meta Platforms, Inc. (Cloudflare) | Meta Llama 3 demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning. |
Gemini Pro | Google (Google AI) | Gemini 1.5 Pro introduces a groundbreaking context window of up to 2 million tokens, the longest context window of any large foundation model to date. It achieves near-perfect recall on cross-modal long-context retrieval tasks, enabling it to accurately process large documents, thousands of lines of code, and hours of audio and video. |
Gemini Flash | Google (Google AI) | Gemini 1.5 Flash has a default context window of one million tokens, meaning it can process an hour of video, 11 hours of audio, codebases containing over 30,000 lines of code, or over 700,000 words. |
Image Generation Models
Name | Developer (Operator) | Introduction |
---|---|---|
dreamshaper-8-lcm | Lykon (Cloudflare) | Stable Diffusion model fine-tuned for better photorealism without sacrificing range. |
stable-diffusion-xl-base-1.0 | Stability AI (Cloudflare) | Stability AI's diffusion-based text-to-image generation model. Generates and modifies images based on text prompts. |
stable-diffusion-xl-lightning | ByteDance (Cloudflare) | SDXL-Lightning is an extremely fast text-to-image generation model. It can generate high-quality 1024px images in just a few steps. |
Troubleshooting
AI Platform Error Table
Error | Description | Solution |
---|---|---|
400/500 | API error/limit reached or configuration error | If this error persists, please contact support@liiverse.com and report the issue |
1101 | Cloudflare AI Gateway or Workers AI runtime error | If this error persists, please contact support@liiverse.com and report the issue |
1103 | Cloudflare AI Gateway or Workers AI runtime error | If this error persists, please contact support@liiverse.com and report the issue |
! Security restrictions have been triggered, please restart the conversation! | Offensive content blocking has been triggered | Restart the conversation and make sure the topic does not violate LI Innovation's Terms of Service and Community Guidelines. |
Limitations
- Security Restrictions: To prevent offensive use of or attacks on the service, we use Cloudflare Gateway, Cloudflare's global network, Cloudflare Turnstile, and the LI Innovation "Guanshan Taibao" joint security monitoring network for security monitoring and for blocking offensive web behavior and bots.
- Usage Limits:
- Google Gemini: 15 RPM (requests per minute), 1 million TPM (tokens per minute), 1,500 RPD (requests per day).
- Cloudflare: 10,000 Neurons per day, with a security limit of no more than 10 RPM (requests per minute). A client-side throttling sketch follows below.
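If you script requests against these services, a simple client-side throttle can help you stay under the stricter 10 RPM limit. A minimal sketch (the even 6-second spacing is an assumption, not an official requirement):

```ts
// Minimal sketch: a client-side throttle to stay under the 10 RPM security
// limit described above. Spacing requests 60 s / 10 = 6 s apart is an
// assumption of evenly spaced requests, not an official requirement.
const MIN_INTERVAL_MS = 60_000 / 10;
let lastRequestAt = 0;

async function throttled<T>(task: () => Promise<T>): Promise<T> {
  const wait = lastRequestAt + MIN_INTERVAL_MS - Date.now();
  if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
  lastRequestAt = Date.now();
  return task();
}

// Usage: wrap each Workers AI / Gemini call so bursts never exceed the limit.
// await throttled(() => fetch("https://example.invalid/api"));
```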
Terms of Use
- Please refer to the User Agreement, Community Guidelines, and Privacy Policy.
Source Code
Cloud Storage: https://github.com/ljxi/Cloudflare-R2-oss Author: https://github.com/ljxi
AI Platform: https://github.com/Jazee6/cloudflare-ai-web Author: https://github.com/Jazee6
Support
If you have any questions or feedback, please visit our Discord Community or contact support@liiverse.com.
Operationally supported by Cloudflare, Google, Hugging Face, and Vercel.
Security monitoring is provided and protected by Cloudflare Global Network & LI Innovation "Mountain Watcher" Central Security Monitoring Network.