As a developer interested in frontend performance optimization, I conducted a small experiment to explore how different access methods affect the response speed of AI services.
## Experiment Background
I noticed many people complaining about long wait times when accessing Manus AI during peak hours. So I built a simplified access page: https://www.manusai.info/
This page embeds the original service via iframe, but with several frontend optimizations:
- Reduced unnecessary resource loading
- Optimized page structure
- Implemented lazy loading strategies

## My Questions
- Under the same network conditions, which responds faster: the optimized page or direct access to the original site?
- During peak hours, which requests does the API prioritize: those from the official page or from third-party embeds?
- Does iframe embedding affect the model's performance?
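For readers curious what the embed page actually involves, the core of it is a single iframe with native lazy loading. Here's a rough sketch (the `src` below is a placeholder, not the real service URL, and the exact markup on my page differs slightly):

```html
<!-- Minimal sketch of a lazily loaded embed.
     The src is a placeholder for the original service URL.
     loading="lazy" defers loading the frame until it nears the viewport. -->
<iframe src="https://example.com/original-service"
        loading="lazy"
        width="100%" height="800"></iframe>
```

Note that `loading="lazy"` on iframes is broadly supported in modern browsers, but browsers differ in how close to the viewport the frame must be before it loads.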
## Initial Observations
After several days of testing, I discovered some interesting phenomena:
- During off-peak hours, there's little difference in response speed between the two pages
- At certain times, the optimized page seems to load faster
- There's no noticeable difference in the quality of generated results
## Invitation to Participate in Testing
If you're also interested in this topic, feel free to visit https://www.manusai.info/ for testing and compare it with the official page. I'm particularly interested in learning:
- How fast does each page load for you?
- Have you experienced waiting in a queue?
- Is there any difference in the quality of results?
## Technical Discussion
From a technical perspective, I'm curious how the backend service prioritizes API requests: strictly in chronological order, or are other factors involved?
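To make that question concrete, here are two toy ordering policies a backend could use. This is pure speculation for illustration, not how Manus actually works; the field names and the "official vs. embed" ranking are made up:

```javascript
// Hypothetical request-ordering policies (illustrative only).

// FIFO: serve strictly in arrival order.
function fifoOrder(requests) {
  return [...requests].sort((a, b) => a.arrivedAt - b.arrivedAt);
}

// Priority: official-origin requests jump ahead; ties broken by arrival time.
function priorityOrder(requests) {
  const rank = (r) => (r.origin === 'official' ? 0 : 1);
  return [...requests].sort(
    (a, b) => rank(a) - rank(b) || a.arrivedAt - b.arrivedAt
  );
}

const queue = [
  { id: 1, origin: 'embed', arrivedAt: 100 },
  { id: 2, origin: 'official', arrivedAt: 105 },
  { id: 3, origin: 'embed', arrivedAt: 110 },
];

console.log(fifoOrder(queue).map((r) => r.id));     // [ 1, 2, 3 ]
console.log(priorityOrder(queue).map((r) => r.id)); // [ 2, 1, 3 ]
```

Under FIFO, an embed's request is never penalized; under an origin-based priority scheme, it could wait behind later-arriving official traffic during peaks. The observations above can't distinguish the two without controlled, simultaneous requests from both origins.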
This small experiment might provide some insights into understanding load balancing and access optimization for AI services.