Core Viewpoint
- Jan-v1, an open-source model with only 4 billion parameters, is positioned as a free alternative to Perplexity Pro, claiming 91% accuracy on SimpleQA and strong performance when run locally [1][3][33].

Group 1: Model Features and Performance
- Jan-v1 is based on Qwen3-4B-Thinking and has been fine-tuned for reasoning and tool use, making it well suited to web search and deep-research tasks [5][12].
- The model reaches 91.1% accuracy on SimpleQA, indicating strong factual question answering [9].
- It also performs well on dialogue and instruction-following tasks, showing its versatility [10].
- It supports a context length of up to 256k tokens, enabling effective long-text analysis [21][25].

Group 2: Comparison with Perplexity Pro
- A side-by-side evaluation on complex queries found that Jan-v1, like Perplexity Pro, can dynamically integrate web search results to generate traceable, sourced answers [15][18].
- In a research-paper summarization test, Jan-v1's output was closer to Qwen3-4B's, reflecting its reasoning capabilities [25].

Group 3: User Experience and Accessibility
- Jan-v1 can be run on platforms such as Jan, llama.cpp, or vLLM, and supports local deployment, with installation taking only about two minutes [8][29][32].
- The model ships in four variants, from 2.3GB up to 4GB, making it accessible to a wide range of users [30].

Group 4: Community Feedback and Future Potential
- Overall online reception has been positive, driven largely by the model's free availability and high accuracy [33].
- Some users have asked for a more comprehensive technical report to better understand the model's capabilities [34].
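The local-deployment route mentioned above works because both llama.cpp's server and vLLM expose an OpenAI-compatible HTTP API. A minimal sketch of building a chat request for such a server; the base URL, port, and model name `jan-v1` are assumptions for illustration, not details from the article:

```python
import json

# Assumed endpoint of a locally running llama.cpp or vLLM server
# (not specified in the article).
BASE_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "jan-v1") -> dict:
    """Build the JSON body for an OpenAI-compatible chat-completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

# Serialize the body; it would be POSTed to BASE_URL with
# urllib.request or any HTTP client once the server is running.
body = json.dumps(build_chat_request("Summarize this paper in three bullets."))
```

Because the request format is the standard chat-completions schema, the same body works unchanged whether the backend is llama.cpp, vLLM, or a hosted API.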
Hands-on test of a Perplexity Pro alternative: free, open source, and only 4B
QbitAI (量子位) · 2025-08-15 04:21