Still dragging and dropping in draw.io? One sentence and the architecture diagram grows itself
菜鸟教程· 2025-12-08 03:30
Live demo: https://next-ai-drawio.jiang.jp/ You're busy with day-to-day development when the boss or a product manager suddenly drops a pile of requirement documents on you and casually says: "Draw up the system architecture diagram, annotate it with the latest icons, and have it done before you leave today." You open draw.io: hunt for icons, draw connectors, arrange the layout, tweak styles... half a day goes into one complex architecture diagram, and then you find the arrows are crooked, the boxes are different sizes, and the layers are a tangled mess, and the finished product still earns a "hmm... could it be a bit prettier?" Today's pick is a handy diagramming project, Next AI Draw.io: a super plugin for draw.io that lets AI draw, revise, and redraw diagrams for you. Its GitHub repository has now passed 4k+ stars.
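The "let AI draw it" part ultimately comes down to emitting draw.io's file format: a diagram is an mxGraph XML document. A minimal sketch of the kind of file such a generator produces (element and attribute names follow the draw.io file format; the helper function itself is hypothetical, not taken from the project):

```python
import xml.etree.ElementTree as ET

def make_diagram(nodes, edges):
    """Build a minimal draw.io (mxGraph) XML document.

    nodes: list of (id, label) tuples; edges: list of (source_id, target_id).
    """
    mxfile = ET.Element("mxfile")
    diagram = ET.SubElement(mxfile, "diagram", name="Page-1")
    model = ET.SubElement(diagram, "mxGraphModel")
    root = ET.SubElement(model, "root")
    # Two mandatory bookkeeping cells: the root cell and the default layer.
    ET.SubElement(root, "mxCell", id="0")
    ET.SubElement(root, "mxCell", id="1", parent="0")
    for i, (node_id, label) in enumerate(nodes):
        cell = ET.SubElement(root, "mxCell", id=node_id, value=label,
                             style="rounded=1;whiteSpace=wrap;",
                             vertex="1", parent="1")
        ET.SubElement(cell, "mxGeometry", x=str(40 + 200 * i), y="40",
                      width="120", height="60", **{"as": "geometry"})
    for j, (src, dst) in enumerate(edges):
        edge = ET.SubElement(root, "mxCell", id=f"e{j}", edge="1",
                             parent="1", source=src, target=dst)
        ET.SubElement(edge, "mxGeometry", relative="1", **{"as": "geometry"})
    return ET.tostring(mxfile, encoding="unicode")

xml = make_diagram([("api", "API Gateway"), ("db", "PostgreSQL")],
                   [("api", "db")])
```

Saved with a `.drawio` extension, a file like this opens directly in draw.io, which is what makes "AI writes the XML, the editor renders it" a workable division of labor.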
From Apple M5 to DGX Spark: how far off is the Local AI era?
机器之心· 2025-11-22 02:30
Group 1 - The recent delivery of the DGX Spark AI supercomputer by Jensen Huang to Elon Musk has sparked community interest in local computing, indicating a potential shift from cloud-based AI to local AI solutions [1][4] - Global investment in cloud AI data centers is projected to reach nearly $3 trillion by 2028, with significant contributions from major tech companies, including an $80 billion investment by Microsoft in AI data centers [4][5] - The DGX Spark, priced at $3,999, is the smallest AI supercomputer to date, designed to compress vast computing power into a local device, marking a return of computing capability to the personal desktop [4][5] Group 2 - The release of DGX Spark suggests that certain AI workloads are now feasible for local deployment, but achieving a practical local AI experience requires not only powerful hardware but also a robust ecosystem of local models and tools [6] Group 3 - The combination of new SLM architectures and edge chips is expected to push the boundaries of local AI capabilities for consumer devices, although specific challenges remain to be addressed before widespread adoption [3]
Large models are "operating with defects": model-specific flaws account for over 60% of vulnerabilities
36Kr· 2025-11-17 10:34
In March 2025, China's National Cybersecurity Notification Center issued an urgent advisory about serious vulnerabilities in the open-source large-model tool Ollama, warning of data leakage, compute theft, and service-interruption risks that could easily trigger network and data security incidents. In June 2025, the UK High Court found fabricated, ChatGPT-generated case citations in dozens of legal filings; in one high-value claim, multiple cited precedents were forged... As large models permeate critical domains as "infrastructure", their inherent risks in data security, algorithmic robustness, and output trustworthiness have moved from theoretical concerns to real threats, even bearing on public interest and social order. During this year's World Internet Conference Wuzhen Summit, 360 Security released a Large Model Security White Paper, noting that large-model security vulnerabilities are growing exponentially: China's first live-network crowdsourced test of AI large models in 2025 uncovered 281 security vulnerabilities, of which model-specific flaws accounted for over 60%. Whether it is enterprises patching vulnerabilities reactively or an industry lacking tools for full-lifecycle risk control, large-model security has been stuck in an "after-the-fact remediation" bind. Recently, Concordia AI (安远AI) launched a frontier AI risk monitoring platform, a third-party platform focused on evaluating and monitoring catastrophic risks from frontier AI models. Through benchmark testing and data analysis, it performs targeted evaluation and regular monitoring of misuse and loss-of-control risks across frontier models from 15 leading model companies worldwide, dynamically tracking the risk landscape and its trends, to help break the "operating with defects" ...
X @Avi Chawla
Avi Chawla· 2025-09-27 19:58
RT Avi Chawla (@_avichawla): I just built my own multi-agent deep researcher! It uses a 100% local LLM and MCP. Here's an overview of how it works: - User submits a query - Web agent searches with the Bright Data MCP tool - Research agents generate insights using platform-specific tools - Response agent crafts a coherent answer with citations. Tech stack: - Bright Data MCP for real-time web access - CrewAI for multi-agent orchestration - Ollama to locally serve GPT-OSS. Why Bright Data MCP? To build this workflow, we needed to ...
X @Avi Chawla
Avi Chawla· 2025-09-27 06:33
Technology Stack - The multi-agent deep researcher utilizes a 100% local LLM and MCP [1] - The system employs CrewAI for multi-agent orchestration and Ollama to locally serve GPT-OSS [2] Web Access Solution - Bright Data Web MCP is used to gather information from several sources, addressing issues like IP blocks and CAPTCHA blocks [1] - Bright Data MCP offers platform-specific tools compatible with major agent frameworks [2] - Bright Data MCP provides real-time web access [2] Workflow - The workflow involves a user submitting a query, followed by a web agent searching with the Bright Data MCP tool [2] - Research agents generate insights using platform-specific tools, and a response agent crafts a coherent answer with citations [2]
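The four-stage flow in the summary above can be sketched as plain functions. This is a toy illustration only: the CrewAI orchestration, the Bright Data MCP search, and the Ollama-served model are all stubbed out, so what remains is just the data flow from query to cited answer.

```python
# Toy, dependency-free sketch of the four-stage multi-agent flow.
# Each "agent" is a plain function; the search backend is a stub.

def web_agent(query, search_tool):
    """Stages 1-2: take the user query, fetch raw sources via the web tool."""
    return search_tool(query)

def research_agents(sources):
    """Stage 3: turn each raw source into an insight with its citation."""
    return [{"insight": f"summary of {s['title']}", "citation": s["url"]}
            for s in sources]

def response_agent(query, insights):
    """Stage 4: craft one coherent answer carrying all citations."""
    body = "; ".join(i["insight"] for i in insights)
    cites = [i["citation"] for i in insights]
    return {"query": query, "answer": body, "citations": cites}

def fake_search(query):
    # Stand-in for the real-time web tool; returns canned sources.
    return [{"title": "doc A", "url": "https://example.com/a"},
            {"title": "doc B", "url": "https://example.com/b"}]

sources = web_agent("local LLMs", fake_search)
result = response_agent("local LLMs", research_agents(sources))
```

The point of the multi-agent split is visible even in the stub: the web agent only fetches, the research agents only condense, and the response agent is the single place where citations are assembled into the final answer.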
In depth | EY's Gao Yifeng: in the AI wave, security is the new moat
硬AI· 2025-08-04 09:46
Core Viewpoint - Security risk management is not merely a cost center but a value engine for companies to build brand reputation and gain market trust in the AI era [2][4]. Group 1: AI Risks and Security - AI risks have already become a reality, as evidenced by the recent vulnerability in the open-source model tool Ollama, which had an unprotected port [6][12]. - The notion of "exchanging privacy for convenience" is dangerous and can lead to irreversible risks, as AI can reconstruct personal profiles from fragmented data [6][10]. - AI risks are a "new species," and traditional methods are inadequate to address them due to their inherent complexities, such as algorithmic black boxes and model hallucinations [6][12]. - Companies must develop new AI security protection systems that adapt to these unique characteristics [6][12]. Group 2: Strategic Advantages of Security Compliance - Security compliance should be viewed as a strategic advantage rather than a mere compliance action, with companies encouraged to transform compliance requirements into internal risk control indicators [6][12]. - The approach to AI application registration should focus on enhancing risk management capabilities rather than just fulfilling regulatory requirements [6][15]. Group 3: Recommendations for Enterprises - Companies should adopt a mixed strategy of "core closed-source and peripheral open-source" models, using closed-source for sensitive operations and open-source for innovation [7][23]. - To ensure the long-term success of AI initiatives, companies should cultivate a mindset of curiosity, pragmatism, and respect for compliance [7][24]. - A systematic AI security compliance governance framework should be established, integrating risk management into the entire business lifecycle [7][24]. 
Group 4: Emerging Threats and Defense Mechanisms - "Prompt injection" attacks are akin to social engineering and require multi-dimensional defense mechanisms, including input filtering and sandbox isolation [7][19]. - Companies should implement behavior monitoring and context tracing to enhance security against sophisticated AI attacks [7][19][20]. - The debate between open-source and closed-source models is not binary; companies should choose based on their specific needs and risk tolerance [7][21][23].
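The "input filtering" layer mentioned in Group 4 can be illustrated with a naive pattern check. This is a sketch only: the pattern list is mine, and real deployments combine filtering with sandbox isolation, behavior monitoring, and context tracing, since pattern matching alone is easy for an attacker to evade.

```python
import re

# Naive input-filtering sketch: flag user text that tries to override
# the system prompt before it ever reaches the model. Illustrative
# patterns only; a serious defense layers several mechanisms.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now\b",
    r"reveal (your )?system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

The analogy to social engineering in the summary is apt: like phishing filters, a list such as this catches only known phrasings, which is why the article's call for multi-dimensional defenses matters.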
X @Avi Chawla
Avi Chawla· 2025-07-22 19:12
Open Source LLM Framework - A framework connects any LLM to any MCP server (open-source) [1] - The framework enables building custom MCP Agents without closed-source apps [1] - Compatible with Ollama, LangChain, etc [1] - Allows building 100% local MCP clients [1]
X @Avi Chawla
Avi Chawla· 2025-07-22 06:30
LLM & MCP Integration - A framework enables connecting any LLM to any MCP server [1] - The framework facilitates building custom MCP Agents without relying on closed-source applications [1] - It is compatible with tools like Ollama and LangChain [1] - The framework allows building 100% local MCP clients [1]
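The core loop such a framework implements can be sketched with both sides stubbed out. A real client speaks MCP's JSON-RPC protocol to the server and calls the local model through, e.g., Ollama's API; what is shown here is only the routing pattern: the model proposes a tool call, the client dispatches it to the server's tool, and the result flows back.

```python
# Stubbed sketch of an LLM-to-MCP-server routing loop. Both the model
# and the server are fakes so the control flow is runnable as-is.

class FakeMCPServer:
    """Stands in for an MCP server exposing named tools."""
    def __init__(self):
        self.tools = {"add": lambda a, b: a + b}

    def call_tool(self, name, args):
        return self.tools[name](**args)

def fake_llm(prompt):
    # A real local LLM would decide this; here one tool call is hardcoded.
    return {"tool": "add", "args": {"a": 2, "b": 3}}

def run_agent(prompt, server):
    decision = fake_llm(prompt)
    result = server.call_tool(decision["tool"], decision["args"])
    return f"The model used '{decision['tool']}' and got {result}"

answer = run_agent("what is 2 + 3?", FakeMCPServer())
```

Because the dispatcher only needs tool names and argument dicts, the same loop works whether the model behind `fake_llm` is served by Ollama, wrapped in LangChain, or anything else, which is the portability the post is describing.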
X @Avi Chawla
Avi Chawla· 2025-06-24 06:30
We have fine-tuned DeepSeek (distilled Llama). Now we can interact with it like any other model running on Ollama using: - The CLI - Ollama's Python package - Ollama's LlamaIndex integration, etc. https://t.co/bCNUqtLgaJ ...
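Interacting through Ollama's Python package looks roughly like this. The model tag below is a hypothetical placeholder for whatever name the fine-tuned model was registered under, and the network call is shown commented out because it needs a running Ollama server.

```python
# Calling a locally served fine-tuned model via the `ollama` Python
# package (`pip install ollama`). MODEL is a placeholder tag, not the
# actual name used in the post.
MODEL = "deepseek-finetuned"  # hypothetical local model tag

def build_messages(prompt: str) -> list[dict]:
    """Chat-format payload that `ollama.chat` expects."""
    return [{"role": "user", "content": prompt}]

messages = build_messages("Summarize the fine-tuning run.")

# With an Ollama server running locally, the call would be:
#   import ollama
#   reply = ollama.chat(model=MODEL, messages=messages)
#   print(reply["message"]["content"])
```

The same message list works across the CLI, the Python package, and the LlamaIndex integration the post lists, since all three front the same local Ollama server.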
Raking in $200 million on "vibe coding", Supabase becomes the hottest open-source database of the AI era
AI前线· 2025-05-20 01:24
Core Insights - Supabase has successfully positioned itself at the forefront of the "Vibe Coding" trend, completing a $200 million Series D funding round with a post-money valuation of $2 billion, reflecting its rapid growth and the increasing importance of open-source databases in the AI application era [1][22]. Group 1: Supabase's Growth and Funding - Supabase raised $200 million in its Series D funding round, led by Accel, with participation from Coatue, Y Combinator, Craft Ventures, and existing investors, bringing its total funding to nearly $400 million [1]. - The company has seen a significant increase in its valuation, reaching $2 billion just seven months after its previous funding round of $80 million [1]. - Supabase's user base has expanded to over 2 million developers, managing 3.5 million databases, and its GitHub repository has surpassed 81,000 stars, doubling in just two years [17]. Group 2: Vibe Coding and Development Workflow - The "Vibe Coding" workflow emphasizes rapid completion of the entire development process using various AI tools, from product documentation to database design and service implementation [2][5]. - Developers utilize generative AI tools to draft product requirement documents and generate database schemas, facilitating the creation of initial data models [4]. - The integration of Supabase with tools like Lovable and Bolt.new allows users to deploy full-stack applications without server setup, enhancing the development experience [5][8]. Group 3: AI Integration and Features - Supabase has integrated PGVector to support embedding storage, crucial for building retrieval-augmented generation (RAG) applications and other AI-related tasks [11]. - The company launched its AI assistant, which can automatically generate database schemas and fill in sample data, significantly aiding non-developers in backend prototype development [13]. 
- Recent developments include the launch of an official MCP server, enabling developers to connect popular AI tools directly to Supabase for various database management tasks [14]. Group 4: Competitive Positioning and Future Outlook - Supabase's open-source model and reliance on PostgreSQL differentiate it from other backend-as-a-service (BaaS) platforms like Firebase, which lock users into their ecosystems [22]. - The company aims to become the default backend for AI and enterprise applications, leveraging its funding to accelerate the adoption of "Vibe Coding" tools and large-scale deployments [22]. - Accel partners believe Supabase has the potential to dominate the high-value database sector, drawing comparisons to the rise of Oracle and MongoDB [22].
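What a PGVector-backed RAG lookup computes can be shown without the database: rank stored embeddings by cosine distance to the query embedding (what pgvector's cosine-distance operator does) and return the nearest rows. A plain-Python sketch with tiny hand-made vectors standing in for real embeddings:

```python
import math

# Plain-Python illustration of a vector-similarity lookup, the operation
# a PGVector column performs inside Postgres for RAG retrieval.

def cosine_distance(a, b):
    """1 - cosine similarity; smaller means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def nearest(query_vec, rows, k=1):
    """rows: (text, embedding) pairs, as a table with a vector column holds."""
    return sorted(rows, key=lambda r: cosine_distance(query_vec, r[1]))[:k]

docs = [("intro to SQL", [1.0, 0.0]),
        ("vector search", [0.0, 1.0])]
top = nearest([0.1, 0.9], docs)
```

Pushing this computation into the database is the appeal: the embeddings live next to the rest of the application's relational data, so a RAG query is just another SQL query.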