Vector Search Engine for AI applications
Deep learning has proven to be an effective way to process unstructured data (images, video, text). As open-source deep models become increasingly mature and usable, AI applications are being deployed at scale. The hard part of AI application development has gradually shifted from model training and optimization to efficient search over massive amounts of unstructured data (feature vectors). The speaker explains how Milvus accelerates search over massive collections of feature vectors.
2 . Vector Search Engine for AI applications: An Open Source Approach
3 . Background
– Deep learning has been proven to be an effective way to process unstructured data like images, video, sound, and text.
– Data management and similarity search services for feature vectors are common components in many AI applications.
– Building a vector similarity search engine helps people put their AI applications into production much more easily.
4 .Content-based Retrieval
5 .Vectors Are Different
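The idea behind the content-based retrieval slides above can be sketched as brute-force nearest-neighbor search over embedding vectors. This is an illustrative toy, not the Milvus API; the data and function names are invented:

```python
# Minimal brute-force vector search: rank stored vectors by distance to a query.
import math

def euclidean(a, b):
    """Euclidean (L2) distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def brute_force_search(query, vectors, top_k=2):
    """Return the ids of the top_k stored vectors closest to the query."""
    scored = sorted(vectors.items(), key=lambda kv: euclidean(query, kv[1]))
    return [vid for vid, _ in scored[:top_k]]

# Toy 2-d "embeddings"; real feature vectors are hundreds of dimensions.
vectors = {"cat": [1.0, 0.0], "dog": [0.9, 0.1], "car": [0.0, 1.0]}
print(brute_force_search([1.0, 0.05], vectors))  # → ['cat', 'dog']
```

Brute force is exact but scans every vector per query, which is why the later slides introduce indexes that trade a little accuracy for much lower query cost.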
6 . How Milvus Helps AI Developers
– Ease of use. Challenge: calling a library like Faiss or SPTAG requires additional effort on programming, performance tuning, and data management. Benefit: Milvus provides a user-friendly SDK and integrated data management capability.
– Multiple similarity metrics. Challenge: in different scenarios, people may need different metrics. Benefit: Milvus supports Euclidean distance and dot product, with more on the roadmap.
– High performance. Challenge: high performance is the key to whether an AI application is viable in the real world. Benefit: Milvus is designed for similarity search over billions of vectors.
– Cost effective. Challenge: vector similarity search is a compute-intensive task; if it requires a large number of servers, the AI application is less likely to be put into production. Benefit: besides the CPU option, Milvus also adopts accelerators such as GPUs to reduce hardware cost.
– Scalability. Challenge: unstructured data is growing explosively, and the amount of vector data will keep increasing over the next decade. Benefit: on a single node, Milvus can support up to billions of vectors; Milvus also provides a distributed scale-out solution.
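The two metrics named on the slide, Euclidean distance and dot (inner) product, can be sketched in a few lines. The observation about normalized vectors is a standard fact, not something the slide states:

```python
# The two similarity metrics mentioned on the slide, in pure Python.
import math

def l2_distance(a, b):
    """Euclidean distance: smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):
    """Dot product: larger means more similar."""
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# For unit-length vectors, a higher inner product corresponds to a smaller
# L2 distance, so both metrics produce the same ranking on normalized data.
q = normalize([3.0, 4.0])
a = normalize([1.0, 0.0])
b = normalize([0.0, 1.0])
# inner_product(q, b) > inner_product(q, a) and l2_distance(q, b) < l2_distance(q, a)
```

Which metric fits depends on how the embeddings were trained, which is why an engine benefits from supporting several.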
7 .Milvus Overview
8 . Vector Data Management
– Sharding by growth
– Searching across multiple shards
– Easy to append new vectors
9 . Flexible Distributed Policy
– Scale out for capacity
– Scale out for HA
10 . Indexes for Different Scenarios
Index types and supported hardware:
– IVF: CPU, GPU, CPU + GPU
– Graph: CPU
For the CPU mode:
– IVF index: better for scale-up and batch processing (n:N)
– Graph index: fast response time for a single query (1:N)
For the CPU + GPU mode, more indexes are available:
– IVF GPU index: optimized for large batch sizes (n:N)
– IVF Hybrid index: optimized for most scenarios, but requires both CPU and GPU (still experimental)
Best practice tip for the IVF index: build the index with GPU, query with CPU.
[Radar charts compare IVF-CPU vs Graph-NSG, IVF-CPU vs IVF-GPU, IVF-CPU vs IVF-Hybrid, and Graph-NSG vs IVF-Hybrid along six axes: accuracy, query time, build time, batch support, computing power, and memory.]
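To make the IVF family on this slide concrete: an inverted-file index clusters the vectors with k-means, keeps one inverted list per cluster, and at query time scans only the `nprobe` closest clusters out of `nlist`. The toy below mirrors that terminology but is neither Milvus nor Faiss code; `ToyIVF` and all its internals are invented for illustration:

```python
# Toy inverted-file (IVF) index: k-means coarse quantizer + inverted lists.
import numpy as np

class ToyIVF:
    def __init__(self, nlist=2, iters=10, seed=0):
        self.nlist = nlist          # number of clusters (inverted lists)
        self.iters = iters          # k-means iterations
        self.rng = np.random.default_rng(seed)

    def train(self, xs):
        xs = np.asarray(xs, dtype=float)
        # Crude k-means for the coarse quantizer.
        self.centroids = xs[self.rng.choice(len(xs), self.nlist,
                                            replace=False)].copy()
        for _ in range(self.iters):
            assign = np.argmin(np.linalg.norm(
                xs[:, None] - self.centroids[None], axis=2), axis=1)
            for c in range(self.nlist):
                if np.any(assign == c):
                    self.centroids[c] = xs[assign == c].mean(axis=0)
        assign = np.argmin(np.linalg.norm(
            xs[:, None] - self.centroids[None], axis=2), axis=1)
        # Inverted lists: vector ids grouped by their nearest centroid.
        self.lists = {c: np.flatnonzero(assign == c) for c in range(self.nlist)}
        self.xs = xs

    def search(self, q, top_k=1, nprobe=1):
        q = np.asarray(q, dtype=float)
        # Probe only the nprobe closest clusters instead of every vector.
        order = np.argsort(np.linalg.norm(self.centroids - q, axis=1))
        cand = np.concatenate([self.lists[c] for c in order[:nprobe]])
        d = np.linalg.norm(self.xs[cand] - q, axis=1)
        return cand[np.argsort(d)[:top_k]].tolist()

xs = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
      [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
ivf = ToyIVF(nlist=2, seed=1)
ivf.train(xs)
# When nprobe == nlist the search is exhaustive, hence exact.
print(ivf.search([5.0, 5.0], top_k=1, nprobe=2))  # → [3]
```

Raising `nprobe` trades query time for accuracy, which is exactly the knob behind the recall table on the next slide.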
11 . Milvus Performance Overview
Test data: ANN_SIFT1B (128-dimensional, 1 billion vectors)
Test server: Milvus 0.5.3; CPU: Intel Xeon E5-2683 V3 × 2; Memory: 256 GB DDR4; GPU: Nvidia 2080Ti 11 GB × 2; PCIe: 3.0, 40 lanes; OS: Ubuntu 18.04

Recall (accuracy), nlist = 16384:

nprobe | IVF_SQ8 CPU | IVF_SQ8 GPU
     1 |      39.30% |      39.30%
     8 |      78.20% |      78.20%
    32 |      93.40% |      93.40%
    64 |      96.60% |      96.60%
   128 |      97.90% |      97.90%

Response time (seconds) by batch size:

            IVF_SQ8 CPU      IVF_SQ8 GPU      IVF_SQ8H
Batch size  Top 1   Top 64   Top 1   Top 64   Top 1   Top 64
1           0.88    0.84     15.78   15.68    0.38    0.33
10          1.16    1.60     15.68   15.86    1.14    0.84
100         4.78    4.68     15.91   16.80    2.55    2.42
200         6.70    6.66     16.24   16.95    4.01    3.92
500         13.09   13.15    16.93   16.56    8.58    8.60
1000        25.85   26.07    18.65   19.18    16.68   16.84
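Recall figures like those in the table are obtained by comparing the ids an approximate index returns against exhaustive (brute-force) ground truth. A minimal sketch of that measurement, with made-up ids:

```python
# Recall@k: what fraction of the true top-k neighbors did the ANN index find?
def recall_at_k(ann_ids, true_ids, k):
    return len(set(ann_ids[:k]) & set(true_ids[:k])) / k

ground_truth = [7, 2, 9, 4]   # exact top-4 from a brute-force scan
ann_result   = [7, 9, 1, 4]   # top-4 from an approximate index
print(recall_at_k(ann_result, ground_truth, 4))  # → 0.75
```

In the table above, the same query set is re-run at increasing `nprobe` values, so each row is this measurement averaged over many queries.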
12 . milvus.io github.com/milvus-io/milvus Find Milvus on milvusio.slack.com twitter.com/milvusio www.facebook.com/io.milvus.5 zhuanlan.zhihu.com/milvus medium.com/@milvusio