-
Reference link
https://milvus.io/cn/docs/v0.11.0/milvus_docker-cpu.md
-
Spin up a new virtual machine and configure the yum repository.
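One common way to do this is to point yum at a reachable mirror; the Aliyun repo file below is only an example (use whichever mirror your environment can reach):
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all && yum makecache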
-
Install Docker, since Milvus is deployed via a Docker image.
-
Install Docker
yum -y install docker
-
Start the Docker daemon
service docker start
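Optionally, make Docker start on boot and confirm the daemon is running (CentOS 7 uses systemd, so these should work):
systemctl enable docker
systemctl status docker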
-
Run the following command to confirm that Docker is working
docker image ls
-
Pull the Milvus Docker image
docker pull milvusdb/milvus:0.11.0-cpu-d101620-4c44c0
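Once the pull finishes, the image should be visible locally:
docker images milvusdb/milvus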
-
Download the configuration file
mkdir -p /home/$USER/milvus/conf
cd /home/$USER/milvus/conf
wget https://raw.githubusercontent.com/milvus-io/milvus/0.11.0/core/conf/demo/milvus.yaml
-
If wget fails, you can create the milvus.yaml file manually in the /home/$USER/milvus/conf directory.
-
Write the following content into the milvus.yaml file (vi milvus.yaml):
# Copyright (C) 2019-2020 Zilliz. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License
# is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing permissions and limitations under the License.

version: 0.6

#----------------------+------------------------------------------------------------+------------+-----------------+
# Cluster Config       | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# enable               | If running with Mishards, set true, otherwise false.      | Boolean    | false           |
#----------------------+------------------------------------------------------------+------------+-----------------+
# role                 | Milvus deployment role: rw / ro                            | Role       | rw              |
#----------------------+------------------------------------------------------------+------------+-----------------+
cluster:
  enable: false
  role: rw

#----------------------+------------------------------------------------------------+------------+-----------------+
# General Config       | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# timezone             | Use UTC-x or UTC+x to specify a time zone.                 | Timezone   | UTC+8           |
#----------------------+------------------------------------------------------------+------------+-----------------+
# meta_uri             | URI for metadata storage, using SQLite (for single server | URI        | sqlite://:@:/   |
#                      | Milvus) or MySQL (for distributed cluster Milvus).         |            |                 |
#                      | Format: dialect://username:password@host:port/database     |            |                 |
#                      | Keep 'dialect://:@:/', 'dialect' can be either 'sqlite' or |            |                 |
#                      | 'mysql', replace other texts with real values.             |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
general:
  timezone: UTC+8
  meta_uri: sqlite://:@:/

#----------------------+------------------------------------------------------------+------------+-----------------+
# Network Config       | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# bind.address         | IP address that Milvus server monitors.                    | IP         | 0.0.0.0         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# bind.port            | Port that Milvus server monitors. Port range (1024, 65535) | Integer    | 19530           |
#----------------------+------------------------------------------------------------+------------+-----------------+
# http.enable          | Enable HTTP server or not.                                 | Boolean    | true            |
#----------------------+------------------------------------------------------------+------------+-----------------+
# http.port            | Port that Milvus HTTP server monitors.                     | Integer    | 19121           |
#                      | Port range (1024, 65535)                                   |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
network:
  bind.address: 0.0.0.0
  bind.port: 19530
  http.enable: true
  http.port: 19121

#----------------------+------------------------------------------------------------+------------+-----------------+
# Storage Config       | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# path                 | Path used to save meta data, vector data and index data.   | Path       | /var/lib/milvus |
#----------------------+------------------------------------------------------------+------------+-----------------+
# auto_flush_interval  | The interval, in seconds, at which Milvus automatically    | Integer    | 1 (s)           |
#                      | flushes data to disk.                                       |            |                 |
#                      | 0 means disable the regular flush.                          |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
storage:
  path: /var/lib/milvus
  auto_flush_interval: 1

#----------------------+------------------------------------------------------------+------------+-----------------+
# WAL Config           | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# enable               | Whether to enable write-ahead logging (WAL) in Milvus.     | Boolean    | true            |
#                      | If WAL is enabled, Milvus writes all data changes to log   |            |                 |
#                      | files in advance before implementing data changes. WAL     |            |                 |
#                      | ensures the atomicity and durability for Milvus operations.|            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# path                 | Location of WAL log files.                                  | String     |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
wal:
  enable: true
  path: /var/lib/milvus/wal

#----------------------+------------------------------------------------------------+------------+-----------------+
# Cache Config         | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# cache_size           | The size of CPU memory used for caching data for faster    | String     | 4GB             |
#                      | query. The sum of 'cache_size' and 'insert_buffer_size'    |            |                 |
#                      | must be less than system memory size.                       |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# insert_buffer_size   | Buffer size used for data insertion.                        | String     | 1GB             |
#                      | The sum of 'insert_buffer_size' and 'cache_size'            |            |                 |
#                      | must be less than system memory size.                       |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# preload_collection   | A comma-separated list of collection names that need to    | StringList |                 |
#                      | be pre-loaded when Milvus server starts up.                 |            |                 |
#                      | '*' means preload all existing tables (single-quote or     |            |                 |
#                      | double-quote required).                                     |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# max_concurrent_insert_request_size |                                                       |            |                 |
#                      | A size limit on the concurrent insert requests to process. | String     | 2GB             |
#                      | Milvus can process insert requests from multiple clients   |            |                 |
#                      | concurrently. This setting puts a cap on the memory        |            |                 |
#                      | consumption during this process.                            |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
cache:
  cache_size: 4GB
  insert_buffer_size: 1GB
  preload_collection:
  max_concurrent_insert_request_size: 2GB

#----------------------+------------------------------------------------------------+------------+-----------------+
# GPU Config           | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# enable               | Use GPU devices or not.                                     | Boolean    | false           |
#----------------------+------------------------------------------------------------+------------+-----------------+
# cache_size           | The size of GPU memory per card used for cache.             | String     | 1GB             |
#----------------------+------------------------------------------------------------+------------+-----------------+
# gpu_search_threshold | A Milvus performance tuning parameter. This value will be  | Integer    | 1000            |
#                      | compared with 'nq' to decide if the search computation will|            |                 |
#                      | be executed on GPUs only.                                   |            |                 |
#                      | If nq >= gpu_search_threshold, the search computation will |            |                 |
#                      | be executed on GPUs only;                                   |            |                 |
#                      | if nq < gpu_search_threshold, the search computation will  |            |                 |
#                      | be executed on both CPUs and GPUs.                          |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# search_devices       | The list of GPU devices used for search computation.       | DeviceList | gpu0            |
#                      | Must be in format gpux.                                     |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# build_index_devices  | The list of GPU devices used for index building.           | DeviceList | gpu0            |
#                      | Must be in format gpux.                                     |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
gpu:
  enable: false
  cache_size: 1GB
  gpu_search_threshold: 1000
  search_devices:
    - gpu0
  build_index_devices:
    - gpu0

#----------------------+------------------------------------------------------------+------------+-----------------+
# Logs Config          | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# level                | Log level in Milvus. Must be one of debug, info, warning,  | String     | debug           |
#                      | error, fatal                                                |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# trace.enable         | Whether to enable trace level logging in Milvus.           | Boolean    | true            |
#----------------------+------------------------------------------------------------+------------+-----------------+
# path                 | Absolute path to the folder holding the log files.         | String     |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# max_log_file_size    | The maximum size of each log file, size range               | String     | 1024MB          |
#                      | [512MB, 4096MB].                                            |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# log_rotate_num       | The maximum number of log files that Milvus keeps for each | Integer    | 0               |
#                      | logging level, num range [0, 1024], 0 means unlimited.     |            |                 |
#----------------------+------------------------------------------------------------+------------+-----------------+
# log_to_stdout        | Whether logging to standard output.                         | Boolean    | false           |
#----------------------+------------------------------------------------------------+------------+-----------------+
# log_to_file          | Whether logging to log files.                               | Boolean    | true            |
#----------------------+------------------------------------------------------------+------------+-----------------+
logs:
  level: debug
  trace.enable: true
  path: /var/lib/milvus/logs
  max_log_file_size: 1024MB
  log_rotate_num: 0
  log_to_stdout: false
  log_to_file: true

#----------------------+------------------------------------------------------------+------------+-----------------+
# Metric Config        | Description                                                | Type       | Default         |
#----------------------+------------------------------------------------------------+------------+-----------------+
# enable               | Enable monitoring function or not.                          | Boolean    | false           |
#----------------------+------------------------------------------------------------+------------+-----------------+
# address              | Pushgateway address                                         | IP         | 127.0.0.1       |
#----------------------+------------------------------------------------------------+------------+-----------------+
# port                 | Pushgateway port, port range (1024, 65535)                  | Integer    | 9091            |
#----------------------+------------------------------------------------------------+------------+-----------------+
metric:
  enable: false
  address: 127.0.0.1
  port: 9091
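Before starting the container in the next step, you can also pre-create the other host directories that will be mounted into it. This is optional: Docker creates any missing bind-mount paths on its own (owned by root), but creating them up front keeps everything under /home/$USER:
mkdir -p /home/$USER/milvus/db /home/$USER/milvus/logs /home/$USER/milvus/wal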
-
Start the Docker container, mapping the local file paths into the container
docker run -d --name milvus_cpu_0.11.0 \
    -p 19530:19530 \
    -p 19121:19121 \
    -v /home/$USER/milvus/db:/var/lib/milvus/db \
    -v /home/$USER/milvus/conf:/var/lib/milvus/conf \
    -v /home/$USER/milvus/logs:/var/lib/milvus/logs \
    -v /home/$USER/milvus/wal:/var/lib/milvus/wal \
    milvusdb/milvus:0.11.0-cpu-d101620-4c44c0
-
This fails with the following error
/usr/bin/docker-current: Error response from daemon: oci runtime error: container_linux.go:235: starting container process caused "process_linux.go:258: applying cgroup configuration for process caused \"Cannot set property TasksAccounting, or unknown property.\"".
-
This error usually means the docker/systemd packages on the box are too old to understand the TasksAccounting cgroup property the container runtime asks for. Run the following command to update the rpm packages; it takes a while
yum -y update
-
Recreate the container (if the failed attempt already registered a container under the same name, remove it first with docker rm milvus_cpu_0.11.0)
docker run -d --name milvus_cpu_0.11.0 \
    -p 19530:19530 \
    -p 19121:19121 \
    -v /home/$USER/milvus/db:/var/lib/milvus/db \
    -v /home/$USER/milvus/conf:/var/lib/milvus/conf \
    -v /home/$USER/milvus/logs:/var/lib/milvus/logs \
    -v /home/$USER/milvus/wal:/var/lib/milvus/wal \
    milvusdb/milvus:0.11.0-cpu-d101620-4c44c0
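If the container started this time, it should show up as running and its logs should be free of errors:
docker ps --filter name=milvus_cpu_0.11.0
docker logs milvus_cpu_0.11.0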
-
The container was created successfully; let's exercise it with some simple code.
-
Install Python and the pymilvus client
yum install -y python36
yum install -y python36-pip
yum install -y python36-devel
pip install pymilvus
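Note that on CentOS 7 the pip installed by python36-pip may be named pip3 or pip3.6 rather than pip, and Milvus 0.11.0 pairs with the pymilvus 0.3.x series. If the plain pip install above does not work, something like the following should (the version pin follows the v0.11.0 docs):
pip3 install pymilvus==0.3.0
python3 -c "from milvus import Milvus, DataType; print('pymilvus OK')"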
-
The test code is as follows:
import random
from pprint import pprint

from milvus import Milvus, DataType

_HOST = '127.0.0.1'
_PORT = '19530'

client = Milvus(_HOST, _PORT)

collection_name = 'demo_films'
if collection_name in client.list_collections():
    client.drop_collection(collection_name)

collection_param = {
    "fields": [
        {"name": "duration", "type": DataType.INT32, "params": {"unit": "minute"}},
        {"name": "release_year", "type": DataType.INT32},
        {"name": "embedding", "type": DataType.FLOAT_VECTOR, "params": {"dim": 8}},
    ],
    "segment_row_limit": 4096,
    "auto_id": False
}

client.create_collection(collection_name, collection_param)
client.create_partition(collection_name, "American")

print("--------get collection info--------")
collection = client.get_collection_info(collection_name)
pprint(collection)
partitions = client.list_partitions(collection_name)
print("\n----------list partitions----------")
pprint(partitions)

The_Lord_of_the_Rings = [
    {
        "title": "The_Fellowship_of_the_Ring",
        "id": 1,
        "duration": 208,
        "release_year": 2001,
        "embedding": [random.random() for _ in range(8)]
    },
    {
        "title": "The_Two_Towers",
        "id": 2,
        "duration": 226,
        "release_year": 2002,
        "embedding": [random.random() for _ in range(8)]
    },
    {
        "title": "The_Return_of_the_King",
        "id": 3,
        "duration": 252,
        "release_year": 2003,
        "embedding": [random.random() for _ in range(8)]
    }
]

ids = [k.get("id") for k in The_Lord_of_the_Rings]
durations = [k.get("duration") for k in The_Lord_of_the_Rings]
release_years = [k.get("release_year") for k in The_Lord_of_the_Rings]
embeddings = [k.get("embedding") for k in The_Lord_of_the_Rings]

hybrid_entities = [
    {"name": "duration", "values": durations, "type": DataType.INT32},
    {"name": "release_year", "values": release_years, "type": DataType.INT32},
    {"name": "embedding", "values": embeddings, "type": DataType.FLOAT_VECTOR},
]

ids = client.insert(collection_name, hybrid_entities, ids, partition_tag="American")
print("\n----------insert----------")
print("Films are inserted and the ids are: {}".format(ids))

before_flush_counts = client.count_entities(collection_name)
client.flush([collection_name])
after_flush_counts = client.count_entities(collection_name)
print("\n----------flush----------")
print("There are {} films in collection `{}` before flush".format(before_flush_counts, collection_name))
print("There are {} films in collection `{}` after flush".format(after_flush_counts, collection_name))

info = client.get_collection_stats(collection_name)
print("\n----------get collection stats----------")
pprint(info)

films = client.get_entity_by_id(collection_name, ids=[1, 200])
print("\n----------get entity by id = 1, id = 200----------")
for film in films:
    if film is not None:
        print(" > id: {},\n > duration: {}m,\n > release_years: {},\n > embedding: {}"
              .format(film.id, film.duration, film.release_year, film.embedding))

query_embedding = [random.random() for _ in range(8)]
query_hybrid = {
    "bool": {
        "must": [
            {
                "term": {"release_year": [2002, 2003]}
            },
            {
                "range": {"duration": {"GT": 250}}
            },
            {
                "vector": {
                    "embedding": {"topk": 3, "query": [query_embedding], "metric_type": "L2"}
                }
            }
        ]
    }
}

results = client.search(collection_name, query_hybrid, fields=["duration", "release_year", "embedding"])
print("\n----------search----------")
for entities in results:
    for topk_film in entities:
        current_entity = topk_film.entity
        print("- id: {}".format(topk_film.id))
        print("- distance: {}".format(topk_film.distance))
        print("- release_year: {}".format(current_entity.release_year))
        print("- duration: {}".format(current_entity.duration))
        print("- embedding: {}".format(current_entity.embedding))

client.delete_entity_by_id(collection_name, ids=[1, 2])
client.flush()  # flush is important
result = client.get_entity_by_id(collection_name, ids=[1, 2])

counts_delete = sum([1 for entity in result if entity is not None])
counts_in_collection = client.count_entities(collection_name)
print("\n----------delete id = 1, id = 2----------")
print("Get {} entities by id 1, 2".format(counts_delete))
print("There are {} entities after delete films with 1, 2".format(counts_in_collection))

client.drop_partition(collection_name, partition_tag='American')
if collection_name in client.list_collections():
    client.drop_collection(collection_name)
Source: https://blog.csdn.net/qq_34648165/article/details/113365615