Docker server setup: Linux + PHP + Nginx + Redis + MySQL + Elasticsearch + Kibana

Downloading the base images

Pull the images to be installed:

docker pull redis
docker pull nginx
docker pull php
docker pull mysql
docker pull kibana:7.12.1
docker pull elasticsearch:7.12.1
docker pull gozer/go-mysql-elasticsearch
docker pull logstash:7.12.1

After the pulls complete, docker images lists:

REPOSITORY                               TAG       IMAGE ID       CREATED         SIZE
docker.io/redis                          latest    fad0ee7e917a   8 weeks ago     105 MB
docker.io/nginx                          latest    d1a364dc548d   2 months ago    133 MB
docker.io/php                            7.4-fpm   bfdbfe3debeb   2 months ago    405 MB
docker.io/mysql                          latest    c0cdc95609f1   2 months ago    556 MB
docker.io/kibana                         7.12.1    cf1c9961eeb6   3 months ago    1.06 GB
docker.io/elasticsearch                  7.12.1    41dc8ea0f139   3 months ago    851 MB
docker.io/gozer/go-mysql-elasticsearch   latest    25676b5896fb   19 months ago   56.6 MB

Creating a network segment

[root@localhost docker]# docker network create --subnet=172.200.7.0/20  mynetwork
349a9580b40572d117ada95065e3af1b6d19e23f95c42add6a30feb66df8cf0c
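With a /20 prefix, the subnet above actually spans 172.200.0.0 – 172.200.15.255, so every static IP assigned below (172.200.7.2 – 172.200.7.9) falls inside it. Before passing --ip values to docker run, a small shell sketch (hypothetical helper names) can verify that an address belongs to the subnet:

```shell
#!/bin/sh
# Check whether an IPv4 address falls inside a subnet -- useful before
# passing --ip to docker run, since Docker rejects out-of-subnet addresses.

ip_to_int() {
  # Convert a dotted-quad IPv4 address to a single integer.
  old_ifs=$IFS; IFS=.
  set -- $1
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_subnet() {
  # in_subnet ADDRESS NETWORK PREFIXLEN -> prints yes or no
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  if [ $(( ip & mask )) -eq $(( net & mask )) ]; then echo yes; else echo no; fi
}

in_subnet 172.200.7.2 172.200.7.0 20   # nginx container IP -> yes
in_subnet 172.201.0.2 172.200.7.0 20   # outside the /20    -> no
```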

Setting up the Nginx server

Create the Nginx configuration file:
[root@localhost nginx]# pwd
/docker/nginx                    # create this directory
[root@localhost nginx]# ls
default.conf                     # create this file
[root@localhost nginx]# more default.conf    # the configuration contents follow
server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    root /docker/www/lmrs-2008/public;   # project document root
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /docker/www/lmrs-2008/public;
    }

    location ~ \.php$ {
        root /docker/www/lmrs-2008/public;
        fastcgi_pass 172.200.7.3:9000;   # address of the php container
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
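The location / block above tells Nginx to serve $uri when it is an existing file and otherwise hand the request to index.php with the original query string. That lookup can be modeled in shell (hypothetical helper, simplified to the file-or-fallback case):

```shell
#!/bin/sh
# Model of the try_files directive: return the URI if the file exists under
# the document root, otherwise rewrite to the /index.php front controller.
docroot=$(mktemp -d)          # stand-in for /docker/www/lmrs-2008/public
touch "$docroot/logo.png"

try_files() {
  if [ -f "$docroot$1" ]; then
    echo "$1"                 # static file: served directly
  else
    echo "/index.php?$2"      # everything else goes through the front controller
  fi
}

try_files /logo.png ""            # -> /logo.png
try_files /products/15 "page=2"   # -> /index.php?page=2
rm -rf "$docroot"
```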

Create the Nginx container

Command:
[root@localhost nginx]# docker run -d --name nginx --network=mynetwork -p 80:80 --ip 172.200.7.2 --restart=always -v /docker/nginx/default.conf:/etc/nginx/conf.d/default.conf -v /docker/www:/docker/www --privileged=true nginx
862a46262aa37f4d537afa25dc69bb5b7278b3383914330f7d1ccf1c02de7a2e
Explanation:
docker run
-d                      # run in the background
--name nginx            # container name
--network=mynetwork     # attach to the network
-p 80:80                # port mapping
--ip 172.200.7.2        # static IP address
--restart=always        # start on boot
-v /docker/nginx/default.conf:/etc/nginx/conf.d/default.conf   # mount the config file
-v /docker/www:/docker/www                                     # mount the code directory
--privileged=true       # run with extended privileges
nginx

Check the assigned IP:

[root@localhost nginx]# docker inspect nginx | grep  "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.200.7.2",

Setting up the PHP runtime container

[root@localhost lmrs-2008]# docker run  -d --name php -p 9000:9000 --network=mynetwork --ip 172.200.7.3 --restart=always -v /docker/www:/docker/www --privileged=true php:7.4-fpm
74b8eed93b4b0418c7c72edb6fc06d608ec4ec610e718f1f7deaf4c1bfd4b478
[root@localhost lmrs-2008]#

Check the assigned IP:

[root@localhost lmrs-2008]# docker inspect php | grep  "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.200.7.3",

Setting up the MySQL database container

Create the MySQL configuration file:

/docker/mysql/conf/my.cnf

Contents:

[client]
port= 3306
socket= /tmp/mysql.sock
[mysqld]
secure_file_priv=/var/lib/mysql
port= 3306
socket= /tmp/mysql.sock
datadir = /usr/local/mysql/data
default_storage_engine = InnoDB
performance_schema_max_table_instances = 400
table_definition_cache = 400
skip-external-locking
key_buffer_size = 32M
max_allowed_packet = 1G
table_open_cache = 128
sort_buffer_size = 768K
net_buffer_length = 4K
read_buffer_size = 768K
read_rnd_buffer_size = 256K
myisam_sort_buffer_size = 8M
thread_cache_size = 16
tmp_table_size = 32M
default_authentication_plugin = mysql_native_password
lower_case_table_names = 1
sql-mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
explicit_defaults_for_timestamp = true
max_connections = 500
max_connect_errors = 100
open_files_limit = 65535
log-bin=mysql-bin
binlog_format=mixed
server-id = 1
binlog_expire_logs_seconds = 600000
slow_query_log=1
slow-query-log-file=/usr/local/mysql/data/mysql-slow.log
long_query_time=3
early-plugin-load = ""
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data
innodb_buffer_pool_size = 128M
innodb_log_file_size = 64M
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_max_dirty_pages_pct = 90
innodb_read_io_threads = 1
innodb_write_io_threads = 1
[mysqldump]
quick
max_allowed_packet = 500M
[mysql]
no-auto-rehash
[myisamchk]
key_buffer_size = 32M
sort_buffer_size = 768K
read_buffer = 2M
write_buffer = 2M
[mysqlhotcopy]
interactive-timeout

Create the MySQL container

[root@localhost conf]# docker run  -d --name mysql  -p 3306:3306 --network=mynetwork --ip 172.200.7.4 --restart=always  -v /docker/mysql/conf/my.cnf:/etc/mysql/my.cnf --privileged=true -e MYSQL_ROOT_PASSWORD=root mysql
8cb121e176c9c71554ec1b4bf4804e74d5dfd42f4595ce4dff5b4195bd02de7e
[root@localhost conf]#

Result:

[root@localhost conf]# docker inspect mysql | grep  "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.200.7.4",
[root@localhost conf]#

Output like the above means the database is installed. Check the status of everything installed so far:

[root@localhost conf]# docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS          PORTS                               NAMES
8cb121e176c9   mysql         "docker-entrypoint..."   3 minutes ago    Up 21 seconds   0.0.0.0:3306->3306/tcp, 33060/tcp   mysql
74b8eed93b4b   php:7.4-fpm   "docker-php-entryp..."   20 minutes ago   Up 25 seconds   0.0.0.0:9000->9000/tcp              php
862a46262aa3   nginx         "/docker-entrypoin..."   29 minutes ago   Up 24 seconds   0.0.0.0:80->80/tcp                  nginx
[root@localhost conf]#

Installing the Swoole extension for PHP

1 Enter the PHP container:

docker exec -it php bash
and run: docker-php-ext-install swoole

If the following error appears, the Swoole extension has to be installed manually:

error: /usr/src/php/ext/swoole does not exist

usage: /usr/local/bin/docker-php-ext-install [-jN] [--ini-name file.ini] ext-name [ext-name ...]
ie: /usr/local/bin/docker-php-ext-install gd mysqli
/usr/local/bin/docker-php-ext-install pdo pdo_mysql
/usr/local/bin/docker-php-ext-install -j5 gd mbstring mysqli pdo pdo_mysql shmop

if custom ./configure arguments are necessary, see docker-php-ext-configure

2 Download the extension

From: https://pecl.php.net/package/swoole

2.1 Unpack: tar zxvf swoole-4.6.3.tgz
2.2 Rename: mv swoole-4.6.3 swoole
2.3 Copy it into the container: docker cp ./swoole php:/usr/src/php/ext

3 Enter the container again and enable the extension:
docker exec -it php bash
docker-php-ext-install swoole

4 Check that the extension is installed:

root@9d4047433f56:/usr/src/php/ext/swoole# php -m |grep swoole
swoole

Installing Laravel

Download Laravel, unpack it into the /docker/www/ directory, and run:

chmod -R 777 lmrs-2008

This makes everything in the project directory readable, writable, and executable by all users (convenient for a demo server; tighten the permissions in production).

Installing the LaravelS package

Run: composer require hhxsv5/laravel-s

If the install aborts complaining about a missing ext-pcntl extension, that platform requirement has to be declared before laravel-s can be installed.

Add the following to composer.json:

"config": {
"optimize-autoloader": true,
"preferred-install": "dist",
"sort-packages": true,
"platform": {
"ext-pcntl": "7.2", //这两个要补充上 再次执行命令就不会报错了
"ext-posix": "7.2"
}
},

Publish the LaravelS configuration:

php artisan laravels publish

Set the .env options:

LARAVELS_LISTEN_IP=0.0.0.0    # listen address
LARAVELS_LISTEN_PORT=5200     # listen port
LARAVELS_WORKER_NUM=4         # number of worker processes

For more configuration options see: https://github.com/hhxsv5/laravel-s/blob/master/Settings-CN.md

Basic LaravelS commands

start     Start LaravelS; list the started processes with "ps -ef | grep laravels".
          Supports "-d|--daemonize" to run as a daemon; this overrides the
          swoole.daemonize setting in laravels.php.
          Supports "-e|--env" to choose the environment, e.g. --env=testing
          loads .env.testing first (requires Laravel 5.2+).
stop      Stop LaravelS.
restart   Restart LaravelS; supports "-d|--daemonize" and "-e|--env".
reload    Gracefully reload all Task/Worker processes (the ones running your
          business code) without restarting the Master/Manager/Timer/Custom
          processes.
info      Show component version information.
help      Show help.

Finally, enter the container and start the LaravelS service:

docker ps
docker exec -it php bash
cd /docker/www/lmrs-2008
php bin/laravels start

If the LaravelS startup banner appears, the service is running.

Keeping LaravelS running with a cron script

1 Enter the PHP container:

docker exec -it php bash
Set up the scheduled task:
# start the crond service on CentOS
service crond start
# start crond on Alpine Linux
crond
# crontab entry format:
# min hour day month weekday command
*/1 * * * * /data/www/lmrs-sh/laravels.sh
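A crontab entry needs five time fields before the command, and an entry with a missing field will not run as intended. A quick field-count check (hypothetical helper) catches that before the entry is installed:

```shell
#!/bin/sh
# A valid crontab entry has 5 time fields (min hour day month weekday)
# followed by at least a command, i.e. 6+ whitespace-separated fields.
cron_fields_ok() {
  n=$(printf '%s\n' "$1" | awk '{ print NF }')
  if [ "$n" -ge 6 ]; then echo ok; else echo bad; fi
}

cron_fields_ok '*/1 * * * * /data/www/lmrs-sh/laravels.sh'   # -> ok
cron_fields_ok '* * * * /missing/a/time/field.sh'            # -> bad
```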

2 The script run by the cron job restarts LaravelS:

#!/bin/sh
php="/usr/local/bin/php"
echo "cron: restarting laravels\n"
if php -v ; then
    php /docker/www/lmrs-2008/bin/laravels restart -d >> /data/www/storage/logs/sh.log
else
    "$php" /docker/www/lmrs-2008/bin/laravels restart -d >> /data/www/storage/logs/sh.log
fi

Commands for managing the cron service

cron on CentOS 6:
service crond start     # start the service
service crond stop      # stop the service
service crond restart   # restart the service
service crond reload    # reload the configuration
service crond status    # show status

cron on CentOS 7:
systemctl start crond.service     # start the service
systemctl stop crond.service      # stop the service
systemctl restart crond.service   # restart the service
systemctl reload crond.service    # reload the configuration
systemctl status crond.service    # show status
# or
crond start
crond stop
crond restart
crond reload
crond status

Configuring Nginx load balancing

Update the configuration file /docker/nginx/default.conf. Contents:
[root@localhost nginx]# more default.conf
upstream swoole {
    server 172.200.7.3:5200 weight=5 max_fails=3 fail_timeout=30s;
    keepalive 16;
}

server {
    listen 80;
    listen [::]:80;
    server_name localhost;
    root /docker/www/lmrs-2008/public;
    index index.php index.html;

    location / {
        try_files $uri @laravels;
    }

    location @laravels {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Real-PORT $remote_port;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Scheme $scheme;
        proxy_set_header Server-Protocol $server_protocol;
        proxy_set_header Server-Name $server_name;
        proxy_set_header Server-Addr $server_addr;
        proxy_set_header Server-Port $server_port;
        proxy_pass http://swoole;
    }
}

About 172.200.7.3:5200: the IP address and port can be found with the following commands:

[root@localhost ~]# docker inspect php | grep  "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.200.7.3",

[root@localhost nginx]# docker exec php /bin/sh -c "php /docker/www/lmrs-2008/bin/laravels restart -d"
[2021-07-30 03:29:58] [INFO] The max time of waiting to forcibly stop is 60s.
[2021-07-30 03:29:58] [INFO] Waiting Swoole[PID=31541] to stop. [1]
[2021-07-30 03:29:59] [INFO] Swoole [PID=31541] is stopped.
(LaravelS ASCII-art banner)

Speed up your Laravel/Lumen
>>> Components
+---------------------------+---------+
| Component | Version |
+---------------------------+---------+
| PHP | 7.4.19 |
| Swoole | 4.7.0 |
| LaravelS | 3.7.19 |
| Laravel Framework [local] | 7.30.4 |
+---------------------------+---------+
>>> Protocols
+-----------+--------+-------------------+---------------------+
| Protocol | Status | Handler | Listen At |
+-----------+--------+-------------------+---------------------+
| Main HTTP | On | Laravel Framework | http://0.0.0.0:5200 |
+-----------+--------+-------------------+---------------------+
>>> Feedback: https://github.com/hhxsv5/laravel-s
[2021-07-30 03:30:00] [TRACE] Swoole is running in daemon mode, see "ps -ef|grep laravels".

Restart the Nginx server; if the application now loads in the browser, the change worked.

Setting up the Redis container

[root@localhost nginx]# docker run  -d --name redis  -p 6379:6379 --network=mynetwork --ip 172.200.7.5 --restart=always -v /docker/redis:/etc/redis --privileged=true  redis
8573b2b3e6a4950ea783de864bf9f2716eda3956d6156b5f43dc6cb5c78bad3d

A container ID like the one above means the configuration succeeded.

Installing Elasticsearch (the distributed search engine)

docker pull elasticsearch:7.12.1
[root@localhost ~]# mkdir /docker/es
[root@localhost ~]# cd /docker/es
[root@localhost es]# mkdir conf      # configuration directory
[root@localhost es]# mkdir data      # data directory
[root@localhost es]# mkdir plugins

Create the configuration file (note the spelling: the docker run command below mounts a file named exactly elasticsearch.yml):

touch /docker/es/conf/elasticsearch.yml

Contents (annotated):
cluster.name: my-application    # cluster name
node.name: node-1               # node name
path.data: /usr/share/elasticsearch/data    # data directory
path.logs: /usr/share/elasticsearch/logs    # log directory
network.host: 0.0.0.0           # bind address; 0.0.0.0 allows access from any host
http.port: 9200                 # bind port
# cluster node names; the default works, and a single-node setup needs only one entry
cluster.initial_master_nodes: ["node-1"]
indices.fielddata.cache.size: 50%   # cap field-data memory usage

Every colon must be followed by a space, and tabs are not allowed.
The same file with the comments removed:

cluster.name: my-application
node.name: node-1
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
indices.fielddata.cache.size: 50%
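Because a colon without a trailing space makes a setting silently fail to apply, it is worth linting the file before starting the container. A minimal check (hypothetical helper; it only looks for ": " in the line, which is enough to catch the common mistake):

```shell
#!/bin/sh
# Flag yml-style lines whose colon is not followed by a space -- the exact
# formatting mistake that keeps an Elasticsearch setting from taking effect.
yml_colon_ok() {
  case $1 in
    *": "*) echo ok ;;
    *)      echo bad ;;
  esac
}

yml_colon_ok 'cluster.name: my-application'   # -> ok
yml_colon_ok 'cluster.name:my-application'    # -> bad
```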





Create the container:
[root@localhost conf]# docker run -d --name es -p 9200:9200 --network=mynetwork --ip 172.200.7.6 --restart=always -e ES_JAVA_OPTS="-Xms512m -Xmx512m" -v /docker/es/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /docker/es/data:/usr/share/elasticsearch/data -v /docker/es/plugins:/usr/share/elasticsearch/plugins --privileged=true elasticsearch:7.12.1
07178920a6f4880c73ffa0e09306f52ac6e9d0bca64d2b539960ef58e1061558

-d                       # run in the background
--name es                # container name
-e                       # set container environment; ES_JAVA_OPTS caps the JVM heap at 512 MB
-v                       # mounted files and directories
--privileged=true        # extended privileges
elasticsearch:7.12.1     # image name and tag

Troubleshooting a failed install

If startup fails, read the container log with:

docker logs -f es    # "es" is the container name

Pitfall 1:

I skipped the file-creation step and ran the docker run command directly, and it failed:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:349:
starting container process caused "process_linux.go:449: container init caused
"rootfs_linux.go:58: mounting \"/home/myEs03/elastcsearch.yml\" to rootfs \
"/var/lib/docker/overlay2/ebdec218f44d495d05b5f265745fec5e53c57a1e3d43858f5f338d92a52ccc34/merged\"
at \"/var/lib/docker/overlay2/ebdec218f44d495d05b5f265745fec5e53c57a1e3d43858f5f338d92a52ccc34/merged/usr/
share/elasticsearch/config/elasticsearch.yml\" caused \"not a directory\""": unknown: Are you trying
to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.

The message confused me at first: it says Docker is trying to mount a directory onto a file, even though I had clearly pointed at a .yml file. Listing the host directory explained it: because the path did not exist when the container was created, Docker had auto-created the .yml path as a directory. Not knowing Linux well, I searched for a long time before spotting that my ".yml" was actually a folder. Once I created a real .yml file with touch, this error went away, and the next one arrived.
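The root cause generalizes: when the host side of a -v mount does not exist, Docker creates it as a directory, even if a file was intended. A pre-flight check of the host path (hypothetical helper) avoids the whole pitfall:

```shell
#!/bin/sh
# Classify the host side of a `docker run -v host:container` mount.
# If it is missing, Docker will auto-create it as a DIRECTORY, which is
# exactly how the intended elasticsearch.yml file became a folder.
mount_src_type() {
  if [ -f "$1" ]; then
    echo file
  elif [ -d "$1" ]; then
    echo dir
  else
    echo missing          # create the file with touch before docker run
  fi
}

tmp=$(mktemp)                     # a real file
mount_src_type "$tmp"             # -> file
mount_src_type /no/such/path      # -> missing
rm -f "$tmp"
```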

Pitfall 2:

With pitfall 1 solved I expected to see it running, but the container errored again:

[2020-06-13T08:19:44,278][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: java.lang.IllegalArgumentException: unknown setting [uster.name] did you mean [cluster.name]?
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:136) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:123) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:70) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:134) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:91) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:84) ~[elasticsearch-5.6.12.jar:5.6.12]
Caused by: java.lang.IllegalArgumentException: unknown setting [uster.name] did you mean [cluster.name]?
    at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:293) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.common.settings.AbstractScopedSettings.validate(AbstractScopedSettings.java:256) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.common.settings.SettingsModule.(SettingsModule.java:139) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.node.Node.(Node.java:344) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.node.Node.(Node.java:245) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.bootstrap.Bootstrap$5.(Bootstrap.java:233) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:233) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:342) ~[elasticsearch-5.6.12.jar:5.6.12]
    at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:132) ~[elasticsearch-5.6.12.jar:5.6.12]
    ... 6 more

The core error is: unknown setting [uster.name] did you mean [cluster.name]?

After some digging I traced it to whitespace: I had pasted the yml settings from another editor, whose spaces apparently did not match ordinary Linux spaces. Reopening the file and retyping every space fixed the problem:

cluster.name: my-application
node.name: node-1
path.data: /usr/share/elasticsearch/data
path.logs: /usr/share/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
indices.fielddata.cache.size: 50%

Pitfall 3:

With pitfall 2 solved I again expected a clean run, but docker run failed once more:

ERROR: [1] bootstrap checks failed[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

This message is easy to read: the kernel's limit on memory-mapped areas is too low. I am not sure why earlier runs never hit it, but the fix is a single sysctl:

As root: sysctl -w vm.max_map_count=262144
Verify:  sysctl -a | grep vm.max_map_count
Output:  vm.max_map_count = 262144

With the three pitfalls resolved, Elasticsearch finally started.

When running Elasticsearch in Docker in production, vm.max_map_count must be configured as follows:

• Linux
  Persist it in the config file:
  grep vm.max_map_count /etc/sysctl.conf
  vm.max_map_count=262144
  Apply it immediately:
  sysctl -w vm.max_map_count=262144

• Mac
  From a terminal run:
  screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
  press Enter, then set:
  sysctl -w vm.max_map_count=262144

• Windows and macOS with Docker Desktop
  Set it via docker-machine:
  docker-machine ssh
  sudo sysctl -w vm.max_map_count=262144

• Windows with Docker Desktop WSL 2 backend
  wsl -d docker-desktop
  sysctl -w vm.max_map_count=262144
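The bootstrap check compares the kernel value against a fixed minimum of 262144; a small sketch (hypothetical helper) mirrors that comparison, and on Linux the live value can be read from /proc/sys/vm/max_map_count:

```shell
#!/bin/sh
# Mirror Elasticsearch's vm.max_map_count bootstrap check: values below
# 262144 fail with the "max virtual memory areas ... too low" error.
map_count_ok() {
  if [ "$1" -ge 262144 ]; then
    echo ok
  else
    echo "too low: increase to at least 262144"
  fi
}

map_count_ok 65530    # the common default -> too low
map_count_ok 262144   # -> ok
```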

Problem 4: the service starts but the URL is unreachable

Check that the configuration files are readable and writable:

chmod -R 777 es

Then check again whether the install succeeded.

Installing Kibana

Create the configuration file /docker/kibana/conf/kibana.yml:

[root@localhost ~]# cd /docker/
[root@localhost docker]# ls
es nginx www
[root@localhost docker]# mkdir kibana
[root@localhost docker]# cd kibana/
[root@localhost kibana]# ls
[root@localhost kibana]# mkdir conf
[root@localhost kibana]# cd conf/
[root@localhost conf]# touch kibana.yml
[root@localhost conf]# ls
kibana.yml
[root@localhost conf]#
Configuration file contents:
[root@localhost conf]# more kibana.yml
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://172.200.7.6:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
xpack.encryptedSavedObjects.encryptionKey: encryptedSavedObjects12345678909876543210
xpack.security.encryptionKey: encryptionKeysecurity12345678909876543210
xpack.reporting.encryptionKey: encryptionKeyreporting12345678909876543210

Note: there must be a space between every parameter and its value, or the setting will not take effect.

Create the container:
[root@localhost conf]# docker run -d --name kibana -p 5601:5601 --network=mynetwork --ip 172.200.7.7 --restart=always -v /docker/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml --privileged=true kibana:7.12.1
8bdb5e732c3d230296cbef877c8ad12891c6246abb9c0f1e40371cc398ad6f20
[root@localhost conf]#

A container ID like the one above means Kibana is installed.

Using Elasticsearch from LaravelS

Install the extension package into the project via Composer:

composer require elasticsearch/elasticsearch "7.12.X" --ignore-platform-reqs

Requirements:

php: 7.3 - 8.0
ext-json: >=1.3.7
ezimuel/ringphp: ^1.1.2
psr/log: ^1.0

Configure the .env entry

Find a container's IP address with docker inspect <container>; here docker inspect es returns the Elasticsearch address, "IPAddress": "172.200.7.6". Add it to Laravel's .env file:

ES_HOSTS=172.200.7.6

Add the ES settings to config/database.php:

'elasticsearch' => [
    'hosts' => explode(',', env("ES_HOSTS")), // comma-separate multiple ES hosts
],
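The explode(',') call turns the comma-separated ES_HOSTS string into an array of host:port entries. The same split, sketched in shell with a hypothetical two-node value:

```shell
#!/bin/sh
# Split a comma-separated ES_HOSTS value into one host per line, the way
# explode(',') produces one array element per host.
split_hosts() {
  printf '%s\n' "$1" | tr ',' '\n'
}

split_hosts "172.200.7.6:9200,172.200.7.16:9200"
# -> 172.200.7.6:9200
#    172.200.7.16:9200
```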

Register a client in the service container:

namespace App\Providers;

use Illuminate\Support\ServiceProvider;
use Elasticsearch\ClientBuilder as ESClientBuilder;

class AppServiceProvider extends ServiceProvider
{
    public function register()
    {
        // register an ES client singleton in the Laravel container
        $this->app->singleton('es', function () {
            $builder = ESClientBuilder::create()->setHosts(config('database.elasticsearch.hosts'));
            if (app()->environment() === 'local') {
                // log ES requests in the local environment
                $builder->setLogger(app('log')->driver());
            }
            return $builder->build();
        });
    }

    public function boot()
    {
        //
    }
}

Tool 1 for importing MySQL into ES: go-mysql-elasticsearch

Create the configuration file /docker/go-mysql-es/go_mysql_river.toml:
my_addr = "172.200.7.4:3306"
my_user = "root"
my_pass = "root"
my_charset = "utf8"
enable-relay = true
es_addr = "172.200.7.6:9200"
es_user = ""
es_pass = ""
data_dir = "/docker/go-mysql-es/data"
stat_addr = "127.0.0.1:12800"
stat_path = "/metrics"
server_id = 1001
flavor = "mysql"
mysqldump = ""
#skip_master_data = false
bulk_size = 128
flush_bulk_time = "200ms"
skip_no_pk_table = false
[[source]]
schema = "project_laravel"
tables = ["sp_system_operation_log"]
[[rule]]
schema = "project"
table = "lmrs_operation_log"
index = "products"
type = "_doc"
filter = ["id", "uid","user_agent","ip","param","created_at"]
[rule.field]
mysql = "created_at"
elastic = "created_time"

Create the container:

[root@localhost go-mysql-es]# docker run  -d --name go-mysql-es --network=mynetwork -p 12345:12345 --ip 172.200.7.8 --restart=always -v /docker/go-mysql-es/go_mysql_river.toml:/config/river.toml:ro --privileged=true gozer/go-mysql-elasticsearch
d21379fad598063e47971a7dfb4bdd5413994cc4435572710965e38a4e2dd560
[root@localhost go-mysql-es]#

Note: the MySQL binlog format must be row-based; change the MySQL option to:

binlog_format=row

Tool 2 for importing MySQL into ES: Logstash

The Logstash image version must match the ES version:

docker pull logstash:7.12.1

Install the JDBC and Elasticsearch plugins

They normally ship with Logstash, but reinstalling guards against version mismatches:

logstash-plugin install logstash-input-jdbc
logstash-plugin install logstash-output-elasticsearch

Download the JDBC mysql-connector .jar; its version must match your MySQL version.

路径:https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.25/mysql-connector-java-8.0.25.jar

Check the MySQL server version to pick the matching connector.

Create the directory that will be mounted into the container:

cd /docker
mkdir logstash
touch logstash/logstash.conf

Edit logstash.conf:

input {
    stdin { }
    jdbc {
        # MySQL JDBC connection string: address, port, and database
        jdbc_connection_string => "jdbc:mysql://172.200.7.4/project_laravel"
        jdbc_user => "root"
        jdbc_password => "root"
        # path of the MySQL connector jar INSIDE the container
        jdbc_driver_library => "/etc/logstash/pipeline/mysql-connector-java-8.0.25.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => true
        # rows fetched per batch
        jdbc_page_size => "5000"
        statement => "select id,field,name,remark,sort,created_at from sp_system_setting"
        schedule => "* * * * *"
    }
}

output {
    elasticsearch {
        hosts => "172.200.7.6:9200"
        index => "system"
        document_type => "setting"
        document_id => "%{id}"
    }

    stdout {
        codec => json_lines
    }
}

Look up the MySQL and ES IP addresses:

[root@localhost logstash]# docker inspect es |grep "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.200.7.6",
[root@localhost logstash]# docker inspect mysql |grep "IPAddress"
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "172.200.7.4",
[root@localhost logstash]#

Walkthrough

Create the container:

[root@localhost logstash]# docker run -d --name logstash --network=mynetwork -p 9900:9900 --ip 172.200.7.9 --restart=always -v /docker/logstash:/etc/logstash/pipeline  --privileged=true logstash:7.12.1
33af34c75c2d9059cf35bee2424f8d5357c04fa12c42dc4d61251fe8505d8b5f
[root@localhost logstash]#

Install the plugins (the "Not enough space" errors below mean the JVM could not reserve enough memory; free up or allocate more memory and rerun):

[root@localhost logstash]# docker exec -it logstash bash
bash-4.2$ logstash-plugin install logstash-input-jdbc
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/share/logstash/hs_err_pid53.log

bash-4.2$ logstash-plugin install logstash-output-elasticsearch
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /usr/share/logstash/hs_err_pid89.log
bash-4.2$

Edit the configuration files:

bash-4.2$ cd config/
bash-4.2$ ls
jvm.options log4j2.properties logstash-sample.conf logstash.yml pipelines.yml startup.options
bash-4.2$ more logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://172.200.7.6:9200" ]

Point the pipeline at the mounted directory:
bash-4.2$ more pipelines.yml
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
path.config: "/etc/logstash/pipeline/logstash.conf"
bash-4.2$