
[Repost] How to quickly hook incubator-dolphinscheduler into Prometheus and Grafana for monitoring, without writing any new code


I. A brief introduction to Prometheus and Grafana

Prometheus was the second project to graduate from the CNCF, after Kubernetes. Its monitoring philosophy is inherited from an internal monitoring system developed at Google. It is written mainly in Go, the code is hosted on GitHub under the Apache 2.0 license, and it is extremely popular. GitHub address: https://github.com/prometheus/.

Monitoring is usually divided into white-box monitoring and black-box monitoring.

Advantages of Prometheus: (shown as an image in the original post)

Note: the architecture diagram is taken from the official Prometheus website.

According to its configuration, Prometheus periodically pulls metric data from each monitored node; pull is the default collection mode. It can also take in data pushed through the Pushgateway: an external application pushes its metrics to the Pushgateway, and Prometheus then scrapes them from the Pushgateway automatically. The collected data is stored in a TSDB (time-series database) and can be queried with the built-in query language, PromQL. Alerting is provided by Alertmanager, the Prometheus component that manages and dispatches alerts. Prometheus's native charting, and in particular its visualization UI, is fairly rudimentary, so it is recommended to feed Prometheus data into Grafana and manage the dashboards there.
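As a minimal sketch of the pull side (not part of the original article), the scrape job below makes Prometheus scrape a Pushgateway; the address 10.25x.xx.xx:8085 is the one used in the examples later in this post, and honor_labels keeps the job/instance labels that were set when the metrics were pushed. The relevant fragment of prometheus.yml would look roughly like this:

scrape_configs:
  - job_name: 'pushgateway'
    honor_labels: true
    static_configs:
      - targets: ['10.25x.xx.xx:8085']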

Grafana is an open-source tool for visual monitoring and data analysis. It supports many database types and a rich set of plugins: currently more than 50 data sources, more than 50 panels, 17 apps, and over 1,700 different dashboards. (Author: 张永清; when reposting, please credit the original on cnblogs: https://www.cnblogs.com/laoqing/p/14538635.html)

 

 

II. A brief introduction to incubator-dolphinscheduler

incubator-dolphinscheduler is an open-source scheduling project in the big-data space, initiated by a Chinese company. It can support a great many application scenarios, as shown in the architecture diagram below.

 

Note: the architecture diagram is taken from the official community website.

III. How to quickly hook incubator-dolphinscheduler into Prometheus and Grafana for monitoring

1. Collecting metric data through the Prometheus Pushgateway

This approach relies on the Pushgateway: the metric data is sent to the Pushgateway address, for example http://10.25x.xx.xx:8085. You can then write a shell script, schedule it with crontab or with incubator-dolphinscheduler itself, and run it periodically to send metric data to Prometheus.
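As a minimal sketch that is not part of the original article, a Pushgateway could be started with Docker; the official image is prom/pushgateway and its default listen port is 9091, mapped here to the 8085 port used in the examples below:

docker run -d --name pushgateway -p 8085:9091 prom/pushgateway

With the Pushgateway reachable, a script like the following pushes the failed-task count.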


#!/bin/bash
# Count the workflows that have failed (state=6) since the start of today and push
# the result to the Pushgateway, from which Prometheus scrapes it.
# datetimestr is not defined in the original script; today's date is assumed here:
datetimestr=$(date +%Y-%m-%d)
failedTaskCounts=`mysql -h 10.25x.xx.xx -u username -ppassword -e "select 'failed' as failTotal ,count(distinct(process_definition_id)) as failCounts from dolphinscheduler.t_ds_process_instance where state=6 and start_time>='${datetimestr} 00:00:00'" |grep "failed"|awk -F " " '{print $2}'`
echo "failedTaskCounts:${failedTaskCounts}"
job_name="Scheduling_system"
instance_name="dolphinscheduler"
# Push the value as the metric failedSchedulingTaskCounts, labelled with job and instance
cat <<EOF | curl --data-binary @- http://10.25x.xx.xx:8085/metrics/job/$job_name/instance/$instance_name
failedSchedulingTaskCounts $failedTaskCounts
EOF


In this script, failedSchedulingTaskCounts is the name of the metric that ends up in Prometheus. The script counts the failed tasks with a SQL query and then sends the value to Prometheus through the Pushgateway.
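As a sketch of the crontab scheduling mentioned above (the script path and the five-minute interval are assumptions, not from the original article), an entry like the following runs the script periodically:

*/5 * * * * /opt/monitor/push_failed_task_counts.sh >> /var/log/push_failed_task_counts.log 2>&1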

In Grafana you can then choose Prometheus as the data source and select the corresponding metric.
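For example (a sketch; the label values come from the job_name and instance_name set in the script above, and assume honor_labels is enabled on the scrape job), the panel query could simply be the PromQL expression:

failedSchedulingTaskCounts{job="Scheduling_system", instance="dolphinscheduler"}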

For example, the following shell script collects the number of currently running tasks and pushes it to Prometheus through the Pushgateway.


#!/bin/bash
# Count the workflows that are currently running (state=1) and push the result to the Pushgateway.
runningTaskCounts=`mysql -h 10.25x.xx.xx -u username -ppassword -e "select 'running' as runTotal ,count(distinct(process_definition_id)) as runCounts from dolphinscheduler.t_ds_process_instance where state=1" |grep "running"|awk -F " " '{print $2}'`
echo "runningTaskCounts:${runningTaskCounts}"
job_name="Scheduling_system"
instance_name="dolphinscheduler"
# Default to 0 when the query returns an empty result
if [ "${runningTaskCounts}yy" == "yy" ];then
  runningTaskCounts=0
fi
cat <<EOF | curl --data-binary @- http://10.25x.xx.xx:8085/metrics/job/$job_name/instance/$instance_name
runningSchedulingTaskCounts $runningTaskCounts
EOF


Once collection is running, you get a result like the one below, where the chart shows how each collected metric trends over time.

After a metric is configured, you can also configure an alert on it: as shown below, click the Create Alert button and follow the on-screen prompts to set up the alert notification.
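Alternatively, alerting can be handled on the Prometheus/Alertmanager side mentioned in section I rather than in Grafana. A minimal rule-file sketch (the threshold and duration are assumptions) might look like this:

groups:
  - name: dolphinscheduler_alerts
    rules:
      - alert: FailedSchedulingTasks
        expr: failedSchedulingTaskCounts > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "DolphinScheduler reports failed workflow tasks today"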

 

Some other common metrics that can be collected in the same way:

1) Number of failed workflow instances


# Count the workflow instances that have failed (state=6) since the start of today.
# datetimestr is not defined in the original snippet; today's date is assumed here:
datetimestr=$(date +%Y-%m-%d)
failedInstnceCounts=`mysql -h 10.25x.xx.xx -u username -ppassword -e "select 'failed' as failTotal ,count(1) as failCounts from dolphinscheduler.t_ds_process_instance where state=6 and start_time>='${datetimestr} 00:00:00'" |grep "failed"|awk -F " " '{print $2}'`
echo "failedInstnceCounts:${failedInstnceCounts}"
job_name="Scheduling_system"
instance_name="dolphinscheduler"
cat <<EOF | curl --data-binary @- http://10.25x.xx.xx:8085/metrics/job/$job_name/instance/$instance_name
failedSchedulingInstanceCounts $failedInstnceCounts
EOF


2) Number of waiting workflow tasks


# Count the workflow tasks that are waiting (states 10 and 11) over the last seven days.
# sevenDayAgo is not defined in the original snippet; the date seven days ago is assumed here:
sevenDayAgo=$(date -d "-7 days" +%Y-%m-%d)
waittingTaskCounts=`mysql -h 10.25x.xx.xx -u username -ppassword -e "select 'waitting' as waitTotal ,count(distinct(process_definition_id)) as waitCounts from dolphinscheduler.t_ds_process_instance where state in(10,11) and start_time>='${sevenDayAgo} 00:00:00'" |grep "waitting"|awk -F " " '{print $2}'`
echo "waittingTaskCounts:${waittingTaskCounts}"
job_name="Scheduling_system"
instance_name="dolphinscheduler"
cat <<EOF | curl --data-binary @- http://10.25x.xx.xx:8085/metrics/job/$job_name/instance/$instance_name
waittingSchedulingTaskCounts $waittingTaskCounts
EOF


3) Number of running workflow instances


# Count the workflow instances that are currently running (state=1), defaulting to 0
# when the query returns an empty result.
runningInstnceCounts=`mysql -h 10.25x.xx.xx -u username -ppassword -e "select 'running' as runTotal ,count(1) as runCounts from dolphinscheduler.t_ds_process_instance where state=1" |grep "running"|awk -F " " '{print $2}'`
echo "runningInstnceCounts:${runningInstnceCounts}"
job_name="Scheduling_system"
instance_name="dolphinscheduler"
if [ "${runningInstnceCounts}yy" == "yy" ];then
  runningInstnceCounts=0
fi
cat <<EOF | curl --data-binary @- http://10.25x.xx.xx:8085/metrics/job/$job_name/instance/$instance_name
runningSchedulingInstnceCounts $runningInstnceCounts
EOF


2. Querying DolphinScheduler's own MySQL database (or any other database) directly from Grafana

First, define a data source in Grafana that points at DolphinScheduler's own MySQL database.

Then select this data source in Grafana, set Format as to table, and enter the corresponding SQL.

For example, the following SQL summarizes this week's and today's runs of the scheduled workflows ($status here is a Grafana dashboard variable for the workflow definition's release state):


select d.*,ifnull(f.today_runCount,0) as today_runCount,ifnull(e.today_faildCount,0) as today_faildCount,ifnull(f.today_avg_timeCosts,0) as today_avg_timeCosts,ifnull(f.today_max_timeCosts,0) as today_max_timeCosts,
ifnull(g.week_runCount,0) as week_runCount,ifnull(h.week_faildCount,0) as week_faildCount,ifnull(g.week_avg_timeCosts,0) as week_avg_timeCosts,ifnull(g.week_max_timeCosts,0) as week_max_timeCosts from
(select a.id,c.name as project_name,a.name as process_name,b.user_name,a.create_time,a.update_time from t_ds_process_definition a,t_ds_user b, t_ds_project c  where a.user_id=b.id and c.id=a.project_id and a.release_state=$status) d
left join
(select count(1) as today_faildCount,process_definition_id from
t_ds_process_instance where state=6 and start_time>=DATE_FORMAT(NOW(),'%Y-%m-%d 00:00:00') and  start_time<=DATE_FORMAT(NOW(),'%Y-%m-%d 23:59:59') group by process_definition_id ) e  on d.id=e.process_definition_id
left join 
(select count(1) as today_runCount,avg(UNIX_TIMESTAMP(end_time)-UNIX_TIMESTAMP(start_time)) as today_avg_timeCosts,max(UNIX_TIMESTAMP(end_time)-UNIX_TIMESTAMP(start_time)) as today_max_timeCosts,process_definition_id from
t_ds_process_instance  where start_time>=DATE_FORMAT(NOW(),'%Y-%m-%d 00:00:00') and  start_time<=DATE_FORMAT(NOW(),'%Y-%m-%d 23:59:59') group by process_definition_id ) f on d.id=f.process_definition_id
left join
(select count(1) as week_runCount,avg(UNIX_TIMESTAMP(end_time)-UNIX_TIMESTAMP(start_time)) as week_avg_timeCosts,max(UNIX_TIMESTAMP(end_time)-UNIX_TIMESTAMP(start_time)) as week_max_timeCosts,process_definition_id from
t_ds_process_instance  where start_time>=DATE_FORMAT(SUBDATE(CURDATE(),DATE_FORMAT(CURDATE(),'%w')-1), '%Y-%m-%d 00:00:00') and  start_time<=DATE_FORMAT(SUBDATE(CURDATE(),DATE_FORMAT(CURDATE(),'%w')-7), '%Y-%m-%d 23:59:59') group by process_definition_id ) g
on d.id=g.process_definition_id  left join 
(select count(1) as week_faildCount,process_definition_id from
t_ds_process_instance where state=6 and start_time>=DATE_FORMAT(SUBDATE(CURDATE(),DATE_FORMAT(CURDATE(),'%w')-1), '%Y-%m-%d 00:00:00')  and  start_time<=DATE_FORMAT( SUBDATE(CURDATE(),DATE_FORMAT(CURDATE(),'%w')-7), '%Y-%m-%d 23:59:59') group by process_definition_id ) h
on d.id=h.process_definition_id 


After saving this configuration, you get a table like the following:

With a similar configuration, entering a SQL query and switching to the Graph view also lets you plot task duration over time. Grafana's time-series format expects a column named time (here UNIX_TIMESTAMP(a.end_time)) alongside the numeric value columns, which is what the query below returns:

select (UNIX_TIMESTAMP(a.end_time)-UNIX_TIMESTAMP(a.start_time)) as timeCosts, UNIX_TIMESTAMP(a.end_time) as time   from t_ds_process_instance a,t_ds_process_definition b where end_time>=DATE_FORMAT( DATE_SUB(CURDATE(), INTERVAL 1 MONTH), '%Y-%m-01 00:00:00')  and end_time is not null and a.process_definition_id=b.id and b.name='$process_name'

Gantt charts and several other visualizations are also supported; they are not covered one by one here.

Note: this article was also published on the official DolphinScheduler WeChat account: https://mp.weixin.qq.com/s/XmmJ_7G07yuP16NKS2c6-A

Source: https://blog.csdn.net/hudiemeng870329/article/details/115250377