Checking HDFS DataNode decommission status and capacity via the NameNode JMX interface (50070)
In production, when DataNodes are retired the machines have to be shut down and taken offline, and before powering a machine off you need to confirm that decommissioning has actually finished. Checking the NameNode's 50070 web UI by hand is obviously inefficient, so I wrote a simple script to pull the node information quickly. The /jmx endpoint on 50070 exposes far more than this: you can collect whichever metrics you need, turn them into a Prometheus exporter (a sketch of that follows the script below), or write them into a time-series database. This post is only meant for learning and exchange.
# -*- coding: utf-8 -*-
__author__ = 'machine'
# date: 20220720
import json

import requests

# NameNode web UI / JMX address for each cluster.
url_dict = {'cluster1': 'http://192.168.100.1:50070', 'cluster2': 'http://192.168.14.1:50070'}

for k, v in url_dict.items():
    print(" ")
    print("-----------------------------------------------------------------------------")
    print("Cluster:", k)
    # NameNodeInfo bean: LiveNodes/DeadNodes plus overall capacity figures.
    url = v + '/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo'
    print(url)
    req = requests.get(url)
    result_json = json.loads(req.text)
    # LiveNodes is itself a JSON string keyed by hostname.
    livenode = json.loads(result_json['beans'][0]['LiveNodes'])
    # DeadNodes is available too, but not printed here.
    deadnode = result_json['beans'][0]['DeadNodes']

    print("Admin state of the live nodes: ")
    list_inservernode = []
    list_decommissioned = []
    for lip in livenode.values():
        status = lip['adminState'].split(' ')[0]
        if status == 'Decommissioned':
            list_decommissioned.append(lip['xferaddr'].split(':')[0])
        else:
            list_inservernode.append(lip['xferaddr'].split(':')[0])

    print(" ")
    print("Decommissioned nodes")
    for i in list_decommissioned:
        print(i)
    print("In-service nodes")
    for i in list_inservernode:
        print(i)
    print(" ")

    print("----------------------------- HDFS capacity usage -----------------------------")
    print("HDFS total space:", result_json['beans'][0]['Total'] // (1024 * 1024 * 1024 * 1024), "TB")
    print("HDFS used space:", result_json['beans'][0]['Used'] // (1024 * 1024 * 1024 * 1024), "TB")
    print("HDFS free space:", result_json['beans'][0]['Free'] // (1024 * 1024 * 1024 * 1024), "TB")
    print("HDFS space used (percent):", result_json['beans'][0]['PercentUsed'], "%")
    print("-----------------------------------------------------------------------------")
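As mentioned in the intro, the same JMX data can also be exposed to Prometheus. Below is a minimal sketch of that idea using the prometheus_client library; the metric names, the 9100 listen port, and the 60-second poll interval are illustrative choices of mine, not anything prescribed by the script above.

# -*- coding: utf-8 -*-
# Sketch: poll NameNodeInfo over JMX and expose a few values as Prometheus gauges.
# Assumes the prometheus_client package; metric names, port and interval are illustrative.
import json
import time

import requests
from prometheus_client import Gauge, start_http_server

url_dict = {'cluster1': 'http://192.168.100.1:50070', 'cluster2': 'http://192.168.14.1:50070'}

capacity_used_pct = Gauge('hdfs_capacity_used_percent', 'HDFS used capacity in percent', ['cluster'])
live_datanodes = Gauge('hdfs_live_datanodes', 'Number of live DataNodes', ['cluster'])

def collect():
    for name, base in url_dict.items():
        info = requests.get(base + '/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo',
                            timeout=10).json()['beans'][0]
        capacity_used_pct.labels(cluster=name).set(info['PercentUsed'])
        live_datanodes.labels(cluster=name).set(len(json.loads(info['LiveNodes'])))

if __name__ == '__main__':
    start_http_server(9100)   # Prometheus scrapes http://<host>:9100/metrics
    while True:
        collect()
        time.sleep(60)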
Selected Hadoop JMX metrics
Example query (RPC activity on the NameNode's 8020 port):

curl 'http://192.168.10.2:50070/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020'

NameNode (web UI port 50070)

Hadoop:service=NameNode,name=JvmMetrics
    MemHeapMaxM
    MemMaxM

Hadoop:service=NameNode,name=FSNamesystem
    CapacityTotal
    CapacityTotalGB
    CapacityUsed
    CapacityUsedGB
    CapacityRemaining
    CapacityRemainingGB
    CapacityUsedNonDFS
    TotalLoad
    FilesTotal

Hadoop:service=NameNode,name=FSNamesystemState
    NumLiveDataNodes
    TopUserOpCounts (timestamp)

Hadoop:service=NameNode,name=NameNodeInfo
    LiveNodes

java.lang:type=Runtime
    StartTime

Hadoop:service=NameNode,name=NameNodeActivity
    CreateFileOps
    FilesCreated
    FilesAppended
    FilesRenamed
    GetListingOps
    DeleteFileOps
    FilesDeleted
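These NameNode beans are read the same way NameNodeInfo is read in the script above: one GET per bean, then index into beans[0]. A minimal sketch, reusing the NameNode address from the curl example; the timeout is my own addition.

# -*- coding: utf-8 -*-
# Fetch a couple of the NameNode beans listed above and print a few attributes.
import requests

base = 'http://192.168.10.2:50070'

fs = requests.get(base + '/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem',
                  timeout=10).json()['beans'][0]
state = requests.get(base + '/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState',
                     timeout=10).json()['beans'][0]

print('CapacityUsedGB:', fs['CapacityUsedGB'])
print('CapacityRemainingGB:', fs['CapacityRemainingGB'])
print('FilesTotal:', fs['FilesTotal'])
print('NumLiveDataNodes:', state['NumLiveDataNodes'])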
DataNode (web UI port 50075)

Hadoop:service=DataNode,name=DataNodeActivity-slave-50010
    BytesWritten
    BytesRead
    BlocksWritten
    BlocksRead
    ReadsFromLocalClient
    ReadsFromRemoteClient
    WritesFromLocalClient
    WritesFromRemoteClient
    BlocksGetLocalPathInfo
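The DataNodeActivity bean name embeds the DataNode's hostname ("slave" above), so rather than hard-coding it you can fetch all beans from the DataNode's /jmx endpoint on 50075 and filter by name prefix. A minimal sketch; the DataNode address here is a hypothetical placeholder.

# -*- coding: utf-8 -*-
# List DataNodeActivity counters without knowing the exact bean name in advance.
import requests

datanode = 'http://192.168.10.3:50075'   # hypothetical DataNode address

beans = requests.get(datanode + '/jmx', timeout=10).json()['beans']
for bean in beans:
    if bean['name'].startswith('Hadoop:service=DataNode,name=DataNodeActivity'):
        print(bean['name'])
        print('  BytesWritten:', bean['BytesWritten'])
        print('  BytesRead:', bean['BytesRead'])
        print('  BlocksWritten:', bean['BlocksWritten'])
        print('  BlocksRead:', bean['BlocksRead'])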