Prepare a USB or CD containing the RPM files: the SaltStack RPMs, the CloudHub dependent packages (TI[C]K Stack), and any other required RPMs.
Local Repository
Installation and Configuration
To install the createrepo RPM, first install its dependency RPMs in the following order.
$ rpm -ivh /root/repos/deltarpm-3.6-3.el7.x86_64.rpm
$ rpm -ivh /root/repos/python-deltarpm-3.6-3.el7.x86_64.rpm
$ rpm -ivh /root/repos/libxml2-python-2.9.1-6.el7.5.x86_64.rpm
$ rpm -ivh /root/repos/createrepo-0.9.9-28.el7.noarch.rpm
warning: createrepo-0.9.9-28.el7.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:createrepo-0.9.9-28.el7          ################################# [100%]
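If the installs succeeded, rpm can confirm the packages are present (a simple verification step, not part of the original procedure):

# Verify the packages installed above
$ rpm -q deltarpm python-deltarpm libxml2-python createrepo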
Back up the existing yum repository files. In a closed-network environment, any repo file that references public internet addresses makes yum install take a very long time.

# Back up the existing repo files #
$ cd /etc/yum.repos.d
$ mkdir bak
$ mv *.repo bak/
Create the local repository file.
# baseurl is the local directory path containing the RPMs #
$ vi cloudhub.repo
[cloudhubrepo]
name=cloudhubrepo
baseurl=file:///root/repos
enabled=1
gpgcheck=0
Copy the RPM files from the USB or CD to a directory of your choice (e.g. /root/repos). The directory path must match the path set as baseurl in cloudhub.repo; a sketch of the copy step is shown below.
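For reference, a minimal sketch of the copy step for a CD, where the device path /dev/cdrom and the mount point are assumptions about your environment:

# Mount the media and copy the RPMs (paths are examples only)
$ mkdir -p /mnt/cdrom /root/repos
$ mount /dev/cdrom /mnt/cdrom
$ cp /mnt/cdrom/*.rpm /root/repos/
$ umount /mnt/cdrom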
Register the RPM directory (the one specified in the two steps above) as a yum local repository with the createrepo command.

# Register the RPM directory with the yum repository #
$ createrepo /root/repos/
Spawning worker 0 with 31 pkgs
Workers Finished
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
Caution when adding new RPM packages
When you later add new packages (RPMs) to the repository, you must run yum clean all to reset the yum cache. If you skip this, the newly added packages will not be recognized.
$ yum repolist all
repo id           repo name         status
cloudhubrepo      cloudhubrepo      enabled: 31
repolist: 31

$ yum clean all
Cleaning repos: cloudhubrepo
Other repos take up 60 M of disk space (use --verbose for details)

# Copy the new packages to /root/repos/
$ createrepo /root/repos/

$ yum repolist all
repo id           repo name         status
cloudhubrepo      cloudhubrepo      enabled: 119
repolist: 119
yum update
After the Local Repository is configured, run yum update --disablerepo=* --enablerepo=cloudhubrepo to update the packages already installed on the Linux server, as shown below.
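For reference, the command as a block (-y is an optional addition to skip the confirmation prompt):

$ yum update -y --disablerepo=* --enablerepo=cloudhubrepo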
Install dependent package
# Install dependent packages
$ yum install -y --disablerepo=* --enablerepo=cloudhubrepo \
    net-tools screen ntp rdate vim openssl-devel gcc wget telnet \
    python36u python36u-devel python36u-libs python36u-pip
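To spot-check that the dependencies landed (verification only; package names are taken from the command above):

$ rpm -q net-tools ntp wget telnet
$ python3.6 -V   # provided by the python36u packages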
Install package
Because external internet access is unavailable, we install these components as OS packages rather than as Docker containers.
InfluxDB
install
$ yum install -y --disablerepo=* --enablerepo=cloudhubrepo influxdb
Examining influxdb-1.8.0.x86_64.rpm: influxdb-1.8.0-1.x86_64
Marking influxdb-1.8.0.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package influxdb.x86_64 0:1.8.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================
 Package        Arch        Version      Repository                    Size
============================================================================
Installing:
 influxdb       x86_64      1.8.0-1      /influxdb-1.8.0.x86_64       164 M

Transaction Summary
============================================================================
Install  1 Package

Total size: 164 M
Installed size: 164 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : influxdb-1.8.0-1.x86_64                                  1/1
Created symlink from /etc/systemd/system/influxd.service to /usr/lib/systemd/system/influxdb.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/influxdb.service to /usr/lib/systemd/system/influxdb.service.
  Verifying  : influxdb-1.8.0-1.x86_64                                  1/1

Installed:
  influxdb.x86_64 0:1.8.0-1

Complete!
Edit the config. The settings are identical to the influxdb conf used in the Docker setup.
$ vi /etc/influxdb/influxdb.conf
reporting-disabled = false
bind-address = ":8088"

[meta]
  dir = "/var/lib/influxdb/meta"
  retention-autocreate = true
  logging-enabled = true

[data]
  dir = "/var/lib/influxdb/data"
  wal-dir = "/var/lib/influxdb/wal"
  query-log-enabled = true
  cache-max-memory-size = 1073741824
  cache-snapshot-memory-size = 26214400
  cache-snapshot-write-cold-duration = "10m0s"
  compact-full-write-cold-duration = "4h0m0s"
  max-series-per-database = 1000000
  max-values-per-tag = 100000
  index-version = "tsi1"
  trace-logging-enabled = false

[coordinator]
  write-timeout = "10s"
  max-concurrent-queries = 0
  query-timeout = "0s"
  log-queries-after = "0s"
  max-select-point = 0
  max-select-series = 0
  max-select-buckets = 0

[retention]
  enabled = true
  check-interval = "30m0s"

[shard-precreation]
  enabled = true
  check-interval = "10m0s"
  advance-period = "30m0s"

[monitor]
  store-enabled = true
  store-database = "_internal"
  store-interval = "10s"

[subscriber]
  enabled = true
  http-timeout = "30s"
  insecure-skip-verify = false
  ca-certs = ""
  write-concurrency = 40
  write-buffer-size = 1000

[http]
  enabled = true
  flux-enabled = true
  bind-address = ":8086"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = true
  https-enabled = false
  https-certificate = "/etc/ssl/influxdb.pem"
  https-private-key = ""
  max-row-limit = 0
  max-connection-limit = 0
  shared-secret = ""
  realm = "InfluxDB"
  unix-socket-enabled = false
  bind-socket = "/var/run/influxdb.sock"

[[graphite]]
  enabled = false
  bind-address = ":2003"
  database = "graphite"
  retention-policy = ""
  protocol = "tcp"
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "1s"
  consistency-level = "one"
  separator = "."
  udp-read-buffer = 0

[[collectd]]
  enabled = false
  bind-address = ":25826"
  database = "collectd"
  retention-policy = ""
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "10s"
  read-buffer = 0
  typesdb = "/usr/share/collectd/types.db"
  security-level = "none"
  auth-file = "/etc/collectd/auth_file"

[[opentsdb]]
  enabled = false
  bind-address = ":4242"
  database = "opentsdb"
  retention-policy = ""
  consistency-level = "one"
  tls-enabled = false
  certificate = "/etc/ssl/influxdb.pem"
  batch-size = 1000
  batch-pending = 5
  batch-timeout = "1s"
  log-point-errors = true

[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "udp"
  retention-policy = ""
  batch-size = 5000
  batch-pending = 10
  read-buffer = 0
  batch-timeout = "1s"
  precision = ""

[continuous_queries]
  log-enabled = true
  enabled = true
  run-interval = "1s"
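Before restarting, you can sanity-check that the file parses. In InfluxDB 1.x, influxd can print its resolved configuration, which fails on an invalid file (a verification step, not in the original procedure):

$ influxd config -config /etc/influxdb/influxdb.conf > /dev/null && echo "config OK"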
service start
$ systemctl enable influxdb
$ systemctl daemon-reload
$ systemctl start influxdb
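A quick liveness check against the HTTP API (port 8086, per the [http] section above); in InfluxDB 1.x the /ping endpoint answers 204 when the service is up:

$ curl -i http://localhost:8086/ping
# expect: HTTP/1.1 204 No Content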
Kapacitor
install
$ yum install -y --disablerepo=* --enablerepo=cloudhubrepo kapacitor
Examining kapacitor-1.5.4-1.x86_64.rpm: kapacitor-1.5.4-1.x86_64
Marking kapacitor-1.5.4-1.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package kapacitor.x86_64 0:1.5.4-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================
 Package        Arch        Version      Repository                    Size
============================================================================
Installing:
 kapacitor      x86_64      1.5.4-1      /kapacitor-1.5.4-1.x86_64     90 M

Transaction Summary
============================================================================
Install  1 Package

Total size: 90 M
Installed size: 90 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kapacitor-1.5.4-1.x86_64                                 1/1
  Verifying  : kapacitor-1.5.4-1.x86_64                                 1/1

Installed:
  kapacitor.x86_64 0:1.5.4-1

Complete!
Edit the config. The settings are identical to the kapacitor conf used in the Docker setup. Note that the urls entry under [[influxdb]] has a blank host ("http://:8086"), which in practice resolves to the local machine; set an explicit host there if InfluxDB runs on a different server.
$ vi /etc/kapacitor/kapacitor.conf
hostname = "localhost"
data_dir = "/var/lib/kapacitor"
skip-config-overrides = false
default-retention-policy = ""

[http]
  bind-address = ":9094"
  auth-enabled = false
  log-enabled = true
  write-tracing = false
  pprof-enabled = false
  https-enabled = false
  https-certificate = "/etc/ssl/kapacitor.pem"
  shutdown-timeout = "10s"
  shared-secret = ""

[replay]
  dir = "/var/lib/kapacitor/replay"

[storage]
  boltdb = "/var/lib/kapacitor/kapacitor.db"

[task]
  dir = "/var/lib/kapacitor/tasks"
  snapshot-interval = "1m0s"

[[influxdb]]
  enabled = true
  name = "default"
  default = false
  urls = ["http://:8086"]
  username = ""
  password = ""
  ssl-ca = ""
  ssl-cert = ""
  ssl-key = ""
  insecure-skip-verify = false
  timeout = "0s"
  disable-subscriptions = false
  subscription-protocol = "http"
  kapacitor-hostname = ""
  http-port = 0
  udp-bind = ""
  udp-buffer = 1000
  udp-read-buffer = 0
  startup-timeout = "5m0s"
  subscriptions-sync-interval = "1m0s"
  [influxdb.excluded-subscriptions]
    _kapacitor = ["autogen"]

[logging]
  file = "STDERR"
  level = "INFO"

[config-override]
  enabled = true

[collectd]
  enabled = false
  bind-address = ":25826"
  database = "collectd"
  retention-policy = ""
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "10s"
  read-buffer = 0
  typesdb = "/usr/share/collectd/types.db"

[opentsdb]
  enabled = false
  bind-address = ":4242"
  database = "opentsdb"
  retention-policy = ""
  consistency-level = "one"
  tls-enabled = false
  certificate = "/etc/ssl/influxdb.pem"
  batch-size = 1000
  batch-pending = 5
  batch-timeout = "1s"
  log-point-errors = true

[alerta]
  enabled = false
  url = ""
  token = ""
  environment = ""
  origin = ""

[hipchat]
  enabled = false
  url = ""
  token = ""
  room = ""
  global = false
  state-changes-only = false

[opsgenie]
  enabled = false
  api-key = ""
  url = "https://api.opsgenie.com/v1/json/alert"
  recovery_url = "https://api.opsgenie.com/v1/json/alert/note"
  global = false

[pagerduty]
  enabled = false
  url = "https://events.pagerduty.com/generic/2010-04-15/create_event.json"
  service-key = ""
  global = false

[smtp]
  enabled = false
  host = "localhost"
  port = 25
  username = ""
  password = ""
  no-verify = false
  global = false
  state-changes-only = false
  from = ""
  idle-timeout = "30s"

[sensu]
  enabled = false
  addr = ""
  source = "Kapacitor"

[slack]
  enabled = false
  url = ""
  channel = ""
  username = "kapacitor"
  icon-emoji = ""
  global = false
  state-changes-only = false

[talk]
  enabled = false
  url = ""
  author_name = ""

[telegram]
  enabled = false
  url = "https://api.telegram.org/bot"
  token = ""
  chat-id = ""
  parse-mode = ""
  disable-web-page-preview = false
  disable-notification = false
  global = false
  state-changes-only = false

[victorops]
  enabled = false
  api-key = ""
  routing-key = ""
  url = "https://alert.victorops.com/integrations/generic/20131114/alert"
  global = false

[kubernetes]
  enabled = false
  in-cluster = false
  token = ""
  ca-path = ""
  namespace = ""

[reporting]
  enabled = false
  url = "https://usage.influxdata.com"

[stats]
  enabled = true
  stats-interval = "10s"
  database = "_kapacitor"
  retention-policy = "autogen"
  timing-sample-rate = 0.1
  timing-movavg-size = 1000

[udf]

[deadman]
  interval = "10s"
  threshold = 0.0
  id = "{{ .Group }}:NODE_NAME for task '{{ .TaskName }}'"
  message = "{{ .ID }} is {{ if eq .Level \"OK\" }}alive{{ else }}dead{{ end }}: {{ index .Fields \"emitted\" | printf \"%0.3f\" }} points/INTERVAL."
  global = false
service start
$ systemctl enable kapacitor
$ systemctl daemon-reload
$ systemctl start kapacitor
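The same kind of liveness check works for Kapacitor (port 9094, per the [http] bind-address above); its /kapacitor/v1/ping endpoint also answers 204:

$ systemctl status kapacitor
$ curl -i http://localhost:9094/kapacitor/v1/ping
# expect: HTTP/1.1 204 No Content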
Install SaltStack
The procedure is the same as in a public-network environment, except that the epel-release repository registration step is skipped.
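For reference, since the SaltStack RPMs were staged in the local repository in the first step, the install itself uses the same yum pattern; the package names salt-master and salt-minion are an assumption about which SaltStack components were staged:

# Install from the local repo (package names are assumptions)
$ yum install -y --disablerepo=* --enablerepo=cloudhubrepo salt-master salt-minion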
Download Telegraf
The procedure is the same as in a public-network environment.
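If the telegraf RPM was also staged in the local repository, hosts inside the closed network could install it with the same pattern (an assumption, not part of the original procedure):

$ yum install -y --disablerepo=* --enablerepo=cloudhubrepo telegraf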