Configuration Goals
3 physical servers
Sharding
Clustering ( on node failure, the remaining nodes share the load evenly )
Automatic priority-based failback once a failed node recovers
|         | Mongo-1 Server  | Mongo-2 Server  | Mongo-3 Server  | Role              |
|---------|-----------------|-----------------|-----------------|-------------------|
| mongos  | -               | -               | -               | WAS 20001         |
| IP      | 172.16.1.24     | 172.16.1.25     | 172.16.1.26     |                   |
| Shard-1 | Primary 30001   | Secondary 30001 | Arbiter 30001   | Replicaset-1      |
| Shard-2 | Primary 30002   | Arbiter 30002   | Secondary 30002 | Replicaset-2      |
| Shard-3 | Secondary 30003 | Primary 30003   | Arbiter 30003   | Replicaset-3      |
| Shard-4 | Arbiter 30004   | Primary 30004   | Secondary 30004 | Replicaset-4      |
| Shard-5 | Secondary 30005 | Arbiter 30005   | Primary 30005   | Replicaset-5      |
| Shard-6 | Arbiter 30006   | Secondary 30006 | Primary 30006   | Replicaset-6      |
| Config  | config-1 40001  | config-2 40001  | config-3 40001  | Replicaset-Config |
Failover on Node Failure
Scenario: Mongo-3 Server fails
shard-5 Secondary -> Primary ( Mongo-1 Server )
shard-6 Secondary -> Primary ( Mongo-2 Server )
|         | Mongo-1 Server  | Mongo-2 Server  | Role              |
|---------|-----------------|-----------------|-------------------|
| mongos  | -               | -               | WAS 20001         |
| IP      | 172.16.1.24     | 172.16.1.25     |                   |
| Shard-1 | Primary 30001   | Secondary 30001 | Replicaset-1      |
| Shard-2 | Primary 30002   | Arbiter 30002   | Replicaset-2      |
| Shard-3 | Secondary 30003 | Primary 30003   | Replicaset-3      |
| Shard-4 | Arbiter 30004   | Primary 30004   | Replicaset-4      |
| Shard-5 | Primary 30005   | Arbiter 30005   | Replicaset-5      |
| Shard-6 | Arbiter 30006   | Primary 30006   | Replicaset-6      |
| Config  | config-1 40001  | config-2 40001  | Replicaset-Config |
MongoDB Install
Go to https://www.mongodb.com/ and navigate to Products > Community Server.
Four installation packages are required; download the versions you need.
The mongosh package to install depends on the system's OpenSSL version, so the package file name differs ( see the check after the package list ).
mongodb-database-tools-rhel80-x86_64-100.6.0.rpm
mongodb-mongosh-shared-openssl11-1.6.0.x86_64.rpm
mongodb-org-server-6.0.2-1.el8.x86_64.rpm
mongodb-org-mongos-6.0.2-1.el8.x86_64.rpm // mongos will ultimately run on the WAS server, but it is installed here as well because it is needed for testing and configuration during setup
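Since the mongosh package is split by OpenSSL version, check the system's OpenSSL version before choosing a file:

openssl version   # e.g. "OpenSSL 1.1.1k" -> use the openssl11 mongosh package listed above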
mongo-1 , mongo-2 , mongo-3 ( common to all )
Disable THP ( Transparent Huge Pages )
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
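These echo commands do not survive a reboot. A minimal sketch for making the setting persistent with a systemd unit; the unit name disable-thp.service is an assumption, not part of the original setup:

cat <<'EOF' > /etc/systemd/system/disable-thp.service
[Unit]
Description=Disable Transparent Huge Pages for MongoDB (hypothetical helper unit)
After=sysinit.target local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=basic.target
EOF
systemctl daemon-reload
systemctl enable --now disable-thp.service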
Modify Linux Settings
vi /etc/security/limits.conf
* soft memlock unlimited
* hard memlock unlimited
* hard nofile 65000
* soft nofile 65000
* soft nproc 65000
* hard nproc 65000
vi /etc/sysctl.conf
vm.dirty_ratio = 7
vm.dirty_background_ratio = 2
vm.swappiness = 1
vm.max_map_count = 262144
net.core.somaxconn = 4096
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.ip_local_port_range = 1024 65000
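Apply the kernel parameters without rebooting:

sysctl -p   # reloads /etc/sysctl.conf and applies the values immediately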
Add hostnames ( also required on the server that will run mongos )
vi /etc/hosts
172.16.1.24 Mongo-1
172.16.1.25 Mongo-2
172.16.1.26 Mongo-3
Upload the packages to each server ( /usr/local/src/ )
Create directories
mkdir -p /apps/mongodb/config
mkdir -p /apps/mongodb/log
mkdir -p /apps/mongodb/data/rs1
mkdir -p /apps/mongodb/data/rs2
mkdir -p /apps/mongodb/data/rs3
mkdir -p /apps/mongodb/data/rs4
mkdir -p /apps/mongodb/data/rs5
mkdir -p /apps/mongodb/data/rs6
mkdir -p /apps/mongodb/data/conf
mkdir -p /apps/mongodb/data/arbiter-rs1
mkdir -p /apps/mongodb/data/arbiter-rs2
mkdir -p /apps/mongodb/data/arbiter-rs3
mkdir -p /apps/mongodb/data/arbiter-rs4
mkdir -p /apps/mongodb/data/arbiter-rs5
mkdir -p /apps/mongodb/data/arbiter-rs6
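The same tree can also be created more compactly with shell brace expansion ( equivalent to the mkdir list above ):

mkdir -p /apps/mongodb/{config,log} /apps/mongodb/data/conf
mkdir -p /apps/mongodb/data/{rs,arbiter-rs}{1,2,3,4,5,6}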
Install the packages
cd /usr/local/src
dnf localinstall mongodb-database-tools-rhel80-x86_64-100.6.0.rpm
dnf localinstall mongodb-mongosh-shared-openssl11-1.6.0.x86_64.rpm
dnf localinstall mongodb-org-server-6.0.2-1.el8.x86_64.rpm
dnf localinstall mongodb-org-mongos-6.0.2-1.el8.x86_64.rpm
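A quick sanity check that the binaries were installed:

mongod --version
mongos --version
mongosh --version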
Config server configuration
vi /apps/mongodb/config/config.conf
sharding:
  clusterRole: configsvr
replication:
  replSetName: "configRepl"
net:
  port: 40001
  bindIp: 0.0.0.0
storage:
  dbPath: /apps/mongodb/data/conf
  journal:
    enabled: true
  engine: "wiredTiger"
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongod-config.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
#  authorization: 'enabled'
vi /apps/mongodb/config/shard-rs1.conf

sharding:
  clusterRole: shardsvr
net:
  port: 30001
  bindIp: 0.0.0.0
replication:
  replSetName: "rs1"
storage:
  dbPath: /apps/mongodb/data/rs1
  journal:
    enabled: true
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 32
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongod-shard-rs1.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
#  authorization: 'enabled'
vi /apps/mongodb/config/shard-rs2.conf

sharding:
  clusterRole: shardsvr
net:
  port: 30002
  bindIp: 0.0.0.0
replication:
  replSetName: "rs2"
storage:
  dbPath: /apps/mongodb/data/rs2
  journal:
    enabled: true
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 32
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongod-shard-rs2.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
#  authorization: 'enabled'
vi /apps/mongodb/config/shard-rs3.conf

sharding:
  clusterRole: shardsvr
net:
  port: 30003
  bindIp: 0.0.0.0
replication:
  replSetName: "rs3"
storage:
  dbPath: /apps/mongodb/data/rs3
  journal:
    enabled: true
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 32
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongod-shard-rs3.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
#  authorization: 'enabled'
vi /apps/mongodb/config/shard-rs4.conf

sharding:
  clusterRole: shardsvr
net:
  port: 30004
  bindIp: 0.0.0.0
replication:
  replSetName: "rs4"
storage:
  dbPath: /apps/mongodb/data/rs4
  journal:
    enabled: true
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 32
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongod-shard-rs4.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
#  authorization: 'enabled'
vi /apps/mongodb/config/shard-rs5.conf

sharding:
  clusterRole: shardsvr
net:
  port: 30005
  bindIp: 0.0.0.0
replication:
  replSetName: "rs5"
storage:
  dbPath: /apps/mongodb/data/rs5
  journal:
    enabled: true
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 32
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongod-shard-rs5.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
#  authorization: 'enabled'
vi /apps/mongodb/config/shard-rs6.conf

sharding:
  clusterRole: shardsvr
net:
  port: 30006
  bindIp: 0.0.0.0
replication:
  replSetName: "rs6"
storage:
  dbPath: /apps/mongodb/data/rs6
  journal:
    enabled: true
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 32
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongod-shard-rs6.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
#  authorization: 'enabled'
vi /apps/mongodb/config/mongos.conf
sharding:
  configDB: configRepl/172.16.1.24:40001,172.16.1.25:40001,172.16.1.26:40001
net:
  port: 20001
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/mongos.log
  logAppend: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
mongo-1 Server
vi /apps/mongodb/config/arbiter-rs4.conf
#arbiter-rs4
replication:
  replSetName: "rs4"
net:
  port: 30004
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/arbiter-rs4.log
  logAppend: true
storage:
  dbPath: "/apps/mongodb/data/arbiter-rs4"
  directoryPerDB: true
  journal:
    enabled: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
vi /apps/mongodb/config/arbiter-rs6.conf
#arbiter-rs6
replication:
  replSetName: "rs6"
net:
  port: 30006
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/arbiter-rs6.log
  logAppend: true
storage:
  dbPath: "/apps/mongodb/data/arbiter-rs6"
  directoryPerDB: true
  journal:
    enabled: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
This server runs rs4 and rs6 only as arbiters, so remove their unused shard configs and data directories:
rm -rf /apps/mongodb/config/shard-rs4.conf
rm -rf /apps/mongodb/config/shard-rs6.conf
rm -rf /apps/mongodb/data/rs4
rm -rf /apps/mongodb/data/rs6
mongod -f /apps/mongodb/config/config.conf
mongod -f /apps/mongodb/config/arbiter-rs4.conf
mongod -f /apps/mongodb/config/arbiter-rs6.conf
mongod -f /apps/mongodb/config/shard-rs1.conf
mongod -f /apps/mongodb/config/shard-rs2.conf
mongod -f /apps/mongodb/config/shard-rs3.conf
mongod -f /apps/mongodb/config/shard-rs5.conf
mongo-2 Server
vi /apps/mongodb/config/arbiter-rs2.conf
#arbiter-rs2
replication:
  replSetName: "rs2"
net:
  port: 30002
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/arbiter-rs2.log
  logAppend: true
storage:
  dbPath: "/apps/mongodb/data/arbiter-rs2"
  directoryPerDB: true
  journal:
    enabled: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
vi /apps/mongodb/config/arbiter-rs5.conf
#arbiter-rs5
replication:
  replSetName: "rs5"
net:
  port: 30005
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/arbiter-rs5.log
  logAppend: true
storage:
  dbPath: "/apps/mongodb/data/arbiter-rs5"
  directoryPerDB: true
  journal:
    enabled: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
This server runs rs2 and rs5 only as arbiters, so remove their unused shard configs and data directories:
rm -rf /apps/mongodb/config/shard-rs2.conf
rm -rf /apps/mongodb/config/shard-rs5.conf
rm -rf /apps/mongodb/data/rs2
rm -rf /apps/mongodb/data/rs5
mongod -f /apps/mongodb/config/config.conf
mongod -f /apps/mongodb/config/arbiter-rs2.conf
mongod -f /apps/mongodb/config/arbiter-rs5.conf
mongod -f /apps/mongodb/config/shard-rs1.conf
mongod -f /apps/mongodb/config/shard-rs3.conf
mongod -f /apps/mongodb/config/shard-rs4.conf
mongod -f /apps/mongodb/config/shard-rs6.conf
mongo-3 Server
vi /apps/mongodb/config/arbiter-rs1.conf
#arbiter-rs1
replication:
  replSetName: "rs1"
net:
  port: 30001
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/arbiter-rs1.log
  logAppend: true
storage:
  dbPath: "/apps/mongodb/data/arbiter-rs1"
  directoryPerDB: true
  journal:
    enabled: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
vi /apps/mongodb/config/arbiter-rs3.conf
#arbiter-rs3
replication:
  replSetName: "rs3"
net:
  port: 30003
  bindIp: 0.0.0.0
processManagement:
  fork: true
systemLog:
  destination: file
  path: /apps/mongodb/log/arbiter-rs3.log
  logAppend: true
storage:
  dbPath: "/apps/mongodb/data/arbiter-rs3"
  directoryPerDB: true
  journal:
    enabled: true
#security:
#  keyFile: /apps/mongodb/config/mongodb-keyfile
This server runs rs1 and rs3 only as arbiters, so remove their unused shard configs and data directories:
rm -rf /apps/mongodb/config/shard-rs1.conf
rm -rf /apps/mongodb/config/shard-rs3.conf
rm -rf /apps/mongodb/data/rs1
rm -rf /apps/mongodb/data/rs3
mongod -f /apps/mongodb/config/config.conf
mongod -f /apps/mongodb/config/arbiter-rs1.conf
mongod -f /apps/mongodb/config/arbiter-rs3.conf
mongod -f /apps/mongodb/config/shard-rs2.conf
mongod -f /apps/mongodb/config/shard-rs4.conf
mongod -f /apps/mongodb/config/shard-rs5.conf
mongod -f /apps/mongodb/config/shard-rs6.conf
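With every mongod started on all three servers, each server should now be listening on ports 30001-30006 plus 40001 ( config ). A quick check:

ss -lntp | grep mongod   # expect listeners on 30001-30006 and 40001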
Config Repl
mongosh --port 40001 // run on Mongo-1
use admin
rs.initiate()
rs.add("Mongo-2:40001");
rs.add("Mongo-3:40001");
rs.status() // verify: the stateStr fields should show one PRIMARY and two SECONDARY members
configRepl [direct: other] admin> rs.status()   // output abridged; optime/election detail omitted
{
  set: 'configRepl',
  date: ISODate("2022-11-09T07:54:25.549Z"),
  myState: 1,
  term: Long("1"),
  configsvr: true,
  heartbeatIntervalMillis: Long("2000"),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  members: [
    {
      _id: 0,
      name: 'Mongo-1:40001',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 165,
      electionDate: ISODate("2022-11-09T07:53:57.000Z"),
      self: true
    },
    {
      _id: 1,
      name: 'Mongo-2:40001',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 11,
      syncSourceHost: 'Mongo-1:40001'
    },
    {
      _id: 2,
      name: 'Mongo-3:40001',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 6,
      syncSourceHost: 'Mongo-2:40001'
    }
  ],
  ok: 1
}
Mongo-1
mongosh --port 30001
use admin
rs.initiate()
rs.add("Mongo-2:30001");
rs.addArb("Mongo-3:30001");
rs.status()
rs.conf()
cfg=rs.conf()
cfg.members[0].priority=2
rs.reconfig(cfg)
rs.conf() // check priority; a higher value is preferred when electing the primary
exit
mongosh --port 30002
use admin
rs.initiate()
rs.add("Mongo-3:30002");
rs.addArb("Mongo-2:30002");
rs.status()
rs.conf()
cfg=rs.conf()
cfg.members[0].priority=2
rs.reconfig(cfg)
rs.conf() // check priority; a higher value is preferred when electing the primary
exit
Mongo-2
mongosh --port 30003
use admin
rs.initiate()
rs.add("Mongo-1:30003");
rs.addArb("Mongo-3:30003");
rs.status()
rs.conf()
cfg=rs.conf()
cfg.members[0].priority=2
rs.reconfig(cfg)
rs.conf() // check priority; a higher value is preferred when electing the primary
exit
mongosh --port 30004
use admin
rs.initiate()
rs.add("Mongo-3:30004");
rs.addArb("Mongo-1:30004");
rs.status()
rs.conf()
cfg=rs.conf()
cfg.members[0].priority=2
rs.reconfig(cfg)
rs.conf() // check priority; a higher value is preferred when electing the primary
exit
Mongo-3
mongosh --port 30005
use admin
rs.initiate()
rs.add("Mongo-1:30005");
rs.addArb("Mongo-2:30005");
rs.status()
rs.conf()
cfg=rs.conf()
cfg.members[0].priority=2
rs.reconfig(cfg)
rs.conf() // check priority; a higher value is preferred when electing the primary
exit
mongosh --port 30006
use admin
rs.initiate()
rs.add("Mongo-2:30006");
rs.addArb("Mongo-1:30006");
rs.status()
rs.conf()
cfg=rs.conf()
cfg.members[0].priority=2
rs.reconfig(cfg)
rs.conf() // check priority; a higher value is preferred when electing the primary
exit
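The six sessions above all follow the same pattern, so they can also be scripted. A minimal sketch for rs1, run on Mongo-1; the 10-second sleep and the reliance on members[0] being the initiating node mirror the manual steps above, but treat this as an untested convenience, not part of the original procedure:

# Hypothetical non-interactive equivalent of the rs1 session above; run on Mongo-1.
mongosh --port 30001 --quiet --eval '
  rs.initiate();
  sleep(10000);                 // give the single-node election time to finish
  rs.add("Mongo-2:30001");
  rs.addArb("Mongo-3:30001");
  var cfg = rs.conf();
  cfg.members[0].priority = 2;  // members[0] is this node; prefer it as primary for failback
  rs.reconfig(cfg);
  printjson(rs.conf().members.map(function (m) { return { host: m.host, priority: m.priority }; }));
'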
Mongos Server
## An option that is mandatory when mongos fronts replica sets
It is needed for write consistency between the primary (P) and secondaries (S). With w: 1, a write is acknowledged as soon as it completes on the primary alone.
With w: 2, a write is acknowledged only after it completes on two replica set members.
When readPreference is primary ( the default ), setting this to 1 appears to be sufficient.
If reads and writes are split across members, an unsuitable value can cause errors where data is requested before its write has finished replicating.
mongosh --port 20001
Add the setting:

db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: 1 }
})

Verify the setting:

db.adminCommand( { getDefaultRWConcern: 1 } )
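If reads are later routed to secondaries, individual operations can also request a stronger write concern than this cluster-wide default. An illustrative example; the testdb database and orders collection are hypothetical:

mongosh --port 20001 --quiet --eval '
  // w: 2 waits for the primary plus one secondary before acknowledging
  db.getSiblingDB("testdb").orders.insertOne(
    { sku: "a1" },
    { writeConcern: { w: 2, wtimeout: 5000 } }
  );
'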
sh.addShard("rs1/Mongo-1:30001");
sh.addShard("rs2/Mongo-1:30002");
sh.addShard("rs3/Mongo-2:30003");
sh.addShard("rs4/Mongo-2:30004");
sh.addShard("rs5/Mongo-3:30005");
sh.addShard("rs6/Mongo-3:30006");
Once the shards have been added, the cluster status can be checked.
sh.status()
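Note that nothing is sharded automatically; each collection must be sharded explicitly through mongos. A minimal hypothetical example ( testdb.events is a placeholder namespace ):

mongosh --port 20001 --quiet --eval '
  sh.enableSharding("testdb");                             // mark the database as shardable
  sh.shardCollection("testdb.events", { _id: "hashed" });  // a hashed key spreads writes across all six shards
'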
ETC
WARNING: Arbiters are not supported in quarterly binary versions
-> This warning appears when an arbiter is started on a node that is already running replica set members. In the current layout, no arbiter is colocated with the data-bearing members of its own replica set, so the warning is believed to be harmless here.
References
https://www.mongodb.com/docs/manual/core/replica-set-arbiter/
Monitoring
mongosh --port 30001 ~ 30006 ( connect to each shard port in turn )
Enable
db.enableFreeMonitoring()
Disable
db.disableFreeMonitoring()
When monitoring is enabled, a URL is printed; opening that link shows the metrics for each node.