LINUX.ORG.RU

Posts by SimpleEnough

 

NFS is slow on some hosts

Forum — Admin

Hi all! I have a Cassandra cluster of 6 VMs, and the same NFS share is mounted on each of them. Over the last few weeks backups from some of the nodes have started failing, so I began debugging and ended up writing zeroes to the share with dd. Here is what came out of it:

VM 1

time dd if=/dev/zero of=foel bs=16k count=128k
131072+0 records in
131072+0 records out
2147483648 bytes (2.1 GB) copied, 4.23151 s, 507 MB/s

real 0m4.239s
user 0m0.008s
sys 0m1.593s

VM 2
time dd if=/dev/zero of=foel bs=16k count=128k
131072+0 records in
131072+0 records out
2147483648 bytes (2.1 GB) copied, 4.83356 s, 444 MB/s

real 0m4.839s
user 0m0.005s
sys 0m1.544s

VM 3
time dd if=/dev/zero of=foel bs=16k count=128k
131072+0 records in
131072+0 records out
2147483648 bytes (2.1 GB) copied, 56.0399 s, 38.3 MB/s

real 0m56.054s
user 0m0.138s
sys 0m1.741s

VM 4
time dd if=/dev/zero of=foel bs=16k count=128k
131072+0 records in
131072+0 records out
2147483648 bytes (2.1 GB) copied, 6.88073 s, 312 MB/s

real 0m6.887s
user 0m0.125s
sys 0m1.862s

VM 5
time dd if=/dev/zero of=foel bs=16k count=128k
131072+0 records in
131072+0 records out
2147483648 bytes (2.1 GB) copied, 48.762 s, 44.0 MB/s

real 0m48.771s
user 0m0.114s
sys 0m1.740s

VM 6
time dd if=/dev/zero of=foel bs=16k count=128k
131072+0 records in
131072+0 records out
2147483648 bytes (2.1 GB) copied, 5.74801 s, 374 MB/s

real 0m5.754s
user 0m0.104s
sys 0m1.744s

As you can see, writes to the share are very slow on VMs 3 and 5.

The mount options are identical on all the VMs:

tac /etc/fstab
172.24.5.229:/kosandra_bekap /opt/cassandra-backup nfs rw,noatime 0 0

I can't ping the share, ICMP is blocked. What could be the catch?
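A minimal set of follow-up checks to run on a slow VM and compare with a fast one, assuming nfs-utils is installed; since ICMP is blocked, the TCP session and the per-mount NFS statistics are what can be compared instead of ping:

grep cassandra-backup /proc/mounts     # compare the negotiated vers, proto, rsize/wsize on each VM
nfsstat -c                             # client-side NFS operation counters, look for retransmissions
nfsiostat 5 3 /opt/cassandra-backup    # per-mount RTT and queue times, 3 samples of 5 seconds each
ss -tni dst 172.24.5.229               # TCP retransmits and window sizes towards the NFS server

It can also be worth rerunning the dd test with conv=fsync (or oflag=direct) so that page-cache effects do not skew the comparison between the VMs.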


SimpleEnough
()

Help with a Bash script

Forum — General

Hi all! I'm writing a backup script for Elasticsearch, and after the backup finishes I want to check whether the data (the indices) was really backed up. Every day certain indices are created in Elasticsearch that I want to back up, and when the backup script runs I put those indices into a variable (here is a snippet from my script):

_DATE=$(date '+%Y.%m.%d')
_UZBEKISTAN_INDICES=$(curl -s -X GET "localhost:9200/_cat/indices?v" | grep 'tashkent-'$_DATE'\|fergana-'$_DATE'' | awk '{ print $3 }')

and if we do echo $_UZBEKISTAN_INDICES in the script, the output is:

tashkent-2021.09.20 fergana-2021.09.20

To verify the backup, there is the following output with the list of indices that were backed up:

curl -s -XGET "localhost:9200/_snapshot/uzbekistan-2021.09.20/uzbekistan-2021.09.20/?pretty"
{
  "snapshots" : [
    {
      "snapshot" : "uzbekistan-2021.09.20",
      "uuid" : "CLDZ6bLpS06s37tUKNwxpg",
      "version_id" : 7040099,
      "version" : "7.4.0",
      "indices" : [
        "tashkent-2021.09.20",
        "fergana-2021.09.20"
      ],
      "include_global_state" : false,
      "metadata" : {
        "taken_by" : "kimchy",
        "taken_because" : "backup before upgrading"
      },
      "state" : "SUCCESS",
      "start_time" : "2021-09-20T07:00:06.272Z",
      "start_time_in_millis" : 1632121206272,
      "end_time" : "2021-09-20T07:00:06.472Z",
      "end_time_in_millis" : 1632121206472,
      "duration_in_millis" : 200,
      "failures" : [ ],
      "shards" : {
        "total" : 2,
        "failed" : 0,
        "successful" : 2
      }
    }
  ]
}

I think the right approach is to check which indices actually made it into the backup and compare them with the contents of the $_UZBEKISTAN_INDICES variable; the part of the output to check against is this:

"indices" : [
       "tashkent-2021.09.20",
       "fergana-2021.09.20"
     ],

Experts, please advise: how do I organize this check properly?
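A minimal sketch of such a check, assuming jq is installed and that both the repository and the snapshot are named uzbekistan-$_DATE, as in the output above: sort both lists of index names and diff them.

#!/bin/bash
# Sketch only: compare the indices we expected to back up with the indices
# reported by the snapshot API.
_DATE=$(date '+%Y.%m.%d')
_UZBEKISTAN_INDICES=$(curl -s -X GET "localhost:9200/_cat/indices?v" \
    | grep "tashkent-$_DATE\|fergana-$_DATE" | awk '{ print $3 }' | sort)

# indices that actually ended up in the snapshot (jq is assumed to be installed)
_BACKED_UP=$(curl -s -X GET "localhost:9200/_snapshot/uzbekistan-$_DATE/uzbekistan-$_DATE" \
    | jq -r '.snapshots[0].indices[]' | sort)

# diff exits 0 only when the two sorted lists are identical
if diff <(echo "$_UZBEKISTAN_INDICES") <(echo "$_BACKED_UP") > /dev/null; then
    echo "backup OK: all expected indices are in the snapshot"
else
    echo "backup MISMATCH between expected and snapshot indices:"
    diff <(echo "$_UZBEKISTAN_INDICES") <(echo "$_BACKED_UP")
fi

Sorting both lists first makes the comparison independent of the order in which the indices are reported.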

 

SimpleEnough
()

A check with several conditions

Forum — General

Good day, dear forum members! Please help me with the following situation.

I have the following script that checks one snapshot repository in Elasticsearch to see whether the latest backup succeeded or failed:

#!/bin/bash
project=emirates   # three more projects have since been added

_BACKUP_STATUS=$(curl -s -X GET "10.0.45.101:9200/_cat/snapshots/$project?v&s=id&pretty" | tail -1 | awk '{ print $2 }')

if [[ $_BACKUP_STATUS = "SUCCESS" ]]; then
    echo elasticsearch_backup_status{project='"$project"', env='"prod"', zone='"dubai"'} 0 > /var/lib/node_exporter/textfile_collector/elasticsearch_backup_status.prom
else
    echo elasticsearch_backup_status{project='"$project"', env='"prod"', zone='"dubai"'} 1 > /var/lib/node_exporter/textfile_collector/elasticsearch_backup_status.prom
fi

The thing is, I now have three more projects: etihad, flydubai and airarabia. How should I check each of these projects in that case?
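A minimal sketch of one way to do it, keeping the endpoint, labels and textfile path from the script above (the list of project names is an assumption): loop over all four projects and write one metric line per project into the same .prom file.

#!/bin/bash
# Sketch only: one elasticsearch_backup_status line per project in a single file
PROM_FILE=/var/lib/node_exporter/textfile_collector/elasticsearch_backup_status.prom
: > "$PROM_FILE"    # truncate once, then append one line per project

for project in emirates etihad flydubai airarabia; do
    status=$(curl -s -X GET "10.0.45.101:9200/_cat/snapshots/$project?v&s=id" \
        | tail -1 | awk '{ print $2 }')
    [[ $status == "SUCCESS" ]] && value=0 || value=1
    echo "elasticsearch_backup_status{project=\"$project\", env=\"prod\", zone=\"dubai\"} $value" >> "$PROM_FILE"
done

For stricter atomicity the loop could write to a temporary file in the same directory and mv it over the .prom file at the end, so node_exporter never scrapes a half-written file.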


SimpleEnough
()

777 permissions on all new files

Forum — General

Good day, dear forum members! I have a directory on NFS storage, and certain files and directories are created in it on a daily basis. How can I make every file and directory created inside that directory get 777 permissions? Thanks in advance for the help!
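A minimal sketch, assuming the directory is /srv/share (a placeholder path) and that the NFS server honours POSIX ACLs. A default ACL lets new directories come out as 777, but new files will usually end up 666 at most, because applications create them without the execute bit; if files really must be 777, something has to chmod them after creation, for example an inotify watch (which on an NFS client only sees files created by that same client):

# default ACL: everything created below inherits rwx for user, group and other
setfacl -R -d -m u::rwx,g::rwx,o::rwx /srv/share
chmod 777 /srv/share

# optional fix-up of the execute bit on new files, using inotify-tools
inotifywait -m -r -e create -e moved_to --format '%w%f' /srv/share |
while read -r path; do
    chmod 777 "$path"
done

Whether ACLs propagate over NFS depends on the protocol version and the server export, so this is worth testing on the share first.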


SimpleEnough
()

Formatting the output of an Ansible role

Forum — Admin

Hi all! I have an Ansible role that checks the latest database backups (Elasticsearch, Cassandra). Starting from this output:


ansible-playbook other/check_backup_status.yml -i hosts/mvd/prod/hosts.yml

PLAY [check backup] ***************************************************************************************************************************************************************************************************************

TASK [check_backup : check backups for cluster elasticsearch-bishkek] ***************************************************************************************************************************************************************************
skipping: [elasticsearch-bishkek-02]
skipping: [elasticsearch-bishkek-03]
skipping: [elasticsearch-kabul1]
skipping: [elasticsearch-kabul2]
skipping: [elasticsearch-kabul3]
skipping: [elasticsearch-kabul4]
skipping: [elasticsearch-kabul5]
skipping: [elasticsearch-kabul6]
skipping: [cassandra-bishkek01]
skipping: [cassandra-bishkek02]
skipping: [cassandra-bishkek03]
[WARNING]: Consider using the get_url or uri module rather than running 'curl'.  If you need to use command because get_url or uri is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False'
in ansible.cfg to get rid of this message.
changed: [elasticsearch-bishkek-01]

TASK [check_backup : Show last 10 backups for elasticsearch bishkek cluster] ***********************************************************************************************************************************************************
ok: [elasticsearch-bishkek-01] => {
    "msg": [
        "snapshot-2021-01-02-14:20:01 SUCCESS 1609575601  08:20:01   1609575655 08:20:55    54.2s     106               364             0          364",
        "snapshot-2021-01-02-20:20:01 SUCCESS 1609597201  14:20:01   1609597260 14:21:00    58.7s     106               364             0          364",
        "snapshot-2021-01-03-09:20:01 SUCCESS 1609644002  03:20:02   1609644092 03:21:32     1.4m     106               364             0          364",
        "snapshot-2021-01-03-14:20:01 SUCCESS 1609662001  08:20:01   1609662173 08:22:53     2.8m     106               364             0          364",
        "snapshot-2021-01-03-20:20:01 SUCCESS 1609683602  14:20:02   1609683671 14:21:11     1.1m     106               364             0          364",
        "snapshot-2021-01-04-09:20:01 SUCCESS 1609730401  03:20:01   1609730467 03:21:07       1m     106               364             0          364",
        "snapshot-2021-01-04-14:20:01 SUCCESS 1609748402  08:20:02   1609748460 08:21:00    57.8s     106               364             0          364",
        "snapshot-2021-01-04-20:20:01 SUCCESS 1609770001  14:20:01   1609770064 14:21:04       1m     106               364             0          364",
        "snapshot-2021-01-05-09:20:01 SUCCESS 1609816801  03:20:01   1609816856 03:20:56    54.5s     106               364             0          364",
        "snapshot-2021-01-05-14:20:01 SUCCESS 1609834802  08:20:02   1609835018 08:23:38     3.5m     107               365             0          365"
    ]
}
skipping: [elasticsearch-bishkek-02]
skipping: [elasticsearch-bishkek-03]
skipping: [elasticsearch-kabul1]
skipping: [elasticsearch-kabul2]
skipping: [elasticsearch-kabul3]
skipping: [elasticsearch-kabul4]
skipping: [elasticsearch-kabul5]
skipping: [elasticsearch-kabul6]
skipping: [cassandra-bishkek01]
skipping: [cassandra-bishkek02]
skipping: [cassandra-bishkek03]

TASK [check_backup : check backups for elasticsearch kabul cluster] ******************************************************************************************************************************************************************************
skipping: [elasticsearch-bishkek-01]
skipping: [elasticsearch-bishkek-02]
skipping: [elasticsearch-bishkek-03]
skipping: [elasticsearch-kabul2]
skipping: [elasticsearch-kabul3]
skipping: [elasticsearch-kabul4]
skipping: [elasticsearch-kabul5]
skipping: [elasticsearch-kabul6]
skipping: [cassandra-bishkek01]
skipping: [cassandra-bishkek02]
skipping: [cassandra-bishkek03]
changed: [elasticsearch-kabul1]


TASK [check_backup : Show last 10 backups for elasticsearch-kabul] **************************************************************************************************************************************************************************
skipping: [elasticsearch-bishkek-01]
skipping: [elasticsearch-bishkek-02]
skipping: [elasticsearch-bishkek-03]
ok: [elasticsearch-kabul1] => {
    "msg": [
        "snapshot-2020-12-27-06:00:01 SUCCESS 1609027201  00:00:01   1609030938 01:02:18       1h      47               131             0          131",
        "snapshot-2020-12-28-06:00:01 SUCCESS 1609113602  00:00:02   1609114922 00:22:02    21.9m      50               140             0          140",
        "snapshot-2020-12-29-06:00:01 SUCCESS 1609200002  00:00:02   1609201779 00:29:39    29.6m      52               146             0          146",
        "snapshot-2020-12-30-06:00:01 SUCCESS 1609286402  00:00:02   1609290726 01:12:06     1.2h      30                76             0           76",
        "snapshot-2020-12-31-06:00:01 SUCCESS 1609372802  00:00:02   1609375122 00:38:42    38.6m      31                79             0           79",
        "snapshot-2021-01-01-06:00:01 SUCCESS 1609459202  00:00:02   1609461236 00:33:56    33.8m      33                85             0           85",
        "snapshot-2021-01-02-06:00:01 SUCCESS 1609545602  00:00:02   1609546580 00:16:20    16.3m      36                94             0           94",
        "snapshot-2021-01-03-06:00:01 SUCCESS 1609632003  00:00:03   1609633134 00:18:54    18.8m      37                97             0           97",
        "snapshot-2021-01-04-06:00:01 SUCCESS 1609718402  00:00:02   1609719266 00:14:26    14.4m      40               106             0          106",
        "snapshot-2021-01-05-06:00:01 SUCCESS 1609804802  00:00:02   1609805812 00:16:52    16.8m      41               109             0          109"
    ]
}
skipping: [elasticsearch-kabul2]
skipping: [elasticsearch-kabul3]
skipping: [elasticsearch-kabul4]
skipping: [elasticsearch-kabul5]
skipping: [elasticsearch-kabul6]
skipping: [cassandra-bishkek01]
skipping: [cassandra-bishkek02]
skipping: [cassandra-bishkek03]


TASK [check_backup : check backups for Cassandra cluster] *************************************************************************************************************************************************************************
skipping: [elasticsearch-bishkek-01]
skipping: [elasticsearch-bishkek-02]
skipping: [elasticsearch-bishkek-03]
skipping: [elasticsearch-kabul1]
skipping: [elasticsearch-kabul2]
skipping: [elasticsearch-kabul3]
skipping: [elasticsearch-kabul4]
skipping: [elasticsearch-kabul5]
skipping: [elasticsearch-kabul6]
skipping: [cassandra-bishkek02]
skipping: [cassandra-bishkek03]
changed: [cassandra-bishkek01]

TASK [check_backup : show last backups for Cassandra cluster] *********************************************************************************************************************************************************
skipping: [elasticsearch-bishkek-01]
skipping: [elasticsearch-bishkek-02]
skipping: [elasticsearch-bishkek-03]
skipping: [elasticsearch-kabul1]
skipping: [elasticsearch-kabul2]
skipping: [elasticsearch-kabul3]
skipping: [elasticsearch-kabul4]
skipping: [elasticsearch-kabul5]
skipping: [elasticsearch-kabul6]
ok: [cassandra-bishkek01] => {
    "msg": [
        "2021-01-03__09:20:01 (started: 2021-01-03 09:20:02, finished: 2021-01-03 09:21:16)",
        "2021-01-03__14:20:01 (started: 2021-01-03 14:20:02, finished: 2021-01-03 14:20:58)",
        "2021-01-03__20:20:01 (started: 2021-01-03 20:20:02, finished: 2021-01-03 20:21:01)",
        "2021-01-04__09:20:01 (started: 2021-01-04 09:20:02, finished: 2021-01-04 09:21:36)",
        "2021-01-04__14:20:01 (started: 2021-01-04 14:20:02, finished: 2021-01-04 14:20:59)",
        "2021-01-04__20:20:01 (started: 2021-01-04 20:20:02, finished: 2021-01-04 20:21:02)",
        "2021-01-05__09:20:01 (started: 2021-01-05 09:20:02, finished: 2021-01-05 09:21:25)",
        "2021-01-05__14:20:01 (started: 2021-01-05 14:20:02, finished: 2021-01-05 14:21:13)",
        "",
        "Incomplete backups found. You can run \"medusa status --backup-name <name>\" for more details"
    ]
}
skipping: [cassandra-bishkek02]
skipping: [cassandra-bishkek03]

PLAY RECAP ************************************************************************************************************************************************************************************************************************
elasticsearch-kabul1                     : ok=2    changed=1    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0
elasticsearch-kabul2                     : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
elasticsearch-kabul3                     : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
elasticsearch-kabul4                     : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
elasticsearch-kabul5                     : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
elasticsearch-kabul6                     : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
elasticsearch-bishkek-01        : ok=2    changed=1    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0
elasticsearch-bishkek-02        : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
elasticsearch-bishkek-03        : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
cassandra-bishkek01           : ok=2    changed=1    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0
cassandra-bishkek02           : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0
cassandra-bishkek03           : ok=0    changed=0    unreachable=0    failed=0    skipped=8    rescued=0    ignored=0

I'd like to get something more readable, like:

show last backups for Cassandra cluster
ok: [cassandra-bishkek01] => {
    "msg": [
        "2021-01-03__09:20:01 (started: 2021-01-03 09:20:02, finished: 2021-01-03 09:21:16)",
        "2021-01-03__14:20:01 (started: 2021-01-03 14:20:02, finished: 2021-01-03 14:20:58)",
        "2021-01-03__20:20:01 (started: 2021-01-03 20:20:02, finished: 2021-01-03 20:21:01)",
        "2021-01-04__09:20:01 (started: 2021-01-04 09:20:02, finished: 2021-01-04 09:21:36)",
        "2021-01-04__14:20:01 (started: 2021-01-04 14:20:02, finished: 2021-01-04 14:20:59)",
        "2021-01-04__20:20:01 (started: 2021-01-04 20:20:02, finished: 2021-01-04 20:21:02)",
        "2021-01-05__09:20:01 (started: 2021-01-05 09:20:02, finished: 2021-01-05 09:21:25)",
        "2021-01-05__14:20:01 (started: 2021-01-05 14:20:02, finished: 2021-01-05 14:21:13)",
        "",
        "Incomplete backups found. You can run \"medusa status --backup-name <name>\" for more details"

and the same for every task whose name contains "show last backups".

Can you please help me cut out all these "skipping" lines?
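With the default stdout callback this can be done without touching the role at all: tell Ansible not to display skipped hosts. Either for a single run via the environment:

ANSIBLE_DISPLAY_SKIPPED_HOSTS=false ansible-playbook other/check_backup_status.yml -i hosts/mvd/prod/hosts.yml

or permanently in ansible.cfg:

[defaults]
display_skipped_hosts = False

The ok:/changed: lines for the hosts that actually ran stay in the output, so the "Show last ... backups" messages are kept.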

 

SimpleEnough
()
