LINUX.ORG.RU

Messages from antonio-an

 

Cilium pod Init:0/6, Init:Error — not working

Forum — Admin

Cilium doesn't work in Talos. The pod state goes:

Init:0/6

Init:Error

Enabling tolerations doesn't help.

I'm installing Talos:

talosctl gen secrets

talosctl gen config --with-secrets secrets.yaml talos-vbox https://192.168.1.100:6443

talosctl machineconfig patch controlplane.yaml --patch patch.yaml -o controlplane_patched.yaml

talosctl apply-config --insecure -n 192.168.1.100 --file controlplane_patched.yaml

talosctl bootstrap --nodes 192.168.1.100 --endpoints 192.168.1.100 --talosconfig=talosconfig

talosctl kubeconfig -n 192.168.1.100 --endpoints 192.168.1.100 --talosconfig=talosconfig

In patch.yaml we disable flannel and kube-proxy:

cluster:
  network:
    cni:
      name: none
  proxy:
    disabled: true

I install Cilium with Helm:

helm install cilium cilium/cilium --version 1.18.0 -n kube-system

helm upgrade cilium cilium/cilium --version 1.18.0 \
  --namespace kube-system \
  --set ipam.mode=kubernetes \
  --set kubeProxyReplacement=true \
  --set operator.replicas=1 \
  --set hubble.enabled=true \
  --set hubble.relay.enabled=true \
  --set hubble.ui.enabled=true \
  --set l2podAnnouncements.interface="enp0s3" \
  --set devices=enp0s3
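With kube-proxy disabled there is nothing to route the in-cluster Service IP 10.96.0.1 until Cilium itself is running — which matches the i/o timeout in the log below. Cilium's kube-proxy-free docs have the agent talk to the API server directly via k8sServiceHost/k8sServicePort; on Talos the usual target is the KubePrism endpoint. A hedged sketch, assuming KubePrism is enabled on localhost:7445 (substitute the real API server address otherwise):

helm upgrade cilium cilium/cilium --version 1.18.0 \
  --namespace kube-system \
  --reuse-values \
  --set k8sServiceHost=localhost \
  --set k8sServicePort=7445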

Checking the pod status:

kubectl get pod -A -o wide

NAMESPACE     NAME                                         READY   STATUS                  RESTARTS        AGE
kube-system   cilium-envoy-6n6pc                           1/1     Running                 0               29m
kube-system   cilium-operator-85c86d7fb9-rmft5             0/1     Pending                 0               29m
kube-system   cilium-operator-85c86d7fb9-t7xbs             0/1     Pending                 0               8m9s
kube-system   cilium-w96nb                                 0/1     Init:CrashLoopBackOff   7 (9m58s ago)   29m
kube-system   coredns-7859998f6-chfpr                      0/1     Pending                 0               77m
kube-system   coredns-7859998f6-f55jm                      0/1     Pending                 0               77m
kube-system   kube-apiserver-node01.localdomain            1/1     Terminated              0               33m
kube-system   kube-controller-manager-node01.localdomain   1/1     Terminated              2 (33m ago)     33m
kube-system   kube-scheduler-node01.localdomain            1/1     Terminated              2 (33m ago)     33m

What's going on with this poor pod:

kubectl describe pods/cilium-w96nb -n kube-system

Name:                 cilium-w96nb
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      cilium
Node:                 node01.localdomain/192.168.1.100
Start Time:           Fri, 06 Feb 2026 00:09:39 +0300
Labels:               app.kubernetes.io/name=cilium-agent
                      app.kubernetes.io/part-of=cilium
                      controller-revision-hash=7b7c49857d
                      k8s-app=cilium
                      pod-template-generation=1
Annotations:          kubectl.kubernetes.io/default-container: cilium-agent
Status:               Pending
SeccompProfile:       Unconfined
IP:                   192.168.1.100
IPs:
  IP:           192.168.1.100
Controlled By:  DaemonSet/cilium
Init Containers:
  config:
    Container ID:  containerd://a4e7bcc4d71a2f98e9b50a2969c5c80e43e53d35b0f51c2e8816c80fd2822e0b
    Image:         quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Image ID:      quay.io/cilium/cilium@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-dbg
      build-config
    State:       Running
      Started:   Fri, 06 Feb 2026 00:10:46 +0300
    Last State:  Terminated
      Reason:    Error
      Message:   time=2026-02-05T21:09:40.784802739Z level=info msg=Running subsys=cilium-dbg
time=2026-02-05T21:09:40.786691799Z level=info msg="Starting hive" subsys=cilium-dbg
time=2026-02-05T21:09:40.786901748Z level=info msg="Establishing connection to apiserver" subsys=cilium-dbg module=k8s-client ipAddr=https://10.96.0.1:443
time=2026-02-05T21:10:15.816825365Z level=info msg="Establishing connection to apiserver" subsys=cilium-dbg module=k8s-client ipAddr=https://10.96.0.1:443
time=2026-02-05T21:10:45.842985179Z level=error msg="Unable to contact k8s api-server" subsys=cilium-dbg module=k8s-client ipAddr=https://10.96.0.1:443 error="Get \"https://10.96.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.96.0.1:443: i/o timeout"
time=2026-02-05T21:10:45.843082548Z level=error msg="Start hook failed" subsys=cilium-dbg function="client.(*compositeClientset).onStart (k8s-client)" error="Get \"https://10.96.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.96.0.1:443: i/o timeout"
time=2026-02-05T21:10:45.843105244Z level=error msg="Failed to start hive" subsys=cilium-dbg error="Get \"https://10.96.0.1:443/api/v1/namespaces/kube-system\": dial tcp 10.96.0.1:443: i/o timeout" duration=1m5.056323338s
time=2026-02-05T21:10:45.843150241Z level=info msg="Stopping hive" subsys=cilium-dbg
time=2026-02-05T21:10:45.843208798Z level=info msg="Stopped hive" subsys=cilium-dbg duration=47.542µs
Error: Build config failed: failed to start: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system": dial tcp 10.96.0.1:443: i/o timeout


      Exit Code:    1
      Started:      Fri, 06 Feb 2026 00:09:40 +0300
      Finished:     Fri, 06 Feb 2026 00:10:45 +0300
    Ready:          False
    Restart Count:  1
    Environment:
      K8S_NODE_NAME:          (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /tmp from tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqjr8 (ro)
  mount-cgroup:
    Container ID:
    Image:         quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-mount /hostbin/cilium-mount;
      nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
      rm /hostbin/cilium-mount

    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      CGROUP_ROOT:  /run/cilium/cgroupv2
      BIN_PATH:     /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqjr8 (ro)
  apply-sysctl-overwrites:
    Container ID:
    Image:         quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -ec
      cp /usr/bin/cilium-sysctlfix /hostbin/cilium-sysctlfix;
      nsenter --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-sysctlfix";
      rm /hostbin/cilium-sysctlfix

    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      BIN_PATH:  /opt/cni/bin
    Mounts:
      /hostbin from cni-path (rw)
      /hostproc from hostproc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqjr8 (ro)
  mount-bpf-fs:
    Container ID:
    Image:         quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/bash
      -c
      --
    Args:
      mount | grep "/sys/fs/bpf type bpf" || mount -t bpf bpf /sys/fs/bpf
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqjr8 (ro)
  clean-cilium-state:
    Container ID:
    Image:         quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /init-container.sh
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Environment:
      CILIUM_ALL_STATE:           <set to the key 'clean-cilium-state' of config map 'cilium-config'>         Optional: true
      CILIUM_BPF_STATE:           <set to the key 'clean-cilium-bpf-state' of config map 'cilium-config'>     Optional: true
      WRITE_CNI_CONF_WHEN_READY:  <set to the key 'write-cni-conf-when-ready' of config map 'cilium-config'>  Optional: true
    Mounts:
      /run/cilium/cgroupv2 from cilium-cgroup (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /var/run/cilium from cilium-run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqjr8 (ro)
  install-cni-binaries:
    Container ID:
    Image:         quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      /install-plugin.sh
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     10Mi
    Environment:  <none>
    Mounts:
      /host/opt/cni/bin from cni-path (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqjr8 (ro)
Containers:
  cilium-agent:
    Container ID:
    Image:         quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Command:
      cilium-agent
    Args:
      --config-dir=/tmp/cilium/config-map
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=10
    Readiness:      http-get http://127.0.0.1:9879/healthz delay=0s timeout=5s period=30s #success=1 #failure=3
    Startup:        http-get http://127.0.0.1:9879/healthz delay=5s timeout=1s period=2s #success=1 #failure=300
    Environment:
      K8S_NODE_NAME:                  (v1:spec.nodeName)
      CILIUM_K8S_NAMESPACE:          kube-system (v1:metadata.namespace)
      CILIUM_CLUSTERMESH_CONFIG:     /var/lib/cilium/clustermesh/
      GOMEMLIMIT:                    node allocatable (limits.memory)
      KUBE_CLIENT_BACKOFF_BASE:      1
      KUBE_CLIENT_BACKOFF_DURATION:  120
    Mounts:
      /host/etc/cni/net.d from etc-cni-netd (rw)
      /host/proc/sys/kernel from host-proc-sys-kernel (rw)
      /host/proc/sys/net from host-proc-sys-net (rw)
      /lib/modules from lib-modules (ro)
      /run/xtables.lock from xtables-lock (rw)
      /sys/fs/bpf from bpf-maps (rw)
      /tmp from tmp (rw)
      /var/lib/cilium/clustermesh from clustermesh-secrets (ro)
      /var/lib/cilium/tls/hubble from hubble-tls (ro)
      /var/run/cilium from cilium-run (rw)
      /var/run/cilium/envoy/sockets from envoy-sockets (rw)
      /var/run/cilium/netns from cilium-netns (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pqjr8 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 False
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  cilium-run:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium
    HostPathType:  DirectoryOrCreate
  cilium-netns:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/netns
    HostPathType:  DirectoryOrCreate
  bpf-maps:
    Type:          HostPath (bare host directory volume)
    Path:          /sys/fs/bpf
    HostPathType:  DirectoryOrCreate
  hostproc:
    Type:          HostPath (bare host directory volume)
    Path:          /proc
    HostPathType:  Directory
  cilium-cgroup:
    Type:          HostPath (bare host directory volume)
    Path:          /run/cilium/cgroupv2
    HostPathType:  DirectoryOrCreate
  cni-path:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  DirectoryOrCreate
  etc-cni-netd:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  DirectoryOrCreate
  lib-modules:
    Type:          HostPath (bare host directory volume)
    Path:          /lib/modules
    HostPathType:
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  envoy-sockets:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/cilium/envoy/sockets
    HostPathType:  DirectoryOrCreate
  clustermesh-secrets:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  cilium-clustermesh
    Optional:    true
    SecretName:  clustermesh-apiserver-remote-cert
    Optional:    true
    SecretName:  clustermesh-apiserver-local-cert
    Optional:    true
  host-proc-sys-net:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/net
    HostPathType:  Directory
  host-proc-sys-kernel:
    Type:          HostPath (bare host directory volume)
    Path:          /proc/sys/kernel
    HostPathType:  Directory
  hubble-tls:
    Type:        Projected (a volume that contains injected data from multiple sources)
    SecretName:  hubble-server-certs
    Optional:    true
  kube-api-access-pqjr8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age                From               Message
  ----    ------     ----               ----               -------
  Normal  Scheduled  82s                default-scheduler  Successfully assigned kube-system/cilium-w96nb to node01.localdomain
  Normal  Pulled     16s (x2 over 82s)  kubelet            Container image "quay.io/cilium/cilium:v1.18.0@sha256:dfea023972d06ec183cfa3c9e7809716f85daaff042e573ef366e9ec6a0c0ab2" already present on machine and can be accessed by the pod
  Normal  Created    16s (x2 over 82s)  kubelet            Container created
  Normal  Started    16s (x2 over 82s)  kubelet            Container started

antonio-an

talosctl get disks

Forum — Admin

I want to look at the disks:

talosctl get disks --insecure --nodes 192.168.1.100

The answer:

rpc error: code = NotFound desc = resource "disks" is not registered

 

antonio-an

Suggest some Kubernetes exercises

Forum — Talks

Suggest some Kubernetes exercises.

For learning Kubernetes, I'm interested in tasks drawn from real-world experience.

Topics to mix in: Jenkins, Argo CD, GitHub Actions, Cilium L2, Helm.

I already have 16 tasks sitting in Redmine, plus I keep coming up with new ones myself.

antonio-an

vuejs: API data is not displayed

Forum — Web-development

For some reason the data just won't display, even though the API server logs show the requests arriving.

Can someone tell me how to write this script properly?

<script setup>
import axios from 'axios'
async function addTodo() {
  try {
    const response = await axios.get('http://192.168.1.100:8000/items/ ')
    console.log(response.data) // Got the added object
  } catch (error) {
    console.error('Ошибка добавления задачи:', error)
  }
}
</script>
<template>
<div>{{ addTodo }}</div>
</template>

It just prints the text of that function at me.

If I just make a button:

 <div @click.capture="addTodo">...</div> 

it doesn't display anything.

What am I doing wrong?
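Interpolating {{ addTodo }} renders the function object itself, and the fetched data is never stored in reactive state, so the template has nothing to show. A hedged working sketch — the items ref and the v-for rendering are additions for illustration; the endpoint is the one from the question, minus the trailing space in the URL:

<script setup>
import { ref, onMounted } from 'vue'
import axios from 'axios'

// Reactive holder for the fetched data.
const items = ref([])

async function loadItems() {
  try {
    // Note: no trailing space in the URL.
    const response = await axios.get('http://192.168.1.100:8000/items/')
    items.value = response.data
  } catch (error) {
    console.error('Failed to load items:', error)
  }
}

// Fetch once the component is mounted.
onMounted(loadItems)
</script>

<template>
  <div v-for="(item, i) in items" :key="i">{{ item }}</div>
</template>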

antonio-an

python script

Forum — Development

There's a Python script, func.py.

It has three functions in it:

func1 func2 func3

What would the HTTP request string to call a function of this script look like?
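A plain Python script isn't reachable over HTTP by itself — something has to run an HTTP server and map URL paths to the functions. A minimal sketch using only the standard library (the /funcN routes, the port, and the function bodies are assumptions for illustration):

# Minimal sketch: exposing func1/func2/func3 over HTTP.
# The /funcN paths and port 8000 are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

def func1(): return "result of func1"
def func2(): return "result of func2"
def func3(): return "result of func3"

ROUTES = {"/func1": func1, "/func2": func2, "/func3": func3}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        func = ROUTES.get(self.path)
        if func is None:
            self.send_error(404)
            return
        body = func().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()

The request string would then be, for example: GET http://<host>:8000/func1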

antonio-an

Microservice applications in a cluster

Forum — Admin

Greetings! I'm in the middle of learning Kubernetes, and my main question is about the microservice applications Kubernetes was created for!

I'm interested in the first-hand experience of an engineer or a company — I haven't found an answer yet.

Specifically: how do such microservice applications behave in Kubernetes clusters of 50+ pods?

Moreover, once I studied how new deployments are rolled out in Kubernetes, I ended up with even more questions about who actually uses all this. These are complex applications.

The question: I need information about microservice applications running in a cluster.

antonio-an

Custom Solarized CSS style for the forum

Forum — Linux-org-ru

Based on the Solarized theme, I created my own custom CSS style for linux.org.ru.

If you find mistakes or have ideas — write!

It uses the Stylus plugin for the Firefox browser.

The lorify-ng extension for Firefox is also used.

Description here.

Preview: main page || preview: tracker || preview: message input

/* ==UserStyle==
@name         LOR Solarized (Dark or Light versions)
@namespace    USO Archive
@author       falcon-peregrinus
@description  Solarized theme for Linux.Org.Ru. Has Dark and Light versions. Compatible with Tango.
@version      20150704.08.38
@license      NONE
@preprocessor uso
@advanced dropdown light-dark "Color gamma" {
    light "Light*" <<<EOT @namespace url(http://www.w3.org/1999/xhtml);

@-moz-document domain("linux.org.ru") {
    html, body, .msg {
        background-color: #fdf6e3 !important;
        color: #657b83 !important;
    }
    #bd h1, #bd h2, .icon-tag, .stars, .icon-tag-color, .icon-user-color, .icon-pin-color, .icon-reply-color {
        color: #657b83 !important;
    }
    .boxlet, .infoblock, #whois_userpic, thead, .forum tr:hover, span.tag, .page-number {
        background-color: #eee8d5 !important;
        color: #657b83 !important;
		border-radius: 5px 45px 5px 45px;
    }
    .menu, #hd {
                text-shadow: 2px 2px 5px #888;
        background-color: #93a1a1 !important;
    }
    #hd .menu a:hover, #loginGreating a:hover {
        text-shadow: 2px 2px 5px #888;
        color: #fdf6e3 !important;
    
    }
    #sitetitle {
    text-shadow: 0 0 5px #FFF, 0 0 10px #FFF, 0 0 15px #FFF, 0 0 20px #49ff18, 0 0 30px #49FF18, 0 0 40px #49FF18, 0 0 55px #49FF18, 0 0 75px #49ff18, -28px -29px 0px rgba(206,89,55,0);
        color: #fdf6e3 !important;
    }
    .menu a, #loginGreating a {
        color: #586e75 !important;
        text-decoration: none !important;
    }
}
	
a{
  color: #b58900 !important;
}

a:visited {
  color: #614700 !important;
}

a:hover {
  color: #B52D00 !important;
}

a:active {
  color: #B52D00 !important;
}	
    .stars {
        color: #2aa198 !important;
    }
    .user-remark, .message-table a.secondary {
        color: #839496 !important;
    }
    .msg {
        border-radius: 0px !important;
        border-bottom: 1px solid #657b83 !important;
    }
    .msg:target {
        border-color: #dc322f !important;
    }
    .entry-body .msg {
        border-bottom: none !important;
    }
    .btn, .btn a, #bd .nav li a {
        background-color: #1A4D56 !important;
        color: #586e75 !important;
		border-radius: 45px 5px 45px 5px;
    }
    #bd .nav li a.current {
        background-color: #1A4D56 !important;
        color: #586e75 !important;
		border-radius: 45px 5px 45px 5px;
    }
	#bd .nav li a.visited {
        background-color: #1A4D56 !important;
        color: #58756D !important;
		border-radius: 45px 5px 45px 5px;
    }
    .btn-danger {
        background: #dc322f !important;
        color: #fdf6e3 !important;
    }
    .fav-buttons a {
        color: #93a1a1 !important;
    }
    .fav-buttons .selected [class^="icon-"] {
        color: #b58900 !important;
    }

    pre code, .code {
	background: #eee8d5 !important;
	color: #657b83 !important;
	display: block !important;
	padding: 0.5em !important;
    }
    pre .comment,pre .template_comment,pre .diff .header,pre .doctype,pre .pi,pre .lisp .string,pre .javadoc,pre .pragma {
	color: #93a1a1 !important;
	font-style: italic !important;
    }
    pre .keyword,pre .winutils,pre .method,pre .addition,pre .css .tag,pre .request,pre .status,pre .nginx .title {
	color: #859900 !important;
    }
    pre .number,pre .command,pre .string,pre .tag .value,pre .rules .value,pre .phpdoc,pre .tex .formula,pre .regexp,pre .hexcolor {
	color: #2aa198 !important;
    }
    pre .title,pre .localvars,pre .chunk,pre .decorator,pre .built_in,pre .identifier,pre .vhdl .literal,pre .id,pre .css .function {
	color: #268bd2 !important;
    }
    pre .attribute,pre .variable,pre .lisp .body,pre .smalltalk .number,pre .constant,pre .class .title,pre .parent,pre .haskell .type {
	color: #b58900 !important;
    }
    pre .preprocessor,pre .preprocessor .keyword,pre .shebang,pre .symbol,pre .symbol .string,pre .diff .change,pre .special,pre .attr_selector,pre .important,pre .subst,pre .cdata,pre .clojure .title,pre .css .pseudo {
	color: #cb4b16 !important;
    }
    pre .deletion {
	color: #dc322f !important;
    }
    pre .tex .formula {
	background: #eee8d5 !important;
    }

    span.code {
        display: inline !important;
        padding: 0 !important;
    }
} EOT;
    dark "Dark" <<<EOT @namespace url(http://www.w3.org/1999/xhtml);

@-moz-document domain("linux.org.ru") {
    html, body, .msg {
        background-color: #002b36 !important;
        color: #839496 !important;
    }
    #bd h1, #bd h2, .icon-tag, .stars, .icon-tag-color, .icon-user-color, .icon-pin-color, .icon-reply-color {
        color: #839496 !important;
    }
    .boxlet, .infoblock, #whois_userpic, thead, .forum tr:hover, span.tag, .page-number {
        background-color: #073642 !important;
        color: #839496 !important;
		border-radius: 5px 45px 5px 45px;
    }
    .menu, #hd {
				background-color: #1A4D56 !important;
				-webkit-box-shadow: 7px 7px 10px 0px rgba(95, 50, 125, 0.2);
				-moz-box-shadow: 7px 7px 10px 0px rgba(95, 50, 125, 0.2);
				box-shadow: 7px 7px 10px 0px rgba(95, 50, 125, 0.2);
				border: 5px solid #1A4D56;
				border-radius: 45px 5px 45px 5px;
    }
	#topProfile {
				float: right;
				font-size: 1.1em;
				margin-top: 1rem;
				margin-right: 0.8rem;		
				color: #B52D00;
				text-shadow: 0 0 10px #FFFFFF;
}
	.messages .msg  {
				-webkit-border-radius: 45px 5px 45px 5px;
				-moz-border-radius: 45px 5px 45px 5px;
				border-radius: 45px 5px 45px 5px;
}
	.messages .msg:target  {
				-webkit-border-radius: 45px 5px 45px 5px;
				-moz-border-radius: 45px 5px 45px 5px;
				border-radius: 45px 5px 45px 5px;
}
    #hd .menu a, #loginGreating a {
				color: #839496 !important;
    }   
   #hd .menu a:hover, #loginGreating a:hover {
				color: #B52D00 !important;
    }
    #sitetitle {
color: #FFFFFF;
text-shadow: 0 0 5px #FFF, 0 0 10px #FFF, 0 0 15px #FFF, 0 0 20px #49ff18, 0 0 30px #49FF18, 0 0 40px #49FF18, 0 0 55px #49FF18, 0 0 75px #49ff18;
    }
    .menu a, #loginGreating a {
        color: #93a1a1 !important;
        text-decoration: none !important;
    }
	textarea {
						background-color: #1A4D56 !important;
						border-radius: 15px 15px 15px 15px;
						border-color: #657b83 !important;
		}	
		input	{
						background-color: #1A4D56 !important;
						border-radius: 15px 15px 15px 15px;
						border-color: #657b83 !important;
		}		
.link-reply, .link-quote, .reactions-li, .wakations-li, .link-self    {
						box-sizing: border-box;		
						 width: 350px;
						height: 150px;
						font-size:18px;
						border-width:1px;
						border-color:#ffaa22;
						border-top-left-radius:6px;
						border-top-right-radius:6px;
						border-bottom-left-radius:6px;
						border-bottom-right-radius:6px;
						box-shadow: 0px 1px 0px 0px #fff6af;
						text-shadow: 0px 1px 0px #ffee66;
						background:linear-gradient(#1A4D56, #657b83)
}	
		#bd nav .btn {
			border-color: #657b83 !important;
			border-radius: 25px 5px 25px 5px;
			-webkit-box-shadow: 7px 7px 10px 0px rgba(95, 50, 125, 0.2);
			-moz-box-shadow: 7px 7px 10px 0px rgba(95, 50, 125, 0.2);
			box-shadow: 7px 7px 10px 0px rgba(95, 50, 125, 0.2);
}
			.nav-buttons {
			float: right;
			background-color: #1A4D56 !important;
			border-color: #657b83 !important;
			border-radius: 25px 5px 25px 5px;
}
a {
  color: #b58900 !important;
}

a:visited {
  color: #614700 !important;
}

a:hover {
  color: #B52D00 !important;
}

a:active {
  color: #B52D00 !important;
}
    .stars {
        color: #36AB88 !important;
    }
    .user-remark, .message-table a.secondary {
        color: #657b83 !important;
    }
    .msg {
        border-radius: 0px !important;
        border-bottom: 1px solid #839496 !important;
    }
    .msg:target {
        border-color: #dc322f !important;
    }
    .entry-body .msg {
        border-bottom: none !important;
    }
    .btn, .btn a, #bd .nav li a {
        background-color: #1A4D56 !important;
        color: #586e75 !important;
        border-color: #657b83 !important;
		border-radius: 45px 5px 45px 5px;
    }
    #bd .nav li a.current {
        background-color: #1A4D56 !important;
        color: #586e75 !important;
		border-radius: 45px 5px 45px 5px;
    }
	#bd .nav li a.visited {
        background-color: #1A4D56 !important;
        color: #58756D !important;
		border-radius: 45px 5px 45px 5px;
    }
    .btn-danger {
        background: #dc322f !important;
        color: #002b36 !important;
    }
    .fav-buttons a {
        color: #586e75 !important;
    }
    .fav-buttons .selected [class^="icon-"] {
        color: #b58900 !important;
    }

    pre code, .code {
	background: #073642 !important;
	color: #839496 !important;
	display: block !important;
	padding: 0.5em !important;
    }
    pre .comment,pre .template_comment,pre .diff .header,pre .doctype,pre .pi,pre .lisp .string,pre .javadoc,pre .pragma {
	color: #586e75 !important;
	font-style: italic !important;
    }
    pre .keyword,pre .winutils,pre .method,pre .addition,pre .css .tag,pre .request,pre .status,pre .nginx .title {
	color: #859900 !important;
    }
    pre .number,pre .command,pre .string,pre .tag .value,pre .rules .value,pre .phpdoc,pre .tex .formula,pre .regexp,pre .hexcolor {
	color: #2aa198 !important;
    }
    pre .title,pre .localvars,pre .chunk,pre .decorator,pre .built_in,pre .identifier,pre .vhdl .literal,pre .id,pre .css .function {
	color: #268bd2 !important;
    }
    pre .attribute,pre .variable,pre .lisp .body,pre .smalltalk .number,pre .constant,pre .class .title,pre .parent,pre .haskell .type {
	color: #b58900 !important;
    }
    pre .preprocessor,pre .preprocessor .keyword,pre .shebang,pre .symbol,pre .symbol .string,pre .diff .change,pre .special,pre .attr_selector,pre .important,pre .subst,pre .cdata,pre .clojure .title,pre .css .pseudo {
	color: #cb4b16 !important;
    }
    pre .deletion {
	color: #dc322f !important;
    }
    pre .tex .formula {
	background: #073642 !important;
    }

    span.code {
        display: inline !important;
        padding: 0 !important;
    }
}
 EOT;

}
==/UserStyle== */
/*[[light-dark]]*/

antonio-an

Stylus tweaks to LOR's CSS

Forum — Linux-org-ru

I've installed the Stylus extension for Firefox.

On which line of the CSS should I add a text shadow for the main menu text?

text-shadow: 2px 2px 5px #888;

News || Gallery || Articles || Forum
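Judging by the Solarized style in the previous post, the top menu is styled through the .menu and #hd selectors, so a plausible place is a rule like this (an assumption — verify the selector against LOR's actual markup in the browser inspector):

/* Hedged sketch: selector taken from the custom style above. */
#hd .menu a {
    text-shadow: 2px 2px 5px #888;
}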

antonio-an

java web ui

Forum — Development

Recommend some Java applications on GitHub that come with a web UI.

I want to run such an application in Docker. I need it for testing.
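One concrete example, if it helps: Jenkins is itself a Java application with a web UI, and its official image is on Docker Hub, so a throwaway test instance is a one-liner:

docker run -d -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts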

antonio-an

Question about automated CI/CD deployment

Forum — Admin

Auto-deploy: Git -> GitLab -> Kubernetes -> Traefik/NGINX -> World.

The Deployment gets created, the pods get created (these steps work).

But which tool creates external user access to these resources?

An Ingress, a Service and so on still need to be created.

NGINX or Traefik is needed as a reverse HTTP proxy and load balancer — see the sketch below.

All of this should be configured automatically on git push.
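External access is usually a Service plus an Ingress object; an ingress controller (ingress-nginx or Traefik) watches Ingresses and does the actual proxying, and the manifest simply lives in the repo and is applied by the pipeline on every push. A hedged sketch — myapp, port 8080 and the hostname are placeholders:

# Hedged sketch: names, port and host are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  ingressClassName: nginx   # or "traefik"
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80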

antonio-an

kuber install and crash

Forum — Linux-install

I installed Kubernetes on Debian (VirtualBox), Mem=5048, Cpu=2. Which direction should I dig for the error?

Starting it:

kubeadm init --control-plane-endpoint=$HOSTNAME

At first it was showing the pods, then this began:

kubectl get pods --all-namespaces

The connection to the server node-01:6443 was refused - did you specify the right host or port?

journalctl -f -u kubelet.service

Dec 17 12:54:22 node-01 kubelet[797]: I1217 12:54:22.422257     797 scope.go:117] "RemoveContainer" containerID="05b0005a56c8d217fdf85fc552ab3ac631003678a90080fccd5a44b7eaf6f99b"
Dec 17 12:54:22 node-01 kubelet[797]: E1217 12:54:22.422718     797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" w   ith CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-node-01_kube-system(dcfb8f60d21e77af673e430a4091f4c8)\"" pod="kube-s   ystem/kube-controller-manager-node-01" podUID="dcfb8f60d21e77af673e430a4091f4c8"
Dec 17 12:54:24 node-01 kubelet[797]: E1217 12:54:24.438127     797 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 17 12:54:25 node-01 kubelet[797]: I1217 12:54:25.454727     797 scope.go:117] "RemoveContainer" containerID="05b935247939a259d83e6463968bda857cc1c3c28473a684c5efbf7d60163f3b"
Dec 17 12:54:25 node-01 kubelet[797]: E1217 12:54:25.455866     797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-apiserver pod=kube-apiserver-node-01_kube-system(2f88ee07fe379110744672da1a8923b3)\"" pod="kube-system/kube-apiserver-node-01" podUID="2f88ee07fe379110744672da1a8923b3"
Dec 17 12:54:27 node-01 kubelet[797]: I1217 12:54:27.469044     797 scope.go:117] "RemoveContainer" containerID="25b904a2237a91dbbfb79613c19816977b473fc370a30e03c1b5e636a8250216"
Dec 17 12:54:27 node-01 kubelet[797]: E1217 12:54:27.470882     797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-q6qkc_kube-system(5542650e-f8d2-4fa8-82a1-cca6baf42fcc)\"" pod="kube-system/kube-proxy-q6qkc" podUID="5542650e-f8d2-4fa8-82a1-cca6baf42fcc"
Dec 17 12:54:28 node-01 kubelet[797]: E1217 12:54:28.651589     797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.1.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/node-01?timeout=10s\": dial tcp 192.168.1.10:6443: connect: connection refused" interval="7s"
Dec 17 12:54:28 node-01 kubelet[797]: E1217 12:54:28.780613     797 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/events/kube-scheduler-node-01.18821052a502108b\": dial tcp 192.168.1.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-scheduler-node-01.18821052a502108b  kube-system   3172 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-scheduler-node-01,UID:eb3bdd36cec9338c0ac264972d3ecaa9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod kube-scheduler-node-01_kube-system(eb3bdd36cec9338c0ac264972d3ecaa9),Source:EventSource{Component:kubelet,Host:node-01,},FirstTimestamp:2025-12-17 12:19:11 -0500 EST,LastTimestamp:2025-12-17 12:52:35.458928337 -0500 EST m=+2165.415672342,Count:142,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:node-01,}"
Dec 17 12:54:29 node-01 kubelet[797]: E1217 12:54:29.443856     797 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.485105     797 status_manager.go:895] "Failed to get status for pod" podUID="2f88ee07fe379110744672da1a8923b3" pod="kube-system/kube-apiserver-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.486821     797 status_manager.go:895] "Failed to get status for pod" podUID="dcfb8f60d21e77af673e430a4091f4c8" pod="kube-system/kube-controller-manager-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.488659     797 status_manager.go:895] "Failed to get status for pod" podUID="eb3bdd36cec9338c0ac264972d3ecaa9" pod="kube-system/kube-scheduler-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.489371     797 status_manager.go:895] "Failed to get status for pod" podUID="5542650e-f8d2-4fa8-82a1-cca6baf42fcc" pod="kube-system/kube-proxy-q6qkc" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/kube-proxy-q6qkc\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.490248     797 status_manager.go:895] "Failed to get status for pod" podUID="51c670536858241606c4bad9a5634813" pod="kube-system/etcd-node-01" err="Get \"https://192.168.1.10:6443/api/v1/namespaces/kube-system/pods/etcd-node-01\": dial tcp 192.168.1.10:6443: connect: connection refused"
Dec 17 12:54:30 node-01 kubelet[797]: I1217 12:54:30.518076     797 scope.go:117] "RemoveContainer" containerID="e2a7510faecfe25de69adf5dcfaf75cb5d08f14560540791829c45de00002d64"
Dec 17 12:54:30 node-01 kubelet[797]: E1217 12:54:30.518386     797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-node-01_kube-system(eb3bdd36cec9338c0ac264972d3ecaa9)\"" pod="kube-system/kube-scheduler-node-01" podUID="eb3bdd36cec9338c0ac264972d3ecaa9"
^C

kubectl events

LAST SEEN               TYPE      REASON                    OBJECT         MESSAGE
8m11s                   Normal    Starting                  Node/node-01   Starting kubelet.
8m11s                   Warning   InvalidDiskCapacity       Node/node-01   invalid capacity 0 on image filesystem
8m11s (x8 over 8m11s)   Normal    NodeHasSufficientMemory   Node/node-01   Node node-01 status is now: NodeHasSufficientMemory
8m11s (x8 over 8m11s)   Normal    NodeHasNoDiskPressure     Node/node-01   Node node-01 status is now: NodeHasNoDiskPressure
8m11s (x7 over 8m11s)   Normal    NodeHasSufficientPID      Node/node-01   Node node-01 status is now: NodeHasSufficientPID
8m11s                   Normal    NodeAllocatableEnforced   Node/node-01   Updated Node Allocatable limit across pods
8m2s                    Normal    Starting                  Node/node-01   Starting kubelet.
8m2s                    Warning   InvalidDiskCapacity       Node/node-01   invalid capacity 0 on image filesystem
8m2s                    Normal    NodeAllocatableEnforced   Node/node-01   Updated Node Allocatable limit across pods
8m2s                    Normal    NodeHasSufficientMemory   Node/node-01   Node node-01 status is now: NodeHasSufficientMemory
8m2s                    Normal    NodeHasNoDiskPressure     Node/node-01   Node node-01 status is now: NodeHasNoDiskPressure
8m2s                    Normal    NodeHasSufficientPID      Node/node-01   Node node-01 status is now: NodeHasSufficientPID
7m1s                    Normal    RegisteredNode            Node/node-01   Node node-01 event: Registered Node node-01 in Controller
6m58s                   Normal    Starting                  Node/node-01
4m58s                   Normal    Starting                  Node/node-01
4m34s                   Normal    RegisteredNode            Node/node-01   Node node-01 event: Registered Node node-01 in Controller
3m34s                   Normal    Starting                  Node/node-01

kubectl get pods --all-namespaces

NAMESPACE     NAME                              READY   STATUS             RESTARTS         AGE
kube-system   coredns-674b8bbfcf-nvm5q          0/1     Pending            0                46m
kube-system   coredns-674b8bbfcf-zpkbl          0/1     Pending            0                46m
kube-system   etcd-node-01                      1/1     Running            63 (2m23s ago)   46m
kube-system   kube-apiserver-node-01            1/1     Running            68 (2m16s ago)   46m
kube-system   kube-controller-manager-node-01   0/1     CrashLoopBackOff   69 (45s ago)     47m
kube-system   kube-proxy-gxhnz                  1/1     Running            21 (62s ago)     46m
kube-system   kube-scheduler-node-01            1/1     Running            77 (58s ago)     46m
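The recurring kubelet line — "Network plugin returns error: cni plugin not initialized" — means no CNI plugin was installed after kubeadm init, so the node never becomes network-ready and coredns stays Pending. This alone may not explain the api-server restarts (swap left enabled is another classic kubeadm culprit), but installing a CNI is the first thing the log asks for. A hedged sketch with Flannel (the manifest URL is the one published in the flannel project's README — verify before applying):

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml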

 

antonio-an

Ruby version is 3.2.3, but your Gemfile specified

Forum — Development

I run:

sudo bundle install

It complains:

Your Ruby version is 3.2.3, but your Gemfile specified >= 2.5.0, < 3.2.0

How do I fix it?
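The Gemfile pins Ruby to >= 2.5.0, < 3.2.0 while the system Ruby is 3.2.3, so either relax the Gemfile constraint or install a Ruby inside the allowed range. A hedged sketch with rbenv (assuming rbenv is available; 3.1.4 is just one version that satisfies the constraint) — and note that bundler generally shouldn't be run under sudo:

# Install a Ruby satisfying ">= 2.5.0, < 3.2.0" for this project only
rbenv install 3.1.4
rbenv local 3.1.4
bundle install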

Moved by Pinkbyte from linux-install

antonio-an

Favorite tags

Forum — Linux-org-ru

Favorite tags in the user profile — what do they actually do on the site?

antonio-an

jenkins kubectl

Forum — Admin

Jenkins-to-Kubernetes authorization via the Jenkinsfile is set up.

In the pipeline:

./kubectl get pods -A

It throws an error:

E1208 14:24:15.345004      54 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1208 14:24:15.346319      54 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1208 14:24:15.347793      54 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1208 14:24:15.348271      54 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1208 14:24:15.349647      54 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Which direction should I dig?
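localhost:8080 is kubectl's last-resort default when it finds neither a kubeconfig nor usable in-cluster service-account credentials, so this pipeline step is running without any cluster configuration. One hedged option is the withKubeConfig step from the Kubernetes CLI Jenkins plugin (the credentialsId and serverUrl are placeholders):

// Hedged sketch: requires the "Kubernetes CLI" Jenkins plugin;
// 'k8s-kubeconfig' is a placeholder credentials ID.
withKubeConfig([credentialsId: 'k8s-kubeconfig',
                serverUrl: 'https://192.168.1.100:6443']) {
    sh './kubectl get pods -A'
}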

antonio-an

access to a bin file — I'm confused

Forum — Linux-install

I start a Docker container -> curl downloads kubectl.

Then I need to set permissions on it and copy it to /usr/local/bin/kubectl.

Output:

sudo: you do not exist in the passwd database

What does it want?

I run it like this:

chmod +x ./kubectl && mv ./kubectl /usr/local/bin/kubectl

Output:

mv: cannot create regular file '/usr/local/bin/kubectl': Permission denied


What permissions do you need?
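"you do not exist in the passwd database" means the shell inside the container runs under a UID that has no /etc/passwd entry, so sudo refuses to work — and that same non-root UID has no write access to /usr/local/bin. A hedged sketch: re-enter the running container as root instead of using sudo (the container name is a placeholder):

# Re-enter the running container as root (UID 0);
# 'mycontainer' is a placeholder name.
docker exec -u 0 -it mycontainer sh

# then, inside the container, the usual install works:
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl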

 

antonio-an

Jenkinsfile starts a pod and it shuts down

Forum — Linux-install

As a beginner, please point me in the right direction to study. In the Jenkins pipeline:

pipeline {
  agent {
    kubernetes {
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: debian
            image: debian:latest
            command:
            - cat
            tty: true
        '''
    }
  }
  stages {
    stage('Run uname') {
      steps {
        container('debian') {
          sh 'uname -a'
        }
      }
    }
  }
}

The pod starts, prints the uname -a output, and finishes its work.

Maybe it's better to deploy a Deployment instead?

 

antonio-an

Jenkins 2.528.2 — who has it installed?

Forum — Linux-install

The latest Jenkins release is 2.528.2: https://get.jenkins.io/war-stable/2.528.2/

Who has it installed? Does it work stably?

 

antonio-an

kubernetes

Forum — General

Are there specialists here who work with it? How do you move files from the outside world into a Kubernetes pod?
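For one-off transfers kubectl has a built-in subcommand for exactly this; for data a pod should see permanently, a volume is the better fit. A sketch (namespace, pod name and paths are placeholders):

# Copy a local file into a running pod (names are placeholders).
kubectl cp ./data.csv my-namespace/my-pod:/tmp/data.csv

# and back out again:
kubectl cp my-namespace/my-pod:/tmp/result.csv ./result.csv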

 

antonio-an

OID for rebooting a DGS-3420-28TC

Forum — General

Greetings! Does anyone have the OID for rebooting a DGS-3420-28TC switch?

antonio-an

zabbix won't start, the logs make no sense

Forum — General

Greetings! Zabbix stopped starting, and the logs make no sense to me.

Can anyone tell me where to dig?
  3757:20200726:150944.106 Starting Zabbix Server. Zabbix 4.4.6 (revision 8cc702429d).
  3757:20200726:150944.106 ****** Enabled features ******
  3757:20200726:150944.106 SNMP monitoring:           YES
  3757:20200726:150944.106 IPMI monitoring:           YES
  3757:20200726:150944.106 Web monitoring:            YES
  3757:20200726:150944.106 VMware monitoring:         YES
  3757:20200726:150944.106 SMTP authentication:       YES
  3757:20200726:150944.106 ODBC:                      YES
  3757:20200726:150944.106 SSH support:               YES
  3757:20200726:150944.106 IPv6 support:              YES
  3757:20200726:150944.106 TLS support:               YES
  3757:20200726:150944.106 ******************************
  3757:20200726:150944.106 using configuration file: /etc/zabbix/zabbix_server.conf
  3757:20200726:150944.114 current database version (mandatory/optional): 04040000/04040001
  3757:20200726:150944.114 required mandatory version: 04040000
  3757:20200726:150944.122 server #0 started [main process]
  3758:20200726:150944.122 server #1 started [configuration syncer #1]
  3758:20200726:150944.621 __mem_malloc: skipped 0 asked 24 skip_min 18446744073709551615 skip_max 0
  3758:20200726:150944.621 [file:dbconfig.c,line:94] __zbx_mem_realloc(): out of memory (requested 24 bytes)
  3758:20200726:150944.621 [file:dbconfig.c,line:94] __zbx_mem_realloc(): please increase CacheSize configuration parameter
  3758:20200726:150944.621 === memory statistics for configuration cache ===
  3758:20200726:150944.621 min chunk size: 18446744073709551615 bytes
  3758:20200726:150944.621 max chunk size:          0 bytes
  3758:20200726:150944.621 memory of total size 8388232 bytes fragmented into 72175 chunks
  3758:20200726:150944.621 of those,          0 bytes are in        0 free chunks
  3758:20200726:150944.621 of those,    7233448 bytes are in    72175 used chunks
  3758:20200726:150944.621 ================================
  3758:20200726:150944.621 === Backtrace: ===
  3758:20200726:150944.622 11: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](zbx_backtrace+0x3f) [0x5558b78d102d]
  3758:20200726:150944.622 10: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](__zbx_mem_realloc+0x160) [0x5558b78cc612]
  3758:20200726:150944.622 9: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](+0x175b4a) [0x5558b7895b4a]
  3758:20200726:150944.622 8: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](+0x180f9f) [0x5558b78a0f9f]
  3758:20200726:150944.622 7: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](DCsync_configuration+0x1157) [0x5558b78a22fa]
  3758:20200726:150944.622 6: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](dbconfig_thread+0x10c) [0x5558b776e419]
  3758:20200726:150944.622 5: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](zbx_thread_start+0x37) [0x5558b78debea]
  3758:20200726:150944.622 4: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](MAIN_ZABBIX_ENTRY+0x97d) [0x5558b7761434]
  3758:20200726:150944.622 3: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](daemon_start+0x2ff) [0x5558b78d0818]
  3758:20200726:150944.622 2: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](main+0x2f5) [0x5558b7760ab5]
  3758:20200726:150944.622 1: /lib64/libc.so.6(__libc_start_main+0xf3) [0x7f0f21282873]
  3758:20200726:150944.622 0: /usr/sbin/zabbix_server: configuration syncer [syncing configuration](_start+0x2e) [0x5558b775fb7e]
  3757:20200726:150944.624 One child process died (PID:3758,exitcode/signal:1). Exiting ...
  3757:20200726:150944.626 syncing trend data...
  3757:20200726:150944.626 syncing trend data done
  3757:20200726:150944.626 Zabbix Server stopped. Zabbix 4.4.6 (revision 8cc702429d).
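The log actually names the fix itself: the configuration cache ran out of memory ("please increase CacheSize configuration parameter"), and the statistics show the whole ~8 MB default cache consumed. A hedged sketch of the change in /etc/zabbix/zabbix_server.conf (the exact value is a guess — size it to your number of hosts and items), followed by a restart of zabbix-server:

# /etc/zabbix/zabbix_server.conf — default CacheSize is 8M
CacheSize=64M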

 

antonio-an
