kafka 09:42:03.97
kafka 09:42:03.98 Welcome to the Bitnami kafka container
kafka 09:42:03.98 Subscribe to project updates by watching https://github.com/bitnami/containers
kafka 09:42:03.99 Submit issues and feature requests at https://github.com/bitnami/containers/issues
kafka 09:42:03.99
kafka 09:42:03.99 INFO  ==> ** Starting Kafka setup **
kafka 09:42:04.13 INFO  ==> Initializing KRaft storage metadata
kafka 09:42:04.13 INFO  ==> Adding KRaft SCRAM users at storage bootstrap
kafka 09:42:04.15 INFO  ==> Formatting storage directories to add metadata...
Formatting /bitnami/kafka/data with metadata.version 3.5-IV2.
kafka 09:42:08.94 INFO  ==> ** Kafka setup finished! **
kafka 09:42:08.96 INFO  ==> ** Starting Kafka **
[2024-04-17 09:42:11,195] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2024-04-17 09:42:13,180] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2024-04-17 09:42:13,830] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2024-04-17 09:42:13,836] INFO [ControllerServer id=2] Starting controller (kafka.server.ControllerServer)
[2024-04-17 09:42:15,213] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-17 09:42:15,552] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-17 09:42:15,575] INFO [SocketServer listenerType=CONTROLLER, nodeId=2] Created data-plane acceptor and processors for endpoint : ListenerName(CONTROLLER) (kafka.network.SocketServer)
[2024-04-17 09:42:15,578] INFO [SharedServer id=2] Starting SharedServer (kafka.server.SharedServer)
[2024-04-17 09:42:15,767] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
[2024-04-17 09:42:15,799] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Reloading from producer snapshot and rebuilding producer state from offset 0 (kafka.log.UnifiedLog$)
[2024-04-17 09:42:15,799] INFO [LogLoader partition=__cluster_metadata-0, dir=/bitnami/kafka/data] Producer state recovery took 0ms for snapshot load and 0ms for segment recovery from offset 0 (kafka.log.UnifiedLog$)
[2024-04-17 09:42:15,868] INFO Initialized snapshots with IDs Set() from /bitnami/kafka/data/__cluster_metadata-0 (kafka.raft.KafkaMetadataLog$)
[2024-04-17 09:42:15,921] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2024-04-17 09:42:16,155] INFO [RaftManager id=2] Completed transition to Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1047) from null (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:16,158] INFO [kafka-2-raft-outbound-request-thread]: Starting (kafka.raft.RaftSendThread)
[2024-04-17 09:42:16,160] INFO [kafka-2-raft-io-thread]: Starting (kafka.raft.KafkaRaftManager$RaftIoThread)
[2024-04-17 09:42:16,185] INFO [RaftManager id=2] Registered the listener org.apache.kafka.image.loader.MetadataLoader@823878874 (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-17 09:42:16,185] INFO [ControllerServer id=2] Waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-04-17 09:42:16,185] INFO [ControllerServer id=2] Finished waiting for controller quorum voters future (kafka.server.ControllerServer)
[2024-04-17 09:42:16,186] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,244] INFO [QuorumController id=2] Creating new QuorumController with clusterId wwF0Ks0d226Do1GaY9kKkw, authorizer Optional.empty. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:16,246] INFO [RaftManager id=2] Registered the listener org.apache.kafka.controller.QuorumController$QuorumMetaLogListener@877920684 (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-17 09:42:16,273] INFO [controller-2-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,276] INFO [controller-2-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,276] INFO [controller-2-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,285] INFO [controller-2-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,286] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,309] INFO [ExpirationReaper-2-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:16,312] INFO [SocketServer listenerType=CONTROLLER, nodeId=2] Enabling request processing. (kafka.network.SocketServer)
[2024-04-17 09:42:16,319] INFO Awaiting socket connections on 0.0.0.0:9093. (kafka.network.DataPlaneAcceptor)
[2024-04-17 09:42:16,387] INFO [ControllerServer id=2] Waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-04-17 09:42:16,387] INFO [ControllerServer id=2] Finished waiting for all of the authorizer futures to be completed (kafka.server.ControllerServer)
[2024-04-17 09:42:16,387] INFO [ControllerServer id=2] Waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-04-17 09:42:16,387] INFO [ControllerServer id=2] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.ControllerServer)
[2024-04-17 09:42:16,391] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,396] INFO [ControllerServer id=2] Waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-04-17 09:42:16,396] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,405] INFO [ControllerServer id=2] Finished waiting for the controller metadata publishers to be installed (kafka.server.ControllerServer)
[2024-04-17 09:42:16,406] INFO [BrokerServer id=2] Transition from SHUTDOWN to STARTING (kafka.server.BrokerServer)
[2024-04-17 09:42:16,406] INFO [BrokerServer id=2] Starting broker (kafka.server.BrokerServer)
[2024-04-17 09:42:16,498] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,508] INFO [broker-2-ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,514] INFO [broker-2-ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,521] INFO [broker-2-ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,525] INFO [broker-2-ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2024-04-17 09:42:16,578] INFO [BrokerServer id=2] Waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-04-17 09:42:16,578] INFO [BrokerServer id=2] Finished waiting for controller quorum voters future (kafka.server.BrokerServer)
[2024-04-17 09:42:16,598] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,615] INFO [broker-2-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:16,701] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,732] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-17 09:42:16,750] INFO [SocketServer listenerType=BROKER, nodeId=2] Created data-plane acceptor and processors for endpoint : ListenerName(CLIENT) (kafka.network.SocketServer)
[2024-04-17 09:42:16,750] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2024-04-17 09:42:16,754] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-17 09:42:16,762] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2024-04-17 09:42:16,793] INFO [SocketServer listenerType=BROKER, nodeId=2] Created data-plane acceptor and processors for endpoint : ListenerName(INTERNAL) (kafka.network.SocketServer)
[2024-04-17 09:42:16,809] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,825] INFO [broker-2-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:16,912] INFO [ExpirationReaper-2-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:16,913] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:16,918] INFO [ExpirationReaper-2-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:16,925] INFO [ExpirationReaper-2-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:16,941] INFO [ExpirationReaper-2-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:17,009] INFO [ExpirationReaper-2-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:17,013] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:17,028] INFO [ExpirationReaper-2-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:17,116] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:17,125] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=1, retries=1, voteStates={0=UNRECORDED, 1=UNRECORDED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1134) from Unattached(epoch=0, voters=[0, 1, 2], electionTimeoutMs=1047) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:17,138] INFO [QuorumController id=2] In the new epoch 1, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:17,224] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:17,325] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:17,391] INFO [RaftManager id=2] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:17,413] WARN [RaftManager id=2] Connection to node 0 (kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local/10.244.0.37:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:17,438] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:17,948] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:18,015] INFO [RaftManager id=2] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:18,015] WARN [RaftManager id=2] Connection to node 0 (kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local/10.244.0.37:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:18,049] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:18,049] INFO [broker-2-to-controller-heartbeat-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:18,125] INFO [BrokerLifecycleManager id=2] Incarnation sKyXoe60S46TYQ7MH3s1KA of broker 2 in cluster wwF0Ks0d226Do1GaY9kKkw is now STARTING. (kafka.server.BrokerLifecycleManager)
[2024-04-17 09:42:18,141] INFO [RaftManager id=2] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:18,142] WARN [RaftManager id=2] Connection to node 0 (kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local/10.244.0.37:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:18,151] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:18,252] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:18,252] INFO [RaftManager id=2] Election has timed out, backing off for 100ms before becoming a candidate again (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-17 09:42:18,353] INFO [RaftManager id=2] Re-elect as candidate after election backoff has completed (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-17 09:42:18,359] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:18,377] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=2, retries=2, voteStates={0=UNRECORDED, 1=UNRECORDED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1392) from CandidateState(localId=2, epoch=1, retries=1, voteStates={0=UNRECORDED, 1=UNRECORDED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1134) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:18,377] INFO [QuorumController id=2] In the new epoch 2, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:18,401] INFO [RaftManager id=2] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:18,411] WARN [RaftManager id=2] Connection to node 0 (kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local/10.244.0.37:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:18,417] INFO [RaftManager id=2] Completed transition to Unattached(epoch=24, voters=[0, 1, 2], electionTimeoutMs=1345) from CandidateState(localId=2, epoch=2, retries=2, voteStates={0=UNRECORDED, 1=UNRECORDED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1392) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:18,423] INFO [QuorumController id=2] In the new epoch 24, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:18,452] INFO [ExpirationReaper-2-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2024-04-17 09:42:18,471] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:18,540] INFO [BrokerServer id=2] Waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-04-17 09:42:18,541] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:18,541] INFO [BrokerServer id=2] Finished waiting for the broker metadata publishers to be installed (kafka.server.BrokerServer)
[2024-04-17 09:42:18,542] INFO [BrokerServer id=2] Waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer)
[2024-04-17 09:42:18,642] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... same MetadataLoader message repeated every ~100 ms until 09:42:19,161 ...]
[2024-04-17 09:42:19,193] INFO [RaftManager id=2] Completed transition to Unattached(epoch=25, voters=[0, 1, 2], electionTimeoutMs=587) from Unattached(epoch=24, voters=[0, 1, 2], electionTimeoutMs=1345) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:19,194] INFO [QuorumController id=2] In the new epoch 25, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:19,203] INFO [RaftManager id=2] Completed transition to Voted(epoch=25, votedId=1, voters=[0, 1, 2], electionTimeoutMs=1160) from Unattached(epoch=25, voters=[0, 1, 2], electionTimeoutMs=587) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:19,204] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='wwF0Ks0d226Do1GaY9kKkw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=25, candidateId=1, lastOffsetEpoch=0, lastOffset=0)])]) with epoch 25 is granted (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-17 09:42:19,263] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:19,365] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:19,396] INFO [RaftManager id=2] Completed transition to FollowerState(fetchTimeoutMs=2000, epoch=25, leaderId=1, voters=[0, 1, 2], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) from Voted(epoch=25, votedId=1, voters=[0, 1, 2], electionTimeoutMs=1160) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:19,401] INFO [QuorumController id=2] In the new epoch 25, the leader is 1. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:19,429] INFO [broker-2-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,430] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,466] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:19,480] INFO [broker-2-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,521] INFO [BrokerLifecycleManager id=2] Unable to register broker 2 because the controller returned error UNKNOWN_SERVER_ERROR (kafka.server.BrokerLifecycleManager)
[2024-04-17 09:42:19,567] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:19,641] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 1 (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:19,649] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,668] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:19,697] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,724] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 1 (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:19,727] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,770] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:19,781] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,816] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 1 (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:19,816] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,869] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:19,871] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:19,895] INFO [RaftManager id=2] Become candidate due to fetch timeout (org.apache.kafka.raft.KafkaRaftClient)
[2024-04-17 09:42:19,900] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=26, retries=1, voteStates={0=UNRECORDED, 1=UNRECORDED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1684) from FollowerState(fetchTimeoutMs=2000, epoch=25, leaderId=1, voters=[0, 1, 2], highWatermark=Optional.empty, fetchingSnapshot=Optional.empty) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:19,902] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 1 (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:19,902] INFO [QuorumController id=2] In the new epoch 26, the leader is (none). (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:19,904] INFO [RaftManager id=2] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:19,921] WARN [RaftManager id=2] Connection to node 0 (kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local/10.244.0.37:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:19,971] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... same MetadataLoader message repeated every ~100 ms until 09:42:20,380 ...]
[2024-04-17 09:42:20,441] INFO [RaftManager id=2] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:20,441] WARN [RaftManager id=2] Connection to node 0 (kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local/10.244.0.37:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:20,482] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... same MetadataLoader message repeated every ~100 ms until 09:42:21,193 ...]
[2024-04-17 09:42:21,211] INFO [RaftManager id=2] Completed transition to Leader(localId=2, epoch=26, epochStartOffset=0, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 2=ReplicaState(nodeId=2, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) from CandidateState(localId=2, epoch=26, retries=1, voteStates={0=GRANTED, 1=REJECTED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1684) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:21,211] INFO [QuorumController id=2] Becoming the active controller at epoch 26, committed offset -1, committed epoch -1 (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:21,215] INFO [QuorumController id=2] The metadata log appears to be empty. Appending 3 bootstrap record(s) at metadata.version 3.5-IV2 from the binary bootstrap metadata file: /bitnami/kafka/data/bootstrap.checkpoint. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:21,216] INFO [QuorumController id=2] Setting metadata version to 3.5-IV2 (org.apache.kafka.controller.FeatureControlManager)
[2024-04-17 09:42:21,218] INFO [QuorumController id=2] Created new SCRAM credential for user with mechanism SCRAM_SHA_256. (org.apache.kafka.controller.ScramControlManager)
[2024-04-17 09:42:21,218] INFO [QuorumController id=2] Created new SCRAM credential for user with mechanism SCRAM_SHA_512. (org.apache.kafka.controller.ScramControlManager)
[2024-04-17 09:42:21,219] INFO [QuorumController id=2] Transitioning ZK migration state from NONE to NONE (org.apache.kafka.controller.FeatureControlManager)
[2024-04-17 09:42:21,245] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:21,276] INFO [QuorumController id=2] Registered new broker: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=sKyXoe60S46TYQ7MH3s1KA, brokerEpoch=4, endPoints=[BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9092, securityProtocol=0), BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9094, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-17 09:42:21,293] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:21,393] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:21,453] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:21,496] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... same MetadataLoader message repeated every ~100 ms until 09:42:25,666 ...]
[2024-04-17 09:42:25,757] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Disconnecting from node 2 due to request timeout. (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:25,759] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Cancelled in-flight BROKER_REGISTRATION request with correlation id 12 due to node 2 being disconnected (elapsed time since creation: 4512ms, elapsed time since send: 4506ms, request timeout: 4500ms) (org.apache.kafka.clients.NetworkClient)
[2024-04-17 09:42:25,759] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2024-04-17 09:42:25,760] INFO [BrokerLifecycleManager id=2] Unable to register the broker because the RPC got timed out before it could be sent. (kafka.server.BrokerLifecycleManager)
[2024-04-17 09:42:25,773] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:25,873] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:25,974] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[2024-04-17 09:42:25,984] INFO [QuorumController id=2] Re-registered broker incarnation: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=sKyXoe60S46TYQ7MH3s1KA, brokerEpoch=15, endPoints=[BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9092, securityProtocol=0), BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9094, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
[2024-04-17 09:42:26,075] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... same MetadataLoader message repeated every ~100 ms until 09:42:27,307 ...]
[2024-04-17 09:42:27,322] INFO [RaftManager id=2] Completed transition to Unattached(epoch=29, voters=[0, 1, 2], electionTimeoutMs=1991) from Leader(localId=2, epoch=26, epochStartOffset=0, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), 2=ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=19, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=1947)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) (org.apache.kafka.raft.QuorumState)
[2024-04-17 09:42:27,323] WARN [QuorumController id=2] Renouncing the leadership due to a metadata log event. We were the leader at epoch 26, but in the new epoch 29, the leader is (none). Reverting to last committed offset -1. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,326] INFO [QuorumController id=2] failAll(NotControllerException): failing completeActivation[26](1380590631). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,326] INFO [QuorumController id=2] completeActivation[26]: failed with NotControllerException in 6111653 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,326] INFO [QuorumController id=2] failAll(NotControllerException): failing registerBroker(150110157). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,327] INFO [QuorumController id=2] registerBroker: failed with NotControllerException in 6069106 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,328] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(413573908). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,328] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 5611149 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,328] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(458151739). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,328] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 5109131 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,328] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(14278536). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,328] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 4605642 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,328] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(1962204842). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 4098216 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(1499950371). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 3596061 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(880647739). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 3095941 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(2056663715). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 2592044 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(675656405). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 2084242 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(1244257419). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 1580271 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing registerBroker(318574515). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] registerBroker: failed with NotControllerException in 1345008 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(1496659045). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 1080369 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(1550055052). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 578665 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] failAll(NotControllerException): failing writeNoOpRecord(1479189519). (org.apache.kafka.controller.ControllerPurgatory)
[2024-04-17 09:42:27,329] INFO [QuorumController id=2] writeNoOpRecord: failed with NotControllerException in 75764 us. Reason: No controller appears to be active. (org.apache.kafka.controller.QuorumController)
(org.apache.kafka.controller.QuorumController) 2024-04-17T09:42:27.333253689Z [2024-04-17 09:42:27,330] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='wwF0Ks0d226Do1GaY9kKkw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=29, candidateId=0, lastOffsetEpoch=0, lastOffset=0)])]) with epoch 29 is rejected (org.apache.kafka.raft.KafkaRaftClient) 2024-04-17T09:42:27.340431777Z [2024-04-17 09:42:27,339] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:27.411540129Z [2024-04-17 09:42:27,409] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:27.514527059Z [2024-04-17 09:42:27,510] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:27.613495434Z [2024-04-17 09:42:27,613] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:27.717174750Z [2024-04-17 09:42:27,713] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:27.821091004Z [2024-04-17 09:42:27,815] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:27.919875914Z [2024-04-17 09:42:27,918] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:28.022495171Z [2024-04-17 09:42:28,019] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:28.125766614Z [2024-04-17 09:42:28,119] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:28.227114011Z [2024-04-17 09:42:28,221] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:28.325499609Z [2024-04-17 09:42:28,322] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:28.424149107Z [2024-04-17 09:42:28,423] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader) 2024-04-17T09:42:28.525390329Z [2024-04-17 09:42:28,525] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. 
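The renouncement above means controller 2 lost raft leadership of __cluster_metadata at epoch 26, so every write queued in its purgatory was failed with NotControllerException and left for the caller to retry once a new leader is elected. To see who currently leads the metadata quorum, a check along the following lines should work (illustrative only: it assumes the Bitnami script path /opt/bitnami/kafka/bin plus the osm namespace and pod names implied by the DNS names in this log):

    kubectl exec -n osm kafka-controller-2 -- \
      /opt/bitnami/kafka/bin/kafka-metadata-quorum.sh \
      --bootstrap-server localhost:9092 describe --status
    # Prints the current leader id, leader epoch and high watermark of the
    # __cluster_metadata quorum; the epoch should line up with the ones logged here.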
2024-04-17T09:42:28.584987471Z [2024-04-17 09:42:28,574] INFO [RaftManager id=2] Completed transition to Unattached(epoch=30, voters=[0, 1, 2], electionTimeoutMs=718) from Unattached(epoch=29, voters=[0, 1, 2], electionTimeoutMs=1991) (org.apache.kafka.raft.QuorumState)
2024-04-17T09:42:28.585007690Z [2024-04-17 09:42:28,574] INFO [RaftManager id=2] Vote request VoteRequestData(clusterId='wwF0Ks0d226Do1GaY9kKkw', topics=[TopicData(topicName='__cluster_metadata', partitions=[PartitionData(partitionIndex=0, candidateEpoch=30, candidateId=0, lastOffsetEpoch=0, lastOffset=0)])]) with epoch 30 is rejected (org.apache.kafka.raft.KafkaRaftClient)
2024-04-17T09:42:28.585011976Z [2024-04-17 09:42:28,575] INFO [QuorumController id=2] In the new epoch 30, the leader is (none). (org.apache.kafka.controller.QuorumController)
2024-04-17T09:42:28.625999690Z [2024-04-17 09:42:28,625] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
[... the same MetadataLoader "still catching up" message repeated roughly every 100 ms through 09:42:29,236 ...]
2024-04-17T09:42:29.291204835Z [2024-04-17 09:42:29,289] INFO [RaftManager id=2] Completed transition to CandidateState(localId=2, epoch=31, retries=1, voteStates={0=UNRECORDED, 1=UNRECORDED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1984) from Unattached(epoch=30, voters=[0, 1, 2], electionTimeoutMs=718) (org.apache.kafka.raft.QuorumState)
2024-04-17T09:42:29.291222638Z [2024-04-17 09:42:29,290] INFO [QuorumController id=2] In the new epoch 31, the leader is (none). (org.apache.kafka.controller.QuorumController)
2024-04-17T09:42:29.294016515Z [2024-04-17 09:42:29,293] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:29.294093472Z [2024-04-17 09:42:29,294] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:29.345319329Z [2024-04-17 09:42:29,345] INFO [MetadataLoader id=2] initializeNewPublishers: the loader is still catching up because we still don't know the high water mark yet. (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:29.360525118Z [2024-04-17 09:42:29,360] INFO [RaftManager id=2] Completed transition to Leader(localId=2, epoch=31, epochStartOffset=19, highWatermark=Optional.empty, voterStates={0=ReplicaState(nodeId=0, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 1=ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false), 2=ReplicaState(nodeId=2, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true)}) from CandidateState(localId=2, epoch=31, retries=1, voteStates={0=GRANTED, 1=UNRECORDED, 2=GRANTED}, highWatermark=Optional.empty, electionTimeoutMs=1984) (org.apache.kafka.raft.QuorumState)
2024-04-17T09:42:29.384187504Z [2024-04-17 09:42:29,382] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread)
2024-04-17T09:42:29.390046238Z [2024-04-17 09:42:29,389] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:29.390075216Z [2024-04-17 09:42:29,389] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:29.421246177Z [2024-04-17 09:42:29,418] ERROR [QuorumController id=2] registerBroker: unable to start processing because of NotControllerException. Reason: The active controller appears to be node 2. (org.apache.kafka.controller.QuorumController)
2024-04-17T09:42:29.423338467Z [2024-04-17 09:42:29,422] INFO [BrokerToControllerChannelManager id=2 name=heartbeat] Client requested disconnect from node 2 (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:29.423351466Z [2024-04-17 09:42:29,423] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread)
[... 09:42:29,447-09:42:29,764: the broker-to-controller heartbeat channel recorded controller 2 and requested disconnects several more times, four further registerBroker attempts were rejected with NotControllerException ("The active controller appears to be node 2"), node 1 remained unreachable, and the MetadataLoader kept reporting that it was still catching up ...]
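Throughout this re-election only nodes 0 and 2 vote; the repeated warnings show that kafka-controller-1 (10.244.0.27:9093) is unreachable, which is why epoch 31 is won with votes from 0 and 2 alone. A sketch of the usual Kubernetes-side checks, assuming the osm namespace and pod names implied by the DNS names above, and that getent is available in the image:

    kubectl get pod -n osm kafka-controller-1 -o wide
    kubectl logs -n osm kafka-controller-1 --tail=50
    kubectl exec -n osm kafka-controller-2 -- \
      getent hosts kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local
    # With a headless service the per-pod DNS record typically only resolves once the
    # pod is up, so a lookup failure here usually just means controller 1 is still
    # starting; a quorum of 2 out of 3 voters is enough for the cluster to make progress.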
2024-04-17T09:42:29.781144766Z [2024-04-17 09:42:29,780] INFO [broker-2-to-controller-heartbeat-channel-manager]: Recorded new controller, from now on will use node kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread)
2024-04-17T09:42:29.783065397Z [2024-04-17 09:42:29,782] INFO [RaftManager id=2] High watermark set to LogOffsetMetadata(offset=20, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=2053)]) for the first time for epoch 31 based on indexOfHw 1 and voters [ReplicaState(nodeId=0, endOffset=Optional[LogOffsetMetadata(offset=20, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=2053)])], lastFetchTimestamp=1713346949778, lastCaughtUpTimestamp=1713346949778, hasAcknowledgedLeader=true), ReplicaState(nodeId=2, endOffset=Optional[LogOffsetMetadata(offset=20, metadata=Optional[(segmentBaseOffset=0,relativePositionInSegment=2053)])], lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=true), ReplicaState(nodeId=1, endOffset=Optional.empty, lastFetchTimestamp=-1, lastCaughtUpTimestamp=-1, hasAcknowledgedLeader=false)] (org.apache.kafka.raft.LeaderState)
2024-04-17T09:42:29.801429425Z [2024-04-17 09:42:29,797] INFO [QuorumController id=2] Setting metadata version to 3.5-IV2 (org.apache.kafka.controller.FeatureControlManager)
2024-04-17T09:42:29.801449133Z [2024-04-17 09:42:29,797] INFO [QuorumController id=2] Created new SCRAM credential for user with mechanism SCRAM_SHA_256. (org.apache.kafka.controller.ScramControlManager)
2024-04-17T09:42:29.801452453Z [2024-04-17 09:42:29,798] INFO [QuorumController id=2] Created new SCRAM credential for user with mechanism SCRAM_SHA_512. (org.apache.kafka.controller.ScramControlManager)
2024-04-17T09:42:29.801456261Z [2024-04-17 09:42:29,798] INFO [QuorumController id=2] Transitioning ZK migration state from NONE to NONE (org.apache.kafka.controller.FeatureControlManager)
2024-04-17T09:42:29.812807220Z [2024-04-17 09:42:29,812] INFO [QuorumController id=2] Registered new broker: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=sKyXoe60S46TYQ7MH3s1KA, brokerEpoch=4, endPoints=[BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9092, securityProtocol=0), BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9094, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
2024-04-17T09:42:29.815895835Z [2024-04-17 09:42:29,815] INFO [QuorumController id=2] Re-registered broker incarnation: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=sKyXoe60S46TYQ7MH3s1KA, brokerEpoch=15, endPoints=[BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9092, securityProtocol=0), BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9094, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
2024-04-17T09:42:29.818809972Z [2024-04-17 09:42:29,818] INFO [QuorumController id=2] Becoming the active controller at epoch 31, committed offset 18, committed epoch 26 (org.apache.kafka.controller.QuorumController)
2024-04-17T09:42:29.818963796Z [2024-04-17 09:42:29,818] INFO [QuorumController id=2] Loaded ZK migration state of NONE (org.apache.kafka.controller.QuorumController)
2024-04-17T09:42:29.875842027Z [2024-04-17 09:42:29,875] INFO [QuorumController id=2] Registered new broker: RegisterBrokerRecord(brokerId=0, isMigratingZkBroker=false, incarnationId=SB3JjNSLRdW5i11FXk5k1Q, brokerEpoch=19, endPoints=[BrokerEndpoint(name='CLIENT', host='kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local', port=9092, securityProtocol=0), BrokerEndpoint(name='INTERNAL', host='kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local', port=9094, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
2024-04-17T09:42:29.912683678Z [2024-04-17 09:42:29,912] INFO [MetadataLoader id=2] handleCommit: The loader is still catching up because we have loaded up to offset 3, but the high water mark is 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:29.925105392Z [2024-04-17 09:42:29,912] INFO [MetadataLoader id=2] initializeNewPublishers: The loader is still catching up because we have loaded up to offset 3, but the high water mark is 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:29.925138937Z [2024-04-17 09:42:29,924] INFO [MetadataLoader id=2] handleCommit: The loader is still catching up because we have loaded up to offset 18, but the high water mark is 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:29.945295442Z [2024-04-17 09:42:29,945] INFO [QuorumController id=2] Re-registered broker incarnation: RegisterBrokerRecord(brokerId=2, isMigratingZkBroker=false, incarnationId=sKyXoe60S46TYQ7MH3s1KA, brokerEpoch=21, endPoints=[BrokerEndpoint(name='CLIENT', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9092, securityProtocol=0), BrokerEndpoint(name='INTERNAL', host='kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local', port=9094, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager)
2024-04-17T09:42:30.024299121Z [2024-04-17 09:42:30,024] INFO [MetadataLoader id=2] initializeNewPublishers: The loader is still catching up because we have loaded up to offset 18, but the high water mark is 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.121297417Z [2024-04-17 09:42:30,121] INFO [MetadataLoader id=2] handleCommit: The loader finished catching up to the current high water mark of 21 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.133357956Z [2024-04-17 09:42:30,133] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing SnapshotGenerator with a snapshot at offset 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.133827293Z [2024-04-17 09:42:30,133] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing DynamicConfigPublisher controller id=2 with a snapshot at offset 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.134880589Z [2024-04-17 09:42:30,134] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing DynamicClientQuotaPublisher controller id=2 with a snapshot at offset 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.135210572Z [2024-04-17 09:42:30,135] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing ScramPublisher controller id=2 with a snapshot at offset 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.141019327Z [2024-04-17 09:42:30,135] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing ControllerMetadataMetricsPublisher with a snapshot at offset 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.141049113Z [2024-04-17 09:42:30,136] INFO [MetadataLoader id=2] InitializeNewPublishers: initializing BrokerMetadataPublisher with a snapshot at offset 20 (org.apache.kafka.image.loader.MetadataLoader)
2024-04-17T09:42:30.141053167Z [2024-04-17 09:42:30,136] INFO [BrokerMetadataPublisher id=2] Publishing initial metadata at offset OffsetAndEpoch(offset=20, epoch=31) with metadata.version 3.5-IV2. (kafka.server.metadata.BrokerMetadataPublisher)
2024-04-17T09:42:30.141904072Z [2024-04-17 09:42:30,141] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
2024-04-17T09:42:30.189940094Z [2024-04-17 09:42:30,189] INFO No logs found to be loaded in /bitnami/kafka/data (kafka.log.LogManager)
2024-04-17T09:42:30.197469498Z [2024-04-17 09:42:30,197] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:30.199964743Z [2024-04-17 09:42:30,199] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:30.255668473Z [2024-04-17 09:42:30,255] INFO [BrokerLifecycleManager id=2] Successfully registered broker 2 with broker epoch 21 (kafka.server.BrokerLifecycleManager)
2024-04-17T09:42:30.465919266Z [2024-04-17 09:42:30,465] INFO Loaded 0 logs in 289ms (kafka.log.LogManager)
2024-04-17T09:42:30.473179069Z [2024-04-17 09:42:30,473] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
2024-04-17T09:42:30.482060820Z [2024-04-17 09:42:30,481] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
2024-04-17T09:42:30.674608790Z [2024-04-17 09:42:30,667] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:30.674627978Z [2024-04-17 09:42:30,674] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:30.752985646Z [2024-04-17 09:42:30,749] INFO Starting the log cleaner (kafka.log.LogCleaner)
2024-04-17T09:42:31.147006066Z [2024-04-17 09:42:31,146] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:31.147178552Z [2024-04-17 09:42:31,147] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:31.685018856Z [2024-04-17 09:42:31,682] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
2024-04-17T09:42:31.693285930Z [2024-04-17 09:42:31,693] INFO [GroupCoordinator 2]: Starting up. (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:42:31.694703543Z [2024-04-17 09:42:31,694] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
2024-04-17T09:42:31.695120047Z [2024-04-17 09:42:31,694] INFO [GroupCoordinator 2]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:42:31.695738165Z [2024-04-17 09:42:31,695] INFO [TransactionCoordinator id=2] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
2024-04-17T09:42:31.701179230Z [2024-04-17 09:42:31,699] INFO [TransactionCoordinator id=2] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
2024-04-17T09:42:31.701194315Z [2024-04-17 09:42:31,700] INFO [BrokerMetadataPublisher id=2] Updating metadata.version to 11 at offset OffsetAndEpoch(offset=20, epoch=31). (kafka.server.metadata.BrokerMetadataPublisher)
2024-04-17T09:42:31.701199348Z [2024-04-17 09:42:31,700] INFO [TxnMarkerSenderThread-2]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
2024-04-17T09:42:31.903673826Z [2024-04-17 09:42:31,903] INFO [QuorumController id=2] The request from broker 0 to unfence has been granted because it has caught up with the offset of its register broker record 19. (org.apache.kafka.controller.BrokerHeartbeatManager)
2024-04-17T09:42:31.991645697Z [2024-04-17 09:42:31,989] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:31.991665686Z [2024-04-17 09:42:31,989] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available.
(org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:32.183831239Z [2024-04-17 09:42:32,183] INFO [BrokerLifecycleManager id=2] The broker has caught up. Transitioning from STARTING to RECOVERY. (kafka.server.BrokerLifecycleManager) 2024-04-17T09:42:32.184129412Z [2024-04-17 09:42:32,183] INFO [BrokerServer id=2] Finished waiting for the controller to acknowledge that we are caught up (kafka.server.BrokerServer) 2024-04-17T09:42:32.184321518Z [2024-04-17 09:42:32,184] INFO [BrokerServer id=2] Waiting for the initial broker metadata update to be published (kafka.server.BrokerServer) 2024-04-17T09:42:32.184485653Z [2024-04-17 09:42:32,184] INFO [BrokerServer id=2] Finished waiting for the initial broker metadata update to be published (kafka.server.BrokerServer) 2024-04-17T09:42:32.187113230Z [2024-04-17 09:42:32,186] INFO KafkaConfig values: 2024-04-17T09:42:32.187125950Z advertised.listeners = CLIENT://kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9092,INTERNAL://kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9094 2024-04-17T09:42:32.187130137Z alter.config.policy.class.name = null 2024-04-17T09:42:32.187133215Z alter.log.dirs.replication.quota.window.num = 11 2024-04-17T09:42:32.187136220Z alter.log.dirs.replication.quota.window.size.seconds = 1 2024-04-17T09:42:32.187139671Z authorizer.class.name = 2024-04-17T09:42:32.187142611Z auto.create.topics.enable = true 2024-04-17T09:42:32.187145536Z auto.include.jmx.reporter = true 2024-04-17T09:42:32.187148579Z auto.leader.rebalance.enable = true 2024-04-17T09:42:32.187152779Z background.threads = 10 2024-04-17T09:42:32.187155795Z broker.heartbeat.interval.ms = 2000 2024-04-17T09:42:32.187158720Z broker.id = 2 2024-04-17T09:42:32.187173053Z broker.id.generation.enable = true 2024-04-17T09:42:32.187176385Z broker.rack = null 2024-04-17T09:42:32.187179337Z broker.session.timeout.ms = 9000 2024-04-17T09:42:32.187182261Z client.quota.callback.class = null 2024-04-17T09:42:32.187185327Z compression.type = producer 2024-04-17T09:42:32.187188514Z connection.failed.authentication.delay.ms = 100 2024-04-17T09:42:32.187191420Z connections.max.idle.ms = 600000 2024-04-17T09:42:32.187194426Z connections.max.reauth.ms = 0 2024-04-17T09:42:32.187197352Z control.plane.listener.name = null 2024-04-17T09:42:32.187200248Z controlled.shutdown.enable = true 2024-04-17T09:42:32.187203298Z controlled.shutdown.max.retries = 3 2024-04-17T09:42:32.187206290Z controlled.shutdown.retry.backoff.ms = 5000 2024-04-17T09:42:32.187209274Z controller.listener.names = CONTROLLER 2024-04-17T09:42:32.187212254Z controller.quorum.append.linger.ms = 25 2024-04-17T09:42:32.187215152Z controller.quorum.election.backoff.max.ms = 1000 2024-04-17T09:42:32.187218159Z controller.quorum.election.timeout.ms = 1000 2024-04-17T09:42:32.187221022Z controller.quorum.fetch.timeout.ms = 2000 2024-04-17T09:42:32.187223835Z controller.quorum.request.timeout.ms = 2000 2024-04-17T09:42:32.187234003Z controller.quorum.retry.backoff.ms = 20 2024-04-17T09:42:32.187248941Z controller.quorum.voters = [0@kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9093, 1@kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9093, 2@kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9093] 2024-04-17T09:42:32.187252323Z controller.quota.window.num = 11 2024-04-17T09:42:32.187255259Z controller.quota.window.size.seconds = 1 2024-04-17T09:42:32.187258066Z controller.socket.timeout.ms = 30000 2024-04-17T09:42:32.187260997Z 
create.topic.policy.class.name = null 2024-04-17T09:42:32.187263878Z default.replication.factor = 1 2024-04-17T09:42:32.187266788Z delegation.token.expiry.check.interval.ms = 3600000 2024-04-17T09:42:32.187269660Z delegation.token.expiry.time.ms = 86400000 2024-04-17T09:42:32.187272589Z delegation.token.master.key = null 2024-04-17T09:42:32.187275483Z delegation.token.max.lifetime.ms = 604800000 2024-04-17T09:42:32.187278388Z delegation.token.secret.key = null 2024-04-17T09:42:32.187281331Z delete.records.purgatory.purge.interval.requests = 1 2024-04-17T09:42:32.187284308Z delete.topic.enable = true 2024-04-17T09:42:32.187287189Z early.start.listeners = null 2024-04-17T09:42:32.187290120Z fetch.max.bytes = 57671680 2024-04-17T09:42:32.187296990Z fetch.purgatory.purge.interval.requests = 1000 2024-04-17T09:42:32.187300121Z group.consumer.assignors = [] 2024-04-17T09:42:32.187303061Z group.consumer.heartbeat.interval.ms = 5000 2024-04-17T09:42:32.187305964Z group.consumer.max.heartbeat.interval.ms = 15000 2024-04-17T09:42:32.187309051Z group.consumer.max.session.timeout.ms = 60000 2024-04-17T09:42:32.187312008Z group.consumer.max.size = 2147483647 2024-04-17T09:42:32.187325340Z group.consumer.min.heartbeat.interval.ms = 5000 2024-04-17T09:42:32.187328676Z group.consumer.min.session.timeout.ms = 45000 2024-04-17T09:42:32.187331613Z group.consumer.session.timeout.ms = 45000 2024-04-17T09:42:32.187334521Z group.coordinator.new.enable = false 2024-04-17T09:42:32.187337485Z group.coordinator.threads = 1 2024-04-17T09:42:32.187340479Z group.initial.rebalance.delay.ms = 3000 2024-04-17T09:42:32.187343388Z group.max.session.timeout.ms = 1800000 2024-04-17T09:42:32.187346416Z group.max.size = 2147483647 2024-04-17T09:42:32.187349375Z group.min.session.timeout.ms = 6000 2024-04-17T09:42:32.187352302Z initial.broker.registration.timeout.ms = 60000 2024-04-17T09:42:32.187355212Z inter.broker.listener.name = INTERNAL 2024-04-17T09:42:32.187358104Z inter.broker.protocol.version = 3.5-IV2 2024-04-17T09:42:32.187361000Z kafka.metrics.polling.interval.secs = 10 2024-04-17T09:42:32.187363976Z kafka.metrics.reporters = [] 2024-04-17T09:42:32.187366841Z leader.imbalance.check.interval.seconds = 300 2024-04-17T09:42:32.187369755Z leader.imbalance.per.broker.percentage = 10 2024-04-17T09:42:32.187373160Z listener.security.protocol.map = CLIENT:PLAINTEXT,INTERNAL:SASL_PLAINTEXT,CONTROLLER:SASL_PLAINTEXT 2024-04-17T09:42:32.187376129Z listeners = CLIENT://:9092,INTERNAL://:9094,CONTROLLER://:9093 2024-04-17T09:42:32.187379082Z log.cleaner.backoff.ms = 15000 2024-04-17T09:42:32.187382030Z log.cleaner.dedupe.buffer.size = 134217728 2024-04-17T09:42:32.187384926Z log.cleaner.delete.retention.ms = 86400000 2024-04-17T09:42:32.187387838Z log.cleaner.enable = true 2024-04-17T09:42:32.187400561Z log.cleaner.io.buffer.load.factor = 0.9 2024-04-17T09:42:32.187403900Z log.cleaner.io.buffer.size = 524288 2024-04-17T09:42:32.187406852Z log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 2024-04-17T09:42:32.187409780Z log.cleaner.max.compaction.lag.ms = 9223372036854775807 2024-04-17T09:42:32.187416790Z log.cleaner.min.cleanable.ratio = 0.5 2024-04-17T09:42:32.187419729Z log.cleaner.min.compaction.lag.ms = 0 2024-04-17T09:42:32.187422658Z log.cleaner.threads = 1 2024-04-17T09:42:32.187425562Z log.cleanup.policy = [delete] 2024-04-17T09:42:32.187428494Z log.dir = /bitnami/kafka/data 2024-04-17T09:42:32.187431460Z log.dirs = null 2024-04-17T09:42:32.187434460Z log.flush.interval.messages = 9223372036854775807 
2024-04-17T09:42:32.187437534Z log.flush.interval.ms = null 2024-04-17T09:42:32.187440429Z log.flush.offset.checkpoint.interval.ms = 60000 2024-04-17T09:42:32.187443348Z log.flush.scheduler.interval.ms = 9223372036854775807 2024-04-17T09:42:32.187446264Z log.flush.start.offset.checkpoint.interval.ms = 60000 2024-04-17T09:42:32.187449150Z log.index.interval.bytes = 4096 2024-04-17T09:42:32.187452008Z log.index.size.max.bytes = 10485760 2024-04-17T09:42:32.187454850Z log.message.downconversion.enable = true 2024-04-17T09:42:32.187459875Z log.message.format.version = 3.0-IV1 2024-04-17T09:42:32.187462993Z log.message.timestamp.difference.max.ms = 9223372036854775807 2024-04-17T09:42:32.187465834Z log.message.timestamp.type = CreateTime 2024-04-17T09:42:32.187479192Z log.preallocate = false 2024-04-17T09:42:32.187482129Z log.retention.bytes = -1 2024-04-17T09:42:32.187485018Z log.retention.check.interval.ms = 300000 2024-04-17T09:42:32.187487923Z log.retention.hours = 168 2024-04-17T09:42:32.187490778Z log.retention.minutes = null 2024-04-17T09:42:32.187493687Z log.retention.ms = null 2024-04-17T09:42:32.187496556Z log.roll.hours = 168 2024-04-17T09:42:32.187499464Z log.roll.jitter.hours = 0 2024-04-17T09:42:32.187502379Z log.roll.jitter.ms = null 2024-04-17T09:42:32.187505270Z log.roll.ms = null 2024-04-17T09:42:32.187508138Z log.segment.bytes = 1073741824 2024-04-17T09:42:32.187511019Z log.segment.delete.delay.ms = 60000 2024-04-17T09:42:32.187513901Z max.connection.creation.rate = 2147483647 2024-04-17T09:42:32.187516769Z max.connections = 2147483647 2024-04-17T09:42:32.187519640Z max.connections.per.ip = 2147483647 2024-04-17T09:42:32.187522523Z max.connections.per.ip.overrides = 2024-04-17T09:42:32.187525511Z max.incremental.fetch.session.cache.slots = 1000 2024-04-17T09:42:32.187528367Z message.max.bytes = 1048588 2024-04-17T09:42:32.187531321Z metadata.log.dir = null 2024-04-17T09:42:32.187534937Z metadata.log.max.record.bytes.between.snapshots = 20971520 2024-04-17T09:42:32.187537835Z metadata.log.max.snapshot.interval.ms = 3600000 2024-04-17T09:42:32.187540683Z metadata.log.segment.bytes = 1073741824 2024-04-17T09:42:32.187553103Z metadata.log.segment.min.bytes = 8388608 2024-04-17T09:42:32.187556182Z metadata.log.segment.ms = 604800000 2024-04-17T09:42:32.187559022Z metadata.max.idle.interval.ms = 500 2024-04-17T09:42:32.187561886Z metadata.max.retention.bytes = 104857600 2024-04-17T09:42:32.187564679Z metadata.max.retention.ms = 604800000 2024-04-17T09:42:32.187567547Z metric.reporters = [] 2024-04-17T09:42:32.187570487Z metrics.num.samples = 2 2024-04-17T09:42:32.187573324Z metrics.recording.level = INFO 2024-04-17T09:42:32.187576123Z metrics.sample.window.ms = 30000 2024-04-17T09:42:32.187578940Z min.insync.replicas = 1 2024-04-17T09:42:32.187581868Z node.id = 2 2024-04-17T09:42:32.187584735Z num.io.threads = 8 2024-04-17T09:42:32.187587574Z num.network.threads = 3 2024-04-17T09:42:32.187590371Z num.partitions = 1 2024-04-17T09:42:32.187593211Z num.recovery.threads.per.data.dir = 1 2024-04-17T09:42:32.187596114Z num.replica.alter.log.dirs.threads = null 2024-04-17T09:42:32.187602541Z num.replica.fetchers = 1 2024-04-17T09:42:32.187605455Z offset.metadata.max.bytes = 4096 2024-04-17T09:42:32.187608322Z offsets.commit.required.acks = -1 2024-04-17T09:42:32.187611153Z offsets.commit.timeout.ms = 5000 2024-04-17T09:42:32.187614018Z offsets.load.buffer.size = 5242880 2024-04-17T09:42:32.187616856Z offsets.retention.check.interval.ms = 600000 2024-04-17T09:42:32.187629292Z 
offsets.retention.minutes = 10080 2024-04-17T09:42:32.187632266Z offsets.topic.compression.codec = 0 2024-04-17T09:42:32.187635088Z offsets.topic.num.partitions = 50 2024-04-17T09:42:32.187637918Z offsets.topic.replication.factor = 3 2024-04-17T09:42:32.187640778Z offsets.topic.segment.bytes = 104857600 2024-04-17T09:42:32.187653037Z password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 2024-04-17T09:42:32.187656306Z password.encoder.iterations = 4096 2024-04-17T09:42:32.187659209Z password.encoder.key.length = 128 2024-04-17T09:42:32.187662074Z password.encoder.keyfactory.algorithm = null 2024-04-17T09:42:32.187664895Z password.encoder.old.secret = null 2024-04-17T09:42:32.187667779Z password.encoder.secret = null 2024-04-17T09:42:32.187670728Z principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 2024-04-17T09:42:32.187673615Z process.roles = [controller, broker] 2024-04-17T09:42:32.187676442Z producer.id.expiration.check.interval.ms = 600000 2024-04-17T09:42:32.187679342Z producer.id.expiration.ms = 86400000 2024-04-17T09:42:32.187682159Z producer.purgatory.purge.interval.requests = 1000 2024-04-17T09:42:32.187685067Z queued.max.request.bytes = -1 2024-04-17T09:42:32.187687915Z queued.max.requests = 500 2024-04-17T09:42:32.187690837Z quota.window.num = 11 2024-04-17T09:42:32.187703752Z quota.window.size.seconds = 1 2024-04-17T09:42:32.187707035Z remote.log.index.file.cache.total.size.bytes = 1073741824 2024-04-17T09:42:32.187710171Z remote.log.manager.task.interval.ms = 30000 2024-04-17T09:42:32.187713073Z remote.log.manager.task.retry.backoff.max.ms = 30000 2024-04-17T09:42:32.187715943Z remote.log.manager.task.retry.backoff.ms = 500 2024-04-17T09:42:32.187718835Z remote.log.manager.task.retry.jitter = 0.2 2024-04-17T09:42:32.187722083Z remote.log.manager.thread.pool.size = 10 2024-04-17T09:42:32.187725002Z remote.log.metadata.manager.class.name = null 2024-04-17T09:42:32.187727953Z remote.log.metadata.manager.class.path = null 2024-04-17T09:42:32.187730874Z remote.log.metadata.manager.impl.prefix = null 2024-04-17T09:42:32.187733773Z remote.log.metadata.manager.listener.name = null 2024-04-17T09:42:32.187736657Z remote.log.reader.max.pending.tasks = 100 2024-04-17T09:42:32.187739526Z remote.log.reader.threads = 10 2024-04-17T09:42:32.187742380Z remote.log.storage.manager.class.name = null 2024-04-17T09:42:32.187745289Z remote.log.storage.manager.class.path = null 2024-04-17T09:42:32.187748207Z remote.log.storage.manager.impl.prefix = null 2024-04-17T09:42:32.187751041Z remote.log.storage.system.enable = false 2024-04-17T09:42:32.187753914Z replica.fetch.backoff.ms = 1000 2024-04-17T09:42:32.187756772Z replica.fetch.max.bytes = 1048576 2024-04-17T09:42:32.187759652Z replica.fetch.min.bytes = 1 2024-04-17T09:42:32.187762503Z replica.fetch.response.max.bytes = 10485760 2024-04-17T09:42:32.187765385Z replica.fetch.wait.max.ms = 500 2024-04-17T09:42:32.187768284Z replica.high.watermark.checkpoint.interval.ms = 5000 2024-04-17T09:42:32.187780581Z replica.lag.time.max.ms = 30000 2024-04-17T09:42:32.187783644Z replica.selector.class = null 2024-04-17T09:42:32.187790265Z replica.socket.receive.buffer.bytes = 65536 2024-04-17T09:42:32.187793237Z replica.socket.timeout.ms = 30000 2024-04-17T09:42:32.187796094Z replication.quota.window.num = 11 2024-04-17T09:42:32.187798929Z replication.quota.window.size.seconds = 1 2024-04-17T09:42:32.187801780Z request.timeout.ms = 30000 2024-04-17T09:42:32.187804660Z reserved.broker.max.id = 1000 
2024-04-17T09:42:32.187807501Z sasl.client.callback.handler.class = null 2024-04-17T09:42:32.187810410Z sasl.enabled.mechanisms = [PLAIN, SCRAM-SHA-256, SCRAM-SHA-512] 2024-04-17T09:42:32.187813352Z sasl.jaas.config = null 2024-04-17T09:42:32.187816192Z sasl.kerberos.kinit.cmd = /usr/bin/kinit 2024-04-17T09:42:32.187819043Z sasl.kerberos.min.time.before.relogin = 60000 2024-04-17T09:42:32.187821957Z sasl.kerberos.principal.to.local.rules = [DEFAULT] 2024-04-17T09:42:32.187824796Z sasl.kerberos.service.name = null 2024-04-17T09:42:32.187827637Z sasl.kerberos.ticket.renew.jitter = 0.05 2024-04-17T09:42:32.187830487Z sasl.kerberos.ticket.renew.window.factor = 0.8 2024-04-17T09:42:32.187835116Z sasl.login.callback.handler.class = null 2024-04-17T09:42:32.187838189Z sasl.login.class = null 2024-04-17T09:42:32.187841041Z sasl.login.connect.timeout.ms = null 2024-04-17T09:42:32.187843974Z sasl.login.read.timeout.ms = null 2024-04-17T09:42:32.187856423Z sasl.login.refresh.buffer.seconds = 300 2024-04-17T09:42:32.187859424Z sasl.login.refresh.min.period.seconds = 60 2024-04-17T09:42:32.187862272Z sasl.login.refresh.window.factor = 0.8 2024-04-17T09:42:32.187865139Z sasl.login.refresh.window.jitter = 0.05 2024-04-17T09:42:32.187868014Z sasl.login.retry.backoff.max.ms = 10000 2024-04-17T09:42:32.187870828Z sasl.login.retry.backoff.ms = 100 2024-04-17T09:42:32.187873679Z sasl.mechanism.controller.protocol = PLAIN 2024-04-17T09:42:32.187876532Z sasl.mechanism.inter.broker.protocol = PLAIN 2024-04-17T09:42:32.187879350Z sasl.oauthbearer.clock.skew.seconds = 30 2024-04-17T09:42:32.187882178Z sasl.oauthbearer.expected.audience = null 2024-04-17T09:42:32.187884984Z sasl.oauthbearer.expected.issuer = null 2024-04-17T09:42:32.187887825Z sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 2024-04-17T09:42:32.187890831Z sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 2024-04-17T09:42:32.187893698Z sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 2024-04-17T09:42:32.187896503Z sasl.oauthbearer.jwks.endpoint.url = null 2024-04-17T09:42:32.187899333Z sasl.oauthbearer.scope.claim.name = scope 2024-04-17T09:42:32.187902139Z sasl.oauthbearer.sub.claim.name = sub 2024-04-17T09:42:32.187904955Z sasl.oauthbearer.token.endpoint.url = null 2024-04-17T09:42:32.187907771Z sasl.server.callback.handler.class = null 2024-04-17T09:42:32.187910575Z sasl.server.max.receive.size = 524288 2024-04-17T09:42:32.187913415Z security.inter.broker.protocol = PLAINTEXT 2024-04-17T09:42:32.187916242Z security.providers = null 2024-04-17T09:42:32.187919109Z server.max.startup.time.ms = 9223372036854775807 2024-04-17T09:42:32.187931223Z socket.connection.setup.timeout.max.ms = 30000 2024-04-17T09:42:32.187934312Z socket.connection.setup.timeout.ms = 10000 2024-04-17T09:42:32.187937209Z socket.listen.backlog.size = 50 2024-04-17T09:42:32.187940030Z socket.receive.buffer.bytes = 102400 2024-04-17T09:42:32.187942848Z socket.request.max.bytes = 104857600 2024-04-17T09:42:32.187945698Z socket.send.buffer.bytes = 102400 2024-04-17T09:42:32.187948538Z ssl.cipher.suites = [] 2024-04-17T09:42:32.187951393Z ssl.client.auth = none 2024-04-17T09:42:32.187954215Z ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 2024-04-17T09:42:32.187960603Z ssl.endpoint.identification.algorithm = https 2024-04-17T09:42:32.187963502Z ssl.engine.factory.class = null 2024-04-17T09:42:32.187966363Z ssl.key.password = null 2024-04-17T09:42:32.187969200Z ssl.keymanager.algorithm = SunX509 2024-04-17T09:42:32.187972085Z ssl.keystore.certificate.chain = null 
2024-04-17T09:42:32.187974943Z ssl.keystore.key = null 2024-04-17T09:42:32.187977834Z ssl.keystore.location = null 2024-04-17T09:42:32.187980685Z ssl.keystore.password = null 2024-04-17T09:42:32.187983537Z ssl.keystore.type = JKS 2024-04-17T09:42:32.187986390Z ssl.principal.mapping.rules = DEFAULT 2024-04-17T09:42:32.187989257Z ssl.protocol = TLSv1.3 2024-04-17T09:42:32.187992100Z ssl.provider = null 2024-04-17T09:42:32.187995037Z ssl.secure.random.implementation = null 2024-04-17T09:42:32.188007402Z ssl.trustmanager.algorithm = PKIX 2024-04-17T09:42:32.188010493Z ssl.truststore.certificates = null 2024-04-17T09:42:32.188013394Z ssl.truststore.location = null 2024-04-17T09:42:32.188016214Z ssl.truststore.password = null 2024-04-17T09:42:32.188019065Z ssl.truststore.type = JKS 2024-04-17T09:42:32.188022708Z transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 2024-04-17T09:42:32.188025587Z transaction.max.timeout.ms = 900000 2024-04-17T09:42:32.188028559Z transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 2024-04-17T09:42:32.188031464Z transaction.state.log.load.buffer.size = 5242880 2024-04-17T09:42:32.188034374Z transaction.state.log.min.isr = 2 2024-04-17T09:42:32.188037191Z transaction.state.log.num.partitions = 50 2024-04-17T09:42:32.188040040Z transaction.state.log.replication.factor = 3 2024-04-17T09:42:32.188042871Z transaction.state.log.segment.bytes = 104857600 2024-04-17T09:42:32.188045765Z transactional.id.expiration.ms = 604800000 2024-04-17T09:42:32.188048588Z unclean.leader.election.enable = false 2024-04-17T09:42:32.188051496Z unstable.api.versions.enable = false 2024-04-17T09:42:32.188054325Z zookeeper.clientCnxnSocket = null 2024-04-17T09:42:32.188057222Z zookeeper.connect = null 2024-04-17T09:42:32.188071034Z zookeeper.connection.timeout.ms = null 2024-04-17T09:42:32.188073910Z zookeeper.max.in.flight.requests = 10 2024-04-17T09:42:32.188076587Z zookeeper.metadata.migration.enable = false 2024-04-17T09:42:32.188079311Z zookeeper.session.timeout.ms = 18000 2024-04-17T09:42:32.188082035Z zookeeper.set.acl = false 2024-04-17T09:42:32.188094024Z zookeeper.ssl.cipher.suites = null 2024-04-17T09:42:32.188096734Z zookeeper.ssl.client.enable = false 2024-04-17T09:42:32.188099488Z zookeeper.ssl.crl.enable = false 2024-04-17T09:42:32.188102132Z zookeeper.ssl.enabled.protocols = null 2024-04-17T09:42:32.188104822Z zookeeper.ssl.endpoint.identification.algorithm = HTTPS 2024-04-17T09:42:32.188107468Z zookeeper.ssl.keystore.location = null 2024-04-17T09:42:32.188110203Z zookeeper.ssl.keystore.password = null 2024-04-17T09:42:32.188112842Z zookeeper.ssl.keystore.type = null 2024-04-17T09:42:32.188115515Z zookeeper.ssl.ocsp.enable = false 2024-04-17T09:42:32.188118152Z zookeeper.ssl.protocol = TLSv1.2 2024-04-17T09:42:32.188120812Z zookeeper.ssl.truststore.location = null 2024-04-17T09:42:32.188123507Z zookeeper.ssl.truststore.password = null 2024-04-17T09:42:32.188126151Z zookeeper.ssl.truststore.type = null 2024-04-17T09:42:32.188128834Z (kafka.server.KafkaConfig) 2024-04-17T09:42:32.208196572Z [2024-04-17 09:42:32,207] INFO [BrokerLifecycleManager id=2] The broker is in RECOVERY. 
(kafka.server.BrokerLifecycleManager)
2024-04-17T09:42:32.219191795Z [2024-04-17 09:42:32,218] INFO [BrokerServer id=2] Waiting for the broker to be unfenced (kafka.server.BrokerServer)
2024-04-17T09:42:32.221128646Z [2024-04-17 09:42:32,219] INFO [QuorumController id=2] The request from broker 2 to unfence has been granted because it has caught up with the offset of its register broker record 21. (org.apache.kafka.controller.BrokerHeartbeatManager)
2024-04-17T09:42:32.278881620Z [2024-04-17 09:42:32,278] INFO [BrokerLifecycleManager id=2] The broker has been unfenced. Transitioning from RECOVERY to RUNNING. (kafka.server.BrokerLifecycleManager)
2024-04-17T09:42:32.286620267Z [2024-04-17 09:42:32,284] INFO [BrokerServer id=2] Finished waiting for the broker to be unfenced (kafka.server.BrokerServer)
2024-04-17T09:42:32.286627122Z [2024-04-17 09:42:32,284] INFO [SocketServer listenerType=BROKER, nodeId=2] Enabling request processing. (kafka.network.SocketServer)
2024-04-17T09:42:32.286630131Z [2024-04-17 09:42:32,284] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
2024-04-17T09:42:32.293054584Z [2024-04-17 09:42:32,292] INFO Awaiting socket connections on 0.0.0.0:9094. (kafka.network.DataPlaneAcceptor)
2024-04-17T09:42:32.318377295Z [2024-04-17 09:42:32,318] INFO [BrokerServer id=2] Waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
2024-04-17T09:42:32.318450311Z [2024-04-17 09:42:32,318] INFO [BrokerServer id=2] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
2024-04-17T09:42:32.318453878Z [2024-04-17 09:42:32,318] INFO [BrokerServer id=2] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
2024-04-17T09:42:32.318461813Z [2024-04-17 09:42:32,318] INFO [BrokerServer id=2] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
2024-04-17T09:42:32.318465426Z [2024-04-17 09:42:32,318] INFO [BrokerServer id=2] Transition from STARTING to STARTED (kafka.server.BrokerServer)
2024-04-17T09:42:32.320229801Z [2024-04-17 09:42:32,318] INFO Kafka version: 3.5.1 (org.apache.kafka.common.utils.AppInfoParser)
2024-04-17T09:42:32.320234201Z [2024-04-17 09:42:32,318] INFO Kafka commitId: 2c6fb6c54472e90a (org.apache.kafka.common.utils.AppInfoParser)
2024-04-17T09:42:32.320246124Z [2024-04-17 09:42:32,318] INFO Kafka startTimeMs: 1713346952318 (org.apache.kafka.common.utils.AppInfoParser)
2024-04-17T09:42:32.321127965Z [2024-04-17 09:42:32,320] INFO [KafkaRaftServer nodeId=2] Kafka Server started (kafka.server.KafkaRaftServer)
2024-04-17T09:42:32.520413756Z [2024-04-17 09:42:32,519] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:32.522209893Z [2024-04-17 09:42:32,520] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:33.043800161Z [2024-04-17 09:42:33,043] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T09:42:33.044452826Z [2024-04-17 09:42:33,043] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
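The KafkaConfig block above is the broker's effective configuration: KRaft combined mode (process.roles = [controller, broker]), the CLIENT listener on 9092 as PLAINTEXT, INTERNAL on 9094 and CONTROLLER on 9093 as SASL_PLAINTEXT. In the Bitnami image these values are normally supplied as KAFKA_CFG_* environment variables, with dots replaced by underscores (for example controller.quorum.voters becomes KAFKA_CFG_CONTROLLER_QUORUM_VOTERS). To read the effective values back from the running broker, something like the following should work (illustrative only, same script-path and namespace assumptions as above):

    kubectl exec -n osm kafka-controller-2 -- \
      /opt/bitnami/kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 \
      --entity-type brokers --entity-name 2 --describe --all
    # --all lists every effective broker config, not just the dynamically
    # overridden ones, so the output should mirror the KafkaConfig dump above.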
(org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:33.588995141Z [2024-04-17 09:42:33,586] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:33.589008955Z [2024-04-17 09:42:33,586] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:34.129914423Z [2024-04-17 09:42:34,116] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:34.129930506Z [2024-04-17 09:42:34,116] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:34.628298700Z [2024-04-17 09:42:34,627] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:34.628317767Z [2024-04-17 09:42:34,627] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:35.152910242Z [2024-04-17 09:42:35,152] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:35.152930725Z [2024-04-17 09:42:35,152] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:35.589927443Z [2024-04-17 09:42:35,589] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:35.589945696Z [2024-04-17 09:42:35,589] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:36.120055283Z [2024-04-17 09:42:36,119] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:36.120073436Z [2024-04-17 09:42:36,119] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:36.627640136Z [2024-04-17 09:42:36,624] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:36.627672077Z [2024-04-17 09:42:36,627] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:37.127916986Z [2024-04-17 09:42:37,126] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:37.127933579Z [2024-04-17 09:42:37,126] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:37.690720385Z [2024-04-17 09:42:37,689] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:37.690739887Z [2024-04-17 09:42:37,689] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:38.159376990Z [2024-04-17 09:42:38,158] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:38.159401740Z [2024-04-17 09:42:38,159] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:38.717793193Z [2024-04-17 09:42:38,716] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:38.718001212Z [2024-04-17 09:42:38,717] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:39.368541821Z [2024-04-17 09:42:39,365] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:39.368565927Z [2024-04-17 09:42:39,365] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:39.849877009Z [2024-04-17 09:42:39,849] INFO [RaftManager id=2] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:39.855229706Z [2024-04-17 09:42:39,851] WARN [RaftManager id=2] Connection to node 1 (kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local/10.244.0.27:9093) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:42:41.749434811Z [2024-04-17 09:42:41,749] INFO [QuorumController id=2] Registered new broker: RegisterBrokerRecord(brokerId=1, isMigratingZkBroker=false, incarnationId=OYtF8BArQPS6ALCGjb1MBA, brokerEpoch=47, endPoints=[BrokerEndpoint(name='CLIENT', host='kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local', port=9092, securityProtocol=0), BrokerEndpoint(name='INTERNAL', host='kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local', port=9094, securityProtocol=2)], features=[BrokerFeature(name='metadata.version', minSupportedVersion=1, maxSupportedVersion=11)], rack=null, fenced=true, inControlledShutdown=false) (org.apache.kafka.controller.ClusterControlManager) 2024-04-17T09:42:42.669163206Z [2024-04-17 09:42:42,668] INFO [QuorumController id=2] The request from broker 1 to unfence has been granted because it has caught up with the offset of its register broker record 47. 
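The repeated "Node 1 disconnected" / "Connection to node 1 ... could not be established" pairs above are the Raft client on node 2 retrying its quorum peer while the kafka-controller-1 pod is still starting; they stop once broker 1 registers and is unfenced, as the last two records show. Below is a minimal client-side readiness poll along the same lines, assuming kafka-python and assuming the client listener is reachable at kafka.osm.svc.cluster.local:9092 without SASL (the service name and security settings are assumptions, not taken from the log):

# Readiness poll sketch: retry until the bootstrap server answers metadata requests.
# Assumptions: kafka-python installed, plaintext client listener on port 9092.
import time
from kafka.admin import KafkaAdminClient
from kafka.errors import NoBrokersAvailable

BOOTSTRAP = "kafka.osm.svc.cluster.local:9092"  # hypothetical client service name

def wait_for_kafka(bootstrap=BOOTSTRAP, timeout_s=120):
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            admin = KafkaAdminClient(bootstrap_servers=bootstrap, client_id="readiness-probe")
            topics = admin.list_topics()   # only succeeds once a broker serves metadata
            admin.close()
            return topics
        except NoBrokersAvailable:
            time.sleep(2)                  # broker not reachable yet; retry like the Raft client above
    raise RuntimeError("Kafka did not become reachable within %ss" % timeout_s)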
(org.apache.kafka.controller.BrokerHeartbeatManager) 2024-04-17T09:43:42.490225072Z [2024-04-17 09:43:42,485] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='alarm_request', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:42.490267655Z [2024-04-17 09:43:42,486] INFO [QuorumController id=2] Created topic alarm_request with topic ID yw9wVy9hSSuIX2uGOkyCpg. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:42.490274845Z [2024-04-17 09:43:42,487] INFO [QuorumController id=2] Created partition alarm_request-0 with topic ID yw9wVy9hSSuIX2uGOkyCpg and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:42.667861236Z [2024-04-17 09:43:42,667] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='alarm_request', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'alarm_request' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:42.919065719Z [2024-04-17 09:43:42,918] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='users', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:42.919330756Z [2024-04-17 09:43:42,919] INFO [QuorumController id=2] Created topic users with topic ID AT0ub9SdSnOx-SWf9f7jHg. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:42.921008020Z [2024-04-17 09:43:42,920] INFO [QuorumController id=2] Created partition users-0 with topic ID AT0ub9SdSnOx-SWf9f7jHg and PartitionRegistration(replicas=[2], isr=[2], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:42.971455837Z [2024-04-17 09:43:42,970] INFO [Broker id=2] Transitioning 1 partition(s) to local leaders. (state.change.logger) 2024-04-17T09:43:42.980778563Z [2024-04-17 09:43:42,973] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(users-0) (kafka.server.ReplicaFetcherManager) 2024-04-17T09:43:42.980792609Z [2024-04-17 09:43:42,975] INFO [Broker id=2] Creating new partition users-0 with topic id AT0ub9SdSnOx-SWf9f7jHg. 
(state.change.logger) 2024-04-17T09:43:42.993777563Z [2024-04-17 09:43:42,993] INFO [LogLoader partition=users-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:43.003112323Z [2024-04-17 09:43:43,002] INFO Created log for partition users-0 in /bitnami/kafka/data/users-0 with properties {} (kafka.log.LogManager) 2024-04-17T09:43:43.005107131Z [2024-04-17 09:43:43,004] INFO [Partition users-0 broker=2] No checkpointed highwatermark is found for partition users-0 (kafka.cluster.Partition) 2024-04-17T09:43:43.006518323Z [2024-04-17 09:43:43,005] INFO [Partition users-0 broker=2] Log loaded for partition users-0 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:43.008641157Z [2024-04-17 09:43:43,008] INFO [Broker id=2] Leader users-0 with topic id Some(AT0ub9SdSnOx-SWf9f7jHg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:43.095722450Z [2024-04-17 09:43:43,095] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='users', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'users' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:43.294903826Z [2024-04-17 09:43:43,294] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='users', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'users' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:43.504517406Z [2024-04-17 09:43:43,502] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='project', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:43.504531716Z [2024-04-17 09:43:43,502] INFO [QuorumController id=2] Created topic project with topic ID fRvyD73wRMOUj3W-qVS4HQ. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:43.504536578Z [2024-04-17 09:43:43,502] INFO [QuorumController id=2] Created partition project-0 with topic ID fRvyD73wRMOUj3W-qVS4HQ and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:43.633435318Z [2024-04-17 09:43:43,632] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='project', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'project' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:43.772842751Z [2024-04-17 09:43:43,771] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='project', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'project' already exists.) 
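The alternating SUCCESS and TOPIC_ALREADY_EXISTS results above come from clients re-issuing CreateTopics for alarm_request, users and project on startup; treating TOPIC_ALREADY_EXISTS as success makes the call idempotent. A sketch of the equivalent client-side creation, with the topic names, partition count and replication factor taken from the log and the bootstrap address assumed as before:

# Idempotent creation of the application topics seen in the CreateTopics records above.
from kafka.admin import KafkaAdminClient, NewTopic
from kafka.errors import TopicAlreadyExistsError

admin = KafkaAdminClient(bootstrap_servers="kafka.osm.svc.cluster.local:9092")
for name in ("alarm_request", "users", "project"):
    try:
        admin.create_topics([NewTopic(name=name, num_partitions=1, replication_factor=1)])
        print("created", name)
    except TopicAlreadyExistsError:
        print(name, "already exists")   # the controller reports this as TOPIC_ALREADY_EXISTS
admin.close()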
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.384244620Z [2024-04-17 09:43:44,371] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.384277793Z [2024-04-17 09:43:44,377] INFO [QuorumController id=2] Created topic __consumer_offsets with topic ID AYBz8iuVS5KvjNFEVdGv9g. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.384281316Z [2024-04-17 09:43:44,378] INFO [QuorumController id=2] ConfigResource(type=TOPIC, name='__consumer_offsets'): set configuration compression.type to producer (org.apache.kafka.controller.ConfigurationControlManager) 2024-04-17T09:43:44.384283908Z [2024-04-17 09:43:44,378] INFO [QuorumController id=2] ConfigResource(type=TOPIC, name='__consumer_offsets'): set configuration cleanup.policy to compact (org.apache.kafka.controller.ConfigurationControlManager) 2024-04-17T09:43:44.384286632Z [2024-04-17 09:43:44,378] INFO [QuorumController id=2] ConfigResource(type=TOPIC, name='__consumer_offsets'): set configuration segment.bytes to 104857600 (org.apache.kafka.controller.ConfigurationControlManager) 2024-04-17T09:43:44.388984023Z [2024-04-17 09:43:44,387] INFO [QuorumController id=2] Created partition __consumer_offsets-0 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.388994686Z [2024-04-17 09:43:44,388] INFO [QuorumController id=2] Created partition __consumer_offsets-1 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.388997861Z [2024-04-17 09:43:44,388] INFO [QuorumController id=2] Created partition __consumer_offsets-2 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.389000848Z [2024-04-17 09:43:44,388] INFO [QuorumController id=2] Created partition __consumer_offsets-3 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.389003871Z [2024-04-17 09:43:44,388] INFO [QuorumController id=2] Created partition __consumer_offsets-4 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487133860Z [2024-04-17 09:43:44,486] INFO [QuorumController id=2] Created partition __consumer_offsets-5 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487168916Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-6 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487234328Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-7 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487279764Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-8 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487325979Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-9 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487386003Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-10 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487451104Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-11 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487515683Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-12 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487564709Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-13 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487641122Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-14 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487693037Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-15 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487742142Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-16 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487799610Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-17 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487857689Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-18 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487966407Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-19 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.487988522Z [2024-04-17 09:43:44,487] INFO [QuorumController id=2] Created partition __consumer_offsets-20 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488041273Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-21 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488141506Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-22 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488283162Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-23 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488289575Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-24 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488296296Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-25 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488515633Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-26 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488527893Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-27 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488531013Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-28 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488536873Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-29 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488920554Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-30 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488931034Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-31 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488934327Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-32 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488937298Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-33 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488940280Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-34 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.488943326Z [2024-04-17 09:43:44,488] INFO [QuorumController id=2] Created partition __consumer_offsets-35 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607334753Z [2024-04-17 09:43:44,601] INFO [QuorumController id=2] Created partition __consumer_offsets-36 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607357500Z [2024-04-17 09:43:44,601] INFO [QuorumController id=2] Created partition __consumer_offsets-37 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607374193Z [2024-04-17 09:43:44,601] INFO [QuorumController id=2] Created partition __consumer_offsets-38 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607377345Z [2024-04-17 09:43:44,601] INFO [QuorumController id=2] Created partition __consumer_offsets-39 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 0, 1], isr=[2, 0, 1], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607380447Z [2024-04-17 09:43:44,601] INFO [QuorumController id=2] Created partition __consumer_offsets-40 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 1, 2], isr=[0, 1, 2], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607384229Z [2024-04-17 09:43:44,601] INFO [QuorumController id=2] Created partition __consumer_offsets-41 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 2, 0], isr=[1, 2, 0], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607387488Z [2024-04-17 09:43:44,602] INFO [QuorumController id=2] Created partition __consumer_offsets-42 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607390501Z [2024-04-17 09:43:44,602] INFO [QuorumController id=2] Created partition __consumer_offsets-43 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607393370Z [2024-04-17 09:43:44,604] INFO [QuorumController id=2] Created partition __consumer_offsets-44 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607396430Z [2024-04-17 09:43:44,604] INFO [QuorumController id=2] Created partition __consumer_offsets-45 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607399348Z [2024-04-17 09:43:44,604] INFO [QuorumController id=2] Created partition __consumer_offsets-46 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607404520Z [2024-04-17 09:43:44,604] INFO [QuorumController id=2] Created partition __consumer_offsets-47 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[2, 1, 0], isr=[2, 1, 0], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607414281Z [2024-04-17 09:43:44,604] INFO [QuorumController id=2] Created partition __consumer_offsets-48 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[1, 0, 2], isr=[1, 0, 2], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.607417401Z [2024-04-17 09:43:44,604] INFO [QuorumController id=2] Created partition __consumer_offsets-49 with topic ID AYBz8iuVS5KvjNFEVdGv9g and PartitionRegistration(replicas=[0, 2, 1], isr=[0, 2, 1], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). 
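__consumer_offsets is created above with 50 partitions, replication factor 3 and the usual offsets-topic overrides (compression.type=producer, cleanup.policy=compact, segment.bytes=104857600), with replicas spread round-robin over brokers 0, 1 and 2. A sketch of how the layout and configs could be checked from a client, assuming kafka-python (field names follow its describe_topics() output and may differ in other clients):

# Verify the offsets topic created above: 50 partitions, 3 replicas each, compacted.
from kafka.admin import KafkaAdminClient, ConfigResource, ConfigResourceType

admin = KafkaAdminClient(bootstrap_servers="kafka.osm.svc.cluster.local:9092")

(meta,) = admin.describe_topics(["__consumer_offsets"])
partitions = meta["partitions"]
assert len(partitions) == 50
assert all(len(p["replicas"]) == 3 for p in partitions)

# Raw DescribeConfigs response; should include cleanup.policy=compact and segment.bytes=104857600.
print(admin.describe_configs([ConfigResource(ConfigResourceType.TOPIC, "__consumer_offsets")]))
admin.close()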
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:44.781685147Z [2024-04-17 09:43:44,774] INFO [Broker id=2] Transitioning 16 partition(s) to local leaders. (state.change.logger) 2024-04-17T09:43:44.781709402Z [2024-04-17 09:43:44,774] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(__consumer_offsets-47, __consumer_offsets-14, __consumer_offsets-43, __consumer_offsets-9, __consumer_offsets-23, __consumer_offsets-19, __consumer_offsets-17, __consumer_offsets-32, __consumer_offsets-27, __consumer_offsets-25, __consumer_offsets-7, __consumer_offsets-39, __consumer_offsets-4, __consumer_offsets-36, __consumer_offsets-1, __consumer_offsets-34) (kafka.server.ReplicaFetcherManager) 2024-04-17T09:43:44.781716314Z [2024-04-17 09:43:44,779] INFO [Broker id=2] Creating new partition __consumer_offsets-47 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:44.873492866Z [2024-04-17 09:43:44,862] INFO [LogLoader partition=__consumer_offsets-47, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:44.873503722Z [2024-04-17 09:43:44,872] INFO Created log for partition __consumer_offsets-47 in /bitnami/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:44.873854470Z [2024-04-17 09:43:44,872] INFO [Partition __consumer_offsets-47 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 2024-04-17T09:43:44.873863888Z [2024-04-17 09:43:44,872] INFO [Partition __consumer_offsets-47 broker=2] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:44.873868991Z [2024-04-17 09:43:44,872] INFO [Broker id=2] Leader __consumer_offsets-47 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:44.892697657Z [2024-04-17 09:43:44,890] INFO [Broker id=2] Creating new partition __consumer_offsets-14 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:44.895116737Z [2024-04-17 09:43:44,894] INFO [LogLoader partition=__consumer_offsets-14, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:44.895972537Z [2024-04-17 09:43:44,895] INFO Created log for partition __consumer_offsets-14 in /bitnami/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:44.895988269Z [2024-04-17 09:43:44,895] INFO [Partition __consumer_offsets-14 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 2024-04-17T09:43:44.896001400Z [2024-04-17 09:43:44,895] INFO [Partition __consumer_offsets-14 broker=2] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:44.896193082Z [2024-04-17 09:43:44,895] INFO [Broker id=2] Leader __consumer_offsets-14 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . 
Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:44.905173417Z [2024-04-17 09:43:44,900] INFO Sent auto-creation request for Set(__consumer_offsets) to the active controller. (kafka.server.DefaultAutoTopicCreationManager) 2024-04-17T09:43:44.905191529Z [2024-04-17 09:43:44,903] INFO [Broker id=2] Creating new partition __consumer_offsets-43 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:44.918100422Z [2024-04-17 09:43:44,917] INFO [LogLoader partition=__consumer_offsets-43, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:44.918705948Z [2024-04-17 09:43:44,918] INFO Created log for partition __consumer_offsets-43 in /bitnami/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:44.918713661Z [2024-04-17 09:43:44,918] INFO [Partition __consumer_offsets-43 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 2024-04-17T09:43:44.918800348Z [2024-04-17 09:43:44,918] INFO [Partition __consumer_offsets-43 broker=2] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:44.918841937Z [2024-04-17 09:43:44,918] INFO [Broker id=2] Leader __consumer_offsets-43 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:44.953387422Z [2024-04-17 09:43:44,951] INFO [Broker id=2] Creating new partition __consumer_offsets-9 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:44.955823814Z [2024-04-17 09:43:44,955] INFO [LogLoader partition=__consumer_offsets-9, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:44.957491813Z [2024-04-17 09:43:44,956] INFO Created log for partition __consumer_offsets-9 in /bitnami/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:44.957532440Z [2024-04-17 09:43:44,957] INFO [Partition __consumer_offsets-9 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 2024-04-17T09:43:44.957546051Z [2024-04-17 09:43:44,957] INFO [Partition __consumer_offsets-9 broker=2] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:44.961523383Z [2024-04-17 09:43:44,957] INFO [Broker id=2] Leader __consumer_offsets-9 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:44.961815372Z [2024-04-17 09:43:44,961] INFO [Broker id=2] Creating new partition __consumer_offsets-23 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:44.970114307Z [2024-04-17 09:43:44,969] INFO [LogLoader partition=__consumer_offsets-23, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:44.972455191Z [2024-04-17 09:43:44,972] INFO Created log for partition __consumer_offsets-23 in /bitnami/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:44.972723092Z [2024-04-17 09:43:44,972] INFO [Partition __consumer_offsets-23 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 2024-04-17T09:43:44.972734308Z [2024-04-17 09:43:44,972] INFO [Partition __consumer_offsets-23 broker=2] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:44.973446054Z [2024-04-17 09:43:44,972] INFO [Broker id=2] Leader __consumer_offsets-23 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:44.978245320Z [2024-04-17 09:43:44,978] INFO [Broker id=2] Creating new partition __consumer_offsets-19 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:44.988965783Z [2024-04-17 09:43:44,983] INFO [LogLoader partition=__consumer_offsets-19, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:44.989957477Z [2024-04-17 09:43:44,989] INFO Created log for partition __consumer_offsets-19 in /bitnami/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:44.990243491Z [2024-04-17 09:43:44,990] INFO [Partition __consumer_offsets-19 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 2024-04-17T09:43:44.990505569Z [2024-04-17 09:43:44,990] INFO [Partition __consumer_offsets-19 broker=2] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:44.990923584Z [2024-04-17 09:43:44,990] INFO [Broker id=2] Leader __consumer_offsets-19 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:44.993899906Z [2024-04-17 09:43:44,993] INFO [Broker id=2] Creating new partition __consumer_offsets-17 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:44.999171437Z [2024-04-17 09:43:44,998] INFO [LogLoader partition=__consumer_offsets-17, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:44.999187228Z [2024-04-17 09:43:44,999] INFO Created log for partition __consumer_offsets-17 in /bitnami/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.000590252Z [2024-04-17 09:43:44,999] INFO [Partition __consumer_offsets-17 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 2024-04-17T09:43:45.000601625Z [2024-04-17 09:43:44,999] INFO [Partition __consumer_offsets-17 broker=2] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.000605574Z [2024-04-17 09:43:44,999] INFO [Broker id=2] Leader __consumer_offsets-17 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.006453410Z [2024-04-17 09:43:45,005] INFO [Broker id=2] Creating new partition __consumer_offsets-32 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.010034323Z [2024-04-17 09:43:45,009] INFO [LogLoader partition=__consumer_offsets-32, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.014208283Z [2024-04-17 09:43:45,011] INFO Created log for partition __consumer_offsets-32 in /bitnami/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.014223385Z [2024-04-17 09:43:45,011] INFO [Partition __consumer_offsets-32 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 2024-04-17T09:43:45.014226754Z [2024-04-17 09:43:45,011] INFO [Partition __consumer_offsets-32 broker=2] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.014230470Z [2024-04-17 09:43:45,011] INFO [Broker id=2] Leader __consumer_offsets-32 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.023086604Z [2024-04-17 09:43:45,020] INFO [Broker id=2] Creating new partition __consumer_offsets-27 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:45.024338669Z [2024-04-17 09:43:45,024] INFO [LogLoader partition=__consumer_offsets-27, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.031083266Z [2024-04-17 09:43:45,029] INFO Created log for partition __consumer_offsets-27 in /bitnami/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.032387319Z [2024-04-17 09:43:45,031] INFO [Partition __consumer_offsets-27 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 2024-04-17T09:43:45.032393655Z [2024-04-17 09:43:45,031] INFO [Partition __consumer_offsets-27 broker=2] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.032397551Z [2024-04-17 09:43:45,031] INFO [Broker id=2] Leader __consumer_offsets-27 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.037027075Z [2024-04-17 09:43:45,036] INFO [Broker id=2] Creating new partition __consumer_offsets-25 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.058026767Z [2024-04-17 09:43:45,057] INFO [LogLoader partition=__consumer_offsets-25, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.082570121Z [2024-04-17 09:43:45,082] INFO Created log for partition __consumer_offsets-25 in /bitnami/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.084517262Z [2024-04-17 09:43:45,083] INFO [Partition __consumer_offsets-25 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 2024-04-17T09:43:45.085700369Z [2024-04-17 09:43:45,085] INFO [Partition __consumer_offsets-25 broker=2] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.087512696Z [2024-04-17 09:43:45,086] INFO [Broker id=2] Leader __consumer_offsets-25 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.093600571Z [2024-04-17 09:43:45,093] INFO [Broker id=2] Creating new partition __consumer_offsets-7 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:45.097594834Z [2024-04-17 09:43:45,097] INFO [LogLoader partition=__consumer_offsets-7, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.098270108Z [2024-04-17 09:43:45,098] INFO Created log for partition __consumer_offsets-7 in /bitnami/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.099831896Z [2024-04-17 09:43:45,099] INFO [Partition __consumer_offsets-7 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 2024-04-17T09:43:45.100068102Z [2024-04-17 09:43:45,099] INFO [Partition __consumer_offsets-7 broker=2] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.100840573Z [2024-04-17 09:43:45,100] INFO [Broker id=2] Leader __consumer_offsets-7 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.117005455Z [2024-04-17 09:43:45,116] INFO [Broker id=2] Creating new partition __consumer_offsets-39 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.132406067Z [2024-04-17 09:43:45,128] INFO [LogLoader partition=__consumer_offsets-39, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.133747799Z [2024-04-17 09:43:45,133] INFO Created log for partition __consumer_offsets-39 in /bitnami/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.134788018Z [2024-04-17 09:43:45,133] INFO [Partition __consumer_offsets-39 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 2024-04-17T09:43:45.139069870Z [2024-04-17 09:43:45,137] INFO [Partition __consumer_offsets-39 broker=2] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.139085269Z [2024-04-17 09:43:45,137] INFO [Broker id=2] Leader __consumer_offsets-39 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.148409562Z [2024-04-17 09:43:45,146] INFO [Broker id=2] Creating new partition __consumer_offsets-4 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:45.169051284Z [2024-04-17 09:43:45,165] INFO [LogLoader partition=__consumer_offsets-4, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.169069616Z [2024-04-17 09:43:45,165] INFO Created log for partition __consumer_offsets-4 in /bitnami/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.169083745Z [2024-04-17 09:43:45,165] INFO [Partition __consumer_offsets-4 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 2024-04-17T09:43:45.169086662Z [2024-04-17 09:43:45,165] INFO [Partition __consumer_offsets-4 broker=2] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.169090795Z [2024-04-17 09:43:45,165] INFO [Broker id=2] Leader __consumer_offsets-4 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.182852246Z [2024-04-17 09:43:45,181] INFO [Broker id=2] Creating new partition __consumer_offsets-36 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.204407842Z [2024-04-17 09:43:45,204] INFO [BrokerToControllerChannelManager id=2 name=forwarding] Client requested disconnect from node 1 (org.apache.kafka.clients.NetworkClient) 2024-04-17T09:43:45.209015589Z [2024-04-17 09:43:45,207] INFO [broker-2-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka-controller-2.kafka-controller-headless.osm.svc.cluster.local:9093 (id: 2 rack: null) (kafka.server.BrokerToControllerRequestThread) 2024-04-17T09:43:45.249316095Z [2024-04-17 09:43:45,249] INFO [LogLoader partition=__consumer_offsets-36, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.250020490Z [2024-04-17 09:43:45,249] INFO Created log for partition __consumer_offsets-36 in /bitnami/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.250065820Z [2024-04-17 09:43:45,250] INFO [Partition __consumer_offsets-36 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 2024-04-17T09:43:45.250110973Z [2024-04-17 09:43:45,250] INFO [Partition __consumer_offsets-36 broker=2] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.250274960Z [2024-04-17 09:43:45,250] INFO [Broker id=2] Leader __consumer_offsets-36 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . Previous leader epoch was -1. 
(state.change.logger) 2024-04-17T09:43:45.277639449Z [2024-04-17 09:43:45,276] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='__consumer_offsets', numPartitions=50, replicationFactor=3, assignments=[], configs=[CreateableTopicConfig(name='compression.type', value='producer'), CreateableTopicConfig(name='cleanup.policy', value='compact'), CreateableTopicConfig(name='segment.bytes', value='104857600')]): TOPIC_ALREADY_EXISTS (Topic '__consumer_offsets' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:45.304353884Z [2024-04-17 09:43:45,304] INFO [Broker id=2] Creating new partition __consumer_offsets-1 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.457866596Z [2024-04-17 09:43:45,456] INFO [LogLoader partition=__consumer_offsets-1, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.457882333Z [2024-04-17 09:43:45,457] INFO Created log for partition __consumer_offsets-1 in /bitnami/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.465880383Z [2024-04-17 09:43:45,465] INFO [Partition __consumer_offsets-1 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 2024-04-17T09:43:45.465984810Z [2024-04-17 09:43:45,465] INFO [Partition __consumer_offsets-1 broker=2] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.469995279Z [2024-04-17 09:43:45,466] INFO [Broker id=2] Leader __consumer_offsets-1 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,0,1], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.502054023Z [2024-04-17 09:43:45,501] INFO [Broker id=2] Creating new partition __consumer_offsets-34 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.532451205Z [2024-04-17 09:43:45,532] INFO [LogLoader partition=__consumer_offsets-34, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.569323742Z [2024-04-17 09:43:45,569] INFO Created log for partition __consumer_offsets-34 in /bitnami/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.581260231Z [2024-04-17 09:43:45,569] INFO [Partition __consumer_offsets-34 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 2024-04-17T09:43:45.581703676Z [2024-04-17 09:43:45,581] INFO [Partition __consumer_offsets-34 broker=2] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.582234876Z [2024-04-17 09:43:45,582] INFO [Broker id=2] Leader __consumer_offsets-34 with topic id Some(AYBz8iuVS5KvjNFEVdGv9g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2,1,0], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.611067256Z [2024-04-17 09:43:45,610] INFO [Broker id=2] Transitioning 34 partition(s) to local followers. 
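At this point broker 2 has taken local leadership for 16 of the 50 __consumer_offsets partitions and becomes a follower for the remaining 34, fetching them from brokers 0 and 1. A quick health check once the transitions settle, again assuming kafka-python and the same assumed bootstrap address, is to confirm that no partition is under-replicated:

# Under-replication check: every partition's ISR should match its replica set
# once the leader/follower transitions above complete.
from kafka.admin import KafkaAdminClient

admin = KafkaAdminClient(bootstrap_servers="kafka.osm.svc.cluster.local:9092")
(meta,) = admin.describe_topics(["__consumer_offsets"])
under_replicated = [p["partition"] for p in meta["partitions"] if len(p["isr"]) < len(p["replicas"])]
print("under-replicated partitions:", under_replicated or "none")
admin.close()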
(state.change.logger) 2024-04-17T09:43:45.620469019Z [2024-04-17 09:43:45,620] INFO [Broker id=2] Creating new partition __consumer_offsets-15 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.667115411Z [2024-04-17 09:43:45,666] INFO [LogLoader partition=__consumer_offsets-15, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:45.693883797Z [2024-04-17 09:43:45,693] INFO Created log for partition __consumer_offsets-15 in /bitnami/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:45.693900500Z [2024-04-17 09:43:45,693] INFO [Partition __consumer_offsets-15 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 2024-04-17T09:43:45.706078715Z [2024-04-17 09:43:45,693] INFO [Partition __consumer_offsets-15 broker=2] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:45.792853428Z [2024-04-17 09:43:45,792] INFO [Broker id=2] Follower __consumer_offsets-15 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:45.805283871Z [2024-04-17 09:43:45,801] INFO [Broker id=2] Creating new partition __consumer_offsets-48 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:45.874953039Z [2024-04-17 09:43:45,874] INFO [LogLoader partition=__consumer_offsets-48, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.120682229Z [2024-04-17 09:43:46,112] INFO Created log for partition __consumer_offsets-48 in /bitnami/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.120696487Z [2024-04-17 09:43:46,113] INFO [Partition __consumer_offsets-48 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 2024-04-17T09:43:46.120699749Z [2024-04-17 09:43:46,113] INFO [Partition __consumer_offsets-48 broker=2] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.120703819Z [2024-04-17 09:43:46,113] INFO [Broker id=2] Follower __consumer_offsets-48 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.120707272Z [2024-04-17 09:43:46,119] INFO [Broker id=2] Creating new partition __consumer_offsets-13 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.131868952Z [2024-04-17 09:43:46,131] INFO [LogLoader partition=__consumer_offsets-13, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.308767113Z [2024-04-17 09:43:46,308] INFO Created log for partition __consumer_offsets-13 in /bitnami/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.308780860Z [2024-04-17 09:43:46,308] INFO [Partition __consumer_offsets-13 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 2024-04-17T09:43:46.308786913Z [2024-04-17 09:43:46,308] INFO [Partition __consumer_offsets-13 broker=2] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.308790612Z [2024-04-17 09:43:46,308] INFO [Broker id=2] Follower __consumer_offsets-13 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.308794095Z [2024-04-17 09:43:46,308] INFO [Broker id=2] Creating new partition __consumer_offsets-46 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.452422685Z [2024-04-17 09:43:46,451] INFO [LogLoader partition=__consumer_offsets-46, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.463822360Z [2024-04-17 09:43:46,457] INFO Created log for partition __consumer_offsets-46 in /bitnami/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.463841783Z [2024-04-17 09:43:46,457] INFO [Partition __consumer_offsets-46 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 2024-04-17T09:43:46.463852694Z [2024-04-17 09:43:46,458] INFO [Partition __consumer_offsets-46 broker=2] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.463856781Z [2024-04-17 09:43:46,458] INFO [Broker id=2] Follower __consumer_offsets-46 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.474437527Z [2024-04-17 09:43:46,473] INFO [Broker id=2] Creating new partition __consumer_offsets-11 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.524931613Z [2024-04-17 09:43:46,520] INFO [LogLoader partition=__consumer_offsets-11, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.524961236Z [2024-04-17 09:43:46,521] INFO Created log for partition __consumer_offsets-11 in /bitnami/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.524965315Z [2024-04-17 09:43:46,521] INFO [Partition __consumer_offsets-11 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 2024-04-17T09:43:46.524968642Z [2024-04-17 09:43:46,521] INFO [Partition __consumer_offsets-11 broker=2] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.524972027Z [2024-04-17 09:43:46,521] INFO [Broker id=2] Follower __consumer_offsets-11 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.524975451Z [2024-04-17 09:43:46,521] INFO [Broker id=2] Creating new partition __consumer_offsets-44 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.538711220Z [2024-04-17 09:43:46,538] INFO [LogLoader partition=__consumer_offsets-44, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.539278807Z [2024-04-17 09:43:46,539] INFO Created log for partition __consumer_offsets-44 in /bitnami/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.542065854Z [2024-04-17 09:43:46,539] INFO [Partition __consumer_offsets-44 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 2024-04-17T09:43:46.542078188Z [2024-04-17 09:43:46,539] INFO [Partition __consumer_offsets-44 broker=2] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.542082113Z [2024-04-17 09:43:46,539] INFO [Broker id=2] Follower __consumer_offsets-44 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.542085542Z [2024-04-17 09:43:46,539] INFO [Broker id=2] Creating new partition __consumer_offsets-42 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.568082030Z [2024-04-17 09:43:46,567] INFO [LogLoader partition=__consumer_offsets-42, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.568640811Z [2024-04-17 09:43:46,568] INFO Created log for partition __consumer_offsets-42 in /bitnami/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.568698588Z [2024-04-17 09:43:46,568] INFO [Partition __consumer_offsets-42 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 2024-04-17T09:43:46.568741873Z [2024-04-17 09:43:46,568] INFO [Partition __consumer_offsets-42 broker=2] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.568791878Z [2024-04-17 09:43:46,568] INFO [Broker id=2] Follower __consumer_offsets-42 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.568913937Z [2024-04-17 09:43:46,568] INFO [Broker id=2] Creating new partition __consumer_offsets-21 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.572685663Z [2024-04-17 09:43:46,571] INFO [LogLoader partition=__consumer_offsets-21, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.572695955Z [2024-04-17 09:43:46,571] INFO Created log for partition __consumer_offsets-21 in /bitnami/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.572699314Z [2024-04-17 09:43:46,572] INFO [Partition __consumer_offsets-21 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 2024-04-17T09:43:46.572702309Z [2024-04-17 09:43:46,572] INFO [Partition __consumer_offsets-21 broker=2] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.572705620Z [2024-04-17 09:43:46,572] INFO [Broker id=2] Follower __consumer_offsets-21 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.572710755Z [2024-04-17 09:43:46,572] INFO [Broker id=2] Creating new partition __consumer_offsets-30 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.715543673Z [2024-04-17 09:43:46,715] INFO [LogLoader partition=__consumer_offsets-30, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.719117809Z [2024-04-17 09:43:46,718] INFO Created log for partition __consumer_offsets-30 in /bitnami/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.719137374Z [2024-04-17 09:43:46,719] INFO [Partition __consumer_offsets-30 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 2024-04-17T09:43:46.719292252Z [2024-04-17 09:43:46,719] INFO [Partition __consumer_offsets-30 broker=2] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.719352777Z [2024-04-17 09:43:46,719] INFO [Broker id=2] Follower __consumer_offsets-30 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.720091861Z [2024-04-17 09:43:46,720] INFO [Broker id=2] Creating new partition __consumer_offsets-28 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.726733035Z [2024-04-17 09:43:46,726] INFO [LogLoader partition=__consumer_offsets-28, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.726792947Z [2024-04-17 09:43:46,726] INFO Created log for partition __consumer_offsets-28 in /bitnami/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.726878591Z [2024-04-17 09:43:46,726] INFO [Partition __consumer_offsets-28 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 2024-04-17T09:43:46.726884204Z [2024-04-17 09:43:46,726] INFO [Partition __consumer_offsets-28 broker=2] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.726955978Z [2024-04-17 09:43:46,726] INFO [Broker id=2] Follower __consumer_offsets-28 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.727011371Z [2024-04-17 09:43:46,726] INFO [Broker id=2] Creating new partition __consumer_offsets-26 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.740947310Z [2024-04-17 09:43:46,735] INFO [LogLoader partition=__consumer_offsets-26, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.740961757Z [2024-04-17 09:43:46,735] INFO Created log for partition __consumer_offsets-26 in /bitnami/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.740965724Z [2024-04-17 09:43:46,736] INFO [Partition __consumer_offsets-26 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 2024-04-17T09:43:46.740968771Z [2024-04-17 09:43:46,736] INFO [Partition __consumer_offsets-26 broker=2] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.740972091Z [2024-04-17 09:43:46,736] INFO [Broker id=2] Follower __consumer_offsets-26 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.740987567Z [2024-04-17 09:43:46,736] INFO [Broker id=2] Creating new partition __consumer_offsets-40 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.740990842Z [2024-04-17 09:43:46,738] INFO [LogLoader partition=__consumer_offsets-40, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.740993686Z [2024-04-17 09:43:46,738] INFO Created log for partition __consumer_offsets-40 in /bitnami/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.740997143Z [2024-04-17 09:43:46,738] INFO [Partition __consumer_offsets-40 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 2024-04-17T09:43:46.741000233Z [2024-04-17 09:43:46,738] INFO [Partition __consumer_offsets-40 broker=2] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.741003029Z [2024-04-17 09:43:46,739] INFO [Broker id=2] Follower __consumer_offsets-40 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.741005886Z [2024-04-17 09:43:46,739] INFO [Broker id=2] Creating new partition __consumer_offsets-5 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.746609278Z [2024-04-17 09:43:46,741] INFO [LogLoader partition=__consumer_offsets-5, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.746620896Z [2024-04-17 09:43:46,741] INFO Created log for partition __consumer_offsets-5 in /bitnami/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.746624718Z [2024-04-17 09:43:46,741] INFO [Partition __consumer_offsets-5 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 2024-04-17T09:43:46.746627657Z [2024-04-17 09:43:46,741] INFO [Partition __consumer_offsets-5 broker=2] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.746634088Z [2024-04-17 09:43:46,741] INFO [Broker id=2] Follower __consumer_offsets-5 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.746653411Z [2024-04-17 09:43:46,741] INFO [Broker id=2] Creating new partition __consumer_offsets-38 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.746656479Z [2024-04-17 09:43:46,744] INFO [LogLoader partition=__consumer_offsets-38, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.746659739Z [2024-04-17 09:43:46,744] INFO Created log for partition __consumer_offsets-38 in /bitnami/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.746662567Z [2024-04-17 09:43:46,744] INFO [Partition __consumer_offsets-38 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 2024-04-17T09:43:46.746665432Z [2024-04-17 09:43:46,744] INFO [Partition __consumer_offsets-38 broker=2] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.746668260Z [2024-04-17 09:43:46,744] INFO [Broker id=2] Follower __consumer_offsets-38 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.746671061Z [2024-04-17 09:43:46,744] INFO [Broker id=2] Creating new partition __consumer_offsets-3 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.751988361Z [2024-04-17 09:43:46,746] INFO [LogLoader partition=__consumer_offsets-3, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.751999460Z [2024-04-17 09:43:46,747] INFO Created log for partition __consumer_offsets-3 in /bitnami/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.752002894Z [2024-04-17 09:43:46,747] INFO [Partition __consumer_offsets-3 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 2024-04-17T09:43:46.752005609Z [2024-04-17 09:43:46,747] INFO [Partition __consumer_offsets-3 broker=2] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.752008188Z [2024-04-17 09:43:46,747] INFO [Broker id=2] Follower __consumer_offsets-3 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.752011229Z [2024-04-17 09:43:46,747] INFO [Broker id=2] Creating new partition __consumer_offsets-16 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.753485813Z [2024-04-17 09:43:46,749] INFO [LogLoader partition=__consumer_offsets-16, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.753496921Z [2024-04-17 09:43:46,749] INFO Created log for partition __consumer_offsets-16 in /bitnami/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.753500244Z [2024-04-17 09:43:46,750] INFO [Partition __consumer_offsets-16 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 2024-04-17T09:43:46.753502998Z [2024-04-17 09:43:46,750] INFO [Partition __consumer_offsets-16 broker=2] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.753534230Z [2024-04-17 09:43:46,750] INFO [Broker id=2] Follower __consumer_offsets-16 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.753570583Z [2024-04-17 09:43:46,750] INFO [Broker id=2] Creating new partition __consumer_offsets-45 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.753574427Z [2024-04-17 09:43:46,752] INFO [LogLoader partition=__consumer_offsets-45, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.753577564Z [2024-04-17 09:43:46,752] INFO Created log for partition __consumer_offsets-45 in /bitnami/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.753580652Z [2024-04-17 09:43:46,753] INFO [Partition __consumer_offsets-45 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 2024-04-17T09:43:46.753583439Z [2024-04-17 09:43:46,753] INFO [Partition __consumer_offsets-45 broker=2] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.753616791Z [2024-04-17 09:43:46,753] INFO [Broker id=2] Follower __consumer_offsets-45 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.753620710Z [2024-04-17 09:43:46,753] INFO [Broker id=2] Creating new partition __consumer_offsets-12 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.758478812Z [2024-04-17 09:43:46,757] INFO [LogLoader partition=__consumer_offsets-12, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.765798130Z [2024-04-17 09:43:46,761] INFO Created log for partition __consumer_offsets-12 in /bitnami/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.766267175Z [2024-04-17 09:43:46,761] INFO [Partition __consumer_offsets-12 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 2024-04-17T09:43:46.766271683Z [2024-04-17 09:43:46,761] INFO [Partition __consumer_offsets-12 broker=2] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.766275652Z [2024-04-17 09:43:46,761] INFO [Broker id=2] Follower __consumer_offsets-12 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.766279229Z [2024-04-17 09:43:46,761] INFO [Broker id=2] Creating new partition __consumer_offsets-41 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.766282369Z [2024-04-17 09:43:46,764] INFO [LogLoader partition=__consumer_offsets-41, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.766285442Z [2024-04-17 09:43:46,764] INFO Created log for partition __consumer_offsets-41 in /bitnami/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.766288328Z [2024-04-17 09:43:46,764] INFO [Partition __consumer_offsets-41 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 2024-04-17T09:43:46.766295175Z [2024-04-17 09:43:46,764] INFO [Partition __consumer_offsets-41 broker=2] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.766312082Z [2024-04-17 09:43:46,764] INFO [Broker id=2] Follower __consumer_offsets-41 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.766315335Z [2024-04-17 09:43:46,764] INFO [Broker id=2] Creating new partition __consumer_offsets-10 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.801143796Z [2024-04-17 09:43:46,800] INFO [LogLoader partition=__consumer_offsets-10, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.801781750Z [2024-04-17 09:43:46,801] INFO Created log for partition __consumer_offsets-10 in /bitnami/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.802139174Z [2024-04-17 09:43:46,801] INFO [Partition __consumer_offsets-10 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 2024-04-17T09:43:46.802148361Z [2024-04-17 09:43:46,801] INFO [Partition __consumer_offsets-10 broker=2] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.802152180Z [2024-04-17 09:43:46,802] INFO [Broker id=2] Follower __consumer_offsets-10 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.802419833Z [2024-04-17 09:43:46,802] INFO [Broker id=2] Creating new partition __consumer_offsets-24 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.805989753Z [2024-04-17 09:43:46,805] INFO [LogLoader partition=__consumer_offsets-24, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.807029907Z [2024-04-17 09:43:46,806] INFO Created log for partition __consumer_offsets-24 in /bitnami/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.807042180Z [2024-04-17 09:43:46,806] INFO [Partition __consumer_offsets-24 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 2024-04-17T09:43:46.807046035Z [2024-04-17 09:43:46,806] INFO [Partition __consumer_offsets-24 broker=2] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.807049661Z [2024-04-17 09:43:46,806] INFO [Broker id=2] Follower __consumer_offsets-24 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.807053327Z [2024-04-17 09:43:46,806] INFO [Broker id=2] Creating new partition __consumer_offsets-22 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.821591573Z [2024-04-17 09:43:46,820] INFO [LogLoader partition=__consumer_offsets-22, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.823946723Z [2024-04-17 09:43:46,820] INFO Created log for partition __consumer_offsets-22 in /bitnami/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.823958230Z [2024-04-17 09:43:46,821] INFO [Partition __consumer_offsets-22 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 2024-04-17T09:43:46.824014999Z [2024-04-17 09:43:46,821] INFO [Partition __consumer_offsets-22 broker=2] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.824019095Z [2024-04-17 09:43:46,821] INFO [Broker id=2] Follower __consumer_offsets-22 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.824025192Z [2024-04-17 09:43:46,821] INFO [Broker id=2] Creating new partition __consumer_offsets-20 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.824866397Z [2024-04-17 09:43:46,824] INFO [LogLoader partition=__consumer_offsets-20, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.827019237Z [2024-04-17 09:43:46,825] INFO Created log for partition __consumer_offsets-20 in /bitnami/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.827320898Z [2024-04-17 09:43:46,825] INFO [Partition __consumer_offsets-20 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 2024-04-17T09:43:46.827327148Z [2024-04-17 09:43:46,825] INFO [Partition __consumer_offsets-20 broker=2] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.827331048Z [2024-04-17 09:43:46,825] INFO [Broker id=2] Follower __consumer_offsets-20 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.827334589Z [2024-04-17 09:43:46,825] INFO [Broker id=2] Creating new partition __consumer_offsets-49 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.842421258Z [2024-04-17 09:43:46,839] INFO [LogLoader partition=__consumer_offsets-49, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.846563137Z [2024-04-17 09:43:46,845] INFO Created log for partition __consumer_offsets-49 in /bitnami/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.846582494Z [2024-04-17 09:43:46,845] INFO [Partition __consumer_offsets-49 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 2024-04-17T09:43:46.846586186Z [2024-04-17 09:43:46,846] INFO [Partition __consumer_offsets-49 broker=2] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.846589646Z [2024-04-17 09:43:46,846] INFO [Broker id=2] Follower __consumer_offsets-49 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.846593016Z [2024-04-17 09:43:46,846] INFO [Broker id=2] Creating new partition __consumer_offsets-18 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.864368494Z [2024-04-17 09:43:46,852] INFO [LogLoader partition=__consumer_offsets-18, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.864402514Z [2024-04-17 09:43:46,858] INFO Created log for partition __consumer_offsets-18 in /bitnami/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.864406425Z [2024-04-17 09:43:46,858] INFO [Partition __consumer_offsets-18 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 2024-04-17T09:43:46.864419157Z [2024-04-17 09:43:46,858] INFO [Partition __consumer_offsets-18 broker=2] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.864422637Z [2024-04-17 09:43:46,858] INFO [Broker id=2] Follower __consumer_offsets-18 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.864425915Z [2024-04-17 09:43:46,858] INFO [Broker id=2] Creating new partition __consumer_offsets-31 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.879495268Z [2024-04-17 09:43:46,869] INFO [LogLoader partition=__consumer_offsets-31, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.879511716Z [2024-04-17 09:43:46,869] INFO Created log for partition __consumer_offsets-31 in /bitnami/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.879515676Z [2024-04-17 09:43:46,869] INFO [Partition __consumer_offsets-31 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 2024-04-17T09:43:46.879519034Z [2024-04-17 09:43:46,869] INFO [Partition __consumer_offsets-31 broker=2] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.879526522Z [2024-04-17 09:43:46,869] INFO [Broker id=2] Follower __consumer_offsets-31 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.879530231Z [2024-04-17 09:43:46,869] INFO [Broker id=2] Creating new partition __consumer_offsets-0 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.879533384Z [2024-04-17 09:43:46,873] INFO [LogLoader partition=__consumer_offsets-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.879536805Z [2024-04-17 09:43:46,873] INFO Created log for partition __consumer_offsets-0 in /bitnami/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.879539905Z [2024-04-17 09:43:46,873] INFO [Partition __consumer_offsets-0 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 2024-04-17T09:43:46.879543031Z [2024-04-17 09:43:46,873] INFO [Partition __consumer_offsets-0 broker=2] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.879546064Z [2024-04-17 09:43:46,873] INFO [Broker id=2] Follower __consumer_offsets-0 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.879549098Z [2024-04-17 09:43:46,873] INFO [Broker id=2] Creating new partition __consumer_offsets-29 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.879552239Z [2024-04-17 09:43:46,876] INFO [LogLoader partition=__consumer_offsets-29, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.879555292Z [2024-04-17 09:43:46,876] INFO Created log for partition __consumer_offsets-29 in /bitnami/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.879588509Z [2024-04-17 09:43:46,876] INFO [Partition __consumer_offsets-29 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 2024-04-17T09:43:46.879591950Z [2024-04-17 09:43:46,876] INFO [Partition __consumer_offsets-29 broker=2] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.879595047Z [2024-04-17 09:43:46,876] INFO [Broker id=2] Follower __consumer_offsets-29 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.879598175Z [2024-04-17 09:43:46,876] INFO [Broker id=2] Creating new partition __consumer_offsets-8 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.880748144Z [2024-04-17 09:43:46,880] INFO [LogLoader partition=__consumer_offsets-8, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.881962765Z [2024-04-17 09:43:46,881] INFO Created log for partition __consumer_offsets-8 in /bitnami/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.881976643Z [2024-04-17 09:43:46,881] INFO [Partition __consumer_offsets-8 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 2024-04-17T09:43:46.881980063Z [2024-04-17 09:43:46,881] INFO [Partition __consumer_offsets-8 broker=2] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.881999269Z [2024-04-17 09:43:46,881] INFO [Broker id=2] Follower __consumer_offsets-8 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.882022021Z [2024-04-17 09:43:46,881] INFO [Broker id=2] Creating new partition __consumer_offsets-37 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.897042684Z [2024-04-17 09:43:46,883] INFO [LogLoader partition=__consumer_offsets-37, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.897071344Z [2024-04-17 09:43:46,890] INFO Created log for partition __consumer_offsets-37 in /bitnami/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.897092379Z [2024-04-17 09:43:46,890] INFO [Partition __consumer_offsets-37 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 2024-04-17T09:43:46.897096442Z [2024-04-17 09:43:46,890] INFO [Partition __consumer_offsets-37 broker=2] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.897100088Z [2024-04-17 09:43:46,890] INFO [Broker id=2] Follower __consumer_offsets-37 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.897103961Z [2024-04-17 09:43:46,890] INFO [Broker id=2] Creating new partition __consumer_offsets-6 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.897107190Z [2024-04-17 09:43:46,896] INFO [LogLoader partition=__consumer_offsets-6, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.897480078Z [2024-04-17 09:43:46,897] INFO Created log for partition __consumer_offsets-6 in /bitnami/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.897503345Z [2024-04-17 09:43:46,897] INFO [Partition __consumer_offsets-6 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 2024-04-17T09:43:46.897506603Z [2024-04-17 09:43:46,897] INFO [Partition __consumer_offsets-6 broker=2] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.897513887Z [2024-04-17 09:43:46,897] INFO [Broker id=2] Follower __consumer_offsets-6 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.897517377Z [2024-04-17 09:43:46,897] INFO [Broker id=2] Creating new partition __consumer_offsets-35 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.907679386Z [2024-04-17 09:43:46,903] INFO [LogLoader partition=__consumer_offsets-35, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.907694466Z [2024-04-17 09:43:46,903] INFO Created log for partition __consumer_offsets-35 in /bitnami/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.907705357Z [2024-04-17 09:43:46,903] INFO [Partition __consumer_offsets-35 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 2024-04-17T09:43:46.907708275Z [2024-04-17 09:43:46,903] INFO [Partition __consumer_offsets-35 broker=2] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.907711707Z [2024-04-17 09:43:46,903] INFO [Broker id=2] Follower __consumer_offsets-35 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 1. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.907715113Z [2024-04-17 09:43:46,903] INFO [Broker id=2] Creating new partition __consumer_offsets-33 with topic id AYBz8iuVS5KvjNFEVdGv9g. 
(state.change.logger) 2024-04-17T09:43:46.907742514Z [2024-04-17 09:43:46,906] INFO [LogLoader partition=__consumer_offsets-33, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.914582703Z [2024-04-17 09:43:46,906] INFO Created log for partition __consumer_offsets-33 in /bitnami/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.914598863Z [2024-04-17 09:43:46,906] INFO [Partition __consumer_offsets-33 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 2024-04-17T09:43:46.914602524Z [2024-04-17 09:43:46,906] INFO [Partition __consumer_offsets-33 broker=2] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.914606480Z [2024-04-17 09:43:46,906] INFO [Broker id=2] Follower __consumer_offsets-33 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:46.914610233Z [2024-04-17 09:43:46,906] INFO [Broker id=2] Creating new partition __consumer_offsets-2 with topic id AYBz8iuVS5KvjNFEVdGv9g. (state.change.logger) 2024-04-17T09:43:46.914613908Z [2024-04-17 09:43:46,909] INFO [LogLoader partition=__consumer_offsets-2, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:46.914630494Z [2024-04-17 09:43:46,909] INFO Created log for partition __consumer_offsets-2 in /bitnami/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 2024-04-17T09:43:46.914633564Z [2024-04-17 09:43:46,909] INFO [Partition __consumer_offsets-2 broker=2] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 2024-04-17T09:43:46.914636672Z [2024-04-17 09:43:46,909] INFO [Partition __consumer_offsets-2 broker=2] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:46.914654401Z [2024-04-17 09:43:46,909] INFO [Broker id=2] Follower __consumer_offsets-2 starts at leader epoch 0 from offset 0 with partition epoch 0 and high watermark 0. Current leader is 0. Previous leader epoch was -1. 
(state.change.logger) 2024-04-17T09:43:46.914660090Z [2024-04-17 09:43:46,910] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(__consumer_offsets-38, __consumer_offsets-49, __consumer_offsets-16, __consumer_offsets-8, __consumer_offsets-2, __consumer_offsets-13, __consumer_offsets-35, __consumer_offsets-46, __consumer_offsets-24, __consumer_offsets-5, __consumer_offsets-21, __consumer_offsets-10, __consumer_offsets-37, __consumer_offsets-48, __consumer_offsets-29, __consumer_offsets-40, __consumer_offsets-18, __consumer_offsets-45, __consumer_offsets-26, __consumer_offsets-15, __consumer_offsets-42, __consumer_offsets-31, __consumer_offsets-20, __consumer_offsets-12, __consumer_offsets-6, __consumer_offsets-28, __consumer_offsets-44, __consumer_offsets-3, __consumer_offsets-30, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-11, __consumer_offsets-22, __consumer_offsets-0) (kafka.server.ReplicaFetcherManager) 2024-04-17T09:43:46.914664037Z [2024-04-17 09:43:46,910] INFO [Broker id=2] Stopped fetchers as part of become-follower for 34 partitions (state.change.logger) 2024-04-17T09:43:46.955710922Z [2024-04-17 09:43:46,955] INFO [ReplicaFetcherThread-0-0]: Starting (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.965634950Z [2024-04-17 09:43:46,960] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 0 for partitions Map(__consumer_offsets-49 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-38 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-16 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-13 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-2 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-46 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-5 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-21 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-10 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-40 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-18 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-26 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-42 -> 
InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-6 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-28 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-30 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-33 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=0, host=kafka-controller-0.kafka-controller-headless.osm.svc.cluster.local:9094),0,0)) (kafka.server.ReplicaFetcherManager) 2024-04-17T09:43:46.965664682Z [2024-04-17 09:43:46,964] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-6 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.965932756Z [2024-04-17 09:43:46,965] INFO [UnifiedLog partition=__consumer_offsets-6, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.968102379Z [2024-04-17 09:43:46,967] INFO [ReplicaFetcherThread-0-1]: Starting (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.969191402Z [2024-04-17 09:43:46,968] INFO [ReplicaFetcherManager on broker 2] Added fetcher to broker 1 for partitions Map(__consumer_offsets-8 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-24 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-35 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-37 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-48 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-29 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-45 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-15 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-31 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-20 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-12 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, 
host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-44 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-3 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-41 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-0 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-11 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0), __consumer_offsets-22 -> InitialFetchState(Some(AYBz8iuVS5KvjNFEVdGv9g),BrokerEndPoint(id=1, host=kafka-controller-1.kafka-controller-headless.osm.svc.cluster.local:9094),0,0)) (kafka.server.ReplicaFetcherManager) 2024-04-17T09:43:46.969263968Z [2024-04-17 09:43:46,969] INFO [Broker id=2] Started fetchers as part of become-follower for 34 partitions (state.change.logger) 2024-04-17T09:43:46.974153150Z [2024-04-17 09:43:46,973] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-3 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.974168005Z [2024-04-17 09:43:46,974] INFO [UnifiedLog partition=__consumer_offsets-3, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.975200973Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-28 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.975290111Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-28, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.975392522Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-21 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.975515493Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-21, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.975521499Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-10 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.975524570Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-10, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.975572228Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-40 with TruncationState(offset=0, completed=true) due to local high 
watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976039319Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-40, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976050426Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-18 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976085420Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-18, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976089112Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-33 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976091962Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-33, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976123660Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-30 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976127606Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-30, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976130651Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-26 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976133407Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-26, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976136226Z [2024-04-17 09:43:46,975] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-49 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976146232Z [2024-04-17 09:43:46,975] INFO [UnifiedLog partition=__consumer_offsets-49, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976156380Z [2024-04-17 09:43:46,976] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-38 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976159422Z [2024-04-17 09:43:46,976] INFO [UnifiedLog partition=__consumer_offsets-38, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976245305Z [2024-04-17 09:43:46,976] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-16 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976259378Z 
[2024-04-17 09:43:46,976] INFO [UnifiedLog partition=__consumer_offsets-16, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976347467Z [2024-04-17 09:43:46,976] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-42 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976351762Z [2024-04-17 09:43:46,976] INFO [UnifiedLog partition=__consumer_offsets-42, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976487183Z [2024-04-17 09:43:46,976] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-5 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976509758Z [2024-04-17 09:43:46,976] INFO [UnifiedLog partition=__consumer_offsets-5, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976517193Z [2024-04-17 09:43:46,976] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-13 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976520305Z [2024-04-17 09:43:46,976] INFO [UnifiedLog partition=__consumer_offsets-13, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976607507Z [2024-04-17 09:43:46,976] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-2 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976670048Z [2024-04-17 09:43:46,976] INFO [UnifiedLog partition=__consumer_offsets-2, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.976673536Z [2024-04-17 09:43:46,976] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Truncating partition __consumer_offsets-46 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.976756548Z [2024-04-17 09:43:46,976] INFO [UnifiedLog partition=__consumer_offsets-46, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982323834Z [2024-04-17 09:43:46,977] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-44 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982338445Z [2024-04-17 09:43:46,977] INFO [UnifiedLog partition=__consumer_offsets-44, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982341643Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-29 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982344491Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-29, 
dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982347300Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-37 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982350160Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-37, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982353370Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-48 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982356229Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-48, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982359148Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-0 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982370892Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-0, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982373812Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-41 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982376564Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-41, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982379259Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-11 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982382019Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-11, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982384853Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-22 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982387582Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-22, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982390383Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-15 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982393632Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-15, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the 
log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982398293Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-45 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982401473Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-45, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982404181Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-8 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982432801Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-8, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982436356Z [2024-04-17 09:43:46,978] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-12 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982439091Z [2024-04-17 09:43:46,978] INFO [UnifiedLog partition=__consumer_offsets-12, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982441876Z [2024-04-17 09:43:46,980] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-31 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982448354Z [2024-04-17 09:43:46,980] INFO [UnifiedLog partition=__consumer_offsets-31, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982451297Z [2024-04-17 09:43:46,980] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-20 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982476231Z [2024-04-17 09:43:46,981] INFO [UnifiedLog partition=__consumer_offsets-20, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982483235Z [2024-04-17 09:43:46,981] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-24 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982491479Z [2024-04-17 09:43:46,981] INFO [UnifiedLog partition=__consumer_offsets-24, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:46.982494586Z [2024-04-17 09:43:46,981] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Truncating partition __consumer_offsets-35 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) 2024-04-17T09:43:46.982497380Z [2024-04-17 09:43:46,981] INFO [UnifiedLog partition=__consumer_offsets-35, dir=/bitnami/kafka/data] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) 2024-04-17T09:43:47.002653692Z [2024-04-17 
09:43:47,000] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002673180Z [2024-04-17 09:43:47,001] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002676229Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002679278Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002682122Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002689600Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002692917Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002695794Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002698613Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002701349Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002704421Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002713613Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002716627Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002719388Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002722191Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002725069Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002727810Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002730575Z [2024-04-17 09:43:47,002] 
INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002733379Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.002736692Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.002739493Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.003227198Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.003236966Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.003239807Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.003248099Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.003251020Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.003262451Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.003265505Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.003268467Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.003271289Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.003281452Z [2024-04-17 09:43:47,002] INFO [GroupCoordinator 2]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.003284340Z [2024-04-17 09:43:47,002] INFO [GroupMetadataManager brokerId=2] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.009939103Z [2024-04-17 09:43:47,009] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 15 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010349890Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010681696Z 
[2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 48 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010691830Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010767058Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 13 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010795298Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010804292Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 46 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010807545Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010810493Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 11 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010829833Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010833584Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 44 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010836595Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010839483Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 42 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010861926Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010865427Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 21 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010868439Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010915422Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 30 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010919433Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010925387Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 28 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 
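The Elected/Resigned group-coordinator records above and below show broker 2 picking up or handing off coordination for individual __consumer_offsets partitions as partition leadership settles across the three brokers. Which of those partitions (and therefore which broker) owns a given consumer group follows from the group id alone: the classic group coordinator places a group on partition abs(groupId.hashCode) % offsets.topic.num.partitions. A minimal sketch of that mapping, assuming the default of 50 offsets partitions, which is consistent with the partitions 0-49 seen in this log; it is an illustration, not code taken from Kafka or OSM:

# Sketch: reproduce the group-id -> __consumer_offsets partition mapping used by
# the classic group coordinator: abs(groupId.hashCode) % offsets.topic.num.partitions.
# Assumes the default of 50 offsets partitions (partitions 0-49 appear in this log).

def java_string_hash(s: str) -> int:
    """Java String.hashCode(), reduced to a signed 32-bit integer (ASCII/BMP input)."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - (1 << 32) if h >= (1 << 31) else h

def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
    # Ignores the Integer.MIN_VALUE corner case, which Kafka maps to 0.
    return abs(java_string_hash(group_id)) % num_partitions

print(coordinator_partition("nbi-server"))   # 23 -> __consumer_offsets-23, matching the nbi-server rebalance later in this log
print(coordinator_partition("lcm-server"))   # 32 -> __consumer_offsets-32, matching the lcm-server rebalance later in this log

Whichever broker leads that __consumer_offsets partition coordinates the group; broker 2 was elected coordinator for partitions 23 and 32 above, which is why it later handles the nbi-server and lcm-server groups.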
2024-04-17T09:43:47.010928362Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010933574Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 26 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010936454Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010966360Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 40 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010974679Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.010981326Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 5 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.010984315Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011006012Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 38 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011009905Z [2024-04-17 09:43:47,010] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011015485Z [2024-04-17 09:43:47,010] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 3 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011036390Z [2024-04-17 09:43:47,011] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011059832Z [2024-04-17 09:43:47,011] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 16 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011063431Z [2024-04-17 09:43:47,011] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011091198Z [2024-04-17 09:43:47,011] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 45 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011094611Z [2024-04-17 09:43:47,011] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011119202Z [2024-04-17 09:43:47,011] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 12 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011135840Z [2024-04-17 09:43:47,011] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-12 
(kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011184521Z [2024-04-17 09:43:47,011] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 41 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011219786Z [2024-04-17 09:43:47,011] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011223285Z [2024-04-17 09:43:47,011] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 10 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011226384Z [2024-04-17 09:43:47,011] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.011235649Z [2024-04-17 09:43:47,011] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 24 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.011238740Z [2024-04-17 09:43:47,011] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023715904Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 22 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023729299Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023738710Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 20 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023742084Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023745056Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 49 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023747917Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023750798Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 18 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023753630Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023756454Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 31 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023759172Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023761910Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for 
partition 0 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023776457Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023779867Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 29 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023782856Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023785566Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 8 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023788671Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023791480Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 37 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023794280Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023797033Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 6 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023799841Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023803197Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 35 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023805936Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023820771Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 33 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023823934Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.023826831Z [2024-04-17 09:43:47,019] INFO [GroupCoordinator 2]: Resigned as the group coordinator for partition 2 in epoch OptionalInt[0] (kafka.coordinator.group.GroupCoordinator) 2024-04-17T09:43:47.023829555Z [2024-04-17 09:43:47,019] INFO [GroupMetadataManager brokerId=2] Scheduling unloading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.052848379Z [2024-04-17 09:43:47,052] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-47 in 50 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.059931647Z [2024-04-17 09:43:47,054] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-14 in 52 milliseconds for epoch 0, of which 52 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.059947853Z [2024-04-17 09:43:47,054] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-43 in 52 milliseconds for epoch 0, of which 52 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.059962247Z [2024-04-17 09:43:47,054] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-9 in 52 milliseconds for epoch 0, of which 52 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.059965346Z [2024-04-17 09:43:47,054] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-23 in 52 milliseconds for epoch 0, of which 52 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.059968089Z [2024-04-17 09:43:47,054] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-19 in 52 milliseconds for epoch 0, of which 52 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.181220187Z [2024-04-17 09:43:47,181] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-17 in 179 milliseconds for epoch 0, of which 52 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.184114073Z [2024-04-17 09:43:47,184] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-32 in 181 milliseconds for epoch 0, of which 180 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.186193638Z [2024-04-17 09:43:47,186] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-27 in 184 milliseconds for epoch 0, of which 182 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.188477316Z [2024-04-17 09:43:47,188] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-25 in 186 milliseconds for epoch 0, of which 184 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.189008147Z [2024-04-17 09:43:47,188] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-7 in 186 milliseconds for epoch 0, of which 186 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.196906440Z [2024-04-17 09:43:47,194] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-39 in 192 milliseconds for epoch 0, of which 192 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.196914988Z [2024-04-17 09:43:47,194] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-4 in 192 milliseconds for epoch 0, of which 192 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.196917915Z [2024-04-17 09:43:47,195] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-36 in 193 milliseconds for epoch 0, of which 193 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.196920672Z [2024-04-17 09:43:47,195] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-1 in 193 milliseconds for epoch 0, of which 193 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.196923322Z [2024-04-17 09:43:47,195] INFO [GroupMetadataManager brokerId=2] Finished loading offsets and group metadata from __consumer_offsets-34 in 193 milliseconds for epoch 0, of which 193 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.224307858Z [2024-04-17 09:43:47,224] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-15 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.245248254Z [2024-04-17 09:43:47,245] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-48 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.245809093Z [2024-04-17 09:43:47,245] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-13 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.246403072Z [2024-04-17 09:43:47,246] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-46 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.246566305Z [2024-04-17 09:43:47,246] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-11 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247215618Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-44 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247281347Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-42 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247950426Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-21 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247961072Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-30 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247964884Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-28 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247968234Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-26 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247971423Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-40 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.247974537Z [2024-04-17 09:43:47,247] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-5 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.249030917Z [2024-04-17 09:43:47,248] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-38 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.249168057Z [2024-04-17 09:43:47,249] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-3 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.250616108Z [2024-04-17 09:43:47,250] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-16 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.250629658Z [2024-04-17 09:43:47,250] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-45 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.250671422Z [2024-04-17 09:43:47,250] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-12 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.251789351Z [2024-04-17 09:43:47,251] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-41 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.251804374Z [2024-04-17 09:43:47,251] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-10 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.251866356Z [2024-04-17 09:43:47,251] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-24 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.258406329Z [2024-04-17 09:43:47,251] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-22 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.258420085Z [2024-04-17 09:43:47,255] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-20 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.258423463Z [2024-04-17 09:43:47,255] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-49 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.258426575Z [2024-04-17 09:43:47,255] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-18 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.258429429Z [2024-04-17 09:43:47,255] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-31 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.261689471Z [2024-04-17 09:43:47,261] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-0 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.261938210Z [2024-04-17 09:43:47,261] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-29 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.267483620Z [2024-04-17 09:43:47,267] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-8 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.268132697Z [2024-04-17 09:43:47,268] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-37 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.268446052Z [2024-04-17 09:43:47,268] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-6 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.268454032Z [2024-04-17 09:43:47,268] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-35 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.270082854Z [2024-04-17 09:43:47,268] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-33 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. (kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.285051295Z [2024-04-17 09:43:47,277] INFO [GroupMetadataManager brokerId=2] Finished unloading __consumer_offsets-2 for coordinator epoch OptionalInt[0]. Removed 0 cached offsets and 0 cached groups. 
(kafka.coordinator.group.GroupMetadataManager) 2024-04-17T09:43:47.322807735Z [2024-04-17 09:43:47,253] INFO [DynamicConfigPublisher broker id=2] Updating topic __consumer_offsets with new configuration : cleanup.policy -> compact,segment.bytes -> 104857600,compression.type -> producer (kafka.server.metadata.DynamicConfigPublisher) 2024-04-17T09:43:49.333933970Z [2024-04-17 09:43:49,332] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='admin', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:49.341035596Z [2024-04-17 09:43:49,339] INFO [QuorumController id=2] Created topic admin with topic ID djzIAqLrThumVJekqIm5XA. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:49.341063811Z [2024-04-17 09:43:49,339] INFO [QuorumController id=2] Created partition admin-0 with topic ID djzIAqLrThumVJekqIm5XA and PartitionRegistration(replicas=[1], isr=[1], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:49.498489021Z [2024-04-17 09:43:49,498] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='admin', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'admin' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:49.774120261Z [2024-04-17 09:43:49,773] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='admin', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'admin' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:50.198425264Z [2024-04-17 09:43:50,195] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='admin', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'admin' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:50.339774622Z [2024-04-17 09:43:50,338] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='admin', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'admin' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:50.417205408Z [2024-04-17 09:43:50,415] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='admin', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'admin' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:52.238200932Z [2024-04-17 09:43:52,212] INFO Sent auto-creation request for Set(ns) to the active controller. (kafka.server.DefaultAutoTopicCreationManager) 2024-04-17T09:43:52.364791658Z [2024-04-17 09:43:52,364] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='ns', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:52.364872171Z [2024-04-17 09:43:52,364] INFO [QuorumController id=2] Created topic ns with topic ID YWYqGnvWSaO4cvP_eBL0dA. 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:52.364948253Z [2024-04-17 09:43:52,364] INFO [QuorumController id=2] Created partition ns-0 with topic ID YWYqGnvWSaO4cvP_eBL0dA and PartitionRegistration(replicas=[2], isr=[2], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:52.397958676Z [2024-04-17 09:43:52,396] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='ns', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'ns' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:52.441349529Z [2024-04-17 09:43:52,441] INFO [Broker id=2] Transitioning 1 partition(s) to local leaders. (state.change.logger) 2024-04-17T09:43:52.462660689Z [2024-04-17 09:43:52,462] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(ns-0) (kafka.server.ReplicaFetcherManager) 2024-04-17T09:43:52.462699220Z [2024-04-17 09:43:52,462] INFO [Broker id=2] Creating new partition ns-0 with topic id YWYqGnvWSaO4cvP_eBL0dA. (state.change.logger) 2024-04-17T09:43:52.699668419Z [2024-04-17 09:43:52,695] INFO [LogLoader partition=ns-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 2024-04-17T09:43:52.699681916Z [2024-04-17 09:43:52,696] INFO Created log for partition ns-0 in /bitnami/kafka/data/ns-0 with properties {} (kafka.log.LogManager) 2024-04-17T09:43:52.737893720Z [2024-04-17 09:43:52,737] INFO [Partition ns-0 broker=2] No checkpointed highwatermark is found for partition ns-0 (kafka.cluster.Partition) 2024-04-17T09:43:52.799609931Z [2024-04-17 09:43:52,793] INFO [Partition ns-0 broker=2] Log loaded for partition ns-0 with initial high watermark 0 (kafka.cluster.Partition) 2024-04-17T09:43:52.799623939Z [2024-04-17 09:43:52,793] INFO [Broker id=2] Leader ns-0 with topic id Some(YWYqGnvWSaO4cvP_eBL0dA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger) 2024-04-17T09:43:53.015156231Z [2024-04-17 09:43:53,015] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsi', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:53.015356474Z [2024-04-17 09:43:53,015] INFO [QuorumController id=2] Created topic nsi with topic ID 3o_wxT8JS62Li4_ohRU32A. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:53.015387499Z [2024-04-17 09:43:53,015] INFO [QuorumController id=2] Created partition nsi-0 with topic ID 3o_wxT8JS62Li4_ohRU32A and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:53.485567979Z [2024-04-17 09:43:53,485] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsi', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'nsi' already exists.) 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:53.851710268Z [2024-04-17 09:43:53,851] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vnf', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:53.851770253Z [2024-04-17 09:43:53,851] INFO [QuorumController id=2] Created topic vnf with topic ID 8zirEFyJQym84ZFjxWankg. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:53.851799065Z [2024-04-17 09:43:53,851] INFO [QuorumController id=2] Created partition vnf-0 with topic ID 8zirEFyJQym84ZFjxWankg and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:43:53.982113271Z [2024-04-17 09:43:53,981] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vnf', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'vnf' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:01.890574579Z [2024-04-17 09:44:01,890] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:01.893974234Z [2024-04-17 09:44:01,893] INFO [QuorumController id=2] Created topic vim_account with topic ID 4jChzxUDS6CZ2mgldPbtwA. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:01.894017781Z [2024-04-17 09:44:01,893] INFO [QuorumController id=2] Created partition vim_account-0 with topic ID 4jChzxUDS6CZ2mgldPbtwA and PartitionRegistration(replicas=[1], isr=[1], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:02.159186757Z [2024-04-17 09:44:02,157] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'vim_account' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:02.374761890Z [2024-04-17 09:44:02,374] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'vim_account' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:02.469820073Z [2024-04-17 09:44:02,469] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'vim_account' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:02.921917710Z [2024-04-17 09:44:02,921] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='wim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:02.922689094Z [2024-04-17 09:44:02,922] INFO [QuorumController id=2] Created topic wim_account with topic ID 9P-qicLjRSuy2DWaLY0Qkw. 
(org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:02.929779678Z [2024-04-17 09:44:02,929] INFO [QuorumController id=2] Created partition wim_account-0 with topic ID 9P-qicLjRSuy2DWaLY0Qkw and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:03.109002474Z [2024-04-17 09:44:03,108] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='wim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'wim_account' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:03.193415908Z [2024-04-17 09:44:03,193] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='wim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'wim_account' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:03.350703874Z [2024-04-17 09:44:03,350] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='wim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'wim_account' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:03.436947537Z [2024-04-17 09:44:03,436] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='wim_account', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'wim_account' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:03.954954311Z [2024-04-17 09:44:03,954] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='sdn', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:03.954987638Z [2024-04-17 09:44:03,954] INFO [QuorumController id=2] Created topic sdn with topic ID ppRDTMNtTbCDRGZxlFIkWg. (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:03.955032918Z [2024-04-17 09:44:03,954] INFO [QuorumController id=2] Created partition sdn-0 with topic ID ppRDTMNtTbCDRGZxlFIkWg and PartitionRegistration(replicas=[1], isr=[1], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:04.185681932Z [2024-04-17 09:44:04,185] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='sdn', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'sdn' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:04.732568540Z [2024-04-17 09:44:04,732] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='sdn', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'sdn' already exists.) (org.apache.kafka.controller.ReplicationControlManager) 2024-04-17T09:44:14.546122345Z [2024-04-17 09:44:14,543] INFO [GroupCoordinator 2]: Dynamic Member with unknown member id joins group nbi-server in Empty state. Created a new member id aiokafka-0.8.1-556f77b7-051f-4e49-b91d-a49f4d8a4a53 for this member and add to the group. 
2024-04-17T09:44:14.555309029Z [2024-04-17 09:44:14,554] INFO [GroupCoordinator 2]: Preparing to rebalance group nbi-server in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member aiokafka-0.8.1-556f77b7-051f-4e49-b91d-a49f4d8a4a53 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:44:17.602608985Z [2024-04-17 09:44:17,602] INFO [GroupCoordinator 2]: Stabilized group nbi-server generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:44:17.630885322Z [2024-04-17 09:44:17,630] INFO [GroupCoordinator 2]: Assignment received from leader aiokafka-0.8.1-556f77b7-051f-4e49-b91d-a49f4d8a4a53 for group nbi-server for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:44:26.535052910Z [2024-04-17 09:44:26,534] INFO [GroupCoordinator 2]: Dynamic Member with unknown member id joins group lcm-server in Empty state. Created a new member id aiokafka-0.8.1-8c6bdde7-4e73-4796-88ef-cd96c5478590 for this member and add to the group. (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:44:26.535728793Z [2024-04-17 09:44:26,535] INFO [GroupCoordinator 2]: Preparing to rebalance group lcm-server in state PreparingRebalance with old generation 0 (__consumer_offsets-32) (reason: Adding new member aiokafka-0.8.1-8c6bdde7-4e73-4796-88ef-cd96c5478590 with group instance id None; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:44:26.560425900Z [2024-04-17 09:44:26,560] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nslcmops', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS, CreatableTopic(name='vca', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS, CreatableTopic(name='k8scluster', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS, CreatableTopic(name='k8srepo', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS, CreatableTopic(name='pla', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.560595255Z [2024-04-17 09:44:26,560] INFO [QuorumController id=2] Created topic nslcmops with topic ID TudLnk8SRCihlaB5W9JzGw. (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.560690948Z [2024-04-17 09:44:26,560] INFO [QuorumController id=2] Created partition nslcmops-0 with topic ID TudLnk8SRCihlaB5W9JzGw and PartitionRegistration(replicas=[2], isr=[2], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.560823789Z [2024-04-17 09:44:26,560] INFO [QuorumController id=2] Created topic vca with topic ID old1hJe6SxCBCRa0209haQ. (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.560856789Z [2024-04-17 09:44:26,560] INFO [QuorumController id=2] Created partition vca-0 with topic ID old1hJe6SxCBCRa0209haQ and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
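The GroupCoordinator entries show dynamic members built with aiokafka 0.8.1 joining the nbi-server and lcm-server consumer groups, which drives the usual Empty -> PreparingRebalance -> Stable cycle. A consumer along the following lines would produce this kind of join/rebalance sequence; the broker address and topic list are assumptions for illustration only, not values taken from the log.

```python
# Minimal sketch of an aiokafka consumer that joins the "lcm-server" group.
# bootstrap_servers and the topic names are illustrative assumptions.
import asyncio
from aiokafka import AIOKafkaConsumer

async def consume():
    consumer = AIOKafkaConsumer(
        "nslcmops", "vca",                  # topics to subscribe to (assumed)
        bootstrap_servers="localhost:9092",
        group_id="lcm-server",              # group name seen in the GroupCoordinator entries
    )
    await consumer.start()                  # triggers the JoinGroup/rebalance logged above
    try:
        async for msg in consumer:
            print(msg.topic, msg.partition, msg.offset, msg.value)
    finally:
        await consumer.stop()

asyncio.run(consume())
```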
2024-04-17T09:44:26.561327054Z [2024-04-17 09:44:26,560] INFO [QuorumController id=2] Created topic k8scluster with topic ID 7xVSjWF3Q5yIiU3j8P_Q8g. (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.561337132Z [2024-04-17 09:44:26,561] INFO [QuorumController id=2] Created partition k8scluster-0 with topic ID 7xVSjWF3Q5yIiU3j8P_Q8g and PartitionRegistration(replicas=[2], isr=[2], removingReplicas=[], addingReplicas=[], leader=2, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.561340560Z [2024-04-17 09:44:26,561] INFO [QuorumController id=2] Created topic k8srepo with topic ID XokYlwtoQHK8Ydllziwl5w. (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.561439751Z [2024-04-17 09:44:26,561] INFO [QuorumController id=2] Created partition k8srepo-0 with topic ID XokYlwtoQHK8Ydllziwl5w and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.563002308Z [2024-04-17 09:44:26,562] INFO [QuorumController id=2] Created topic pla with topic ID v6-DChVCRt2puiPnwlvIpQ. (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.563013976Z [2024-04-17 09:44:26,562] INFO [QuorumController id=2] Created partition pla-0 with topic ID v6-DChVCRt2puiPnwlvIpQ and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:44:26.626778067Z [2024-04-17 09:44:26,626] INFO [Broker id=2] Transitioning 2 partition(s) to local leaders. (state.change.logger)
2024-04-17T09:44:26.627541530Z [2024-04-17 09:44:26,627] INFO [ReplicaFetcherManager on broker 2] Removed fetcher for partitions Set(k8scluster-0, nslcmops-0) (kafka.server.ReplicaFetcherManager)
2024-04-17T09:44:26.628165101Z [2024-04-17 09:44:26,627] INFO [Broker id=2] Creating new partition k8scluster-0 with topic id 7xVSjWF3Q5yIiU3j8P_Q8g. (state.change.logger)
2024-04-17T09:44:26.657656920Z [2024-04-17 09:44:26,657] INFO [LogLoader partition=k8scluster-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
2024-04-17T09:44:26.660224555Z [2024-04-17 09:44:26,660] INFO Created log for partition k8scluster-0 in /bitnami/kafka/data/k8scluster-0 with properties {} (kafka.log.LogManager)
2024-04-17T09:44:26.662865105Z [2024-04-17 09:44:26,662] INFO [Partition k8scluster-0 broker=2] No checkpointed highwatermark is found for partition k8scluster-0 (kafka.cluster.Partition)
2024-04-17T09:44:26.662879149Z [2024-04-17 09:44:26,662] INFO [Partition k8scluster-0 broker=2] Log loaded for partition k8scluster-0 with initial high watermark 0 (kafka.cluster.Partition)
2024-04-17T09:44:26.663209937Z [2024-04-17 09:44:26,663] INFO [Broker id=2] Leader k8scluster-0 with topic id Some(7xVSjWF3Q5yIiU3j8P_Q8g) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
2024-04-17T09:44:26.672208270Z [2024-04-17 09:44:26,672] INFO [Broker id=2] Creating new partition nslcmops-0 with topic id TudLnk8SRCihlaB5W9JzGw. (state.change.logger)
2024-04-17T09:44:26.683687734Z [2024-04-17 09:44:26,683] INFO [LogLoader partition=nslcmops-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
2024-04-17T09:44:26.686158267Z [2024-04-17 09:44:26,686] INFO Created log for partition nslcmops-0 in /bitnami/kafka/data/nslcmops-0 with properties {} (kafka.log.LogManager)
2024-04-17T09:44:26.688110040Z [2024-04-17 09:44:26,688] INFO [Partition nslcmops-0 broker=2] No checkpointed highwatermark is found for partition nslcmops-0 (kafka.cluster.Partition)
2024-04-17T09:44:26.688532235Z [2024-04-17 09:44:26,688] INFO [Partition nslcmops-0 broker=2] Log loaded for partition nslcmops-0 with initial high watermark 0 (kafka.cluster.Partition)
2024-04-17T09:44:26.688722256Z [2024-04-17 09:44:26,688] INFO [Broker id=2] Leader nslcmops-0 with topic id Some(TudLnk8SRCihlaB5W9JzGw) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [2], adding replicas [] and removing replicas [] . Previous leader epoch was -1. (state.change.logger)
2024-04-17T09:44:29.544315387Z [2024-04-17 09:44:29,537] INFO [GroupCoordinator 2]: Stabilized group lcm-server generation 1 (__consumer_offsets-32) with 1 members (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:44:29.547882406Z [2024-04-17 09:44:29,545] INFO [GroupCoordinator 2]: Assignment received from leader aiokafka-0.8.1-8c6bdde7-4e73-4796-88ef-cd96c5478590 for group lcm-server for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:49:26.789229108Z [2024-04-17 09:49:26,788] INFO [GroupCoordinator 2]: Preparing to rebalance group lcm-server in state PreparingRebalance with old generation 1 (__consumer_offsets-32) (reason: Leader aiokafka-0.8.1-8c6bdde7-4e73-4796-88ef-cd96c5478590 re-joining group during Stable; client reason: not provided) (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:49:26.791910253Z [2024-04-17 09:49:26,791] INFO [GroupCoordinator 2]: Stabilized group lcm-server generation 2 (__consumer_offsets-32) with 1 members (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:49:26.795436170Z [2024-04-17 09:49:26,795] INFO [GroupCoordinator 2]: Assignment received from leader aiokafka-0.8.1-8c6bdde7-4e73-4796-88ef-cd96c5478590 for group lcm-server for generation 2. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
2024-04-17T09:50:04.044036932Z [2024-04-17 09:50:04,043] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vnfd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:04.044193416Z [2024-04-17 09:50:04,044] INFO [QuorumController id=2] Created topic vnfd with topic ID pe2nzxuYQremQ4_uOpvnPg. (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:04.044352554Z [2024-04-17 09:50:04,044] INFO [QuorumController id=2] Created partition vnfd-0 with topic ID pe2nzxuYQremQ4_uOpvnPg and PartitionRegistration(replicas=[1], isr=[1], removingReplicas=[], addingReplicas=[], leader=1, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
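The 09:49:26 rebalance is initiated by the group leader itself ("re-joining group during Stable"), which moves lcm-server from generation 1 to generation 2. One common client-side trigger for such a re-join is a subscription change, sketched below; this is an illustration of that general mechanism, not a claim about what this particular client did, and the broker address and topic names are assumptions.

```python
# Sketch only: changing an aiokafka consumer's subscription makes the member
# re-join its group, which shows up broker-side as "Leader ... re-joining group
# during Stable" and a new group generation. Names/addresses are assumptions.
import asyncio
from aiokafka import AIOKafkaConsumer

async def main():
    consumer = AIOKafkaConsumer(
        bootstrap_servers="localhost:9092",
        group_id="lcm-server",
    )
    await consumer.start()
    try:
        consumer.subscribe(["nslcmops"])           # first join  -> generation 1
        await asyncio.sleep(300)                   # ... some time later ...
        consumer.subscribe(["nslcmops", "vnfd"])   # re-join     -> rebalance, generation 2
        await asyncio.sleep(60)
    finally:
        await consumer.stop()

asyncio.run(main())
```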
2024-04-17T09:50:04.142141770Z [2024-04-17 09:50:04,142] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vnfd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'vnfd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:04.251262723Z [2024-04-17 09:50:04,251] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vnfd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'vnfd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:04.367786647Z [2024-04-17 09:50:04,367] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='vnfd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'vnfd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.516425699Z [2024-04-17 09:50:06,516] INFO Sent auto-creation request for Set(nsd) to the active controller. (kafka.server.DefaultAutoTopicCreationManager)
2024-04-17T09:50:06.519953017Z [2024-04-17 09:50:06,519] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): SUCCESS (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.519987156Z [2024-04-17 09:50:06,519] INFO [QuorumController id=2] Created topic nsd with topic ID GzFpOhFCTZGTPm_KFoLOEg. (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.520100729Z [2024-04-17 09:50:06,520] INFO [QuorumController id=2] Created partition nsd-0 with topic ID GzFpOhFCTZGTPm_KFoLOEg and PartitionRegistration(replicas=[0], isr=[0], removingReplicas=[], addingReplicas=[], leader=0, leaderRecoveryState=RECOVERED, leaderEpoch=0, partitionEpoch=0). (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.635618547Z [2024-04-17 09:50:06,634] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'nsd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.768167926Z [2024-04-17 09:50:06,768] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'nsd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.872612980Z [2024-04-17 09:50:06,872] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'nsd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.880587157Z [2024-04-17 09:50:06,880] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'nsd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
2024-04-17T09:50:06.890048685Z [2024-04-17 09:50:06,889] INFO [QuorumController id=2] CreateTopics result(s): CreatableTopic(name='nsd', numPartitions=1, replicationFactor=1, assignments=[], configs=[]): TOPIC_ALREADY_EXISTS (Topic 'nsd' already exists.) (org.apache.kafka.controller.ReplicationControlManager)
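Unlike the earlier topics, nsd is created through the broker's DefaultAutoTopicCreationManager: a client referenced a topic that did not exist yet, and since topic auto-creation is enabled on this broker, the request was forwarded to the active controller. Producing to (or fetching metadata for) an unknown topic is enough to trigger this, roughly as in the sketch below; the broker address and payload are assumptions for illustration.

```python
# Minimal sketch: sending to a topic that does not exist yet makes the broker
# ask the controller to auto-create it (the "Sent auto-creation request for
# Set(nsd)" line above), provided auto.create.topics.enable=true on the broker.
# bootstrap_servers and the payload are illustrative assumptions.
import asyncio
from aiokafka import AIOKafkaProducer

async def main():
    producer = AIOKafkaProducer(bootstrap_servers="localhost:9092")
    await producer.start()
    try:
        # The first send to "nsd" triggers auto-creation; it completes once the
        # partition leader has been elected.
        await producer.send_and_wait("nsd", b"example-payload")
    finally:
        await producer.stop()

asyncio.run(main())
```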
2024-04-17T09:52:29.505147965Z [2024-04-17 09:52:29,502] INFO [RaftManager id=2] Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
2024-04-17T10:00:06.849713652Z [2024-04-17 10:00:06,848] INFO [BrokerToControllerChannelManager id=2 name=forwarding] Node 2 disconnected. (org.apache.kafka.clients.NetworkClient)