
CRS-2412 Error and Resolution



Today, I'll tell you about the "CRS-2412: The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time." error and what to do to resolve it.


We noticed that the "/u01" filesystem had grown too large on our 2-node RAC system due to CRS logs. When we examined the alert file, we saw that the error entries below were being written repeatedly at half-hour intervals.

2022-03-03 09:16:14.751 [OCTSSD(65459)]CRS-2412: The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/product/diag/crs/node1/crs/trace/octssd.trc.

2022-03-03 09:46:15.232 [OCTSSD(65459)]CRS-2412: The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/product/diag/crs/node1/crs/trace/octssd.trc.

2022-03-03 10:16:15.697 [OCTSSD(65459)]CRS-2412: The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/product/diag/crs/node1/crs/trace/octssd.trc.

2022-03-03 10:46:16.181 [OCTSSD(65459)]CRS-2412: The Cluster Time Synchronization Service detects that the local time is significantly different from the mean cluster time. Details in /u01/app/product/diag/crs/node1/crs/trace/octssd.trc.
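If you want to confirm which files are actually driving the growth before going further, simple checks along the lines below can help. The diagnostic path here is taken from the trace file location shown in the alert entries above and may differ in your environment.

[oracle@node1 ~]$ df -h /u01
[oracle@node1 ~]$ du -sh /u01/app/product/diag/crs/node1/crs/trace
[oracle@node1 ~]$ ls -lhS /u01/app/product/diag/crs/node1/crs/trace | head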

We checked whether there was a time lag between the servers, and none was found. If there is a time difference between the nodes, that problem should be addressed first.

[oracle@node1 ~]$ date; ssh node2 date;
Thu Mar 3 17:02:39 +03 2022
Thu Mar 3 17:02:39 +03 2022
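If the servers synchronize time with chrony or ntpd, it is also worth confirming that both nodes are genuinely in sync with their time source. Which of the commands below applies depends on the service configured on your servers; they are shown only as examples.

[root@node1 ~]$ chronyc tracking
[root@node1 ~]$ ntpq -p
[root@node1 ~]$ timedatectl status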

Upon investigating the error, we found that it is a bug: the error is logged even though there is no time difference between the nodes (Doc ID 2614333.1).


1- To apply the workaround steps described in the document, first check the current log level on all nodes.

[root@node1 ~]$ crsctl get log ctss CTSS
Get CTSSD  Module: CTSS    Log Level: 3
[root@node2 ~]$ crsctl get log ctss CTSS
Get CTSSD  Module: CTSS    Log Level: 3

2- Set the log level to 4 on each node in turn.

[root@node1 ~]$ crsctl set log ctss CTSS=4
Set CTSSD  Module: CTSS    Log Level: 4
[root@node2 ~]$ crsctl set log ctss CTSS=4
Set CTSSD  Module: CTSS    Log Level: 4

3- Stop the CTSS daemon on all nodes.

[root@node1 ~]$ crsctl stop res ora.ctssd -init
CRS-2673: Attempting to stop 'ora.ctssd' on 'node1'
CRS-2677: Stop of 'ora.ctssd' on 'node1' succeeded
[root@node2 ~]$ crsctl stop res ora.ctssd -init
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded

4- Start the CTSS daemon on all nodes.

[root@node1 ~]$ crsctl start res ora.ctssd -init
CRS-2672: Attempting to start 'ora.ctssd' on 'node1'
CRS-2676: Start of 'ora.ctssd' on 'node1' succeeded
[root@node2 ~]$ crsctl start res ora.ctssd -init
CRS-2672: Attempting to start 'ora.ctssd' on 'node2'
CRS-2676: Start of 'ora.ctssd' on 'node2' succeeded
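Before moving on, you can confirm that the daemon came back ONLINE on each node with a standard resource status check like the one below.

[root@node1 ~]$ crsctl stat res ora.ctssd -init
[root@node2 ~]$ crsctl stat res ora.ctssd -init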

5- Set the log level back to the value found in step 1.

[root@node1 ~]$ crsctl set log ctss CTSS=3
Set CTSSD  Module: CTSS    Log Level: 3
[root@node2 ~]$ crsctl set log ctss CTSS=3
Set CTSSD  Module: CTSS    Log Level: 3
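As a final check, you can confirm that CTSS is running in the expected mode and that the log level really is back at its original value; for example:

[root@node1 ~]$ crsctl check ctss
[root@node1 ~]$ crsctl get log ctss CTSS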

After applying these steps, we examined the CRS logs again and the error no longer appeared. Repeating errors like this cause the alert file to grow uncontrollably, and important messages can easily be overlooked among them, so it is worth being careful about this.


Hope to see you in new posts,

Take care.
