CRS Administration |
starting |
    ## Starting CRS using Oracle 10g R1
    not possible (CRS can only be restarted by rebooting the node)
    ## Starting CRS using Oracle 10g R2
    $ORA_CRS_HOME/bin/crsctl start crs
stopping |
    ## Stopping CRS using Oracle 10g R1 (a scripted version of this sequence appears below)
    srvctl stop database -d <database>
    srvctl stop asm -n <node>
    srvctl stop nodeapps -n <node>
    /etc/init.d/init.crs stop
    ## Stopping CRS using Oracle 10g R2
    $ORA_CRS_HOME/bin/crsctl stop crs
disabling/enabling |
    ## used to stop CRS from restarting after a reboot
    ## Oracle 10g R1
    /etc/init.d/init.crs disable
    /etc/init.d/init.crs enable
    ## Oracle 10g R2
    $ORA_CRS_HOME/bin/crsctl disable crs
    $ORA_CRS_HOME/bin/crsctl enable crs
checking |
    $ORA_CRS_HOME/bin/crsctl check crs
    $ORA_CRS_HOME/bin/crsctl check evmd
    $ORA_CRS_HOME/bin/crsctl check cssd
    $ORA_CRS_HOME/bin/crsctl check crsd
    $ORA_CRS_HOME/bin/crsctl check install -wait 600
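The 10g R1 stop sequence above lends itself to a small wrapper script; a minimal sketch, assuming a database named PROD on node rac1 (both names are placeholders):

    #!/bin/sh
    ## orderly 10g R1 shutdown: database, then ASM, then nodeapps, then CRS itself
    DB=PROD      ## placeholder database name
    NODE=rac1    ## placeholder node name
    srvctl stop database -d $DB
    srvctl stop asm -n $NODE
    srvctl stop nodeapps -n $NODE
    /etc/init.d/init.crs stop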
Resource Applications (CRS Utilities) |
status | $ORA_CRS_HOME/bin/crs_stat |
create profile | $ORA_CRS_HOME/bin/crs_profile |
register/unregister application |
    $ORA_CRS_HOME/bin/crs_register
    $ORA_CRS_HOME/bin/crs_unregister
Start/Stop an application |
    $ORA_CRS_HOME/bin/crs_start
    $ORA_CRS_HOME/bin/crs_stop
Resource permissions |
    $ORA_CRS_HOME/bin/crs_getperm
    $ORA_CRS_HOME/bin/crs_setperm
Relocate a resource | $ORA_CRS_HOME/bin/crs_relocate |
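Taken together, these utilities cover the full lifecycle of a user-defined application resource. The sketch below is illustrative only; the resource name myapp and the action script path are hypothetical:

    ## create a profile, register and start the resource, check it, then tear it down
    $ORA_CRS_HOME/bin/crs_profile -create myapp -t application -a /u01/crs/scripts/myapp.scr
    $ORA_CRS_HOME/bin/crs_register myapp
    $ORA_CRS_HOME/bin/crs_start myapp
    $ORA_CRS_HOME/bin/crs_stat -t
    $ORA_CRS_HOME/bin/crs_stop myapp
    $ORA_CRS_HOME/bin/crs_unregister myapp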
Nodes |
member number/name | olsnodes -n |
local node name | olsnodes -l |
activates logging | olsnodes -g |
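Because olsnodes prints one node name per line, it loops cleanly in shell; a sketch, assuming the node names resolve:

    ## ping each cluster member once
    for n in $(olsnodes); do
        ping -c 1 "$n" > /dev/null 2>&1 && echo "$n reachable" || echo "$n NOT reachable"
    done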
Oracle Interfaces |
display | oifcfg getif |
delete | oifcfg delif -global
set |
    oifcfg setif -global <interface name>/<subnet>:public
    oifcfg setif -global <interface name>/<subnet>:cluster_interconnect
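For example, with a hypothetical public NIC eth0 on 192.168.1.0 and a private NIC eth1 on 10.0.0.0:

    oifcfg setif -global eth0/192.168.1.0:public
    oifcfg setif -global eth1/10.0.0.0:cluster_interconnect
    oifcfg getif    ## confirm both interfaces are now registered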
Global Services Daemon Control |
starting | gsdctl start |
stopping | gsdctl stop |
status | gsdctl status |
Cluster Configuration (clscfg is used during installation) |
create a new configuration | clscfg -install |
upgrade or downgrade an existing configuration |
    clscfg -upgrade
    clscfg -downgrade
add or delete a node from the configuration |
    clscfg -add
    clscfg -delete
create a special single-node configuration for ASM | clscfg -local |
brief listing of terminology used in the other modes | clscfg -concepts |
used for tracing | clscfg -trace |
help | clscfg -h |
Cluster Name Check |
print cluster name | cemutlo -n
Note: in Oracle 9i the utility was called "cemutls" |
print the clusterware version | cemutlo -w
Note: in Oracle 9i the utility was called "cemutls" |
Node Scripts |
Add Node | addnode.sh
Note: see adding and deleting nodes |
Delete Node | deletenode.sh
Note: see adding and deleting nodes |
Enqueues |
displaying statistics |
    SQL> column current_utilization heading current
    SQL> column max_utilization heading max_usage
    SQL> column initial_allocation heading initial
    SQL> column resource_name format a23
    SQL> select * from v$resource_limit;
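To pull out just the enqueue figures, the query can be narrowed; a sketch using the standard enqueue resource names:

    SQL> select resource_name, current_utilization, max_utilization, initial_allocation
         from v$resource_limit
         where resource_name in ('enqueue_locks', 'enqueue_resources');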
Messaging (tickets) |
ticket usage |
    SQL> select local_nid local, remote_nid remote, tckt_avail avail,
                tckt_limit limit, snd_q_len send_queue, tckt_wait waiting
         from v$ges_traffic_controller;
dump ticket information |
    SQL> oradebug setmypid
    SQL> oradebug unlimit
    SQL> oradebug lkdebug -t
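A variant of the ticket usage query that flags remote paths running short of tickets; the 10% threshold here is illustrative only:

    SQL> select local_nid, remote_nid, tckt_avail, tckt_limit, tckt_wait
         from v$ges_traffic_controller
         where tckt_avail < tckt_limit * 0.1;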
Light Work Rule and Fairness Threshold |
downconvert |
    SQL> select cr_requests, light_works, data_requests, fairness_down_converts
         from v$cr_block_server;
Note: lower _fairness_threshold if the ratio of fairness_down_converts to data_requests rises above 40%; set it to 0 if the instance is query-only. |
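The 40% figure in the note can be computed directly from the same view; a sketch:

    SQL> select data_requests, fairness_down_converts,
                round(100 * fairness_down_converts / nullif(data_requests, 0), 1) pct_downconvert
         from v$cr_block_server;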
Remastering |
force dynamic remastering (DRM) |
    ## obtain the OBJECT_ID from the view below
    SQL> select * from v$gcspfmaster_info;
    ## determine who masters it
    SQL> oradebug setmypid
    SQL> oradebug lkdebug -a <OBJECT_ID>
    ## now remaster the resource
    SQL> oradebug lkdebug -m pkey <OBJECT_ID>
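After issuing the remaster request, re-querying the view confirms the move; the column names below are from v$gcspfmaster_info:

    SQL> select data_object_id, current_master, previous_master, remaster_cnt
         from v$gcspfmaster_info;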
GRD, SRVCTL, GSD and SRVCONFIG Tracing |
Enable tracing | $ export SRVM_TRACE=true |
Disable tracing | $ export SRVM_TRACE="" |
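Typical use is to wrap a single command; PROD below is a placeholder database name:

    $ export SRVM_TRACE=true
    $ srvctl status database -d PROD    ## trace output is written as the command runs
    $ export SRVM_TRACE=""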