Can two fibre NICs on Linux provide safe heartbeat redundancy without bonding?

Views: 11 | Replies: 4 | 2012-10-9 18:11:48
Two HP DL580 servers form a two-node cluster. Each host has two copper NICs and two fibre NICs: host A's fibre ports are A1 and A2, host B's are B1 and B2. A1 connects to heartbeat switch HW1 and A2 to heartbeat switch HW2; likewise B1 goes to HW1 and B2 to HW2. The operating system is Red Hat AS 5.3 and the database is Oracle 10.2.0.4. During heartbeat testing, when I pull one heartbeat fibre on host A, that host reboots itself.

The cluster currently has two heartbeat pairs: 10.10.10.1 / 10.10.10.2 and 192.168.10.1 / 192.168.10.2. Host A's 10.10.10.1 and host B's 10.10.10.2 are attached to HW1 as one pair; the remaining two addresses form the other pair.

When installing Oracle, how do I make both heartbeats active, so that with one heartbeat cable pulled the host does not reboot, the cluster does not restart, and the database cluster stays healthy?

Does such a method exist?
[ Last edited by kw002007 on 2010-5-28 17:26 ]
千问 | 2012-10-9 18:11:48
Is my question really that hopeless? Frustrating that nobody has chimed in.
千问 | 2012-10-9 18:11:48
Bumping this for you.
千问 | 2012-10-9 18:11:48
http://www.technomenace.com/2009/06/implement-bonding-rhel-5/
Bonding is the process of combining two NICs on a system into a single logical device. For example, if you have two network cards on a machine, eth0 and eth1, you combine them into a bond device, bond0, and then configure an IP address on that bond device.
Why do that, you may ask. In the case above, if I configure bond0 as 192.168.1.5, both eth0 and eth1 can send or receive packets meant for the bond device's IP (192.168.1.5). It's as if you now have two paths to reach a destination. Bond devices can be configured in different modes that provide fault tolerance, greater performance, or both, depending on the mode.
Bonding is covered in greater detail in /usr/share/doc/kernel-doc-/Documentation/networking/bonding.txt.
As usual, all my experiments run on Xen or VMware guests, and this one is no different. The steps below worked for me on a RHEL 5.3 Xen guest. To start with, eth0 was configured as 192.168.122.118 while eth1 remained unassigned. I will create a bond device bond0 from eth0 and eth1 and assign that IP to it.
1. Add the below lines to /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=1 miimon=100
We load the bonding kernel module required to make this work, along with some options. mode=1 selects the active-backup setup: only one slave in the bond device is active at any given moment, and if the active slave goes down, the other slave becomes active and all traffic then flows through it. If this sounds confusing, just read on. The miimon value specifies how often, in milliseconds, MII link monitoring occurs. For a complete list of available arguments, check the kernel documentation.
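For reference, the mode numbers map to named bonding policies. A minimal helper sketch in plain shell (the mode names are taken from the kernel bonding documentation; the function name is mine):

```shell
#!/bin/sh
# Map a bonding "mode=" number to its policy name, as listed in
# Documentation/networking/bonding.txt. mode=1 is the active-backup
# setup used in this article.
bonding_mode_name() {
    case "$1" in
        0) echo "balance-rr" ;;
        1) echo "active-backup" ;;
        2) echo "balance-xor" ;;
        3) echo "broadcast" ;;
        4) echo "802.3ad" ;;
        5) echo "balance-tlb" ;;
        6) echo "balance-alb" ;;
        *) echo "unknown"; return 1 ;;
    esac
}

bonding_mode_name 1   # prints "active-backup"
```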
2. Create bond0 device file, /etc/sysconfig/network-scripts/ifcfg-bond0 with the following content:
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
NETWORK=192.168.122.0
NETMASK=255.255.255.0
IPADDR=192.168.122.118
USERCTL=no
The lines are self-explanatory, defining the device name and then specifying its IP address, netmask, and so on.
3. Create /etc/sysconfig/network-scripts/ifcfg-eth0 with content:
DEVICE=eth0
MASTER=bond0
SLAVE=yes
USERCTL=no
BOOTPROTO=dhcp
IPV6INIT=yes
IPV6_AUTOCONF=yes
ONBOOT=yes
4. Create /etc/sysconfig/network-scripts/ifcfg-eth1 with content:
DEVICE=eth1
MASTER=bond0
SLAVE=yes
USERCTL=no
BOOTPROTO=dhcp
IPV6INIT=yes
IPV6_AUTOCONF=yes
ONBOOT=yes
The important lines to note here are "MASTER=bond0" and "SLAVE=yes", which indicate that both eth0 and eth1 are now part of the bond0 device.
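Since the two slave files differ only in their DEVICE= line, they can be generated from a loop. A sketch that writes to a scratch directory (OUTDIR is for illustration only; on a real RHEL 5 host the files belong in /etc/sysconfig/network-scripts):

```shell
#!/bin/sh
# Generate an ifcfg file for each slave of bond0, mirroring the
# contents shown in steps 3 and 4. Writing to a temp directory so
# nothing on the live system is touched.
OUTDIR=$(mktemp -d)
for dev in eth0 eth1; do
    cat > "$OUTDIR/ifcfg-$dev" <<EOF
DEVICE=$dev
MASTER=bond0
SLAVE=yes
USERCTL=no
BOOTPROTO=dhcp
IPV6INIT=yes
IPV6_AUTOCONF=yes
ONBOOT=yes
EOF
done
cat "$OUTDIR/ifcfg-eth1"
```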
5. Restart network and you are done!
[root@localhost ~]# service network restart
Shutting down interface eth0:
[OK]
Shutting down loopback interface:
[OK]
Bringing up loopback interface:
[OK]
Bringing up interface bond0:
[OK]
[root@localhost ~]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
          inet addr:192.168.122.118  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe1c:c5a7/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:80 errors:0 dropped:0 overruns:0 frame:0
          TX packets:54 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11210 (10.9 KiB)  TX bytes:11630 (11.3 KiB)

eth0      Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:45 errors:0 dropped:0 overruns:0 frame:0
          TX packets:63 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3288 (3.2 KiB)  TX bytes:13120 (12.8 KiB)

eth1      Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:42 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8384 (8.1 KiB)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:764 (764.0 b)  TX bytes:764 (764.0 b)
As you can see from the ifconfig output, device bond0 is listed as MASTER while devices eth0 and eth1 are listed as SLAVE. Also, the hardware address of bond0 and its underlying devices eth0 and eth1 is the same (00:16:3E:1C:C5:A7). If you have multiple bond devices, comparing the hardware address of a bond device with that of an actual network device (ethX) tells you whether that device is part of the bond.
Now, the current status of the bond device bond0 is present in /proc/net/bonding/bond0. Time to fool around with bonding now…
[root@localhost ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:16:3e:1c:c5:a7
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:16:3e:58:02:c7
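This status file is plain text, so the active slave is easy to pull out with a one-liner. A minimal sketch (run here against a saved copy of the output above rather than the live /proc file, so it works anywhere; the function name is mine):

```shell
#!/bin/sh
# Extract the currently active slave from /proc/net/bonding/bond0
# style output. For a self-contained demo we parse a saved copy of
# the status instead of the live /proc file.
active_slave() {
    awk -F': ' '/^Currently Active Slave:/ { print $2 }' "$1"
}

STATUS=$(mktemp)
cat > "$STATUS" <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
EOF

active_slave "$STATUS"   # prints "eth0"
```

On a live host the same function would be called as `active_slave /proc/net/bonding/bond0`.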
As highlighted above, the bonding mode is active-backup (since I used mode=1 in modprobe.conf). Both interfaces are up, but the currently active slave is eth0. Now, what happens when I bring eth0 down? Normally when we down an interface, the IP associated with it goes down too (becomes unreachable). With bonding, however, traffic simply switches over to the next slave, eth1, keeping the connection and the IP active:
[root@localhost ~]# ifdown eth0
[root@localhost ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:16:3e:58:02:c7
[root@localhost ~]# ifup eth0
[root@localhost ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.2.4 (January 28, 2008)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:16:3e:58:02:c7
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:16:3e:1c:c5:a7
Notice that when I brought eth0 up again (ifup eth0), it was added back to the bond device automatically. Also, even though the permanent HW addresses of eth0 and eth1 differ, both show the bond device's HW address in the ifconfig output:
[root@localhost ~]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
          inet addr:192.168.122.118  Bcast:192.168.122.255  Mask:255.255.255.0
          inet6 addr: fe80::216:3eff:fe1c:c5a7/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:87 errors:0 dropped:0 overruns:0 frame:0
          TX packets:35 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:6340 (6.1 KiB)  TX bytes:4950 (4.8 KiB)

eth0      Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:85 errors:0 dropped:0 overruns:0 frame:0
          TX packets:44 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6360 (6.2 KiB)  TX bytes:6424 (6.2 KiB)

eth1      Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:11 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:574 (574.0 b)  TX bytes:0 (0.0 b)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:10 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:764 (764.0 b)  TX bytes:764 (764.0 b)
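The MAC-matching check described earlier can be scripted: group interface names by hardware address and the bond membership falls out. A sketch parsing a saved ifconfig sample (the sample text below is mine, reusing the addresses from the listing above):

```shell
#!/bin/sh
# Group interface names by HW address to see which ethX devices
# belong to which bond. Parses saved `ifconfig` output; on a live
# host you would pipe `ifconfig` into the same awk command.
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
bond0  Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
eth0   Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
eth1   Link encap:Ethernet  HWaddr 00:16:3E:1C:C5:A7
lo     Link encap:Local Loopback
EOF

# For each line carrying a HWaddr, append the interface name ($1)
# to the group keyed by its MAC ($NF), then print the groups.
awk '/HWaddr/ { grp[$NF] = grp[$NF] " " $1 }
     END { for (m in grp) print m ":" grp[m] }' "$SAMPLE"
```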

千问 | 2012-10-9 18:11:48
Thanks to the poster above. I've read your earlier posts and they cleared up a lot of my confusion.