Configure an SRX240 Cluster Step by Step
1. Understanding SRX240 Default Configuration
An SRX240 with factory-default settings already contains a default configuration for security zones, a security policy, and a source NAT rule.
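A rough sketch of the relevant part of the branch SRX factory default is shown below. The exact statements vary by Junos release, so treat this as illustrative rather than verbatim:
set security nat source rule-set trust-to-untrust from zone trust
set security nat source rule-set trust-to-untrust to zone untrust
set security nat source rule-set trust-to-untrust rule source-nat-rule match source-address 0.0.0.0/0
set security nat source rule-set trust-to-untrust rule source-nat-rule then source-nat interface
set security policies from-zone trust to-zone untrust policy trust-to-untrust match source-address any
set security policies from-zone trust to-zone untrust policy trust-to-untrust match destination-address any
set security policies from-zone trust to-zone untrust policy trust-to-untrust match application any
set security policies from-zone trust to-zone untrust policy trust-to-untrust then permit
set security zones security-zone trust host-inbound-traffic system-services all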
2. Cluster Network Diagram for this Lab
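The original diagram is not reproduced here; the addressing below is reconstructed from the configuration entered in step 6:
- fxp0 out-of-band management: node0 10.9.12.9/24, node1 10.9.12.10/24, cluster master-only address 10.9.12.8/24, gateway 10.9.12.1
- Control link: ge-0/0/1 on both nodes; fabric link: ge-0/0/2 (fab0, node0) to ge-5/0/2 (fab1, node1)
- reth0 (Zone1): child links ge-0/0/3 and ge-5/0/3, address 10.9.132.18/24
- reth1 (Zone2): child links ge-0/0/4 and ge-5/0/4, address 10.9.136.18/24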
If your devices were used before, it is best to reset them to the factory-default configuration. Here are four different ways to do it:
a. request services fips zeroize
b. request system zeroize
c. Delete the entire configuration from configuration mode:
root# delete
This will delete the entire configuration
Delete everything under this level? [yes,no] (no) yes
You will then need to set a root password before you can commit the change.
d. load factory-default
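For example, a minimal reset using option (d) looks like the following in configuration mode; as with option (c), the commit will fail until a root password has been set (see step 3):
root# load factory-default
root# set system root-authentication plain-text-password
root# commit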
3. Set root password
By default, there is no password for the root user.
set system root-authentication plain-text-password
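The CLI prompts for the new password twice, and the change takes effect on commit:
root# set system root-authentication plain-text-password
New password:
Retype new password:
root# commit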
4. Delete some default configurations on Node0
Keep in mind that you will need to delete this configuration on both nodes, node0 and node1.
delete system name-server
delete system services dhcp
delete vlans
delete interfaces vlan
delete interfaces ge-0/0/0 unit 0
delete interfaces ge-0/0/1 unit 0
delete interfaces ge-0/0/2 unit 0
delete interfaces ge-0/0/3 unit 0
delete interfaces ge-0/0/4 unit 0
delete interfaces ge-0/0/5 unit 0
delete interfaces ge-0/0/6 unit 0
delete interfaces ge-0/0/7 unit 0
delete interfaces ge-0/0/8 unit 0
delete interfaces ge-0/0/9 unit 0
delete interfaces ge-0/0/10 unit 0
delete interfaces ge-0/0/11 unit 0
delete interfaces ge-0/0/12 unit 0
delete interfaces ge-0/0/13 unit 0
delete interfaces ge-0/0/14 unit 0
delete interfaces ge-0/0/15 unit 0
delete security
commit
5. Enable Cluster on node 0 and reboot
root> set chassis cluster cluster-id 1 node 0 reboot
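If the command succeeds, the device prints a confirmation (the same message shown for the cluster-id 5 example in step 7) and reboots into cluster mode; after the reboot the prompt gains a cluster prefix such as {primary:node0}:
root> set chassis cluster cluster-id 1 node 0 reboot
Successfully enabled chassis cluster. Going to reboot now.

{primary:node0}
root>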
6. Basic configuration based on the topology
set groups node0 system host-name fw-a
set groups node0 interfaces fxp0 unit 0 family inet address 10.9.12.9/24
set groups node0 interfaces fxp0 unit 0 family inet address 10.9.12.8/24 master-only
set groups node1 system host-name fw-b
set groups node1 interfaces fxp0 unit 0 family inet address 10.9.12.10/24
set groups node1 interfaces fxp0 unit 0 family inet address 10.9.12.8/24 master-only
set apply-groups "${node}"
set chassis cluster reth-count 2
set chassis cluster redundancy-group 0 node 0 priority 200
set chassis cluster redundancy-group 0 node 1 priority 100
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-5/0/2
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-5/0/3 gigether-options redundant-parent reth0
set interfaces ge-0/0/4 gigether-options redundant-parent reth1
set interfaces ge-5/0/4 gigether-options redundant-parent reth1
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth1 redundant-ether-options redundancy-group 1
set security zones security-zone Zone1
set security zones security-zone Zone2
set security zones security-zone Zone1 host-inbound-traffic system-services all
set security zones security-zone Zone2 host-inbound-traffic system-services all
set interfaces reth0 unit 0 family inet address 10.9.132.18/24
set security zones security-zone Zone1 interfaces reth0.0
set interfaces reth1 unit 0 family inet address 10.9.136.18/24
set security zones security-zone Zone2 interfaces reth1.0
set system backup-router 10.9.12.1 destination 10.0.0.0/8
set routing-options static route 0.0.0.0/0 next-hop 10.9.12.1
set security policies from-zone Zone1 to-zone Zone2 policy allow_any match source-address any
set security policies from-zone Zone1 to-zone Zone2 policy allow_any match destination-address any
set security policies from-zone Zone1 to-zone Zone2 policy allow_any match application any
set security policies from-zone Zone1 to-zone Zone2 policy allow_any then permit
set security policies from-zone Zone2 to-zone Zone1 policy allow_any match source-address any
set security policies from-zone Zone2 to-zone Zone1 policy allow_any match destination-address any
set security policies from-zone Zone2 to-zone Zone1 policy allow_any match application any
set security policies from-zone Zone2 to-zone Zone1 policy allow_any then permit
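Remember to commit this configuration on node0; once node1 joins the cluster in step 7, the configuration is synchronized to it automatically over the control link. A quick, illustrative check that the per-node groups resolved correctly (the hostname is inherited from the node0 group above):
{primary:node0}
root@fw-a> show version | match Hostname
Hostname: fw-a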
7. Enable Cluster on node 1 and reboot
After cabling the two devices together on ge-0/0/1 (control link) and ge-0/0/2 (fabric link), we can enable clustering on node 1.
Before enabling the cluster on node1, the same default configuration has to be deleted:
delete system name-server
delete system services dhcp
delete vlans
delete interfaces vlan
delete interfaces ge-0/0/0 unit 0
delete interfaces ge-0/0/1 unit 0
delete interfaces ge-0/0/2 unit 0
delete interfaces ge-0/0/3 unit 0
delete interfaces ge-0/0/4 unit 0
delete interfaces ge-0/0/5 unit 0
delete interfaces ge-0/0/6 unit 0
delete interfaces ge-0/0/7 unit 0
delete interfaces ge-0/0/8 unit 0
delete interfaces ge-0/0/9 unit 0
delete interfaces ge-0/0/10 unit 0
delete interfaces ge-0/0/11 unit 0
delete interfaces ge-0/0/12 unit 0
delete interfaces ge-0/0/13 unit 0
delete interfaces ge-0/0/14 unit 0
delete interfaces ge-0/0/15 unit 0
delete security
commit
After the commit, enable the chassis cluster:
set chassis cluster cluster-id 1 node 1 reboot
Note: If you run multiple Juniper clusters in the same Ethernet environment, each cluster must be configured with a unique cluster-id; otherwise the reth MAC addresses, which are derived from the cluster ID, will conflict on the switch and the firewalls will not handle traffic properly in that segment. In the following example the cluster-id is set to 5 to avoid such a conflict. The valid range for the cluster-id is 1-15 (setting it to 0 disables clustering).
root> set chassis cluster cluster-id 5 node 0 reboot
Successfully enabled chassis cluster. Going to reboot now.
{primary:node0}
root> show chassis cluster status
Cluster ID: 5
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                  100         primary        no       no
    node1                    1         secondary      no       no

Redundancy group: 1 , Failover count: 1
    node0                    0         primary        no       no
    node1                    0         secondary      no       no
Verification:
You can check the cluster status with the following commands.
show chassis cluster status
show chassis cluster interfaces
show chassis cluster statistics
show chassis cluster control-plane statistics
show chassis cluster data-plane statistics
show chassis cluster status redundancy-group 1
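With the priorities configured in step 6, a healthy cluster in this lab should report node0 as primary for both redundancy groups. The output below is illustrative only; the hostname and failover counters will differ on your devices:
{primary:node0}
root@fw-a> show chassis cluster status
Cluster ID: 1
Node                  Priority          Status    Preempt  Manual failover

Redundancy group: 0 , Failover count: 1
    node0                  200         primary        no       no
    node1                  100         secondary      no       no

Redundancy group: 1 , Failover count: 1
    node0                  200         primary        no       no
    node1                  100         secondary      no       no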
Reference:
- Configuration Guide: Juniper Networks Branch SRX Series Services Gateways
- Unique cluster IDs required on Juniper firewalls configured with NSRP in A/P mode
- Juniper SRX 240 Chassis Cluster (High Availability) Configuration