
Page 1: we unify communications

we unify communications

Welcome to

WWW.BICOMSYSTEMS.COM

Page 2: we unify communications

Advanced Telephony Computing Architecture: 1st Year

Los Angeles Controller, mirrored to the New York Mirror (MIRRORING).

Hardware: 4 nodes x 2 vSWITCH = 8 nodes per site.

[Diagram: 4 vSWITCHes, 2 per site. Los Angeles (Nodes 1-8): Primary Controller, 2 Live Hosts (500cc), 2 Hot Spares, 1 Cold Spare, 2 Storage Clusters. New York (Nodes 1-8): Secondary Controller, 1 Live Host (500cc), 3 Hot Spares, 1 Cold Spare, 2 Storage Clusters.]

Los Angeles: Normal Operation Capacity 1000 concurrent calls, circa 10000 extensions. Failover Operation Capacity 15000 extensions.

New York: Normal Operation Capacity 500 concurrent calls, circa 5000 extensions. Failover Operation Capacity 15000 extensions.

LEGEND

Primary Controller: Monitors all nodes and ensures that services are working.

Secondary Controller: Monitors the Primary Controller and mirrors it to itself. Should the Primary Controller fail, or should the Central Office become unavailable, it assumes the Primary Controller role.

Live Host: Runs the working services.

Hot Spare: Assumes the services of an unavailable or failed Live Host.

Cold Spare: Cold Spares are switched off and are available as extra capacity or to become new Hot Spares.

Storage Cluster: Network-redundant storage from which all services run.

Page 3: we unify communications

Scenario 1: Primary Controller role

Page 4: we unify communications

The Primary Controller monitors all live nodes: Live Hosts, Hot Spares & Storage Nodes. It also ensures that data is duplicated from the Live Hosts to the Storage Clusters.
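The monitoring and replication duties described here can be sketched as a simple control pass. The node names and the `is_alive` helper below are hypothetical stand-ins for the real health checks and data-duplication mechanism.

```python
# Hypothetical role map for one site of the 1st Year architecture.
NODES = {
    "node-2": "live_host", "node-3": "live_host",
    "node-4": "hot_spare", "node-5": "hot_spare",
    "node-7": "storage",   "node-8": "storage",
}

def is_alive(node):
    """Stand-in health probe (e.g. a ping or service heartbeat)."""
    return True

def monitor_once(nodes):
    """One pass of the Primary Controller's duties: probe every live
    node, then schedule duplication of each healthy Live Host's data
    onto the Storage Cluster nodes."""
    down = [n for n in sorted(nodes) if not is_alive(n)]
    storage = [n for n in sorted(nodes) if nodes[n] == "storage"]
    replication_jobs = [
        (host, target)
        for host in sorted(nodes)
        if nodes[host] == "live_host" and host not in down
        for target in storage
    ]
    return down, replication_jobs
```

Run repeatedly, such a pass gives the controller both pieces of information it acts on: which nodes are down, and which Live Host data still needs copying to storage.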

[Diagram: Los Angeles Controller and New York Mirror, as in the 1st Year architecture (Nodes 1-8 per site, vSWITCH 1-4).]

Scenario 1

Page 5: we unify communications

Scenario 2: Secondary Controller role

Page 6: we unify communications

The Secondary Controller monitors only the Primary Controller for availability and mirrors it to itself.

[Diagram: Los Angeles Controller and New York Mirror, as in the 1st Year architecture (Nodes 1-8 per site, vSWITCH 1-4).]

Scenario 2

Page 7: we unify communications

Scenario 3: Los Angeles Live Host becomes unavailable

Page 8: we unify communications

A Live Host in Los Angeles encounters a physical failure and becomes unavailable.

[Diagram: one Los Angeles Live Host is marked unavailable; all other nodes as in the 1st Year architecture.]

Scenario 3

Page 9: we unify communications

The Primary Controller instructs the first available Los Angeles Hot Spare to assume the service.

[Diagram: Los Angeles Node 4, previously a Hot Spare, now runs Live Host 500cc; the other nodes are unchanged.]

Scenario 3

Page 10: we unify communications

Scenario 4: New York Live Host becomes unavailable

Page 11: we unify communications

A Live Host in New York encounters a physical failure and becomes unavailable.

[Diagram: one New York Live Host is marked unavailable; all other nodes as in the 1st Year architecture.]

Scenario 4

Page 12: we unify communications

The Primary Controller instructs the first available New York Hot Spare to assume the service.

[Diagram: New York Node 3, previously a Hot Spare, now runs Live Host 500cc; the other nodes are unchanged.]

Scenario 4

Page 13: we unify communications

Scenario 5: New York becomes totally unavailable

Page 14: we unify communications

New York becomes totally unavailable due to network failure, an act of terror, a natural disaster, or another cause of total loss of the location.

[Diagram: the entire New York site is marked unavailable; the Los Angeles nodes are as in the 1st Year architecture.]

Scenario 5

Page 15: we unify communications

The Primary Controller instructs the first available Hot Spare in Los Angeles to assume the services which were running in New York.

[Diagram: Los Angeles Node 4, previously a Hot Spare, now runs the Live Host 500cc services from New York.]

Scenario 5

Page 16: we unify communications

Scenario 6: Primary Controller failure

Page 17: we unify communications

The Primary Controller encounters a physical failure and becomes unavailable.

[Diagram: the Primary Controller is marked unavailable; all other nodes as in the 1st Year architecture.]

Scenario 6

Page 18: we unify communications

The Secondary Controller assumes the Primary Controller role. All other Los Angeles nodes continue uninterrupted.
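The takeover decision can be sketched as a heartbeat counter on the Secondary Controller; the three-miss threshold below is an illustrative assumption, not a documented value.

```python
def next_role(role, heartbeat_ok, missed, threshold=3):
    """Secondary Controller's decision each heartbeat interval: count
    consecutive missed heartbeats from the Primary Controller and
    assume the Primary role once the threshold is reached. Nodes other
    than the controllers are untouched by this transition."""
    if role != "secondary":
        return role, 0            # only a Secondary ever promotes itself
    missed = 0 if heartbeat_ok else missed + 1
    if missed >= threshold:
        return "primary", missed  # takeover: assume Primary Controller role
    return "secondary", missed
```

A threshold above one avoids promoting on a single dropped heartbeat, which would otherwise risk two controllers believing they are Primary.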

[Diagram: the Secondary Controller has taken over the Primary Controller role; all Live Hosts, Hot Spares and Storage Clusters continue running unchanged.]

Scenario 6

Page 19: we unify communications

Scenario 7: Los Angeles becomes totally unavailable

Page 20: we unify communications

Los Angeles becomes totally unavailable due to network failure, an act of terror, a natural disaster, or another cause of total loss of the location.

[Diagram: the entire Los Angeles site is marked unavailable; the New York nodes are as in the 1st Year architecture.]

Scenario 7

Page 21: we unify communications

The Secondary Controller instructs the available Hot Spares in New York to assume the services which were running in Los Angeles.

[Diagram: New York Nodes 3 and 4, previously Hot Spares, now run the Live Host 500cc services from Los Angeles.]

Scenario 7

Page 22: we unify communications

Advanced Telephony Computing Architecture: 2nd Year

Los Angeles Controller, mirrored to the New York Mirror (MIRRORING).

Hardware: 4 nodes x 4 vSWITCH = 16 nodes per site.

[Diagram: 4 vSWITCHes per site. Los Angeles (Nodes 1-16): Primary Controller, 4 Live Hosts (500cc), 4 Hot Spares, 3 Cold Spares, 4 Storage Clusters. New York (Nodes 1-16): Secondary Controller, 2 Live Hosts (500cc), 6 Hot Spares, 3 Cold Spares, 4 Storage Clusters. Each site has two 24-port Infiniband SAN Switches; Switch 2 is a backup for Switch 1 in case of failure.]

Los Angeles: Normal Operation Capacity 2000 concurrent calls, circa 20000 extensions. Failover Operation Capacity 30000 extensions.

New York: Normal Operation Capacity 1000 concurrent calls, circa 10000 extensions. Failover Operation Capacity 30000 extensions.

Page 23: we unify communications

Advanced Telephony Computing Architecture 2nd Year

LEGEND: identical to the 1st Year architecture legend on Page 2.

Page 24: we unify communications

Advanced Telephony Computing Architecture: 3rd Year

Los Angeles Controller, mirrored to the New York Mirror (MIRRORING).

Hardware: 24 nodes per site across 4 vSWITCHes.

[Diagram: 4 vSWITCHes per site. Los Angeles (Nodes 1-24): Primary Controller, 6 Live Hosts (500cc), 6 Hot Spares, 3 Cold Spares, 8 Storage Clusters. New York (Nodes 1-24): Secondary Controller, 4 Live Hosts (500cc), 8 Hot Spares, 3 Cold Spares, 8 Storage Clusters. Each site has two 24-port Infiniband SAN Switches; Switch 2 is a backup for Switch 1 in case of failure.]

Los Angeles: Normal Operation Capacity 3000 concurrent calls, circa 30000 extensions. Failover Operation Capacity 50000 extensions.

New York: Normal Operation Capacity 2000 concurrent calls, circa 20000 extensions. Failover Operation Capacity 50000 extensions.

Page 25: we unify communications

Advanced Telephony Computing Architecture 3rd Year

LEGEND: identical to the 1st Year architecture legend on Page 2.

Page 26: we unify communications

Failover Mechanism

Primary Controller node failure: If the Primary Controller node alone, or the complete vSWITCH containing the Controller node, goes down, tasks such as monitoring, replication and the failover mechanism are taken over and executed instantly by the Secondary Controller node, which is basically a live backup of the main Controller node.

Live Host failure: If a Live Host node goes down or is unavailable on the network, all data of that Live Host is copied from the Storage Cluster node to an available Hot Spare node, and the services continue to operate on that node.
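The Live Host failover above can be sketched as follows; the node names and the commented-out restore call are illustrative stand-ins.

```python
def fail_over(roles, failed_host):
    """Promote the first available Hot Spare to replace a failed Live
    Host. The replacement's data comes from the Storage Cluster, not
    from the failed node, so this works even on total hardware loss."""
    spares = [n for n in sorted(roles) if roles[n] == "hot_spare"]
    if not spares:
        raise RuntimeError("no Hot Spare available")
    spare = spares[0]
    roles[failed_host] = "failed"
    roles[spare] = "live_host"       # spare now runs the failed host's services
    # restore_from_storage(spare)    # hypothetical copy from the Storage Cluster
    return spare
```

The same promotion step serves every scenario in the deck: only the set of candidate spares changes (local Hot Spares first, then the remote site's).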

Page 27: we unify communications

Hardware Specification

Computer Node 1: PRIMARY CONTROLLER & SECONDARY CONTROLLER

Interconnect: Dual Gigabit Ethernet (Intel 82576 Dual-Port)
CPU: 2 x Intel Xeon E5504 Quad-Core 2.00GHz, 4MB Cache
RAM: 6GB (6 x 1GB) Kingston DDR3-1066MHz ECC REG (KVR1066D3S8R7S/1G)
Management: Integrated IPMI with KVM over LAN
LP PCIe x16 2.0: No Item Selected
Hot-Swap Drive 1: SOLID STATE DISK 60GB (WD5000AAKS SATAII 7200RPM 3.5" HDD)

Extra Nodes: Live Hosts, Hot Spares, Cold Spares

Identical to the Controller node specification above.

Extra Nodes : Storage Cluster

Interconnect: Dual Gigabit Ethernet (Intel 82576 Dual-Port)
CPU: 2 x Intel Xeon E5504 Quad-Core 2.00GHz, 4MB Cache
RAM: 6GB (6 x 1GB) Kingston DDR3-1066MHz ECC REG (KVR1066D3S8R7S/1G)
Management: Integrated IPMI with KVM over LAN
LP PCIe x16 2.0: No Item Selected
Hot-Swap Drives 1, 2, 3: RAID 5 3TB Storage

Page 28: we unify communications

SIP Proxy: Registration

SIP Client registration for all users (Residential, Business, Hosted PBXware and Wholesale) happens through the SIP Proxy, which authenticates the user's "username", "password" or "IP address" in order to determine where the user belongs, then forwards the SIP registration to the appropriate VPS. The exception is the Wholesale type of user, which does not register to a VPS but only to the Client Database.
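The routing decision behind that paragraph can be sketched as a small lookup; the client records below are invented for illustration, standing in for the "Client Checking" step against the Client Database.

```python
# Hypothetical client records; the real lookup is the "Client Checking"
# step against the Client Database.
CLIENTS = {
    "alice":    {"type": "Residential",    "vps": "VPS 1"},
    "acme":     {"type": "Business",       "vps": "VPS 2"},
    "hpbx-17":  {"type": "Hosted PBXware", "vps": "VPS 5"},
    "carrier9": {"type": "Wholesale",      "vps": None},
}

def route_registration(username):
    """Where a SIP REGISTER ends up: Wholesale users are recorded only
    in the Client Database; all other types are forwarded on to their
    assigned VPS."""
    client = CLIENTS[username]
    if client["type"] == "Wholesale":
        return "Client Database"
    return client["vps"]
```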

[Diagram: SIP Clients (Residential, Business, Hosted PBXware, Wholesale) send SIP Reg. Requests to the SIP Proxy; the Proxy performs Client Checking against the Client Database and forwards the SIP Client Registration to the appropriate VPS (VPS 1-5).]

Page 29: we unify communications

SIP Proxy: Outgoing/Incoming Calls for Residential & Business Users

For Outgoing/Incoming Calls of Residential & Business users, the SIP Proxy first sends those users to their appropriate VPS in order to check their Enhanced Services permissions.

[Diagram: Residential SIP Clients, the SIP Proxy, VPS 1-2 and the VoIP/PSTN Trunk.]

Outgoing call:

1. The SIP Client places an Outgoing call.
2. The SIP Proxy sends the SIP Client to the appropriate VPS, to acquire that specific SIP Client's data.
3. The VPS sends the SIP Client back to the SIP Proxy with the SIP Client data.
4. The SIP Proxy selects the appropriate trunk for the Outgoing call.

Incoming call:

1. The Incoming call first comes to the SIP Proxy.
2. The SIP Proxy checks the Incoming DID and sends the Incoming call to the VPS where the DID's user is located.
3. The VPS sends the Incoming call back to the SIP Proxy.
4. The SIP Proxy sends the Incoming call to the SIP Client.

The diagram shows an example of Outgoing/Incoming calls for the Residential type of user.
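The eight steps above can be condensed into two small functions; the lookup table and the callables passed in are hypothetical stand-ins for the VPS round-trip and trunk selection.

```python
def outgoing_call(client, vps_lookup, pick_trunk):
    """Outgoing flow: steps 2-3 round-trip through the client's VPS for
    Enhanced Services / client data, step 4 picks the trunk."""
    client_data = vps_lookup(client)
    return pick_trunk(client_data)

def incoming_call(did, did_to_vps, vps_handle):
    """Incoming flow: step 2 maps the DID to the owning VPS, step 3 the
    VPS hands the call back, step 4 the proxy delivers it."""
    vps = did_to_vps[did]
    client = vps_handle(vps, did)
    return f"deliver to {client}"
```

The same shape applies to Hosted PBXware users on the next page; only the VPS chosen differs.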

Page 30: we unify communications

SIP Proxy: Outgoing/Incoming Calls for Hosted PBXware Users

For Outgoing/Incoming Calls of Hosted PBXware users, the SIP Proxy first sends those users to their appropriate VPS in order to check their Enhanced Services permissions.

[Diagram: Hosted PBXware SIP Clients, the SIP Proxy, VPS 5 and the VoIP/PSTN Trunk.]

The Outgoing and Incoming call steps are the same as for Residential & Business users, with the calls routed through the Hosted PBXware user's VPS.

Page 31: we unify communications

SIP Proxy: Outgoing/Incoming Calls for Wholesale users

For Wholesale users, the SIP Proxy sends the call straight through the appropriate trunk, as per the client data, which involves the settings in the LCR, Routing and Rating Engine.

[Diagram: Wholesale SIP Clients, the SIP Proxy with its LCR, Routing and Rating Engine, and the VoIP/PSTN Trunk.]

Outgoing call:

1. The SIP Client places an Outgoing call.
2. The SIP Proxy uses the LCR, Routing and Rating Engine to determine which trunk should be used for sending the Outgoing call.
3. The SIP Proxy sends the Outgoing call through the selected trunk.

Incoming call:

1. The Incoming call first comes to the SIP Proxy.
2. The SIP Proxy checks the Incoming DID and sends the Incoming call to the SIP Client's IP address.
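A minimal sketch of the Least Cost Routing decision the proxy makes for Wholesale calls, assuming a longest-prefix-match rate table per trunk; the trunk names and rates are invented for illustration.

```python
# Hypothetical per-trunk rate tables, keyed by dialed-number prefix.
RATES = {
    "trunk-a": {"44": 0.012, "1": 0.009},
    "trunk-b": {"44": 0.010, "1": 0.011},
}

def least_cost_trunk(dialed, rates):
    """For each trunk, find the longest prefix it offers for the dialed
    number, then pick the trunk with the cheapest matching rate."""
    best = None
    for trunk in sorted(rates):
        prefixes = [p for p in rates[trunk] if dialed.startswith(p)]
        if not prefixes:
            continue                       # trunk cannot carry this call
        rate = rates[trunk][max(prefixes, key=len)]
        if best is None or rate < best[1]:
            best = (trunk, rate)
    return best                            # (trunk, rate) or None
```

A production Rating Engine would layer per-client pricing and routing policy on top of this, but the core trunk choice is this comparison.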