




Topic: Business Information

Searching relevant scholarly journal articles, research and discuss the following prompts. Include a minimum of two (2) scholarly journal articles relevant to each prompt for a total of eight (8) or more references.  The initial response to each prompt requires over 400 words, which means the entire initial post will be over 1,600 words.  See the grading rubrics for more details.

1. A distributed database is a collection of several different databases distributed among multiple computers. Discuss why a company would want a distributed database.

2. Productivity increases as rapid response times are achieved. Discuss what is considered an acceptable system response time for interactive applications.

3. A fully centralized data processing facility is centralized in many senses of the word. Discuss the key characteristics of centralized data processing facilities.

4. Equipment and communication redundancies are common in today's data centers. Discuss the major types of equipment and communication redundancies found in today's data centers.

Critique and reply to 4 peer initial threads, responding with supported reason and logic within each reply.

Michael Cagnina Redundancy for the End of the World!

1.       A distributed database is a collection of several different databases distributed among multiple computers. Discuss why a company would want a distributed database.

“A distributed database system consists of a collection of local databases, geographically located in different points (nodes of a network of computers) and logically related by functional relations so that they can be viewed globally as a single database” (Ozsu & Valduriez, 2011).

Why would a company want this?  In a 2015 case study, Iacob and Moise describe how an e-learning company (a university, perhaps) would benefit from a distributed database when supporting geographically dispersed users, by keeping the appropriate data closer to the physical location of the user consuming it.


Users don’t like waiting for an application to respond.  Database administrators don’t like losing data. Network administrators don’t like a lot of network traffic. No one likes outages. A distributed database helps with all of these.

Take network outages for an example.  “The underlying physical backbone of the internet is surprisingly fragile, and fail safes don't always work” (Newman, 2018).  Just this past June, Comcast/Xfinity had an outage, and “the combination of disruptions in New York and North Carolina were enough to turn off the internet for millions of people” (Newman, 2018).  Suppose your company’s database server was offline for an hour or a day.  If the average user will only wait a few seconds for an application to respond, they certainly aren’t going to wait a day to buy a product; they will just go somewhere else.  A distributed database offers a solution: “Distributing data across multiple autonomous sites confines the effects of a computer breakdown to its point of occurrence” (Chen, Ng, & Greenfield, 2013, p. 2).

Fault tolerance is another good reason for a distributed database. “Database replication is widely used to improve both fault tolerance and DBMS performance. Replicated database architecture (RA) has all or part of the database copied at two or more computers. The key advantage of data replication is that it provides backup and recovery from both network and server failure” (Chen, Ng, & Greenfield, 2013, p. 4).
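As a rough illustration of how replication confines a failure to its point of occurrence, here is a minimal Python sketch of read failover across replicas. The host names and the socket-based reachability probe are assumptions for illustration; a real deployment would use an actual database driver.

```python
import socket

# Hypothetical replica hosts; any real deployment would have its own list.
REPLICAS = ["db-east.example.com", "db-west.example.com", "db-central.example.com"]

def query_host(host, sql):
    # Stand-in for a real database driver call; here we only test reachability.
    with socket.create_connection((host, 5432), timeout=2):
        return "result of %r from %s" % (sql, host)

def failover_query(sql):
    # Try each replica in order; a single node outage is confined to that node.
    last_error = None
    for host in REPLICAS:
        try:
            return query_host(host, sql)
        except OSError as err:
            last_error = err  # node down or unreachable: try the next replica
    raise RuntimeError("all replicas unreachable") from last_error

if __name__ == "__main__":
    try:
        print(failover_query("SELECT 1"))
    except RuntimeError as exc:
        print(exc)
```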

Utilization spikes impact response times.  Consider New Year’s Eve: people on the East Coast flood the phone lines wishing each other a happy new year three hours before the West Coast does.  If the company sells diet food, then a distributed database would provide a mechanism for shifting resources to the East Coast, getting the data closer to the active users.  “The need for scalable database services for Web applications demands the database server being replicated in different location” (Chen, Ng, & Greenfield, 2013, p. 2).

Security is also a reason a company would want a distributed database.  The administrator of the HR database really doesn’t need access to the inventory database, and site autonomy helps with this. “Site autonomy means that each server participating in a distributed database is administered independently” (An Introduction to Distributed Databases, n.d.), so the database administrator of the inventory database would have no access to the HR database and vice versa, limiting the risks from nefarious or otherwise disgruntled employees.

Backups are another reason a company would want a distributed database.  Backing up a huge database might take the company database offline for a long period of time; backing up several smaller databases would be faster.  Using the previous example, once it’s finally New Year’s Day on the West Coast and the East Coast users have gone to bed, it would be a good time to back up the East Coast database without impacting West Coast sales.
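To illustrate the time-zone-staggered backup idea, here is a small sketch (the region names, UTC offsets, and backup windows are assumed) that opens each regional database's backup window only during its own local off-peak hours.

```python
from datetime import datetime, timezone, timedelta

# Off-peak backup windows expressed as local hours, with each region's UTC offset.
REGIONS = {
    "east": {"utc_offset": -5, "window": (2, 5)},  # 2-5 a.m. Eastern
    "west": {"utc_offset": -8, "window": (2, 5)},  # 2-5 a.m. Pacific
}

def due_for_backup(region, now_utc):
    # Convert the current UTC time into the region's local time, then check
    # whether the local hour falls inside that region's off-peak window.
    cfg = REGIONS[region]
    local = now_utc + timedelta(hours=cfg["utc_offset"])
    start, end = cfg["window"]
    return start <= local.hour < end

now = datetime.now(timezone.utc)
for region in REGIONS:
    print(region, "backup window open:", due_for_backup(region, now))
```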

  

2.       Productivity increases as rapid response times are achieved. Discuss what is considered an acceptable system response time for interactive applications.

Response times should be as fast as possible.  This is the general answer; citing a specific time in seconds or milliseconds is difficult because humans are difficult and different.  “A research team lead by Dr. Gitte Lindgaard found that people can make rough decisions about a web page's visual appeal after being exposed to it for as little as 50 ms” (Nielsen, 2009), which supports the “as fast as possible” perspective.  Amazon and Walmart would agree: Amazon found that one second of load lag would cost them $1.6 billion in sales per year, while Walmart found that for every 1 second of load time improvement, they experience a 2% conversion increase (Green, 2016).

With that being said, the task a user is performing influences what counts as an acceptable response time, along with the progress feedback (Nah, 2003), the device being used, and whatever outside distractions the user may experience. “74% of users would give up after waiting for five seconds for a website to load on a mobile device” (Siotes, 2012). On a computer, “10 seconds is about the limit for keeping the user’s attention focused on the dialogue” (Nah, 2003).  However, on a tablet or smartphone most users expect a website to load in two seconds or less, and a majority of users will give up waiting after five seconds (Siotes, 2012).  No one wants to stare at a smartphone in public waiting for a page to load.

If the goal of an application is to capture impulse purchases, then the response time needs to be as fast as possible.  If the goal of an application is to expose the user to advertisements, then the response time might be purposefully longer, to keep the ads in front of the user.  When I pull up my local radio station on radio.com, I am presented with an ad.  If that ad is 5 seconds or less, I will just let it play; if it’s more than 5 seconds, then I will mute the ad until it’s done.

Edge case scenarios aside, the general answer is again “the faster the better.”  Acceptable response times are about the human interaction.  Between 1 and 10 seconds the user will notice they are waiting, and at 10 seconds the average attention span is maxed out and the user’s mind starts to wander (Nielsen, 2009). “Short-term memory plays a critical role in human information processing; interference with short-term memory can occur when an individual senses an awareness of waiting after approximately 2 seconds” (Nah, 2003).
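As a quick illustration of these thresholds, this sketch times a stand-in request and classifies the result against the 0.1-second, 1-second, and 10-second limits discussed by Nielsen (2009) and Nah (2003); the sleep call is just a placeholder for a real request.

```python
import time

def classify(seconds):
    # Thresholds follow the 0.1 s / 1 s / 10 s bands discussed above.
    if seconds <= 0.1:
        return "feels instantaneous"
    if seconds <= 1.0:
        return "noticeable delay, but flow of thought is unbroken"
    if seconds <= 10.0:
        return "user notices waiting; show progress feedback"
    return "attention lost; user's mind starts to wander"

start = time.perf_counter()
time.sleep(0.3)  # stand-in for a real request
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s -> {classify(elapsed)}")
```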

 

3.     A fully centralized data processing facility is centralized in many senses of the word. Discuss the key characteristics of centralized data processing facilities.

A centralized data processing facility keeps all the system’s eggs in one basket.  Colloquially that idea is frowned upon as risky; however, when you have to pay people to watch the eggs, turn them every few hours, keep them at a constant temperature, and keep predators away, watching one location is far cheaper than watching many and requires fewer people.  Microsoft, Amazon, and Google found this to be the primary benefit of a centralized data processing facility: “Having a few centralized facilities means economies of scale in purchases and operational expenditures” (Dogra, 2012).

A centralized facility increases the performance of the various application servers by keeping them close to each other.  Servers that are physically close together can be connected with shorter, higher-speed cables, such as Gigabit or 10 Gigabit fiber.  Faster data access means faster response times between servers and higher overall performance.  Even Langley Research Center, way back in 1970, had a very similar approach with their design for what they called a centralized on-line data processing system: “The computer operating system allows up to seven different application-oriented programs to reside in central memory and execute in a multiprogram mode” (Kopelson & Nolan, 1970, p. 9).

The software licensing costs of the applications are also reduced: “Centralization also cuts hardware and software licensing costs; spreading systems out to more departments leads to more system and software purchases” (Korzeniowski, 2013).  Fewer employees also mean fewer Microsoft licenses.

Staffing is also a huge benefit of a central location.  Administrators’ salaries can be expensive, and experienced administrators can be difficult to find.  Having one location near a large city or college campus can help properly staff the facility without duplicating the expense across many different physical locations.  Having fewer staff also means fewer problems created by staff, such as locked passwords, lost security badges, and coffee spills.

Physical management is also less expensive.  Two entrances are easier to watch than ten.  One cooling unit is cheaper to run than several.  Stocking the supplies in one break room and two bathrooms is also less expensive.

Control, of course, is inherent to all the different benefits of a centralized facility.  Management has an easier time managing one location, ensuring standards are followed, backups are taken, and security procedures are implemented.

“Additional benefits include traveling less, having fewer shipping locations, simplifying training costs and eliminating redundant staff positions, processes and systems” (Korzeniowski, 2013).

All these examples circle back to the primary reason for a centralized facility, economies of scale.  Two buildings are more expensive than one.  More of everything is needed and costs vary widely depending on the physical location.

4.       Equipment and communication redundancies are common in today's data centers. Discuss the major types of equipment and communication redundancies found in today's data centers.

The course book describes four tiers of redundancy for a data center, with Tier 1 being no redundancy (the server is on the administrator’s desk) and Tier 4 having redundant backups for all components (Stallings & Case, 2013).

Using the OSI layers as a guide, redundancy for anything, including a data center, should start at the physical layer (Layer 1): redundant power, redundant backup power, and redundant cooling.  What’s worse, having no power in the data center or no cooling?  With no power the applications are down but the equipment will survive; with no cooling the applications can stay up at the risk of destroying the equipment and going down anyway.  Power redundancy isn’t limited to just the building as a whole: “redundancy levels also dictate the number of power supplies installed in the equipment itself and hence the number of power plugs required” (Cisco Unified Computing System Site Planning Guide: Data Center Power and Cooling, 2017).  You can’t plug ten servers into a power strip and then plug that into one wall outlet; the data center will need plenty of redundant wall outlets.
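A back-of-the-envelope sketch of how the chosen redundancy scheme drives the count of power supplies, and hence plugs and outlets; the server count and per-server PSU baseline are assumed for illustration.

```python
def supplies_needed(servers, psus_per_server_n, scheme):
    # N = just enough PSUs; N+1 = one spare per server; 2N = full duplication.
    per_server = {
        "N": psus_per_server_n,
        "N+1": psus_per_server_n + 1,
        "2N": psus_per_server_n * 2,
    }[scheme]
    return servers * per_server

for scheme in ("N", "N+1", "2N"):
    total = supplies_needed(servers=10, psus_per_server_n=1, scheme=scheme)
    print(f"{scheme}: {total} power supplies -> {total} plugs/outlets needed")
```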

Network redundancy in OSI Layers 2 through 4 was touched upon in our Project Phase 1 requirements.  The hospital network needed redundancy, and the Cisco “Borderless Campus Architecture” N-tier design recommended redundancy in the core layer at the very least, starting with redundant network interface cards (NICs) on the servers and including redundant switches and redundant routers (two or more).

Redundancy in OSI Layers 5 through 7 deals with redundancy for the applications, databases, and other software themselves.  Is your database or application on one large, very powerful server (not redundant)?  Is it a cluster of servers (better redundancy)?  Or are all the servers virtual machines on a big storage array (also fairly good redundancy)?

One last, fairly overlooked redundant equipment need is keyboard, video, and mouse (KVM) connectivity.  Is there only one KVM in the data center?  How are multiple administrators supposed to fix anything if they have to wait for a keyboard to bring a router or switch back up?  Do you have a server rack with a lot of blade servers squeezed in together?  Will two KVMs fit side by side if adjacent servers both need attention?

Yes, cover the standard redundancies: power, cooling, switches, routers, outlets, and cables.  But also consider the smaller redundancies that are often overlooked, like KVMs, chairs, tools… even coffee cups.

One last fun quote on “redundancy for the end of the world,” perhaps only needed by those left behind after the rapture: “At this level, IT and the business may have other matters to worry about. However, if you really want to be able to put in place a possible disaster recovery plan for this, send your data out from the planet as a maser (a microwave-emitting version of the laser) data stream. Provided you can then get ahead of it at some future time, you can recapture all your data for use from another planet” (Longbottom, 2013).


References

An Introduction to Distributed Databases. (n.d.). Retrieved November 27, 2018, from https://docs.oracle.com/cd/A57673_01/DOC/server/doc/SCN73/ch21.htm

Borderless Campus 1.0 Design Guide - Borderless Campus 1.0 Design and Deployment Models [Design Zone for Campus]. (2013, October 31). Retrieved November 12, 2018, from https://www.cisco.com/c/en/us/td/docs/solutions/Enterprise/Campus/Borderless_Campus_Network_1-0/Borderless_Campus_1-0_Design_Guide/BN_Campus_Models.html

Chen, S., Ng, A., & Greenfield, P. (2013). A performance evaluation of distributed database architectures. Concurrency and Computation: Practice and Experience, 25(11), 1524-1546. doi:10.1002/cpe.2891

Cisco Unified Computing System Site Planning Guide: Data Center Power and Cooling (2017). Retrieved November 28, 2018, from https://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/unified-computing/white_paper_c11-680202.pdf

Dogra, R. (2012, August 23). Distributed vs Centralized Data Center. Retrieved November 28, 2018, from http://www.datacenterjournal.com/distributed-vs-centralized-data-center/

Green, V. (2016, January 24). Impact of slow page load time on website performance. Retrieved November 27, 2018, from https://medium.com/@vikigreen/impact-of-slow-page-load-time-on-website-performance-40d5c9ce568a

Iacob, N. M., & Moise, M. L. (2015). Centralized vs. Distributed Databases. Case Study. Academic Journal of Economic Studies, 1(4), 119–130.


Kopelson, S., & Nolan, J. (1970, December 1). Design and operating characteristics of a centralized on-line data processing system at Langley Research Center. NASA TN D-6088, L-6415.

Korzeniowski, P. (2013, November). Following both sides of the decentralized vs. centralized IT debate. Retrieved November 28, 2018, from https://searchdatacenter.techtarget.com/opinion/Following-both-sides-of-the-decentralized-vs-centralized-IT-debate

Longbottom, C. (2013, August). How to plan and manage datacentre redundancy. Retrieved from https://www.computerweekly.com/feature/How-to-plan-and-manage-datacentre-redundancy

Nielsen, J. (2009, October 5). Powers of 10: Time Scales in User Experience. Retrieved November 27, 2018, from https://www.nngroup.com/articles/powers-of-10-time-scales-in-ux/

Nah, F. (2003). A study on tolerable waiting time: How long are web users willing to wait? Behaviour & Information Technology, 23, 285. doi:10.1080/01449290410001669914

Newman, L. H. (2018, June 29). Friday's Massive Comcast Outage Shows How Fragile the Internet Is. Retrieved November 27, 2018, from https://www.wired.com/story/friday-comcast-outage-cut-fiber/

Ozsu, M. T., & Valduriez, P. (2011). Principles of distributed database systems (3rd ed.). New York, NY: Springer.

Siotes, M. (2012). Slow Pages Lose Customers. How Site Performance Optimisation Can Increase Revenue on Desktop and Mobile Sites. Retrieved November 27, 2018 from https://www.icrossing.com/uk/sites/default/files_uk/insight_pdf_files/SlowPagesLoseUsers_FINAL.pdf


Stallings, W., & Case, T. (2013). Business data communications: Infrastructure, networking and security. Boston: Pearson.

1 day ago

Craig Wiggins DB Forum 2

A centralized database maintains the data in a single site or location (D Asir et al., 2017).  In a centralized database system, if the site crashes then the entire data stream is lost (D Asir et al., 2017).  Even when the site is up, this model struggles as the number of users accessing the one site increases; it has many limitations that can affect users and the transactions needed.  Distributed databases help with the limitations of centralized databases (D Asir et al., 2017).  Distributed databases store data in different databases which are located at different sites (D Asir et al., 2017).  This model will help a company such as a bookstore improve availability, fault tolerance, and durability (D Asir et al., 2017).  Lake (2013) states, “Cloud computing is considered distributed computing where the application and data a user may be working with is located somewhere on the internet” (p. 33).  Companies in today’s world have the option to use data centers for their databases.  Instead of building their own, they can rent space in different parts of the world to support their operations.  Amazon Elastic Compute Cloud was one of the first vendors to supply computing and data capacity in the cloud (Lake, 2013).

Acceptable system response times for interactive applications can vary in the response time required.  The acceptable time will depend on the action needed.  An emergency 911 center uses its computer-aided systems to send fire, police, and paramedics to an emergency scene (911, 1993).  The quicker the dispatcher can generate maps or pull up standard operating procedures for the situation, the better (911, 1993).  Robert Miller in 1968 wrote an article called “Response time in man-computer conversational transactions,” which started the search for the best computer response time (Dabrowski, 2011).  Since then, people have been studying and improving response times across all kinds of systems.  Ben Shneiderman developed a four-tier task model of suggested computer system response times in 1987 (Dabrowski, 2011): typing and mouse movements were 50-150 ms, simple frequent tasks were 1 s, common tasks were 2-4 s, and complex tasks were 8-12 s (Dabrowski, 2011).  Are humans able to keep up with an interactive application such as a web page?  Will the limitations of the human being keep the computers slower?  Using the 911 example, the limitations of humans should not stop the system response time from getting quicker.  The faster information can be given, the faster a team can respond to a crisis.  Seconds matter in a life-and-death situation.  On the other hand, there are times when a system response needs to be slowed down.  If a person works in a manufacturing plant and the speed of the assembly line prevents a person from going faster, then you would not need a quick response time.  The time it takes for a system to tighten lug nuts could be 15 seconds, because going faster would disrupt the assembly line itself.
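Shneiderman's four tiers are easy to encode as a lookup; this sketch (the tier values are taken from the summary above) checks a measured response time against the budget for its task class.

```python
# Shneiderman's four-tier response-time model, as summarized by
# Dabrowski & Munson (2011): (low, high) budgets in seconds per task class.
SHNEIDERMAN_TIERS = {
    "typing/mouse": (0.05, 0.15),       # 50-150 ms
    "simple frequent task": (0.0, 1.0),  # about 1 s
    "common task": (2.0, 4.0),           # 2-4 s
    "complex task": (8.0, 12.0),         # 8-12 s
}

def within_budget(task, measured_s):
    # A measurement passes if it does not exceed the upper bound of its tier.
    low, high = SHNEIDERMAN_TIERS[task]
    return measured_s <= high

print(within_budget("common task", 3.2))    # True: inside the 2-4 s band
print(within_budget("complex task", 15.0))  # False: exceeds the 8-12 s band
```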


Centralized databases consolidate numerous autonomous department servers (Korzeniowski, 2018).  This model can also reduce hardware and software license costs for an organization (Korzeniowski, 2018).  If an organization were to run multiple distributed database systems, the cost would increase because of the multiple departments needing support.  With a centralized database, there will be cost savings because the number of departments purchasing licenses and facilities is reduced.  The IT staff will be more focused, and there will be a centralized function for the help desk, disaster recovery, and managing complexity (Korzeniowski, 2018).  Data processing facilities are not just a big building where a person throws in a few server racks and starts transmitting data.  These buildings are massive undertakings of security, power, water, air conditioning, and space.

When discussing redundancy within data centers, most people start thinking about routers and switches.  Since these data centers contain critical systems for enterprise networks, it is important they stay operational (Virtual, 2016).  To maintain the data center, an organization must have computing equipment, communication equipment, and the heating, ventilation, and air conditioning equipment used to maintain a proper operating environment (Virtual, 2016).  The power requirements and electricity costs can be a major portion of the operating budget (Virtual, 2016).  Given the power demanded by rack-mounted devices, reliable and efficient power delivery is crucial for successful operation (Virtual, 2016).  Reliability and availability obligations placed on the devices powering the data center infrastructure must meet or exceed predetermined statutory requirements, as is the case for financial institutions (Virtual, 2016).  Producing clean, redundant power is one of the most important pieces of a data center.  Heating and air conditioning are next.  There have been developments in liquid cooling, which many have claimed to have a cost benefit (Sorell, 2015).  When this author was in Afghanistan, we had two data centers which ran 24 hours a day, 7 days a week.  Power was checked every six hours, and there were always two generators per site ready to go just in case.  The equipment ran hot and always needed the two air conditioners running to keep it from overheating.  One time in the middle of the winter, the temperature was 30 degrees outside with the air conditioners running inside.  Normally a person would walk into a heated room, but we had to keep it cold to protect the equipment.


 

Page 11: files.transtutors.com  · Web viewThe purpose is to shorten the OSPF by eliminating layers of hierarchical connections. By bypassing layers of the network will increase speed and

References

D Asir, A. G., Leavline, E. J., & Malini, S. (2017). Distributed database for bookstore application. International Journal of Advanced Research in Computer Science, 8(3). Retrieved from http://ezproxy.liberty.edu/login?url=https://search-proquest-com.ezproxy.liberty.edu/docview/1901458194?accountid=12085

Johnston, R., & Kong, X. (2011). The customer experience: A road-map for improvement. Managing Service Quality, 21(1), 5-24. doi:10.1108/09604521111100225

Dabrowski, J., & Munson, E. V. (2011). 40 years of searching for the best computer system response time. Interacting with Computers, 23(5), 555-564. doi:http://dx.doi.org.ezproxy.liberty.edu/10.1016/j.intcom.2011.05.008

Korzeniowski, P. (2018, November 30). Following both sides of the decentralized vs. centralized IT debate. Retrieved from https://searchdatacenter.techtarget.com/opinion/Following-both-sides-of-the-decentralized-vs-centralized-IT-debate

Lake, P., & Crowther, P. (2013). Concise guide to databases. Undergraduate Topics in Computer Science. Retrieved from https://link.springer.com/chapter/10.1007%2F978-1-4471-5601-7_2. doi:10.1007/978-1-4471-5601-7_2

Sorell, V., Carter, B., Zeighami, R., Smith, S. F., & Steinbrecher, R. (2015). Liquid-cooled IT equipment in data centers. ASHRAE Journal, 57(12), 12-22. Retrieved from http://ezproxy.liberty.edu/login?url=https://search-proquest-com.ezproxy.liberty.edu/docview/1747235082?accountid=12085

Virtual Power Systems Inc.; researchers submit patent application, "Datacenter power management using variable power sources", for approval (USPTO 20180052431). (2018, March 16). Energy Weekly News. Retrieved from http://ezproxy.liberty.edu/login?url=https://search-proquest-com.ezproxy.liberty.edu/docview/2011973470?accountid=12085

911 system to speed response times. (1993, November 1). Edmonton Journal. Retrieved from http://ezproxy.liberty.edu/login?url=https://search-proquest-com.ezproxy.liberty.edu/docview/251998079?accountid=12085

2 days ago


John Anderson Forum 2 Business information

Forum 2 Business information

Distributed Databases

The Internet, in and of itself, is probably the most visible example of distributed databases to most people.  The saying “nothing dies on the internet” comes to mind.  You have a giant umbrella network of servers and databases that can and do share information.  While sections can be disabled, removed, or disconnected, you often have other sections that can pick up and take over. Figure 10-2 in Business Data Communications and Networking shows an example of the internet: you start with the Internet exchange points, which branch out to Tier 1 internet service providers and then Tier 2 and Tier 3 providers (Fitzgerald et al., 2015). These are the links to the home, and each of these points has redundancies of its own.  A corporation using a similar structure gains multiple benefits. It can put physical access points and servers geographically closer to the offices that primarily use them, decreasing response times while still allowing other locations to access the same data. It has failsafe redundancies, in that if one area goes down, the entire network is not lost. And depending on how the servers are set up, you can have data redundancies.

Our textbook refers to distributed data processing and describes it as dispersing servers throughout an organization to process information in the most effective way possible (Stallings & Case, 2013). It points out three primary reasons for a company to go this route: operational, geographical, and economic.  Operationally, you get redundancies and better access. Geographically, you continue to improve access and response rates as well as maintain redundancy; in many multinational companies, you also have systems that need to be located in certain places for geopolitical reasons. Economically, you have the benefits of using smaller systems that cost less initial capital and can be easier to replace as they come to end of life.  Part of the economics is scalability: a distributed system can literally grow with the company. As new offices are added or new centers opened, they are really just one connection away from becoming a part of the overall web that is a corporate network.  To harken back to my initial point, more and more corporate networks begin to resemble the internet: a web of servers, routers, and switches that link multinational companies across the globe.  Some of the downsides to a distributed system are difficulty in diagnosing failures, control and management of the system, duplication of effort and data, and, most important to many companies, security.  A distributed system has many more access points to the outside world than a centralized system.

           


Response times

Response time is very important across any number of applications. Our textbook states that response times between 0.1 and 1 second are good for interactive applications. Table 2.4 gives examples ranging from greater than 15 seconds down to the decisecond range.  On the higher end, you are sitting and waiting for a response from the system; on the lower end, you expect to see the letter or number you typed on a keyboard essentially “immediately” (Stallings & Case, 2013).

Studies have shown that the one-second range keeps a user’s thoughts flowing seamlessly (Ben, 2011), or at least that is the impression gained from studies at that time.  Users know there is some delay and that the computer is processing, but they still feel like they can control the system; there is no sense of waiting on the system. Faster response times are of course better but are not always possible.

I myself work in the VR software field. Response times in applications are incredibly important to what we do. There is the response time of the system reading motion-capture sensors and processing that data, sending it to a headset or screen, and then drawing the image for the eyes to see. You need incredibly fast response times in this situation. When breaking it down to frame rates, most head-mounted display systems need to be running between 75 and 120 frames per second to avoid VR or cyber sickness. This translates into delays of only milliseconds, but that is all it takes to make a sensitive user notice the latency in an image being drawn directly in front of their eyes (Raaen & Kjellmo, 2015). Several studies and experiments have been done over the last few years with headsets to try to find the optimal response rate to prevent the average person from feeling ill when using VR. The consensus is that latency should be under 20 ms, something that has really only been reached in the last year or two with more powerful video cards and faster processors.
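The frame-rate-to-latency arithmetic is simple enough to show directly; this sketch converts target frame rates into per-frame budgets in milliseconds.

```python
def frame_budget_ms(fps):
    # One second (1000 ms) divided across the frames drawn in that second.
    return 1000.0 / fps

for fps in (60, 75, 90, 120):
    print(f"{fps:>3} fps -> {frame_budget_ms(fps):5.1f} ms per frame")

# At 75-120 fps the whole motion-to-photon pipeline has roughly 8-13 ms per
# frame, which is why sub-20 ms total latency is the commonly cited VR target.
```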

While not specifically related to business use of interactive applications in general, this is a personal example that I can give. In the early days of VR we had many issues with illness and motion sickness caused by poor response times. It is a VERY direct example of why faster is better for most people.  In general, everyday use, you can happily work away on a computer with a decisecond response, but specific applications may require higher rates and more power. You need to always keep in mind the end goal of the product and the needs of the client.

 

Centralized Data Processing Facilities

Centralized architecture is generally built on a cluster of computers that are physically located in a central location.   These got their start in the old server rooms of large mainframe systems; often, in older corporations, they are in fact still located in old converted mainframe rooms. I have had the experience of working in a number of these over the years. In this type of system you often have centralized computers, and all applications run on the computers in this primary location.  Data will be stored in databases in this single facility, and you will have direct control over everything. One benefit is that you can have staff centralized as well, with easy access to all systems. This type of system also makes security easier: you can limit access more easily, both physically and digitally, limiting outside access points.  This makes the system more easily protected (Stallings & Case, 2013). Another benefit to this style of center is cost reduction. When your system is located in one place, you can more easily manage the power consumption, equipment capital, and long-term reliability issues (Taylor & Tucker, 1989).

Other aspects of a centralized system allow you to use virtualization to reduce hardware needs and address security issues. You also have the ability to give other clients within the same organization access to the tools they need, all stationed out of one central location.  This makes it easier to manage the customers’ tools and applications.   Software automation makes managing a central system much easier and lets you optimize utilization of the system while minimizing staffing needs (Beck et al., 2013).

Some of the limitations of this architecture are scalability for larger organizations and the potential for single points of failure that could take down an entire location. Consider a power outage or storm that shuts down a hospital: the outside clinics that feed off the system are now completely cut from the network.  There is less flexibility in a centralized system than you will find in a distributed system.

 

Redundancy

Redundancy is incredibly important in any network architecture. Electronics have a finite lifespan and can be very susceptible to damage.  The client I work for is a multinational; our particular location is an office complex attached to a large manufacturing facility.   While we are connected worldwide via the WAN and limited cloud computing, highly sensitive data is all handled internally on a centralized network.  Because we are attached to a manufacturing facility, there is always and has been the potential for power issues. The facility itself generates some of its power needs onsite, but the rest comes from the grid. Because of this, if something knocks out power to part of the town, it can put a huge draw on the onsite generators and cause brownouts and blackouts.  Battery backup and surge protection systems are very important in this type of situation, not just to protect the systems from crashing during a sudden power draw or loss, but to protect against surges when the power is returned. A factory is not running on standard 120-volt power systems, and at times power can feed back into the system and cause havoc in an office environment.

Reaching 100% redundancy by duplicating an entire system can be incredibly expensive. Often, instead, you will work to find the right balance between cost and complexity and duplicated data and servers.  Backup systems, as I mentioned earlier, UPSes and generators for power, duplicated drives or other data backups (usually off-site) for loss recovery, and detection and monitoring software are all part of this process. Detection systems can range from HVAC controls to power and smoke detectors. In some locations you may even find video surveillance to track human activity for security reasons (Alberding, 2016).


 

In Conclusion

I feel that it is important to be certain of a client’s needs when integrating either a centralized or a distributed system. The geographical, economic, and response-time needs of a company will all play a part in deciding the preferred architecture and the hardware needs of a network.

 

References

Fitzgerald, J., Dennis, A., & Durcikova, A. (2015). Business data communications and networking (12th ed.), Chapter 10, "The Internet."

 

Stallings, W., & Case, T. (2013). Business data communications: infrastructure, networking and security (7th ed.). Upper Saddle River, NJ: Prentice Hall.

 

Ben. (2011). Characterizing and measuring response time for web applications. Retrieved November 29, 2018, from https://blog.nexcess.net/2011/06/29/characterizing-and-measuring-response-time-for-web-applications-2/

 

Raaen, K., & Kjellmo, I. (2015). Measuring latency in virtual reality systems. In Entertainment Computing - ICEC 2015 (Lecture Notes in Computer Science, LNCS-9353, pp. 457-462). 14th International Conference on Entertainment Computing, Trondheim, Norway. Springer International Publishing. Retrieved November 30, 2018, from https://hal.inria.fr/hal-01758473/document

 

 

Taylor, J. R., & Tucker, C. C. (1989). Reducing Data Processing Costs Through Centralized Procurement. MIS Quarterly, 13(4), 487–499. Retrieved from http://ezproxy.liberty.edu/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=iih&AN=4679464&site=ehost-live&scope=site

 


Beck, P., Clemens, P., Freitas, S., Gatz, J., Girola, M., Gmitter, J., … Tate, J. (2013). IBM and Cisco: Together for a world class data center. IBM Redbooks. Retrieved from https://proquest-safaribooksonline-com.ezproxy.liberty.edu/book/operating-systems-and-server-administration/data-center-management/0738438421/

 

Alberding, C. (2016). Reliability: Understanding the high-availability data center. Retrieved November 29, 2018, from http://www.datacenterjournal.com/reliability-redundancy-understanding-high-availability-data-center/

4 days ago

Patrick Howard Forum 2

Forum 2

For this assignment, four questions are being asked covering a range of topics. The topics are why a company would want a distributed database, what are acceptable response times for interactive applications, the key characteristics of a centralized data processing facility, and the major types of equipment and communication redundancies found in today’s data centers. These topics will be presented in the order given.

1. A distributed database is a collection of several different databases distributed among multiple computers. Discuss why a company would want a distributed database.

2. Productivity increases as rapid response times are achieved. Discuss what is considered an acceptable system response time for interactive applications.

3. A fully centralized data processing facility is centralized in many senses of the word. Discuss the key characteristics of centralized data processing facilities.

4. Equipment and communication redundancies are common in today's data centers. Discuss the major types of equipment and communication redundancies found in today's data centers.

 

Four Questions

Distributed Databases

Stallings and Case (2013) describe a distributed database as “a database that is not stored in a single location but is dispersed over a network of interconnected computers” (p. 376). As stated, it is not a single database; however, it may appear as a single database when in use. While a single database in a data center may work well for a single-site organization where the users are local, this configuration does not work so well when users are spread out and a subset of the database needs to be local so it can be quickly accessed. Gillenson (2011) refers to this as “location transparency”: to the user, who could be located anywhere, the database seems as if it is local. Each site would have a copy of the database that is relevant to that site, with the rest of the database distributed to the areas where it is needed. Tsuchiya and Mariani (1984) provide the following benefits of using a distributed database: access is quicker with a local database, locating data in different areas reduces contention, load balancing reduces bottlenecks, and the failure or loss of one database affects only a small portion of the data.

Lin et al. (2016) have an interesting improvement to the traditional distributed database. They have proposed LEAP (Localizing Executions via Aggressive Placement of data). Using LEAP, if the data is stored locally, then the transaction is processed locally. LEAP saves on the network latency costs of sending the data to a remote system to be processed, and it has been demonstrated to be more efficient than the currently used two-phase commit (2PC) protocol. With 2PC, the first phase sets up the change: a copy of the current state is made, and the database is made ready for the change. The second phase either makes the change or reverts to the original state if the change cannot be made. The backup to a failed change is the copy of the original state made in phase one (Rogers et al., 2011). LEAP is a process that is designed to deal with the network overhead of 2PC.

Two concerns with a distributed database have to do with the implementation. With fragmentation, different parts of the database are located in different areas. The disadvantage when the database is fragmented is that if a site becomes inaccessible, then the data stored on that site is also unavailable. When using replication, copies of the database are disbursed to various locations. While replication solves the issue of accessibility, it introduces the concurrency issue of which copy holds the current data (Gillenson, 2011).
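As a rough illustration of the 2PC flow just described, here is a minimal in-memory sketch: phase one collects prepare votes, and phase two either commits everywhere or rolls everything back to the saved state. The Participant class and its vote logic are illustrative assumptions, not a production protocol implementation.

```python
class Participant:
    def __init__(self, name, healthy=True):
        self.name, self.healthy = name, healthy
        self.state = "initial"

    def prepare(self):
        # Phase 1: snapshot current state and vote on whether we can commit.
        self.state = "prepared"
        return self.healthy

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"  # revert to the phase-one snapshot

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):  # phase 1: collect votes
        for p in participants:
            p.commit()                          # phase 2: commit everywhere
        return "committed"
    for p in participants:
        p.rollback()                            # any "no" vote aborts all sites
    return "rolled back"

nodes = [Participant("east"), Participant("west", healthy=False)]
print(two_phase_commit(nodes))  # -> rolled back
```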

Acceptable Response Times

Response time, as experienced by the user, is a highly subjective subject. On the surface, it is about how engaged the user is with the application. Hogan (2017) states, “users expect pages to load in two seconds, and after three seconds, up to 40% of users will abandon your site” (p. 3). Without a nearly immediate response from the application, the user will lose interest and move on to other tasks. Dabrowski and Munson (2011) provide two different kinds of response times: control, where the user needs a response quickly to remain focused and continue on task, and conversational, where a rapid response is not required for the task at hand to proceed.

Módos, Šůcha, Václavík, Smejkal, and Hanzálek (2016) bring up the financial side of response times. In their model, the system is sized for the average workload, at which the user’s experience of the response time is acceptable. There are periods of heavy workload when the response time falls below acceptable limits, but the expense of upgrading the system to decrease the response time is not worth the investment. They also define response time “as the duration between the arrival of the task to the scheduling system and its completion” (p. 98). This is a one-sided definition, as it does not take into account the user side of the equation. With the user included, it would also cover the time for the request to travel from the user and for the response to return. The user perceives response time from their viewpoint, not from where the system is working on their request.

Han (2004) coins the term tolerable waiting time (TWT) and suggests that a delay of 1 second will keep the user engaged. For delays of more than 10 seconds, some type of indicator letting the user know what is going on may persuade them to remain with the application instead of moving on to something else. Han states that “perceived waiting time may be more relevant and important than true waiting time” (p. 162). While the actual wait time may be within acceptable limits, if the user perceives that it is longer, that perception becomes their reality. The longer the wait, the more dissatisfied the user becomes.

While the question looks for a quantifiable answer on acceptable response times, these examples show that acceptable response time is in the view of the user. Does the user perceive the system is responding in a timely fashion? If so, then it is. If not, it becomes unacceptable.
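The gap between the scheduler-side definition and the user's perception can be shown with a couple of lines of arithmetic; the latency figures in this sketch are assumed.

```python
def server_response_time(arrival_s, completion_s):
    # Módos et al.'s definition: task arrival at the scheduler to completion.
    return completion_s - arrival_s

def user_perceived_time(server_time_s, one_way_network_s):
    # The user also pays network transit both ways: request out, response back.
    return server_time_s + 2 * one_way_network_s

server = server_response_time(arrival_s=0.0, completion_s=0.8)
print(f"server-side: {server:.1f}s, user-perceived: "
      f"{user_perceived_time(server, one_way_network_s=0.15):.1f}s")
```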

Centralized Data Center Facilities

Stallings and Case (2013) state that a centralized data center centralizes, among other items, the following: computers, processing, data, control, and support staff (p. 68). The benefits of centralizing so many items include the financial savings of having assets located in a single facility. In a centralized data center, having systems collocated will provide savings on physical systems by virtualizing many servers onto a few physical hosts. Also, the resources required to support the IT equipment, such as power and HVAC, will be reduced. The potential for a smaller, higher-quality support staff may be realized, as all of the systems will be in a single facility.

Rather than using the traditional three-tier architecture, Beck et al. (2013) indicate the future of the data center incorporates a “DC fabric.” The innovative DC fabric will enable the data center to distribute the workload more efficiently across the network. Tso, Jouet, and Pezaros (2016) describe the DC fabric as a mesh of redundant connections that connect the server racks. The purpose is to shorten the OSPF routing path by eliminating layers of hierarchical connections; bypassing layers of the network increases speed and removes potential bottlenecks.

Smith and Waldenfels (2018) state that the thing centralized data centers share is that “they all provide space, power, and environmental conditioning” (p. 14). They go further by stating that a defining factor of a data center is reliability, and that the key factors in maintaining reliability are power and air conditioning; the loss of either can shut down a data center. I had a personal experience when my employer hired a firm to combine two server rooms. Not long after the project was completed, we discovered the cooling system could not support the additional equipment, and the entire IT staff participated in an emergency shutdown of all the servers in the room, including all backup systems. Shutting down the servers without warning impacted our uptime for the month.

Rodgers (2013) infers that an overlooked characteristic is the management and staff. While the design of the data center may go into great detail, those who will manage and run the data center will need to be trained in the various aspects of the facility. While the management and staff may be qualified, providing the requisite training will ensure they stay qualified and capable. Another aspect cited is to ensure all involved are included in the Continuous Process Improvement Program. Two of the benefits of participating in a process improvement program are a reduced risk of failure and an increased return on investment (Wysocki, 2009).

Data Center Redundancies


When considering redundant systems, some people equate “redundant” with “backup.” Having a backup system is not in and of itself a redundant system. Schmidt (2006) indicates that for a system to be redundant, a management system must be in place that can detect when the primary system is not working, take it offline, and make the alternate system the primary system. When the primary and alternate systems are managed this way, then you have a redundant system. Since heat is a cause of computer failures, Schmidt calls for redundant air conditioning equipment with separate controllers, along with monitors to keep track of the current air temperature in the data center.

Another item that needs attention is the power system. While uninterruptible power supplies (UPS) have been in use for a long time, Wang, Hou, Chen, Liang, and Guo (2016) propose a power attack defense (PAD). A PAD system is designed to use battery units in a two-level approach to smooth out power, with a large-scale system providing a buffer to the data center and mini systems on server racks providing power to the individual servers. The PAD system would provide protection much like a combination of a UPS and a surge protector.

Continuity of operations (COOP) requires an offsite location capable of taking over if the primary site is unable to function. While some in management may see the cost of maintaining a COOP site as an unnecessary expense, it should be considered an insurance expense (Peltier, 2013): something one hopes never to use but will appreciate in an emergency. Most COOP sites are not designed to restore services for the entire organization; they are usually set up to ensure key operations are kept running. The key operations may not be predetermined but decided on at the time of the event, based upon what is going on within the organization at the time (Rohde & Haskett, 1990). It is also important to test the COOP plans; testing will require moving key personnel to the COOP site to see if they can get the critical functions of the organization up and running.

In addition to redundant sites, HVAC, and power, it is also important to ensure there are backup connections to the Internet. For double redundancy, using a different Internet service provider (ISP) for the backup connection will ensure the data center has connectivity in case something happens with the primary ISP. Having an outage with the ISP and no alternative could leave the users of the organization's data center thinking it is offline, which it technically would be.
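As an illustration of the kind of management logic Schmidt (2006) calls for, applied to the dual-ISP case, here is a sketch that probes the primary link and fails over to the backup ISP when the primary stops answering. The gateway host names and the TCP probe are assumptions for illustration.

```python
import socket

# Hypothetical gateway endpoints for two different ISPs, probed on port 53.
LINKS = [("primary-isp-gw.example.com", 53), ("backup-isp-gw.example.com", 53)]

def link_up(host, port):
    # A management system needs a way to *detect* failure; here, a TCP probe.
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False

def active_link():
    # Promote the first link that responds; fall through to the backup ISP.
    for host, port in LINKS:
        if link_up(host, port):
            return host
    return None  # both ISPs down: the data center is effectively offline

print("routing via:", active_link())
```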

Conclusion

A distributed database has benefits in that it appears local to the user; care needs to be taken to ensure the database is also accessible to users who are not local to its location. Response time is best measured from the user side, since it is the perceived rather than the actual response time that matters. Centralized data centers are many things, but being reliable should be considered one of the most important. Data centers also have many redundancies built in; one important feature should be a management system that can fail over when an issue arises.

 


 

References

Beck, P., Clemens, P., Freitas, S., Gatz, J., Girola, M., Gmitter, J., … Tate, J. (2013). IBM and Cisco: Together for a world class data center. IBM Redbooks. Retrieved from https://proquest-safaribooksonline-com.ezproxy.liberty.edu/book/operating-systems-and-server-administration/data-center-management/0738438421/

Dabrowski, J., & Munson, E. V. (2011). 40 years of searching for the best computer system response time. Interacting with Computers, 23(5), 555-564. doi:10.1016/j.intcom.2011.05.008

Gillenson, M. L. (2011). Fundamentals of database management systems (2nd ed.). John Wiley & Sons. Retrieved from https://proquest-safaribooksonline-com.ezproxy.liberty.edu/book/databases/9781118213575/

Han, F. F.-H. (2004). A study on tolerable waiting time: How long are web users willing to wait? Behaviour & Information Technology, 23(3), 153-163. doi:10.1080/01449290410001669914

Hogan, L. C. (2017). Web performance is user experience. O'Reilly Media. Retrieved from https://proquest-safaribooksonline-com.ezproxy.liberty.edu/book/web-development/usability/9781492029823/

Lin, Q., Chang, P., Chen, G., Chin, B., Tan, K.-L., & Wang, Z. (2016). Towards a non-2PC transaction management in distributed database systems. Proceedings of the 2016 International Conference on Management of Data, 1659-1660. San Francisco, CA. Retrieved from https://dl-acm-org.ezproxy.liberty.edu/citation.cfm?id=2882923

Módos, I., Šůcha, P., Václavík, R., Smejkal, J., & Hanzálek, Z. (2016). Adaptive online scheduling of tasks with anytime property on heterogeneous resources. Computers & Operations Research, 76, 95-98. Retrieved from https://www-sciencedirect-com.ezproxy.liberty.edu/science/article/pii/S030505481630140X

Peltier, T. R. (2013). Information security fundamentals (p. 148). New York, NY: Auerbach Publications. Retrieved from https://www-taylorfrancis-com.ezproxy.liberty.edu/books/9781439810637

Schmidt, K. (2006). High availability and disaster recovery (pp. 61-62, 381-382). Berlin, Germany: Springer. Retrieved from https://link-springer-com.ezproxy.liberty.edu/book/10.1007%2F3-540-34582-5

Smith, W. F., & Waldenfels, P. L. (2018). Power considerations in the colocation data center: Reliability is what sets a colocation data center apart. Mission Critical, 11(5), 14.

Stallings, W., & Case, T. (2013). Business data communications: Infrastructure, networking and security (7th ed.). Upper Saddle River, NJ: Prentice Hall.

Rogers, P., Salla, A., Bari, P., Fadel, L., Horn, A., Janssen, R., … Stoeckel, T. (2011). ABCs of z/OS system programming: Volume 5. IBM Redbooks. Retrieved from https://proquest-safaribooksonline-com.ezproxy.liberty.edu/book/operating-systems-and-server-administration/ibm-system-z/0738435511/

Rodgers, T. L. (2013). Commissioning data center operations (OPx): Add OPx to your playbook for ongoing building improvements. Mission Critical, 6(2), 12-16.

Rohde, R., & Haskett, J. (1990). Disaster recovery planning for academic computing centers. Communications of the ACM, 33(6), 654. Retrieved from https://dl-acm-org.ezproxy.liberty.edu/citation.cfm?id=78975

Tsuchiya, M., & Mariani, M. P. (1984). Performance modeling of distributed database. 1984 IEEE First International Conference on Data Engineering, 571. Los Angeles, CA. Retrieved from https://ieeexplore-ieee-org.ezproxy.liberty.edu/document/7271320

Wang, Z., Hou, X., Chen, H., Liang, X., & Guo, M. (2016). Power attack defense: Securing battery-backed data centers. Proceedings of the 43rd International Symposium on Computer Architecture, 494. Seoul, Republic of Korea. Retrieved from https://dl-acm-org.ezproxy.liberty.edu/citation.cfm?id=3001189

Wysocki, R. K. (2009). Effective project management: Traditional, agile, extreme. John Wiley & Sons. Retrieved from https://proquest-safaribooksonline-com.ezproxy.liberty.edu/book/software-engineering-and-development/project-management/9780470423677/