Developing a Cybersecurity Assessment Neural-Network

by Otis David Scott

B.S. in Computer Science, May 2003, Dillard University
M.S. in Information Management, December 2009, Washington University of St. Louis

A Praxis submitted to

The Faculty of
The School of Engineering and Applied Science
of The George Washington University
in partial fulfillment of the requirements
for the degree of Doctor of Engineering

January 10, 2019

Directed by
Thomas F. Bersson
Adjunct Professor of Engineering and Applied Science
The School of Engineering and Applied Science of The George
Washington University
certifies that Otis David Scott has passed the Final Examination
for the degree of Doctor
of Engineering as of October 17, 2018. This is the final and
approved form of the Praxis.
Developing a Cybersecurity Assessment Neural-Network
Otis David Scott
Praxis Research Committee:
Thomas F. Bersson, Adjunct Professor of Engineering and Applied
Science, Praxis Director
Thomas A. Mazzuchi, Professor of Engineering Management and Systems
Engineering & of Decision Sciences, Committee Member
Michael Grenn, Professional Lecturer of Engineering Management and
Systems Engineering, Committee Member
Dedication
TO MY MOTHER (RIP)
For all of your love, strength, and encouragement throughout the
years
TO MY GODMOTHER (RIP)
You put the tech in my hands and showed me the path towards my
success
AND TO MY FAMILY
Abstract
Developing a Cybersecurity Assessment Neural-Network
Cybersecurity is the foundation of technology that determines the
resilience and
protections of operations throughout the world. As innovation increases, the
requirements for cybersecurity authorization and governance increase to mitigate the
potential for malicious activity. Maintenance of a
cybersecurity program to
safeguard critical systems and infrastructure is a continuous
organizational process. To
mitigate the risk associated with data loss, corruption, or misuse,
organizations must
adopt innovation to balance resources and optimize security
functions.
In this Praxis, the author developed a continuous monitoring
approach based on a
neural network framework, dubbed the Security Continuous Monitoring
Neural-Network,
to automate the continuous monitoring functions found in the Risk
Management
Framework to improve operational scalability by reducing the impact
of this function as
part of an effective cybersecurity program. The use of the Security
Continuous Monitoring
Neural-Network can augment an effective cybersecurity program to
optimize
organizational resources in support of the Risk Management
Framework.
Keywords: Enterprise Risk Management, Cybersecurity, Information
Assurance, Deep
Learning
Contents

Abstract
1.5 Research Questions
3.5 Security Continuous Monitoring Neural-Network Development
3.6 Training the Artificial Neural Network
3.7 Data Validation
Network
5.3 Practical Application
List of Figures

Figure 1-1 Risk Management (CRC NIST, 2016)
Figure 2-1 Literature Review
Figure 3-1 Total System Vulnerabilities
Figure 3-2 Total System Vulnerabilities Control/Test Virtual Machine
Figure 3-3 Continuous Monitoring (Norman Marks, 2011)
Figure 3-4 Security Continuous Monitoring Neural-Network (SCMN)
Figure 3-5 SCMN with Weighted Values
Figure 3-6 SCMN Logic Model
Figure 4-1 Control-Standalone: SCMN Success Probability
Figure 4-2 Control-Hybrid: SCMN Success Probability
Figure 4-3 Control-Enterprise: SCMN Success Probability
Figure 4-4 Test-Standalone: SCMN Success Probability
Figure 4-5 Test-Hybrid: SCMN Success Probability
Figure 4-6 Test-Enterprise: SCMN Success Probability
Figure 4-7 Control-Standalone: SCMN Performance
Figure 4-8 Control-Hybrid: SCMN Performance
Figure 4-9 Control-Enterprise: SCMN Performance
Figure 4-10 Test-Standalone: SCMN Performance
Figure 4-11 Test-Hybrid: SCMN Performance
Figure 4-12 Test-Enterprise: SCMN Performance
Figure 4-13 Global SCMN Performance
List of Tables

Table 3-1 Expert Sampling Requirements - Information Assurance
Table 3-2 System Vulnerabilities - Tenable Nessus
Table 4-1 Week 1: Security Assessment
Table 4-2 Week 2: Security Assessment
Table 4-3 Week 3: Security Assessment
Table 4-4 Week 4: Security Assessment
Table 4-5 Week 5: Security Assessment
Table 4-6 Week 6: Security Assessment
Table 4-7 Week 7: Security Assessment
Table 4-8 Probability of Success for Weekly Authorizations
Table 4-9 Control-Standalone: Authorization Results
Table 4-10 Control-Hybrid: Authorization Results
Table 4-11 Control-Enterprise: Authorization Results
Table 4-12 Test-Standalone: Authorization Results
Table 4-13 Test-Hybrid: Authorization Results
Table 4-14 Test-Enterprise: Authorization Results
List of Equations

Binary Step Function (3.2)
Sigmoid Derivative Function: Back Propagation (3.4)
Binomial Distribution (4.1)
Global Continuous Monitoring Assessment - Accuracy (4.3)
List of Abbreviations

AI     Artificial Intelligence
AIS    Automated Information Systems
CI     Confidence Interval
CIA    Confidentiality, Integrity, and Availability
CISO   Chief Information Security Officer
COTS   Commercial Off-The-Shelf
CVE    Common Vulnerabilities and Exposures
CVSS   Common Vulnerability Scoring System
DAO    Designated Authorizing Official
DAOR   Designated Authorizing Official Representative
DBN    Deep Belief Networks
ELM    Extreme Learning Machine
EO     Executive Order
ERM    Enterprise Risk Management
GOTS   Government Off-The-Shelf
IA     Information Assurance
IDS    Intrusion Detection Systems
NIST   National Institute of Standards and Technology
NSF    National Science Foundation
NVD    National Vulnerability Database
ODNI   Office of the Director of National Intelligence
POA&M  Plan of Action and Milestones
RMF    Risk Management Framework
RNN    Recurrent Neural Network
SCAP   Security Content Automation Protocol
SCMN   Security Continuous Monitoring Neural-Network
SVM    Support Vector Machine
VM     Virtual Machine
Chapter 1 - Introduction
Organizational security threats are continuous. The effectiveness
of an Enterprise
Risk Management (ERM) strategy is dependent on the appropriate use
of security
practices to safeguard an organization against cybersecurity
threats (Bayuk, et al., 2012).
The catalysts for implementing ERM originate from best industry
practices, mission
requirements, and regulatory mandates. The functions of maintaining
an efficient ERM
security posture may differ per environment but share the common
objective to provide
sufficient protection against persistent security threats (Dwyer,
et al., 2009).
Implementing ERM is costly to develop, integrate, and support, straining
organizational resources (time, people, and money). Limitations in
available resources
can hinder the ERM strategy, implementation, and supportability to
protect against
cybersecurity threats (NIST, 2014). To mitigate this strain, automating the
cybersecurity assessment function with a neural-network built on Deep Learning
Artificial Intelligence (AI) can lower cost and time while increasing the
productivity of mandatory cybersecurity assessments. The use of AI
to develop a
cybersecurity assessment neural-network can support ERM mandates to
maintain a
secure environment (NIST, 2014).
The adaptability of a cybersecurity assessment neural-network is
dependent on the
development and durability of the neural-network to support
cybersecurity assessment
functions (Beam, 2017). The cybersecurity assessment neural-network
functions will
align to the continuous monitoring standards of the National
Institute of Standards and
Technology (NIST) Special Publication (SP) 800-39: Managing Information Security Risk
(NIST 800-39, 2011). Automating continuous monitoring will enable organizations to
perform consistent security assessments, minimizing time and performance waste while
shifting the focus of subject matter experts (SMEs) to the increasing security
threats (NIST SP 800-37, 2014; NIST SP 800-53, 2014). Effective ERM is essential to
the survival of organizational structures (McNeil, 2013). Deep
learning enables
organizations to automate crucial functions performed by SMEs and
reallocate resources
in support of the continuous security changes of the organization
(Roberts, 2015). The
inability to automate security functions would hinder
organizational growth, stability, and
security (Gou & Liu, 2016).
This chapter presents the problem statement, the purpose of the study, the nature
study, the nature
of the study, and the significance of cybersecurity and provides
information on the
importance of ERM, deep learning, and the development of a
cybersecurity assessment
neural-network to automate the continuous monitoring function of an
organization.
1.1 Background
On September 15, 2008, The Office of the Director of National
Intelligence
(ODNI) enacted an Intelligence Community Directive, Number 503
(ICD-503) to
develop a consistent ERM approach throughout the Intelligence
Community (IC) (ICD-
503, 2008). The refocus on information security governance mandated comprehensive
system security assessments of the IC Automated Information Systems (AIS). The
directive provides assurance that each AIS is authorized to process data at the
appropriate system categorization and classification. To achieve
this requirement, each
AIS will undergo an initial authorization and periodic system
security assessments to
verify the security posture is maintained and aligned with the
conditions of the security
authorization. As part of the effort to incorporate an effective enterprise
risk management and cybersecurity program, implementation of the Risk Management
Framework (RMF) satisfies the requirements for system authorization and
maintenance per ICD-503. Figure 1-1 shows the six (6) steps of the RMF required
for system authorization.
Figure 1-1 Risk Management (CRC NIST, 2016)
The steps of the RMF are critical in safeguarding federal systems
(NIST 800-39,
2011). The cybersecurity infrastructure of federal systems is
dependent on the
effectiveness of the organizational security posture. Completion of
the RMF ensures each
step in the process corresponds to the standardization of
organizational functions to
identify, deploy, maintain, and decommission federal systems while
safeguarding the
organizational data.
Starting with Step 1: Categorize System, the Confidentiality,
Integrity, and
Availability (CIA) of the AIS are rated low, moderate, or high
based on the data
processed and output. The categorization of the AIS is determined
by the data type, its
classification, and potential harm if the data is compromised. The
coordination between
system stakeholders such as the program manager, system security
engineer, system
administrator, information assurance officer, and designated
authorizing official
representative is used to categorize the system. A successful
system categorization
requires all stakeholders to thoroughly understand the mission
objectives, data processing
methods, classification, and safeguards to protect the data in the
appropriate environment.
The CIA values for the AIS are determined by the system owner or
program
manager and confirmed by the Designated Authorizing Official
Representative (DAOR).
The Confidentiality value can be low, moderate, or high, but cannot be lower
but cannot be lower
than the Integrity or Availability values. The values for Integrity
cannot exceed the values
for Confidentiality and cannot be lower than the Availability of
the AIS. The Availability
of the AIS cannot exceed the values of Confidentiality and
Integrity. Once the CIA values
are established, the system owner can proceed to Step 2 of the Risk
Management
Framework.
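The CIA rating rules described above reduce to a single ordering constraint:
Confidentiality ≥ Integrity ≥ Availability. The check below is an illustrative
sketch of that rule only; the function and names are hypothetical and not part of
the SCMN or any NIST tooling.

```python
# Illustrative sketch of the CIA ordering rule described above:
# Confidentiality >= Integrity >= Availability, each rated low/moderate/high.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def valid_cia(confidentiality: str, integrity: str, availability: str) -> bool:
    """Return True if the CIA ratings satisfy the ordering constraint."""
    c, i, a = (LEVELS[v] for v in (confidentiality, integrity, availability))
    return c >= i >= a

print(valid_cia("high", "moderate", "moderate"))  # True
print(valid_cia("low", "moderate", "low"))        # False: Integrity exceeds Confidentiality
```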
Step 2: Select Controls specifies the security controls applicable
to the security
posture of the AIS. Once the CIA values are identified, the DAOR
determines the
security controls appropriate for the environment of the AIS. The
selected security
controls require the system security engineer and the system
administrator to determine
the risk mitigation and remediation strategy per security control.
The quantity of security
controls to safeguard the system varies per the environment and
classification of data.
The AIS environment is the method in which system data is
transferred throughout the
lifecycle of the system. AIS environments are categorized as
Standalone, Hybrid, and
Enterprise environments to drive the applicability and selection of
the security controls.
Following the completion of categorizing the system and selection
of security controls,
the system owner must implement the security controls.
In Step 3: Implement Controls, each security control must be
implemented per the
guidance provided in NIST SP 800-128. The implementation of the
security controls is
dependent on the number of security controls selected, the
environment of the AIS, and
the availability of personnel resources to implement the controls.
This step is critical to
the operability of the system. The implementation of security
controls can hinder system
functions if the system administrator is not informed of the
potential risk of the control.
Step 3 requires a knowledgeable system administrator who
understands how the data is
stored, processed, and disseminated according to the mission
objectives. With steps 1-3
completed successfully, the system owner must perform an internal
security assessment
of the selected security controls.
Step 4: Assess Control is a self-assessment of the implemented
security controls
to ensure the AIS vulnerabilities are remediated or mitigated to
baseline the security
posture. Security controls can be remediated or mitigated through inheritance or
individually. The system administrator will coordinate with the
system security engineer
to ensure the implemented controls are operating according to the
standard operating
procedures of the automated information system. This process is
repeated throughout the
catalog of implemented controls and verified by the system owner.
The due diligence
performed in Step 4 will contribute to the course of action from
the DAOR to authorize
or reject the system.
The completion of step 4 places the responsibility solely on the
system owner.
During this step of the RMF, the system owner must provide a stable
and secure
environment under the continual constraints of security threats
with each day of
inoperability. Negligence of step 4 can prolong the RMF
indefinitely if the security
controls and system vulnerabilities are not within an acceptable
range of risk as
determined by the DAOR. The system owner risks budget, schedule, and
resources if step 4 is not completed within an acceptable period. The
threshold provided by the
DAOR for completion is thirty (30) days from the initiation of step
4.
Step 5: Authorize System is the final approving authority decision
provided by the
Designated Authorizing Official Representative (DAOR) on behalf of
the Designated
Authorizing Official (DAO) of the organization. This step is the
responsibility of the
DAOR to collect, validate, and assess the information collected in
steps 1-4. The CIA
values, security controls, and remediations/mitigations become the
body of evidence that
represents the Automated Information System. The DAOR performs a
comprehensive
security assessment to authorize or reject the system based on the
body of evidence. Once
the system receives authority to operate, the system owner must
maintain an acceptable
security posture as a condition for an approved operational
status.
The final step of the RMF, Step 6: Monitor Controls is the
continuous monitoring
assessment of the AIS to maintain the security posture throughout
the lifecycle of the
AIS. The implementation of step 6 of the RMF is the foundation of
the quantitative case
study. The importance of selecting a case study as the appropriate
research design is
defined as both a quantitative and qualitative method to examine
people, events,
processes, an institution, or social groups (Seawright, 2008). Collecting and
analyzing manual continuous monitoring assessments is critical to the
development of the artificial neural-network. The Risk Management
Framework is an
extensive process that supports Enterprise Risk Management in the
Federal government.
1.2 Problem Statement
to perform timely and consistent security assessments leading to
increased organizational
costs and loss of productivity.
1.3 Purpose
assessments functions to improve time, consistency, and
productivity.
1.4 Nature and Significance of the Study
Automation of system functions is not a new phenomenon in
information security.
Currently, commercial packages support the scanning of systems for
vulnerabilities and
reporting. To date, Step 6: Monitor Controls is a manual process
that requires expert
knowledge of information assurance, cybersecurity, and the system
functions to perform
an effective continuous monitoring security assessment (NIST
800-137, 2011). The
function of continuous monitoring is manually performed by the
DAOR. Assessments
conducted by the DAOR are essential to the lifecycle of the AIS
being reviewed (NIST
800-137, 2011). The DAOR must verify the security posture of the
system is maintained
per the initial authorization agreement. As a standard practice in
the Federal government,
the continuous monitoring process has a threshold of 90-days from
the initial continuous
monitoring request to completion of the task (NIST 800-137, 2011).
During this
timeframe, the owner of the AIS must allocate program resources to
address the
vulnerabilities identified by the DAOR, leading to a work stoppage of the AIS's
intended functions.
To expedite the continuous monitoring process, internal and
external pressures from
organizational leaders contribute to DAOR discrepancies in the
continuous monitoring
assessment.
Failure to maintain an acceptable security posture could lead to
enforcing work
stoppage of AIS functions, fines, or potential imprisonment (NIST
800-160, 2016).
Continuous monitoring is an extensive task that requires time to
complete successfully.
Dependent on the necessity of the AIS, organizational leaders push
to expedite
continuous monitoring leading to inconsistencies with the
assessments. Bias is the
variable a DAOR must address to perform consistent and impartial
continuous
monitoring assessments (Such, et al., 2016). The consistency of
continuous monitoring
assessments is key to the integrity and productivity of the RMF.
With a manually performed process, the probability of bias and human error increases
(Wilson, 2013). The
use of a neural-network to learn DAOR functions can minimize bias,
human error, and
time to perform continuous monitoring assessments (NISTIR 7756,
2012).
The Security Continuous Monitoring Neural-Network (SCMN) was
developed by
the author as a tool for the quantitative case study with the
purpose to automate Step 6 of
the Risk Management Framework, Monitor Controls. The research for
SCMN was
designed to solve the persistent problem of the time and resources
required to perform
mandatory continuous monitoring assessments throughout
organizations that implement
the ICD-503 Risk Management Framework. The SCMN will automate
continuous
monitoring to provide an impartial, consistent, and time-effective
method to perform a
system security assessment. The distinction between commercial vulnerability
monitoring packages and the SCMN is the SCMN's ability to perform
system authorizations. The SCMN is not an existing Commercial
Off-The-Shelf (COTS)
or Government Off-The-Shelf (GOTS) product. The SCMN is an original
tool developed
for this quantitative case study.
1.5 Research Questions

Main Research Question:
• MRQ1: Can the Security Continuous Monitoring Neural-Network
(SCMN) meet
or exceed the threshold performance of the Designated Authorizing
Official to
perform a continuous monitoring cybersecurity assessment?
Sub Research Question:
• SRQ1: Can the performance of the Security Continuous Monitoring Neural-
Neural-
Network (SCMN) exceed the objective in performing a continuous
monitoring
cybersecurity assessment?
1.6 Hypotheses
H10: Using vulnerability scan data from Tenable Nessus, the Security Continuous
Continuous
Monitoring Neural-Network (SCMN) will not meet or exceed the
performance
threshold of the Designated Authorizing Official Representative
(DAOR) to perform
a continuous monitoring cybersecurity assessment in the Standalone
environment.
H1a: Using vulnerability scan data from Tenable Nessus, the Security Continuous
Continuous
Monitoring Neural-Network (SCMN) will meet or exceed the
performance threshold
of the Designated Authorizing Official Representative (DAOR) to
perform a
continuous monitoring cybersecurity assessment in the Standalone
environment.
H20: Using vulnerability scan data from Tenable Nessus, the Security Continuous
Continuous
Monitoring Neural-Network (SCMN) will not meet or exceed the
performance
threshold of the Designated Authorizing Official Representative
(DAOR) to perform
a continuous monitoring cybersecurity assessment in the Hybrid
environment.
H2a: Using vulnerability scan data from Tenable Nessus, the Security Continuous
Continuous
Monitoring Neural-Network (SCMN) will meet or exceed the
performance threshold
of the Designated Authorizing Official Representative (DAOR) to
perform a
continuous monitoring cybersecurity assessment in the Hybrid
environment.
H30: Using vulnerability scan data from Tenable Nessus, the Security Continuous
Continuous
Monitoring Neural-Network (SCMN) will not meet or exceed the
performance
threshold of the Designated Authorizing Official Representative
(DAOR) to perform
a continuous monitoring cybersecurity assessment in the Enterprise
environment.
H3a: Using vulnerability scan data from Tenable Nessus, the Security Continuous
Continuous
Monitoring Neural-Network (SCMN) will meet or exceed the
performance threshold
of the Designated Authorizing Official Representative (DAOR) to
perform a
continuous monitoring cybersecurity assessment in the Enterprise
environment.
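Each hypothesis above compares the SCMN against a performance threshold over
repeated weekly assessments, which lends itself to a binomial framing (a Binomial
Distribution appears as Equation 4.1 in the list of equations). The sketch below
uses illustrative numbers only; the 0.9 per-week agreement rate is an assumption,
not the study's data.

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Illustrative only: probability the SCMN matches the DAOR decision in at
# least 6 of 7 weekly assessments, assuming a hypothetical 0.9 per-week rate.
n, p = 7, 0.9
p_at_least_6 = sum(binomial_pmf(k, n, p) for k in range(6, n + 1))
print(round(p_at_least_6, 4))  # 0.8503
```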
1.7 Limitations
The primary limitation of the quantitative case study was determining the scope of the
RMF to automate using a deep learning neural-network. The functions
and requirements
of the RMF are limited to the automation of Step 6: Monitor
Controls as shown in Figure
1-1, to enable successful data collection and analysis within the
research study limits
(Cone & Foster, 2006). The development of the neural-network,
testing, and
implementation of the tool are limited to Step 6: Monitor Controls given the
challenging schedule. The quantitative case study is dependent on the impartial
responses of the
manual security assessments. Ensuring the guidance is followed for consistency is
a concern. The responses from the participants are essential to
providing the baseline of the
Security Continuous Monitoring Neural-Network (SCMN) to
substantiate the quantitative
case study (Christensen, Johnson, & Turner, 2010).
1.8 Delimitations
Delimitations in a research study address the traits that isolate the scope of the
research (Yin, 2009; Salminen, Harra, & Lautamo, 2006). The
quantitative case study
identified a method to automate a continuous monitoring assessment
using SCMN. The
testing of the SCMN is based on a small pool of data using a
compressed schedule. This
case study has three delimitations.
First, the testing pool of data is limited to seven (7) weeks of
collected data using
one commercially available vulnerability assessment tool. The seven
(7) weeks of data
collection was selected to provide the SCMN with substantial data
while adhering to the
access window for the software evaluation by the vendor Tenable.
Weekly vulnerability
scans can generate hundreds of raw system vulnerability entries. To
determine the
significant vulnerabilities from the bulk of data, the system
vulnerabilities are parsed and
filtered manually to determine the unique threat vectors from the
vulnerability report. The
seven (7) weeks of data collection supports the balance between
acquiring substantial
data versus superfluous data for the changes in the system security
posture. Following
week 7, the security posture for the Control and Test virtual
machines would degrade past
the point for collecting differentiating authorization decisions
based on the quantity of
non-remediated system vulnerabilities. Exceeding seven weeks of
data collection did not
add value to the changes in the security posture of the virtual
systems.
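The manual parse-and-filter step described above amounts to deduplicating raw scan
rows into unique threat vectors. The sketch below illustrates that idea only; the
column names and sample rows are hypothetical and do not represent the actual
Tenable Nessus export format used in the study.

```python
import csv
import io

# Hypothetical scan export: duplicate CVE rows collapse to unique threat vectors.
raw = """cve,severity,host
CVE-2018-1000,critical,vm-control
CVE-2018-1000,critical,vm-test
CVE-2018-2000,medium,vm-test
"""

unique = {}
for row in csv.DictReader(io.StringIO(raw)):
    unique.setdefault(row["cve"], row)  # keep the first occurrence per CVE

print(sorted(unique))  # ['CVE-2018-1000', 'CVE-2018-2000']
```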
Second, the available DAORs to perform the manual security assessments were
capped at five (5) users, with experience ranging from 5 to 15 years. This range
provided an array of expert responses to support the case
study.
Third, the use of R-Studio to develop the SCMN limits the scope of
the neural-
network to isolate the continuous monitoring function, ignoring the
previous steps of the
RMF. The delimitations were necessary to conduct the quantitative
case study.
Understanding the delimitations of a study is critical to the
methodology of the research
(Christensen, Johnson, & Turner, 2010).
1.9 Summary
Chapter 1 provided the topic of the research study, the background
of the
problem, problem statement, purpose statement, nature and
significance of the study,
research questions, hypotheses, limitations, and delimitations. The
significance of this
study to develop a cybersecurity assessment tool using deep
learning provides the
foundation for optimizing the RMF throughout organizations. The
value in developing an
automated continuous monitoring tool will enable consistency, impartiality, and
efficiency in the implementation of the Risk Management Framework.
Chapter 2 provides a literature review to substantiate the purpose
of the
quantitative case study. Chapter 2 contains a historical overview
of the research problem,
consolidation of the challenges of the praxis, and applicable
cybersecurity methods and
solutions to validate the development of the SCMN to automate
continuous monitoring.
Chapter 3 provides the methodology of the SCMN to develop a
cost-effective and
efficient neural-network tool to automate continuous monitoring.
Chapter 3 contains the
data collection, information assurance hierarchy, cybersecurity
baseline development,
designated authorizing official manual assessment, security
continuous monitoring
neural-network development, data analysis, and data validation of
the research study.
Chapter 4 provides the results of the research case study,
descriptive data
statistics, analysis of the SCMN, performance vectors, and overall
accuracy of the
SCMN. The data from the SCMN substantiates the automation of
continuous monitoring.
Chapter 5 provides a conclusion to the quantitative case study.
Chapter 5 contains
the limitations of the research, recommendations for future
research, and practical
application and benefits of automating continuous monitoring.
Chapter 2 - Literature Review
The literature review is the composition of specialty areas that
form the
foundation of the quantitative case study. As shown in Figure 2-1, the literature review
will examine the areas of Enterprise Risk Management, Cybersecurity, Deep Learning,
and Information Assurance. The development of the hypotheses,
research questions, and
SCMN derive from the information collected in the literature
review. The practical
application of the SCMN is not currently found in industry. This is
the first known
quantitative case study of the Security Continuous Monitoring
Neural-Network to
automate continuous monitoring requirements of the ICD-503 Risk
Management
Framework.
The foundation of a cybersecurity effort originates from effective
enterprise risk
management (ERM) practices. The purpose of ERM is the alignment and
management of
data, applications, processes, and associated risks to ensure
consistency and productivity
throughout an organization (McNeil, 2013). The flow of data throughout
each business unit
contributes to the increase in knowledge sharing while minimizing
redundancy and
waste. Reducing redundancy and waste ensures the confidentiality,
integrity, and
availability of the data (Asadi, Fazlollahtabar, & Shirazi,
2015).
The basis for effective ERM is the alignment of the flow of data
and the
organizational leadership. Creating a network of information
sharing provides a
framework to address organizational risks.
Culture
In many organizations, the culture is inherent to the behaviors of
the older
workforce (Nonaka & Nishiguchi, 2001). Organizational cultures
differ throughout
generations and may contribute to the inability to accept the core
values of ERM. The
culture of the organization will have to evolve from a productivity-driven environment to a focus on quality (Drucker, 1999). It is the responsibility of
leadership to identify the
laggards of the organization and take necessary actions to prevent
derailment of the ERM
initiative. The organizational culture is the first of many factors
to address in
understanding ERM.
Structure
The importance of the organizational structure plays a significant
role when
implementing ERM. The hierarchical structure of the organization
can determine the
ability to support ERM effectively (Becerra-Fernandez &
Sabherwal, 2010).
Organizations with hierarchical structures not conducive to open communication will have to adapt the management structure to encourage knowledge sharing. The inability to change the hierarchy will hinder the ERM effort, resulting in a loss of quality and innovation. An efficient ERM program reduces redundancies throughout the organization through lean initiatives. Reducing redundant
functions will promote efficient use of the workforce and encourage
knowledge sharing
in support of ERM (Arazy & Gellatly, 2012).
The organizational structure is a critical element in supporting
ERM. The typical
top-down hierarchies of organizations yield fraud, waste, and abuse
throughout the multi-
layers of bureaucracy (Jones, 2010). Developing an environment with
a definitive leader
and reducing the amount of middle management will create an organization in which knowledge sharing is the core competency of the entire organization (Becerra-Fernandez & Sabherwal, 2010).
Technologies
An ERM initiative is only as effective as its information technology infrastructure (Nonaka & Nishiguchi, 2001). Emerging information technologies provide solutions for supporting ERM. IT is the backbone of the technological sources of knowledge, and the amount of information readily available is conducive to the success of the ERM initiative (Becerra-Fernandez & Sabherwal, 2010).
The implementation of information technologies will not solely
provide an ERM
solution in an organization. The factors of organizational culture,
structure, and
leadership will enable the success of ERM. Organizations that
depend on the IT solution
as the definitive solution for ERM will not receive the desirable
return on investment.
Information technologies will provide a means to support ERM, but
IT will not enable
organizational change. The emergence of global competitors
encourages organizations to
use technology for improvement, effectiveness, and efficiency, not
merely for automation
purposes (Lund, Manyika, & Ramaswamy, 2012).
The most critical factor in ERM implementation is top executive
leadership
support. With organizational change comes uncertainty and reluctance. It is the responsibility of executive leadership to ensure the change effort is successful (Becerra-Fernandez & Sabherwal, 2010). In preparation for organizational
change, executive leadership should champion the initiative to
promote buy-in
throughout the organization. Leadership should ensure the culture,
structure, and
technologies are in-sync to enable organizational change (Jones,
2010). Organizational
leaders should believe in the ERM initiative and use the resources
available to promote
open communication and knowledge sharing.
A significant obstacle in an ERM effort is the lack of dedication
from the
organizational leadership team. Without the support from executive
management, the
ERM effort will not have the adoption rate forecast by the
leadership team (Durcikova &
Gray, 2009). Organizational initiatives that do not receive full
leadership support become
wasteful projects with ample promises but lack of execution,
adoption, and support
(Wilson, 2002). The value of the ERM program is dependent on the
nurturing and effort
of executive leadership.
ERM Framework
Figure 2-2 outlines the ERM process and risk components (Beasley,
2016).
Strategy and objective setting, risk identification, risk assessment, risk response, and monitoring provide the foundation for the RMF and continuous monitoring (Beasley, 2016; McNeil, 2013). The components of ERM derive from the risk culture
and leadership of the organization. Effective organizational
leaders enable change,
culture, and innovation throughout. The ERM process is conducive to
continual change
and growth, but it is heavily dependent on the culture to drive the
importance of risk. The
effective and appropriate identification of risk can prevent
potential pitfalls that can affect
the stability, profitability, and relevancy of an
organization.
Figure 2-2 ERM Framework (Beasley, M., 2016)
ERM is critical to supporting the organizational structure. The risk posture of an organization changes continually, and the effectiveness of ERM depends on the flexibility of the organization (Dwyer, et al., 2009). Failure to address the changes in risk
can yield disastrous results. ERM is not the implementation of IT
tools, but a
methodology for addressing organizational risks.
2.2 Cybersecurity
Cybersecurity is a continual effort to safeguard data against
malicious threats and
activities (Kshetri, 2013). The abundance of data readily available
requires processes and
policies to ensure the confidentiality, integrity, and availability
of the data. Organizations
deploy various mechanisms to minimize the probability of threats
from internal and
external sources (Bayuk, et al., 2012). Cybersecurity threats to critical infrastructure began as early as the 1980s, when compromised software distributed by the United States damaged an oil production pipeline in the Soviet Union, contributing to the end of the Cold War (Clarke & Knake, 2010). The implementation of the Internet shifted the cybersecurity threat from on-premise to remote attacks (Clarke & Knake, 2010).
The Office of the Director of National Intelligence (ODNI) is a federal agency established to provide governance for safeguarding data in the federal government. Its publications dedicated to the risk management framework apply industry best practices to ensure mitigation and remediation methods for the verification of data. Executive Order 13636, Improving Critical Infrastructure Cybersecurity, was approved on February 12, 2013, directing federal agencies to perform cybersecurity assessments of their infrastructure (NIST, 2014). This Executive Order (EO) was a response to increasing national and international security threats that could exploit vulnerabilities within the various agencies. The result was the implementation of a comprehensive risk management framework, supported by ODNI, to protect federal systems.
The increase in global cybersecurity threats creates a challenge to
protect the data
of organizations and maintain their security posture. The National Vulnerability Database (NVD) is a system vulnerability repository maintained by NIST using the Security Content Automation Protocol (SCAP) standards (NIST, 2016). The NVD uses the Common Vulnerabilities and Exposures (CVE) format to standardize the vulnerabilities within the repository (NIST 800-53, 2013). The NVD contains more than 85,000 vulnerabilities and continues to grow with each new vulnerability that is discovered (NCCIC, 2017). Cybersecurity exploits are continuous threats that require persistence in safeguarding organizational data and networks.
2.3 Deep Learning
Artificial intelligence (AI) is the development of machines with
the capability to
learn and mimic human activities. The use of AI contributes to the
development of
intelligent machines to perform complex tasks for learning,
planning, and problem-
solving (Roberts, 2015). The foundation of AI research is knowledge
engineering and
machine learning. The availability of quality data through knowledge engineering determines the capability of the AI to mimic human actions based on the properties of
the data. The advancements in AI research enabled the global
adoption of AI in
technology products and services. The increased computational
processing and storage
capacity enabled the growth and potential of artificial
intelligence.
Machine learning uses algorithms in the form of models to parse and analyze data while learning from the inputs (Roberts, 2015). Tasks from spam email filters to digital personal assistants use machine learning as the foundation of their development. Limitations of machine learning algorithms are based on the data and the design of the algorithm. The use of machine learning has inherent benefits, but the benefits are based on the integrity of the data, the delivery method, and the design of the algorithm.
Deep learning is a derivative of machine learning in which algorithms use nodes to form a neural-network that performs complex tasks, enabling a machine to account for hidden variables in solving a problem set (Al-Hamadani, 2015). Deep learning neural-networks thrive on the collection of data and the algorithms developed to manipulate the data for a variety of uses. Deep learning training relies heavily on the algorithm of the neural-network to develop patterns from the input data. The pattern recognition algorithms of the neural-network supplement the training process, in contrast to the manual development of other machine learning models (LeCun, Bengio, & Hinton, 2015).
An artificial neural-network is a rudimentary system structured to
mimic the
neural structure of the human brain. The neural-network can
demonstrate the essential
decision functions of the brain through the development of neural
nodes and bias nodes.
The neural-network is limited by the quality and integrity of the input data set, and tailoring of the neural and bias nodes is required to generate the desired results (Rojas, 2013).
A neural-network is limited by its design and purpose. In Figure
2-3, the
composition of the neural-network is determined by the input layer,
the processing layer,
and the output layer. The neural-network can perform repetitive actions and improve performance through machine learning. These repetitive actions will not
deviate from the initial process, but the speed and performance of
the response may
increase with each learning iteration (Rojas, 2013). The learning
strategies of the neural-
network are categorized as supervised learning and unsupervised
learning.
Supervised learning is the learning algorithm of a neural-network
where the
output is predetermined. During the supervised learning process,
the input patterns are
provided to supplement the input layer of the neural-network (Beam,
2017). The input
pattern is distributed throughout the neural-network using forward
propagation to the
output layer of the neural-network to produce an output pattern. If the generated output pattern differs from the target pattern, an error value is generated to represent the misalignment between the output pattern and the target pattern. The
errors generated from
the output misalignment can be traced using back propagation of the
neural-network to
determine the source. The reinforcement of the supervised learning
strategy is built on
observation and adjusting the weighted values as appropriate (Beam,
2017).
Unsupervised learning permits computational models of the
neural-network to
produce an output based on the hidden or unknown patterns of the
data set (Beam, 2017).
As an example, unsupervised strategies have drastically enhanced the ability of an Intrusion Detection System (IDS) to recognize and respond to security threats. The unsupervised neural-network of an IDS finds unexpected patterns in extensive data sets.
By using the back propagation, the neural-network can be corrected
to ensure the
data being processed follows the set criteria of the intended
design and function. The use
of neural-networks and deep learning have provided benefits in
learning pattern
recognition to enhance the capabilities of systems to make firm
decisions based on the
criteria of the data set. The continual use and development of a
neural-network will
provide enhancements to technology to support the increase in
neural-network capacities.
AIS security risks, threats, and exploits have expanded with the advancements in technology. A frequent use of a neural-network is to address the
persistent security
threats that plague organizations (Al-Hamadani, 2015). Implementing
a neural-network
as a function of an Intrusion Detection System (IDS) can perform
continuous analysis of
organizational security threats (Kang & Kang, 2016).
An IDS utilizing a robust neural-network can improve the security of various types of AIS and environments. The parameters in fabricating the
neural-network structure are
prepared using threat vectors and weighted risk to determine the
likelihood of exploit to
the AIS (Alom, Bontupalli, & Taha, 2015). Introducing the
parameters through the
unsupervised learning enhances the precision of the IDS.
Figure 2-3 Artificial Neural-Network (Science Clarified,
2017)
The two (2) types of neural-networks vary and perform different
functions. The
feedforward neural-network is a unidirectional processing unit used
for pattern
recognition and generation. The feedforward neural-network contains
no feedback loops,
and a fixed input is processed to generate an adjusted output. The
feedback or recurrent
artificial neural-network (RNN) is similar to the feedforward
neural-network but enables
the use of feedback loops. An RNN is appropriate for content-addressable memories.
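The distinction can be illustrated with hypothetical one-unit update rules: a feedforward unit maps a fixed input directly to an output, while a recurrent unit also feeds its previous output (hidden state) back into the computation. All weights here are arbitrary illustrative values, not parameters from the SCMN.

```python
import math

def feedforward_step(x, w=0.8, b=0.1):
    """Feedforward unit: fixed input -> adjusted output, no feedback loop."""
    return math.tanh(w * x + b)

def recurrent_step(x, h_prev, w_in=0.8, w_rec=0.5, b=0.1):
    """Recurrent unit: output depends on the current input AND the
    previous hidden state fed back through a loop."""
    return math.tanh(w_in * x + w_rec * h_prev + b)

# The same repeated input produces a history-dependent output in the RNN.
h = 0.0
for x in [1.0, 1.0, 1.0]:
    h = recurrent_step(x, h)
print(round(feedforward_step(1.0), 3), round(h, 3))
```

Because the recurrent output depends on earlier states, the same input yields a different result than the memoryless feedforward unit, which is what makes RNNs suitable for content-addressable memories.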
In the development of the Security Continuous Monitoring
Neural-Network, the
use of a deep learning neural-network AI was selected to assess the
range of systems
vulnerability inputs to automate the function of the Designated
Authorizing Official
Representative (DAOR). The SCMN is a supervised RNN designed for
this quantitative
case study to automate the continuous monitoring process. With the
continual increase in
system vulnerabilities and threat vectors, the use of the Gaussian
mixture model or
decision tree models are not effective in performing an automated
system continuous
monitoring assessment due to the constant change in
vulnerabilities. For the Gaussian or
decision tree models to be effective, a manual assessment would need to be conducted and stored for each system vulnerability to train the machine learning approach.
The use of machine learning models other than the neural-network
model will not yield
an effective automated solution for the problem set.
2.4 Information Assurance
Information Assurance (IA) is the protection and defense of the
Confidentiality,
Integrity, and Availability of data (CNSSI 4009, 2003). Protection of authentication and non-repudiation is also critical to understanding information assurance.
The confidentiality of the data is the assurance the data is
transmitted to the intended
recipients (CNSSI 4009, 2003). The integrity of the data is the
assurance the data is
untampered at rest or in transit (CNSSI 4009, 2003). Availability
of the data is the
assurance data is available at the time of request by authorized
individuals (CNSSI 4009,
2003).
The governance of managing risks and threats is critical to an effective risk management program. The use and appropriateness of IA within an organizational structure reflect the associated risks (Scott & Davis, 2007). Distributed computing systems have
cultivated various concerns about information security and
protection against continual
threats (Pringle & Burgess, 2014). The CIA of data resources is
currently reliant upon
incorporated data frameworks to include their security controls and
organizational
boundaries (Hamill, Deckro, & Kloeber, 2005).
The foundation of IA is the assessment and analysis of AIS risks.
The type of
system vulnerabilities is based on the risk and probability of an
exploit (Kuhn, Rossman,
& Liu, 2009). The vulnerability values of Critical, High, Moderate, and Low determine the action to mitigate or remediate the system vulnerability (NIST 800-37, 2014). The
steps required to address the system vulnerabilities are based on
the type of data being
processed and the environmental variables of the AIS. The
environment variables are
based on the system architecture. A standalone system is a system
that is not
interconnected to other systems or connected to a network. Hybrid
systems are a cluster
of interconnected systems not connected to an external network. An
enterprise system is a
series of systems connected through a standard network (NIST
800-53, 2013). An
effective IA program determines the value of addressing a risk based on the potential for exploit and threat vectors. An IA program has limited resources to secure an AIS, and the resources must be used efficiently (Kuhn & Johnson, 2010).
An effective information assurance program within an organization comprises knowledgeable and efficient information security professionals with the common objective of protecting organizational data from unauthorized disclosure or use (CNSSI 4009, 2003). The roles and responsibilities of information assurance professionals align with the steps of the risk management framework, beginning with the initial process of acquiring a system security authorization.
The global interconnection of frameworks and systems provides new opportunities for exploiting IT threats. Information assurance is the systematic method for
security governance and implementation to minimize security
exploits internal and
external to organizations (Kuhn, Rossman, & Liu, 2009).
The culture of the organization is vital to assess the influence of
IA throughout the
organization. The organizational culture embodies the core values of the organization (Yuan, Williams, et al., 2017). A strong organizational culture allows
leadership and followers to
work together in the best interest of the organization. With the
advancements in IT design
and theory, the use of good information assurance practices yields
benefits and
challenges to achieve the goals and objectives of the
organization.
The shift in organizational paradigms, theories, and IA programs
varies per
organizational structure. The implementation of change within an
organization requires
the support of executive leadership and the leadership teams
(Colwill, Todd, et al., 2001).
It is essential for leadership to understand the appropriateness of
organizational
development and design to ensure the successful implementation of
an IA program
(Chakraborty, Ramireddy, et al., 2010). Given the short life expectancy of a company, leaders must embrace innovation to remain relevant. Organizational leaders must empower their followers, as they are the innovators of an organization.
2.5 Summary
Chapter 2 provided the literature review of the quantitative
research study. The
key research areas of enterprise risk management, cybersecurity, deep learning, and
information assurance are the foundation for understanding the risk
management
framework. The benefits of the quantitative research can contribute
to the development of
AI tools to automate functions within the Federal government. To
date, the
implementation of AI in the Federal government is limited to the
procurement of
commercially developed tools with the minimal flexibility to adapt
to the unique
requirements of a Federal agency. The global possibilities of a
neural-network to
integrate functions and capabilities across the Federal government
can be discovered
through further research of AI replacing government functions.
Performing an automated
continuous monitoring assessment is the first step toward adopting
efficiencies and
consistency within the Federal government.
The practical contributions of the Security Continuous Monitoring
Neural-
Network developed as part of the quantitative case study will add
value to the continuous
monitoring mandate by performing impartial, consistent, and
expedited cybersecurity
assessments. The continuous monitoring function is designed to be
impartial but is
currently limited to a manual process. The limitations of the
continuous monitoring
process are based on human elements that could influence
impartiality and consistency.
Automation of the continuous monitoring function using the SCMN
will augment
the DAOR functions to produce efficiencies throughout the
continuous monitoring
process. The inherent benefits of the SCMN will expedite the continuous monitoring process, minimize costs and resource constraints, and provide an increase in cybersecurity performance. To expedite the continuous monitoring
functions, the SCMN
must reduce the processing time to receive a system authorization.
Reduction of the
processing time will contribute to additional functions. Minimizing
costs to the system
owner is a benefit of reducing the processing time. If security
concerns arise, the system
owner can address the security concern without exceeding the budget
or extending
resources to support continuous monitoring.
The manual continuous monitoring function requires Designated
Authorizing
Official Representative (DAOR) knowledge to verify the security
posture of AIS and the
data being processed. The continuous monitoring process is a
time-consuming process
that can lead to a strain on organizational resources (time,
people, and money) resulting
in general errors and inconsistent cybersecurity assessments
(United States General
Accounting Office, 1999).
Chapter 3 - Methodology
The methodology of the quantitative case study is the development of a cybersecurity assessment neural-network that mimics the logic and actions of the DAOR to perform a continuous monitoring assessment. Automating the cybersecurity governance functions will minimize human errors, provide consistency, and expedite continuous monitoring assessments.
3.1 Data Collection
The appropriate method for data collection, sampling, and analysis
is dependent
on the scope, duration, and purpose of the research study (Singh,
2007). The use of
nonprobability sampling in the quantitative case study is
appropriate to ensure the data
sampling is purposive to address the research problem (Kitamaya
& Cohen, 2010). In this
instance of automating the continuous monitoring function of the
Risk Management
Framework, the use of probability sampling is inappropriate to
determine the success or
failure of a cybersecurity assessment (Singh, 2007). To supplement the nonprobability sampling, expert sampling increases the success of the quantitative case study by ensuring that experts in the fields of information assurance and cybersecurity assessment determine the feasibility of automating the continuous monitoring function.
Using cybersecurity experts is critical to validate the sampling
method (Kitamaya
& Cohen, 2010). To support the quantitative case study, the
collection of ten (10) security
experts will be separated into two (2) groups of five (5) based on
their functions and
objectives. The groups will consist of five (5) DAOs and five (5)
DAORs. The DAOs
will develop the cybersecurity baseline and the DAORs will perform
the manual
continuous monitoring assessments. As shown in Table 3-1, the
distinction in expert
sampling between the DAO and DAOR is determined by the requirements
for years of
experience, employee type, certification, and reporting.
Table 3-1 Expert Sampling Requirements - Information
Assurance
The purpose of using five (5) DAOs is to ensure the development of the baseline is aligned with the criteria of the risk management framework, minimizing individual bias. The
assurance of an impartial baseline will validate the capabilities
of the SCMN.
The environment for data collection was the creation of two (2)
virtual machines
(VM). The Control and Test VMs were developed using the Microsoft
Windows 10
operating system. Each system received identical system resources for system vulnerability collection. The application used to collect system vulnerabilities is Tenable
Nessus. This vulnerability scanner is the primary application used
by the Federal
government and generates vulnerability reports aligned to the
criteria of the ICD-503
Risk Management Framework. The output of Tenable Nessus is in the Microsoft Excel (.csv) format. This format can be read and analyzed by the SCMN.
The sample output from Tenable Nessus in its raw data format (.csv)
provides a
snapshot of the input values of the SCMN ingest. The Tenable Nessus
output includes
thirteen (13) categories of system and security data that enable the SCMN to perform an automated cybersecurity assessment. The data categories are as follows:
Plugin ID: The plugin ID is the primary key of the data set. Each
plugin ID is
unique and corresponds to a system vulnerability identified by the
Common
Vulnerabilities and Exposures (CVE) or Common Vulnerabilities
Scoring System
(CVSS) data repository. The Tenable organization generates and
maintains the
plugin ID database and cross-references the plugin IDs to the
external data
sources to ensure its products and services such as the Tenable
Nessus Scanner
are updated with valid security signatures.
CVE: The Common Vulnerability and Exposures database is maintained
by The
MITRE Corporation as a publicly available reference for
cybersecurity
vulnerabilities. The CVE is sponsored by the federal government as
a national
resource database. The vulnerability entries in this column are
assigned a CVE
number for the parent vulnerability and its child components. The
child
components of the vulnerability share the same CVE number that
could
potentially give the impression of additional vulnerabilities or
false positives, if
not analyzed appropriately.
CVSS: The Common Vulnerability Scoring System is a vulnerability
open
standard to identify security vulnerability severity in the form of
a numeric value.
The numerical score represents the critical, high, medium, and low
severity of the
system vulnerabilities. Similar to the CVE, the parent
vulnerability is identified
with the same CVSS score as its child vulnerabilities resulting in
potential
duplication of work for the untrained eye.
Risk: The risk column of the Tenable Nessus converts the CVSS score
into the
values of critical, high, medium, and low.
Host: This column identifies the target system of the vulnerability
scanner. The
target systems for the quantitative case study are Control and
Test.
Protocol: The protocol column identifies the networking protocol for communication between the Nessus scanner and system plugins.
Port: The port identifies the port in use for communication between
the Nessus
scanner and system plugins.
Name: The common name of the system vulnerability.
Synopsis: A brief synopsis of the system vulnerability and overview
of the area of
concern.
Solution: A potential remediation of the system vulnerability. The
remediation
may have a general solution that does not account for the custom
configuration of
the operating system or security layers.
See Also: This column provides potential solutions based on lessons
learned.
Plugin Output: The plugin output is the cumulative data of the
system
vulnerability to include the system registry paths critical to the
security exploit.
During execution, the SCMN analyzes the data categories to
determine the
appropriate course of action to authorize or deny a system based on
the severity and
threat vectors of the vulnerability.
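A minimal sketch of ingesting rows in the shape of the Tenable Nessus .csv output described above: the sample rows, file contents, and column subset are hypothetical, and the score-to-risk thresholds follow the published CVSS v3.0 qualitative severity ranges rather than any confirmed Nessus internals.

```python
import csv
import io

def cvss_to_risk(score: float) -> str:
    """Convert a CVSS numeric score to a qualitative risk value
    (thresholds assumed from the CVSS v3.0 severity rating scale)."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low" if score > 0.0 else "None"

def ingest(csv_text: str) -> dict:
    """Parse Nessus-style rows, keyed by Plugin ID (the primary key)."""
    findings = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        findings[row["Plugin ID"]] = {
            "cve": row["CVE"],
            "risk": cvss_to_risk(float(row["CVSS"])),
            "host": row["Host"],
        }
    return findings

# Hypothetical two-row sample using a subset of the categories listed above.
sample = (
    "Plugin ID,CVE,CVSS,Host\n"
    "10001,CVE-2017-0144,9.3,Test\n"
    "10002,CVE-2016-2183,5.0,Control\n"
)
for pid, finding in ingest(sample).items():
    print(pid, finding["cve"], finding["risk"], finding["host"])
```

Keying the findings by Plugin ID mirrors its role as the primary key of the data set, and the `cvss_to_risk` helper reproduces the conversion the Risk column performs.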
The duration of the data collection period was seven (7) weeks.
Over the course
of the data collection period, vulnerability scan results were
captured for both the Control
and Test virtual machines (VM) weekly. The quantity and risk severity of system vulnerabilities were identified throughout the data collection period (Figure 3-1).
Tenable Nessus identified 19-Critical, 56-High, 14-Moderate, and
14-Low vulnerabilities
over a seven (7) week period. These values are ingested weekly by
the SCMN for
analysis. Through the use of expert sampling, the five (5) DAORs review the weekly system vulnerabilities to perform manual continuous monitoring assessments.
Figure 3-1 Total System Vulnerabilities
The system vulnerability types generated from Tenable Nessus are
Critical, High,
Moderate, and Low shown as Red, Orange, Yellow, and Green
respectively in Figure 3-1.
Each vulnerability type considers both impact and threat but is weighted towards impact (FIPS PUB 199, 2004). Systems with a Critical vulnerability have a significant
probability of exploit with a disastrous impact on the system data.
Vulnerabilities with a
High value have a strong probability of exploit with a severe
impact to the system data.
Systems with a Moderate vulnerability have an average probability
of exploit with a
medium impact to the system data. Systems with a Low vulnerability
type have a low
probability of exploit and low impact to the system data.
The vulnerability types categorize the security posture of a system
and identify
the threat vectors to exploit system data. The vulnerability report
is the primary security
record for the DAOR. The values in Table 3-2 show the quantity of VM system vulnerabilities per testing week and type collected throughout the data collection period.
The table is read by the column Machine and associated
vulnerability types collected for
the corresponding Week column. A holistic security assessment of
each VM is
determined by the vulnerability quantity and type collected.
Table 3-2 System Vulnerabilities - Tenable Nessus

Machine   Week   Critical   High   Moderate   Low
Control    1        0        2        1        1
Test       1        1        1        1        1
Control    2        1        2        1        1
Test       2        2        4        1        1
Control    3        0        2        1        1
Test       3        2        4        1        1
Control    4        0        2        1        1
Test       4        3        7        1        1
Control    5        2        3        1        1
Test       5        2        3        1        1
Control    6        0        5        1        1
Test       6        3        6        1        1
Control    7        0        7        1        1
Test       7        3        8        1        1

The system vulnerabilities collection method was designed to show a deviation between the Control and Test VMs. The values under the vulnerability type columns (Critical, High, Moderate, and Low) refer to the quantity of vulnerabilities associated with each severity. Vulnerability quantities fluctuate frequently according to the current security posture of the AIS and the risk mitigation methods implemented. An increase in vulnerability quantity indicates an increase in security risk; a decrease in vulnerability quantity indicates a decrease in risk. In week 1, the VMs were scanned to provide a baseline of the security posture. Week 2 included patching the Control VM using only the Microsoft Windows 10 Update feature. No patching was executed on the Test VM. The subsequent weeks 3-7 followed the pattern of patching the Control VM before the scheduled weekly scan and excluding patching from the Test VM.
This practice yielded the expected results over the course of the testing period. The expected results for the Control VM are the weekly mitigation of system vulnerabilities through patching, resulting in a minimal impact on the security posture of the AIS. The
posture of the AIS. The
intent for the Test VM is the lack of weekly patching to increase
security vulnerabilities
to degrade the security posture of the AIS throughout the testing
period. Over the seven
(7) week testing period the vulnerabilities identified on the Test
VM was significantly
different from the Control machine. As shown in Figure 3-2, the
deviation of the scan
results will provide a significant vulnerability range for the SCMN
to perform a
continuous monitoring assessment. During the seven (7) weeks of
data collection the total
vulnerabilities collected for the Control and Test VMs are as
follows:
Control VM: Critical – 3, High – 23, Moderate – 7, Low – 7
Test VM: Critical – 16, High – 33, Moderate – 7, Low – 7
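The seven-week totals can be reproduced by aggregating the weekly counts from Table 3-2; a minimal sketch:

```python
from collections import Counter

# Weekly vulnerability counts (Critical, High, Moderate, Low) from Table 3-2.
weekly = {
    "Control": [(0, 2, 1, 1), (1, 2, 1, 1), (0, 2, 1, 1), (0, 2, 1, 1),
                (2, 3, 1, 1), (0, 5, 1, 1), (0, 7, 1, 1)],
    "Test":    [(1, 1, 1, 1), (2, 4, 1, 1), (2, 4, 1, 1), (3, 7, 1, 1),
                (2, 3, 1, 1), (3, 6, 1, 1), (3, 8, 1, 1)],
}

severities = ("Critical", "High", "Moderate", "Low")
totals = {}
for machine, weeks in weekly.items():
    counts = Counter()
    for week in weeks:
        counts.update(dict(zip(severities, week)))  # add this week's counts
    totals[machine] = dict(counts)

print(totals["Control"])  # {'Critical': 3, 'High': 23, 'Moderate': 7, 'Low': 7}
print(totals["Test"])     # {'Critical': 16, 'High': 33, 'Moderate': 7, 'Low': 7}
```

Summing across both machines also recovers the overall figures reported with Figure 3-1 (19 Critical, 56 High, 14 Moderate, 14 Low).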
The variation of the vulnerabilities collected for the Control and Test VMs is aligned with the degradation assumptions. Based on the minimal risk impact of the moderate and low vulnerabilities, the likelihood of new vulnerabilities developing within the moderate and low categories is minimal. Security
vulnerabilities with the greatest probability to exploit a system
are identified as a high or
critical vulnerability. Tenable Nessus did not identify a rise in
the deployment of
moderate or low vulnerabilities throughout the data collection
period.
Figure 3-2 Total System Vulnerabilities Control/Test Virtual
Machine
The increase in vulnerabilities for the Test VM will enable the
SCMN to capture
learning variables to perform a continuous monitoring assessment.
Differentiation of the
vulnerabilities contributes to the data repository for future
execution and potential
expansion of the SCMN. System vulnerabilities are categorized
according to the impact
of the threat and the probability of exploitation. In many cases, a
system vulnerability is
reported multiple times for an identical threat. In error, the
automated vulnerability
assessment tools can report components of a threat and label the
components as unique
threats. This type of reporting is called false positive reporting.
To mitigate this issue in
the data set, the system vulnerabilities were consolidated
according to the origin of the
threat, not the various components of the threat.
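The consolidation step can be sketched as follows; the findings, field names, and plugin identifiers are hypothetical and do not reflect Tenable Nessus's actual export schema:

```python
# Hypothetical scan findings: one threat origin can appear as several
# component-level findings (false positive reporting).
findings = [
    {"origin": "MS17-010", "component": "SMBv1 server", "severity": "Critical"},
    {"origin": "MS17-010", "component": "SMB signing",  "severity": "High"},
    {"origin": "CVE-2018-0886", "component": "CredSSP", "severity": "High"},
]

def consolidate(findings):
    # Keep one entry per threat origin, retaining the highest severity seen,
    # so components of the same threat are not counted as unique threats.
    rank = {"Low": 0, "Moderate": 1, "High": 2, "Critical": 3}
    by_origin = {}
    for f in findings:
        cur = by_origin.get(f["origin"])
        if cur is None or rank[f["severity"]] > rank[cur["severity"]]:
            by_origin[f["origin"]] = f
    return list(by_origin.values())

unique = consolidate(findings)
```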
Identifying the origin of the threat determines the best method to remediate or mitigate the vulnerability. During the seven (7)
weeks of data collection,
each vulnerability report for the Control and Test virtual machines generated hundreds of system vulnerabilities that were consolidated weekly to report only the origin of the
system vulnerability and its severity. The false positive
vulnerabilities were excluded
from both the manual and SCMN assessments.
3.2 Information Assurance Hierarchy
By ODNI regulations, each Federal agency appoints a Chief
Information Security
Officer (CISO) to regulate system security authorizations. The CISO
is a Federal government employee who develops the security baseline,
thresholds, and objectives
within a federal agency for the authorization or denial of an AIS,
performance of security
tools, and implementation of security processes (NIST 800-137,
2011). To prevent the CISO function from becoming a single point of failure within the government agency, AIS authorization functions are delegated to the Designated Authorizing Official (DAO), who manages the day-to-day functions within the cybersecurity risk management group. In the
quantitative case study, the expert DAOs determined the security
baseline, threshold, and
objectives. Depending on the infrastructure and the quantity of systems to authorize and
maintain, the security authorization function is further delegated
to the Designated
Authorizing Officer Representative (DAOR). The multi-tier
delegation of authority CISO
> DAO > DAOR ensures support of the agency security functions
while maintaining
compliance with ODNI regulations.
3.3 Cybersecurity Baseline Development
The foundation of an accurate security assessment is a consistent
baseline. To
validate the accuracy of the SCMN and manual security assessments,
the cybersecurity
baseline was developed. Through the use of expert sampling, the
five (5) Designated Authorizing Officials (DAOs) were tasked to establish the
cybersecurity baseline using the
vulnerability reports generated by Tenable Nessus. In compliance
with the NIST 800-
128, Guide for Security-Focused Configuration Management of
Information Systems, the
standard process for developing the cybersecurity baseline for
Federal systems is as
follows:
• Each DAO performed an independent security assessment for the
seven (7) weeks
of system vulnerabilities collected on the Control and Test virtual
machines.
• Independent analysis of each vulnerability considered the system
environment,
threat vector, and probability to exploit the vulnerability.
• The DAOs collaborated to evaluate each independent security
assessment and
determine a consensus for the authorization decision for the weekly
security
vulnerability reports.
• The finalized authorization decisions for the weekly
vulnerability reports are
approved by the DAOs to become the cybersecurity baseline.
• The expert DAOs determined the continuous monitoring performance threshold and objective to test against the baseline.
The integrity of the cybersecurity baseline determines whether the performance of the SCMN can substantiate the feasibility of the quantitative case study to automate and perform an accurate continuous monitoring cybersecurity assessment. The baseline was established by the expert DAOs. The continuous monitoring performance of five (5) expert DAORs and the SCMN was compared against the established baseline.
3.4 Designated Authorizing Official Representative Manual
Assessment
The Designated Authorizing Official Representative (DAOR) is a
position
appointed by the CISO for the function of security assessment and
authorization of
Automated Information Systems (AIS) (NIST 800-137, 2011). To obtain
a system
authorization, the system owner must adhere to the steps of the
ICD-503 Risk
Management Framework and agency requirements before processing
mission data. The
collaboration between the system owner and DAOR produces security records to
records to
document the security posture of the AIS. The vulnerability scan
report is the primary
security record for the system owner to address vulnerabilities
detrimental to obtaining a
security authorization. The Plan of Action and Milestones
(POA&M) identifies the
system vulnerability, mitigation, and remediation of the AIS to
meet the threshold to
obtain a system authorization.
During the continuous monitoring process, the system owner must
validate
system maintenance of the security posture as indicated in the
ICD-503 RMF. The system
owner submits a DAOR request to initiate the continuous monitoring
process. The
duration of the continuous monitoring process is 90-days from DAOR
acceptance of the
request until the decision to continue or reject authorization of
the system. Throughout
the 90-day window, the DAOR reviews the system vulnerabilities and
provides an
assessment of each vulnerability to determine the likelihood of the
exploit and impact to
the system, connected networks, and data.
Continuous monitoring is a manual and labor-intensive task for the
DAOR that
requires expert applicability of the RMF, system environment,
mission, and classification
of the data to authorize or deny the system. The number of security vulnerabilities in a security report can number in the thousands across the various components that form a
system. At any point during continuous monitoring, the DAOR can
determine the
security posture of the AIS is beyond the threshold of compliance
and generate a
POA&M tasking the system owner to remediate or mitigate the
associated security
findings. At this point, the continuous monitoring process is halted to allow the
system owner to address the security vulnerabilities. Once
completed, the AIS is
rescanned for vulnerabilities, and the system owner must resubmit a
request for
continuous monitoring thus restarting the 90-day window to receive
a system
authorization. Systems with a compliant security posture can average 60 days to receive continuous monitoring authorization. Systems with severe security issues may take upwards of nine (9) months to receive continuous monitoring authorization.
The continuous monitoring process requires the system owner to shut down normal operations and data processing to verify the security
posture of the AIS. The
temporary shutdown is accounted for in the system operating budget,
but a prolonged
continuous monitoring effort can negatively affect the budget,
schedule, and resources of
the system owner. Producing an automated continuous monitoring
solution would
expedite security assessments of the system vulnerabilities
allowing ample time for the
system owners to address security issues and obtain a timely
continuous monitoring
authorization.
3.5 Security Continuous Monitoring Neural-Network Development
The Security Continuous Monitoring Neural-Network (SCMN) was developed using R-Studio on the Microsoft Windows 10 platform. In Figure 3-3,
the logic of the
SCMN source code follows the continuous monitoring process to prioritize risks, identify controls, identify information, and implement monitoring (NIST 800-137, 2011). Step
1: Prioritize Risks ensures organizational risks are aligned with
the organizational
objectives. Misaligned risks could result in wasted effort throughout the organization. Step 2:
Identify Controls is the understanding of the internal system
controls of the organization
and providing assurance the risks are aligned with step 1. Step 3:
Identify Information is
verifying the effectiveness of the controls being implemented. Step
4: Implement
Monitoring is the maintenance of the security posture to ensure the
integrity of security.
Figure 3-3 Continuous Monitoring (Norman Marks, 2011)
The SCMN is a multilayered neural-network that consists of four (4) primary processing modules to perform continuous monitoring assessments.
The four processing modules align with the continuous monitoring process while minimizing redundant functions within the neural-network.
Optimization of the SCMN will provide substantial results on a
variety of system
configurations versus producing an inefficient neural-network that
is dependent on a
significant amount of processing power to execute successfully. In
Figure 3-4, the
modules of the SCMN account for the input feature, neurons, bias
nodes, and the output
of the test data. The input feature is the ingest of the scan data
produced from Tenable
Nessus. The neurons are the decision nodes that mimic the
neurological process of the
human brain to process variables for the continuous monitoring
assessment. The bias
nodes provide flexibility to allow for errors to enable the SCMN to
learn and have more
than one output. The output of the SCMN is the final decision to
approve or disapprove
the security authorization based on the continuous monitoring
criteria.
3.6 Training the Artificial Neural Network
The SCMN is a supervised-learning Recurrent Artificial Neural-Network (RNN) design with one (1) input, X1 in Table 3-1, and two (2) outputs (Y = 0, Y = 1). The
recurrent design was selected because the continuous monitoring process contains two (2) feedback loops. The
arrangement of the
SCMN is aligned with the manual continuous monitoring process
performed by the
DAOR. The input for X1 is the vulnerability scan data in Appendix
A. The Y-value is the
product of the weighted values to determine the authorization
decision with a permanent
fault tolerance for the fixed logic values. The fault tolerance of the SCMN is the Stuck-at fault model, where the data is stuck-at-0 (x ≤ −1) or stuck-at-1 (x ≥ 1) during defects (Torres-Huitzil & Girau, 2017). The stuck-at fault model is a commonly used binary model for fault tolerance and is appropriate for detecting faults in the SCMN. The acceptable threshold for fault tolerance is 0.1%, and the SCMN remained within the threshold throughout the seven (7) weeks of security assessments.
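The stuck-at clamping described above can be sketched as follows; this is an illustrative Python rendering, not the praxis's R implementation:

```python
def stuck_at(x):
    # Stuck-at fault model (Torres-Huitzil & Girau, 2017): values at or
    # below -1 are stuck-at-0, values at or above +1 are stuck-at-1;
    # everything in between passes through unchanged.
    if x <= -1:
        return 0
    if x >= 1:
        return 1
    return x
```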
The SCMN was supervised using an empirical risk minimization
learning
algorithm in R-Studio to seek the function that best fits the
training dataset (Vapnik,
1992). The foundation of the supervised algorithm is the empirical
risk minimization
function as seen in Equation (3.1), where {w} represents the weights of the neural network and {l} the loss function.

R_emp(w) = (1/n) Σᵢ l(f(xᵢ; w), yᵢ)        Empirical Risk Minimization Function (3.1)
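As an illustration only, Equation (3.1) can be sketched in Python, with a linear model and squared loss as assumed stand-ins for f and l (the training data shown is hypothetical):

```python
def empirical_risk(w, data, loss):
    # R_emp(w) = (1/n) * sum of loss(f(x; w), y) over the training set,
    # where f is a simple linear model used here for illustration.
    n = len(data)
    return sum(loss(sum(wi * xi for wi, xi in zip(w, x)), y)
               for x, y in data) / n

squared = lambda pred, y: (pred - y) ** 2
data = [((1.0, 0.5), 1), ((0.2, 0.1), 0)]   # hypothetical (features, label) pairs
risk = empirical_risk((0.5, 0.5), data, squared)
```

A supervised learner then seeks the weights {w} that minimize this quantity over the training dataset.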
The dataset used for training the SCMN derives from vulnerabilities
produced
from a previously accredited unclassified system within an academic
institution
(Appendix A). The previously accredited system is independent from
the vulnerability
reports produced by the Control and Test VMs. The SCMN is trained
using the training
dataset to perform authorization assessments for the Standalone,
Hybrid, and Enterprise
environments. The accredited system was based on the Microsoft
Windows platform and
produced Tenable Nessus vulnerabilities similar to the
vulnerabilities identified in the
Control and Test virtual machines. The accredited system contained
the following fifteen
(15) vulnerabilities: 2 - Critical, 3 - High, 7- Moderate, 3 – Low
(Appendix A). Use of
the vulnerabilities provided an array of data for the SCMN to
develop the weighted
values to perform an accurate continuous monitoring security
assessment.
To test the design of the SCMN, the initial weights were randomly assigned within the range of −1 to +1 to determine the process flow of the dataset (Al-Hamadani, 2015). The purpose of the range for the weighted values
is to produce
controllable results to determine the functionality of the SCMN as
shown in Figure 3-5.
Figure 3-5 SCMN with Weighted Values
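A brief sketch of this initialization step, assuming a uniform draw over [−1, +1]; the seed and layer size shown are hypothetical:

```python
import random

def init_weights(n, seed=None):
    # Randomly assign n initial weights within [-1, +1] (Al-Hamadani, 2015)
    # to exercise the process flow of the dataset; seeding makes the
    # draw reproducible for controllable results.
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

weights = init_weights(6, seed=42)
```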
The SCMN intends to produce authorization responses in a binary
format. The
threshold value of the SCMN determines the logical output of the
SCMN as 1 or 0. As
seen in Equation (3.2), the threshold relationship of the SCMN
output is represented
using the binary step function (Al-Hamadani, 2015).
f(x) = 1 if x ≥ 0; f(x) = 0 if x < 0        Binary Step Function (3.2)
The binary step function determines the threshold of the SCMN output response. If x < 0, the value of 0 is produced as the authorization decision. If x ≥ 0, the value of 1 is produced as the authorization decision. Use of the binary step function is critical to the success of the SCMN development and implementation. The bias nodes of the SCMN are set to +1 to ensure the weighted values of the neural nodes are consistent in producing output responses within the threshold of the binary step function.
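The binary step threshold and the fixed +1 bias can be sketched as follows; the weights shown are hypothetical, not the SCMN's trained values:

```python
def binary_step(x):
    # Binary step activation, Equation (3.2): output 1 when the weighted
    # sum is >= 0 (authorize), otherwise 0 (deny).
    return 1 if x >= 0 else 0

def node_output(inputs, weights, bias=1.0):
    # Bias node fixed at +1 so node responses stay consistent with the
    # binary step threshold.
    return binary_step(sum(i * w for i, w in zip(inputs, weights)) + bias)
```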
3.7 Data Validation
Data validation was conducted using forward propagation and back propagation to verify the integrity of the data output of the SCMN. The learning process of the SCMN contains the two (2) conditions of forward propagation and back propagation. In Equation (3.3), forward propagation can be represented as a sigmoid function:

σ(x) = 1 / (1 + e^(−x))        Sigmoid Function: Forward Propagation (3.3)

For inputs in the range −∞ ≤ x ≤ ∞, the sigmoid function outputs values between 0 and 1, representing the probability of the security authorization decision. The SCMN
values of 1 are the
approval of the security authorization, while 0 represents the
disapproval of the security
authorization. The data values for forward propagation are
represented to test each node
to observe the output. Back propagation is the validation of data
using the outputs to
traverse the integrity of the SCMN. In Equation (3.4), using the derivative of the sigmoid function, the two outputs of the SCMN can be validated:

σ′(x) = σ(x)(1 − σ(x))        Sigmoid Derivative Function: Back Propagation (3.4)
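A minimal sketch of Equations (3.3) and (3.4), assuming the standard sigmoid and its derivative (illustrative Python, not the SCMN's R source):

```python
import math

def sigmoid(x):
    # Equation (3.3): forward propagation squashes any real input
    # into (0, 1), read as the probability of authorization.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # Equation (3.4): derivative used during back propagation,
    # sigma'(x) = sigma(x) * (1 - sigma(x)).
    s = sigmoid(x)
    return s * (1.0 - s)
```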
Once the forward propagation and back propagation were verified, the SCMN began to operate as intended and learn from the datasets. In Figure
3-6, the logical
structure developed for the SCMN follows the logic of the DAOR to
perform a manual
continuous monitoring assessment. The neural and bias nodes perform
the weighted
actions to determine the output of the SCMN.
Figure 3-6 SCMN Logic Model
Each segment of the logic model represents the process and action
required to complete
the step. The SCMN traverses the following:
• Input Node: Ingest the vulnerability scans from Tenable
Nessus.
• Bias Node: The consolidation of the scan results.
• Neural Node: The consolidated scan results are parsed according
to the severity of
the risk (Critical, High, Moderate, and Low).
• Bias Node: Keyword search in the data repository for
vulnerability type.
• Neural Node: Apply values referencing the vulnerability for the appropriate environment (Standalone, Hybrid, Enterprise).
• Bias Node: Assessment of the environment values.
• Output Node: Authorization decision for the neural-network.
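The traversal above can be sketched as a simple pipeline; the severity weights and environment tolerances below are invented for illustration and are not the SCMN's trained values:

```python
# Hypothetical sketch of the SCMN logic-model traversal:
# ingest -> consolidate -> parse by severity -> repository lookup ->
# environment weighting -> authorization decision.
SEVERITY_WEIGHT = {"Critical": -1.0, "High": -0.5, "Moderate": -0.1, "Low": -0.05}
ENV_TOLERANCE = {"Standalone": 1.5, "Hybrid": 1.0, "Enterprise": 0.5}  # assumed

def assess(findings, environment):
    # Sum the severity penalties of the consolidated findings and compare
    # against the environment's tolerance; output 1 (authorize) or
    # 0 (deny), mirroring the output node.
    score = sum(SEVERITY_WEIGHT[sev] for sev in findings)
    return 1 if score + ENV_TOLERANCE[environment] >= 0 else 0

decision = assess(["High", "Moderate", "Low"], "Standalone")
```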
Throughout the training of the SCMN, the errors produced by forward and back propagation drove changes to the weighted values to perform an accurate continuous monitoring function. To prevent the SCMN from
memorization of the data,
the dataset categories were reduced to test the robustness of the
SCMN. The conclusion
of the SCMN yields positive results for use in the quantitative
case study.
Having both the SCMN and the DAORs use the same logic model ensures the guidance for continuous monitoring is consistent. Each DAOR is a
subject matter expert
in performing system security assessments. The experience between
the DAORs ranges
from 5 years to 15 years with an average of two thousand (2000) system security
authorizations. The purpose of the manual DAOR assessment is to
demonstrate and
capture the subjective responses in performing a continuous
monitoring assessment. The
manual system security assessment is based on the following
criteria:
1. Each security assessment performed by a DAOR complies with the
ICD-
503 Risk Management Framework.
2. The DAOR tester will remain anonymous and be assigned an identifier, Tester 1 through Tester 5.
3. Each weekly security assessment will be performed independently
by the
DAOR and collected weekly.
4. The DAOR will not disclose the results of the security
assessment.
3.8 Summary
Chapter 3 provided the methodology of the SCMN to develop a
cost-effective and
efficient neural-network tool to automate continuous monitoring.
The potential efficiency
of the SCMN will determine if an AI implementation is feasible for
other cybersecurity
functions in the Federal government. Developing the SCMN to perform against the cybersecurity baseline presented challenges in ensuring all functions needed to perform an accurate continuous monitoring assessment were identified within the recurrent neural-network. Based on the initial results of the forward and back
propagation, the SCMN is
operating as designed.
Chapter 4 - Results
The purpose of the quantitative case study was to develop a
neural-network to
automate continuous monitoring. The development of the SCMN was
successful in
automating continuous monitoring to perform DAOR system security
assessment
functions. The alignment of the SCMN to the RMF ensures the
governance of the
cybersecurity assessment. The data collection, analysis, and
decisions were based on the
RMF guidance for continuous monitoring. This chapter will show the
data captured and
the study of the data about the performance of the SCMN.
4.1 Descriptive Statistics
Tables 4-1 through 4-7 show the descriptive statistics for the weekly data collected from the SCMN and the manual responses from the five (5) DAORs. To validate the integrity of the
descriptive statistics, the collection of ten (10) security experts, separated into two (2) groups of five (5), formed the foundation of the quantitative case study by developing the cybersecurity baseline (DAOs) and performing manual continuous monitoring assessments against the baseline (DAORs).
The assessment and analysis of the data were repeated for seven (7)
weeks. Each DAOR
was required to assess the weekly vulnerabilities based on the
risk, severity, and the
threat vectors to the AIS environment. At the end of the weekly assessment, a final
authorization decision was determined per AIS environment
(standalone, hybrid,
enterprise).
The SCMN assessment was completed weekly to assess the system vulnerabilities identified
for the Control and Test virtual machines and the threat vectors to
the AIS environments
and matched against the DAOR manual responses and cybersecurity
baseline. The
conclusion of each week resulted in responses from all DAORs and
the SCMN. The
similarity of data from system vulnerabilities and security
authorizations from both the
manual DAOR responses and automated SCMN response will demonstrate
the alignment
of the SCMN to perform an automated continuous monitoring
assessment. The alignment
between the DAORs, SCMN, and cybersecurity baseline assessments is shown in Table
4-1. The highlighted assessments represent a misalignment between
the DAORs and
SCMN against the cybersecurity baseline.
Table 4-1 Week 1: Security Assessment
The completion of Week 1 of the quantitative case study yielded positive results in support of automating a security continuous monitoring assessment.
The vulnerabilities
identified in Week 1 are as follows:
Control VM: Critical – 0, High – 2, Moderate – 1, Low - 1
Test VM: Cr