
Testing for Software Security

Rethinking security bugs

Herbert H. Thompson and James A. Whittaker

Herbert is Director of Security Technology for System Integrity LLC (http://www.sisecure.com). James is a professor of computer science at the Florida Institute of Technology. Herbert and James are coauthors of How to Break Software Security (Addison-Wesley). They can be contacted at [email protected] and [email protected], respectively.

Security bugs are different from other types of faults in software. Traditional nonsecurity bugs are usually specification violations; the software was supposed to do something that it didn't do. Security bugs, however, typically manifest themselves as additional behavior: something extra the software does that was not originally intended. This can make security-related vulnerabilities particularly hard to find because they are often masked by software doing what it was supposed to.

Traditional testing techniques, therefore, are not well equipped to find these kinds of bugs. Why? For one thing, testers are trained to look for missing or incorrect output; they see only the correct behavior and neglect to look for other side-effect behaviors that may not be desirable.

For instance, the circle on the left in Figure 1 represents the specification: what the software is supposed to do. The circle on the right represents the true functionality of the application: what the software actually does. Developers and testers are painfully aware that these circles never completely overlap. The area on the left represents either incorrect behavior (the software was supposed to do A but did B instead) or missing behavior (the software was supposed to do A and B but did only A). Traditional software testing is well equipped to detect these types of bugs.

Security bugs, however, do not fit well into this model. They tend to manifest as side effects; for instance, the software was supposed to do A, and it did, but in the course of doing A, it does B as well. Imagine a media player that flawlessly plays any form of digital audio or video, but manages to do so by writing the files out to unencrypted temporary storage. This is a side effect that software pirates would be happy to exploit.

It is important that as you verify functionality, you also monitor for side effects and their impact on the security of your application. The problem is that these side effects can be subtle and hidden from view. They could manifest as file writes or registry entries, or, even more obscurely, as a few extra network packets that contain unencrypted, supposedly secure data.

Luckily, there are both commercially and freely available tools, such as Mutek's AppSight (http://www.identify.com/products/appsightsuite.html) and Holodeck Lite (http://se.fit.edu/holodeck/), respectively, that let you monitor these hidden actions. Another option is to write your own customized monitoring solution, such as injecting a custom DLL into the running application's process space.
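As an illustration of a homegrown monitor, the sketch below watches a directory for file activity using the Win32 ReadDirectoryChangesW API and prints each file that changes while the application under test runs. It is a minimal sketch, not a complete solution; watching the temp directory is just one plausible choice, and a real monitor would also record action types and timestamps.

// watchdir.cpp - minimal file-activity monitor (sketch).
// Watches the user's temp directory and prints each changed file name.
#include <windows.h>
#include <stdio.h>

int main()
{
    char tempPath[MAX_PATH];
    GetTempPathA(MAX_PATH, tempPath);

    // FILE_FLAG_BACKUP_SEMANTICS is required to open a directory handle.
    HANDLE hDir = CreateFileA(tempPath, FILE_LIST_DIRECTORY,
        FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
        NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    if (hDir == INVALID_HANDLE_VALUE)
        return 1;

    DWORD buffer[1024];   // DWORD-aligned, as the API requires
    DWORD bytes;
    printf("Watching %s for file activity...\n", tempPath);
    while (ReadDirectoryChangesW(hDir, buffer, sizeof(buffer), TRUE,
        FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
        &bytes, NULL, NULL))
    {
        // Walk the packed FILE_NOTIFY_INFORMATION records.
        FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *) buffer;
        for (;;)
        {
            printf("change: %.*ls\n",
                (int)(fni->FileNameLength / sizeof(WCHAR)), fni->FileName);
            if (fni->NextEntryOffset == 0)
                break;
            fni = (FILE_NOTIFY_INFORMATION *)
                ((BYTE *) fni + fni->NextEntryOffset);
        }
    }
    CloseHandle(hDir);
    return 0;
}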

Creating a Plan of Attack

Software takes input from many different sources. Users, operating-system kernels, other applications, and filesystems all supply input to applications. You have control over these interfaces, and by carefully orchestrating attacks through them, you can uncover many vulnerabilities in the software. Figure 2 is a simple model of software and its interaction with the environment. This model gives you a way to conceptualize these interactions. The four principal classes of input in Figure 2 are:

• Human interface (UI). Implemented as a set of APIs that get input from the keyboard, mouse, and other devices. Security concerns from this interface include unauthorized access, privilege escalation, and sabotage.

• Filesystem. Provides data stored in either binary or text format. Often, the filesystem is trusted to store information such as passwords and sensitive data. You must be able to test the way in which this data is stored, retrieved, encrypted, and managed for security.

• API. Operating systems, libraries, and other applications supply inputs and data in the return values of API calls. Most applications rely heavily on other software and operating-system resources to perform their required functions. Thus, your application is only as secure as the other software it uses and how well equipped it is to handle bad data arriving through these interfaces.


• Operating-system kernel. Provides memory, file pointers, and services such as time and date functions. Any information that an application uses must pass through memory at one time or another. Information that passes through memory in an encrypted form is generally safe, but if it is decrypted and stored even momentarily in memory, then it is at risk of being read by hackers. Encryption keys, CD keys, passwords, and other sensitive information must eventually be used in an unencrypted form, and their exposure in memory needs to be minimized (see the sketch after this list). Another concern with respect to the operating system is stress testing for low-memory and other faulty operating conditions that may cause an application to crash. An application's tolerance to environmental stress can prevent denial of service, and also situations in which the application crashes before it completes some important task (like encrypting passwords). Once an application crashes, it can no longer be responsible for the state of stored data. If that data is sensitive, then security may be compromised.
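One habit to look for when reviewing how an application handles secrets is whether it wipes them from memory as soon as they are no longer needed. The fragment below is a minimal sketch of the idea; it assumes the Win32 SecureZeroMemory call (available on Windows XP and later), which, unlike a plain memset, is guaranteed not to be optimized away.

#include <windows.h>
#include <stdio.h>

void check_password(void)
{
    char password[64];
    printf("Password: ");
    fgets(password, sizeof(password), stdin);

    // ... use the password in its unencrypted form ...

    // Wipe the secret the moment it is no longer needed, shrinking the
    // window in which a crash dump or memory scan could expose it.
    SecureZeroMemory(password, sizeof(password));
}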

At first glance, it seems as if you could organize a plan of attack by looking at each method of input delivery individually, and then bombard that interface with input. For security bugs, though, most revealing attacks require you to apply inputs through multiple interfaces. With this in mind, we scoured bug databases, incident reports, advisories, and the like, identifying two broad categories of attacks that can be used to expose vulnerabilities: dependency attacks and design-and-implementation attacks.

Attacking Dependencies

Applications rely heavily on their environment to work properly. They depend on the OS to provide resources such as memory and disk space, the filesystem to read and write data, the registry to store and retrieve information, and on and on. These resources all provide input to the software; not as overtly as human users do, but input nonetheless. Like any input, if the software receives a value outside of its expected range, it can fail.

When failures in the environment occur, error-handling code in the software (if it exists) gets called. Error handlers tend to be the weak point of an application in terms of security. One reason for this is that failures in the software's environment that exercise these code paths are difficult to produce in a test-lab situation. Consequently, tests that involve disk errors, memory failures, and network problems are usually only superficially explored. It is during these periods that the software is at its most vulnerable and where carefully conceived security measures break down. If such situations are ignored and other tests pass, we are left with a dangerous illusion of security. Servers do run out of disk space, network connectivity is sometimes intermittent, and file permissions can be improperly set. Such conditions cannot be ignored as part of an overall testing strategy. What's needed is a way to integrate these failures into your tests so that you can evaluate their impact on the security of the product itself and its stored data.

Creating environmental failure scenarios can be difficult, usually requiring you to tamper with the application code to simulate specific failing responses from the operating system or some other resource. This approach isn't very feasible in the real world, however, because of the amount of time, effort, and expertise it takes to simulate just one failure in the environment. Even if you did decide to use this approach, the problem is determining where in the code the application uses these resources and how to make the appropriate changes to simulate a real failure in the environment.

One alternative approach is run-time fault injection: simulating errors to the application in a black-box fashion at run time. This approach is nonintrusive and lets you test production binaries, not just contrived versions of your applications that have return values hard coded. There are several ways to do this; in the example presented here, we overwrite the first few bytes of the actual function to be called in the process space and insert a JMP statement to our fault-injection code in its place. There are other methods that can be used as well, such as modifying the import address tables, a technique for which we have found Jeffrey Richter's Programming Applications for Microsoft Windows, Fourth Edition (Microsoft Press, 1999) to be an excellent reference.

Using these techniques, you can redirect a particular system call to your own imposter function. One passive use for this is to simply log events. This can be informative for the security tester because it lets you watch the application for file, memory, and registry activity.
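For instance, the same trampoline technique shown in Listing One can be pointed at CreateFileW to build a simple file-activity logger. This is a sketch under the assumption that real_CreateFileW has been set up exactly the way real_LoadLibraryExW is in Listing One; the log file name is arbitrary.

#include <windows.h>
#include <stdio.h>

typedef HANDLE (WINAPI *createfile_t)(LPCWSTR, DWORD, DWORD,
    LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE);
createfile_t real_CreateFileW;   // initialized as in Listing One

HANDLE WINAPI imposter_CreateFileW(LPCWSTR lpFileName,
    DWORD dwDesiredAccess, DWORD dwShareMode, LPSECURITY_ATTRIBUTES lpSA,
    DWORD dwDisposition, DWORD dwFlags, HANDLE hTemplate)
{
    // Passive monitoring: record the file name and requested access,
    // then forward the call to the real function unchanged.
    FILE *log = fopen("c:\\filemon.log", "a");
    if (log)
    {
        fwprintf(log, L"CreateFileW: %s (access 0x%08x)\n",
                 lpFileName, dwDesiredAccess);
        fclose(log);
    }
    return real_CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                            lpSA, dwDisposition, dwFlags, hTemplate);
}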

At this point, you are in control of the application and can either forward a system request to the actual OS function or deny the request by returning any error message you choose. This technique is illustrated in the first attack.

Block access to libraries. Applications rely on external software libraries to get work done. Operating-system libraries and third-party DLLs are critical for the application to function properly. As testers and developers, it is your responsibility to ensure that failures here do not compromise the security of your application. By preventing a library from loading, you deprive the application of some functionality it expected to use. If the application does not react to this failure by displaying an error message, this may be a sign that appropriate checks are not in place and that the software may be unaware that this code did not load. If the library in question provides a security service, then all bets are off.

You can block a library from loading in Windows by intercepting the LoadLibraryExW function.


Figure 1: Intended versus implemented software behavior. (Two overlapping circles: the intended functionality and the actual software functionality. Intended behavior that is missing or implemented incorrectly shows up as traditional faults; actual behavior outside the specification is unintended, undocumented, or unknown functionality.)

Figure 2: A look at software's users. (The application under test exchanges input with four interfaces: the UI, the API, the filesystem, and the operating-system kernel.)


For instance, consider a publicized bug in Internet Explorer's Content Advisor feature (see "Exposing Software Security Using Runtime Fault Injection" in Proceedings of the ICSE Workshop on Software Quality, 2002). If you turn the feature on, all web sites that don't have a RSACi rating are blocked by default. (The Recreational Software Advisory Council on the Internet, RSACi, rating is assigned to a web site based on its content. This rating system was replaced in 1999, however, with the Internet Content Rating Association, ICRA, rating system.) Listing One is the C++ source code of a DLL you can inject into the application to hook the function LoadLibraryExW on Windows XP. Our DLL overwrites the first few bytes of this function in the process space of the application under test.

These bytes are replaced with a JMP statement to the memory address of our imposter function, imposter_LoadLibraryExW.

The problem with IE’s Content Advisoris that if IE fails to load the library msrat-ing.dll, users can surf the Web unrestrict-ed. Our imposter function checks to seewhether the library that the application isattempting to load is msrating.dll; if so, itblocks the library from being loaded byreturning NULL (indicating failure) to theapplication.

You can uncover clues to library dependencies such as this by changing the code in the imposter function, either to alert you when a specific call is made or to log all such calls and their parameters to a file.

It then takes a little detective work to determine which services the library is providing to the application and when they are used. With a few modifications to the imposter function, you can then determine what would happen if that functionality were to be denied. Listing Two is the source of the executable used to inject our DLL into the target application's process space.

In addition to LoadLibraryExW, this code can easily be modified to intercept other system calls and to monitor and/or selectively deny them at run time. We have developed a freeware tool called "Holodeck Lite" (available electronically at http://se.fit.edu/holodeck/ and from DDJ; see "Resource Center," page 5), using techniques similar to those in Listing One, to help you easily monitor and obstruct common system calls.

Manipulate registry values (Windows specific). The problem with the registry is trust. When developers read information from the registry, they trust that the values are accurate and haven't been tampered with maliciously. This is especially true if their code wrote those values to the registry in the first place. One of the most extreme vulnerabilities is when sensitive data, such as passwords, is stored unprotected in the registry.

More complex information can cause problems, too. Take, for example, "try and buy" software, where users have either limited functionality or a time limit in which to try the software, or both. In these cases, the application can then be unlocked if it is purchased or registered. In many cases, the check an application makes to see whether users have purchased it is to read a registry key at startup. We've found that in some of the best cases, this key is protected with weak encryption; in some of the worst, it's a simple text value: 1 purchased; 0 trial.
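To see how fragile such a check is, consider the sketch below. The key and value names here are invented for illustration; a real test would target whatever key the application under test actually reads at startup, which the monitoring techniques described earlier can reveal.

#include <windows.h>

int main()
{
    HKEY hKey;
    DWORD purchased = 1;    // pretend the product has been purchased

    // "Software\\ExampleApp" and "Purchased" are hypothetical names.
    if (RegOpenKeyExA(HKEY_CURRENT_USER, "Software\\ExampleApp",
                      0, KEY_SET_VALUE, &hKey) == ERROR_SUCCESS)
    {
        RegSetValueExA(hKey, "Purchased", 0, REG_DWORD,
                       (const BYTE *) &purchased, sizeof(purchased));
        RegCloseKey(hKey);
    }
    return 0;
}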

Force the application to use corrupt/protected files and file names. A large application may read from, and write to, hundreds of files in the process of carrying out its tasks. It's the tester's job to make sure that applications can handle bad data gracefully, without exposing sensitive information or allowing unsafe behavior. This attack is carried out by taking a file that the application uses and changing it in some way the software may not have anticipated. For a file that contains a series of numerical data that the software reads, for instance, you may want to use a text editor and include letters and special characters. If successful, this attack usually results in denial of service, either by crashing the application or by bringing down the entire system. More creative changes may force the application to expose data during a crash that users would not normally have access to.
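Changing files by hand works for small text formats; for binary files, a few lines of code can do the tampering. The sketch below flips a handful of random bytes in a file the application reads, a crude but effective way to generate the unanticipated data this attack calls for.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(int argc, char *argv[])
{
    if (argc < 2)
    {
        printf("Usage: corrupt <file>\n");
        return 1;
    }
    FILE *f = fopen(argv[1], "r+b");
    if (!f)
        return 1;

    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    srand((unsigned) time(NULL));

    // Invert 16 bytes at random offsets.
    for (int i = 0; i < 16 && size > 0; i++)
    {
        long offset = rand() % size;
        fseek(f, offset, SEEK_SET);
        int c = fgetc(f);
        fseek(f, offset, SEEK_SET);   // reposition between read and write
        fputc(c ^ 0xFF, f);
    }
    fclose(f);
    return 0;
}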


Force the application to operate in low-memory/disk-space/network-availability conditions. Depriving applications of these resources lets testers understand how robust their application is under stress. The decision of which faults to try and when can only be determined on a case-by-case basis. A general rule of thumb, though, is to block a resource when an application seems most in need of it. For memory, this may be during some intense computation the application is doing. For disk errors, look for file writes/reads by the application, then start pounding it with faults. These faults can be simulated relatively easily by modifying the code in Listing One to intercept other system functions, such as CreateFile.
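For example, under the same hooking scheme as Listing One (and reusing the real_CreateFileW pointer from the earlier monitoring sketch), an imposter can make every write attempt fail as though the disk were full:

HANDLE WINAPI imposter_CreateFileW(LPCWSTR lpFileName,
    DWORD dwDesiredAccess, DWORD dwShareMode, LPSECURITY_ATTRIBUTES lpSA,
    DWORD dwDisposition, DWORD dwFlags, HANDLE hTemplate)
{
    if (dwDesiredAccess & GENERIC_WRITE)
    {
        // Inject the fault: report a full disk and see whether the
        // application's error handling preserves its data safely.
        SetLastError(ERROR_DISK_FULL);
        return INVALID_HANDLE_VALUE;
    }
    return real_CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                            lpSA, dwDisposition, dwFlags, hTemplate);
}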

Attacking Design and ImplementationIt’s difficult to identify all the subtle secu-rity implications of choices made duringthe design phase. Looking at a 200-pagespecification and asking “Is it secure?” willbe met with blank looks, even by the mostexperienced developers. Even if the de-sign is secure, the choices made by thedevelopment team during implementationcan have a major impact on the securityof the product. Here we present some at-tacks that have been effective at exposingthese types of bugs.

Force all error messages. This attack serves two purposes. The first is to see how robust the application is by trying values that should result in error messages and seeing how many are handled properly, improperly, or not at all. The second is to make sure that error messages do not reveal unintended information to a would-be intruder; for example, during authentication, having one error message appear when an incorrect user name is entered and a different error appear when a valid user name is entered but with an incorrect password. At this point, the attacker knows that they have a correct user name, which means there is now only one string value to attack: the password.

Seek out unprotected test APIs. Complex, large-scale applications are often difficult to test effectively by relying on the APIs extended to normal users alone. Sometimes there are multiple builds a day, each of which has to go through some suite of verification tests. To meet this demand, many applications include hooks that are used by custom test harnesses.

These hooks and corresponding test APIs often bypass the normal security checks done by the application for the sake of ease of use and efficiency. They are added for testers by developers with the intention of removing them before the software is released. The problem, though, is that these test APIs become so integrated into the code and the testing process that when the time comes for the software to be released, managers are reluctant to remove them for fear of destabilizing the code. It is critical to find these hooks and ensure that if they were to make it out into the field, they could not be used to open up vulnerabilities in the application.

Overflow input buffers. The first thing that comes to many people's minds when they hear the term "software security" is the dreaded buffer overflow.

son, it is important to test an application’sability to handle long strings in input fields.This attack is especially effective whenlong strings are entered into fields thathave an assumed, but often not enforced,length such as ZIP codes and state names.

API calls have been notorious for unconstrained inputs. As opposed to a GUI, where you can filter inputs as they are entered, API parameters must be dealt with internally, and checks must be done to ensure that values are appropriate before they are used. The most vulnerable APIs tend to be those that are seldom used or that support legacy functionality.

Connect to all ports. Sometimes applications open custom ports on machines to connect with remote servers. Reasons for this vary, from creating maintenance channels to automatic updates, or possibly as a relic of test automation. There are many documented cases (see http://www.ntbugtraq.com/) where these ports are left open and unsecured. It is important that the same scrutiny that's been given to communications through the standard ports (Telnet, FTP, and so on) be given to these application-specific ports and the data that flows through them.
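A first pass at finding such ports can be as simple as attempting a TCP connection to each one. The sketch below uses Winsock to probe a host (defaulting to the local machine); note that sweeping all 65,535 ports with blocking connects is slow, and a production scanner would use nonblocking sockets.

#include <winsock2.h>
#include <stdio.h>
#pragma comment(lib, "ws2_32.lib")

int main(int argc, char *argv[])
{
    const char *host = (argc > 1) ? argv[1] : "127.0.0.1";
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    for (int port = 1; port <= 65535; port++)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET)
            break;
        sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons((u_short) port);
        addr.sin_addr.s_addr = inet_addr(host);
        // A successful connect means something is listening.
        if (connect(s, (sockaddr *) &addr, sizeof(addr)) == 0)
            printf("port %d open\n", port);
        closesocket(s);
    }
    WSACleanup();
    return 0;
}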

Conclusion

Software security testing must go beyond traditional testing if we ever hope to release secure code with confidence. In this article, we have discussed a fault model that describes a paradigm shift from traditional bugs to security vulnerabilities, and outlined some of the attacks testers can use to better expose vulnerabilities before release. These attacks are only part of a complete security-testing methodology. Research into security vulnerabilities, their symptoms, and their habits has only just begun.

Acknowledgments

Thanks to Rahul Chaturvedi for providing code excerpts from Holodeck, and to Attila Ondi, Ibrahim El-Far, and Scott Chase for their input on this article.

DDJ

Listing One

#include "stdafx.h"
#include <windows.h>
#include <string.h>

typedef HMODULE (WINAPI *loadlibrary_t) (LPCWSTR, HANDLE, DWORD);

loadlibrary_t real_LoadLibraryExW;
DWORD dwAddr;

/* Our imposter function for the real LoadLibraryExW. All it does is check
   whether the incoming filename is msrating.dll and either returns NULL and
   sets an appropriate error, or lets the call go through to our saved header
   instructions of the real function, which then jump to the real function at
   the appropriate location. */
HMODULE WINAPI imposter_LoadLibraryExW(LPCWSTR lpFileName,
                                       HANDLE hFile, DWORD dwFlags)
{
    if (!_wcsicmp(lpFileName, L"msrating.dll"))
    {
        SetLastError(ERROR_FILE_NOT_FOUND);
        return NULL;
    }
    else
    {
        return real_LoadLibraryExW(lpFileName, hFile, dwFlags);
    }
}

BOOL APIENTRY DllMain(HANDLE hModule, DWORD ul_reason_for_call,
                      LPVOID lpReserved)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
    {
        // Allocate memory for copying the first few instructions of the
        // target function. Since the granularity of VirtualAlloc is a page,
        // might as well allocate 4096 bytes.
        real_LoadLibraryExW = (loadlibrary_t) VirtualAlloc(NULL, 4096,
            MEM_COMMIT, PAGE_EXECUTE_READWRITE);
        // Copy the first two instructions of LoadLibraryExW (which we know
        // add up to 7 bytes - we need 5 for our jump).
        memcpy((void *) real_LoadLibraryExW, (void *) LoadLibraryExW, 7);
        // Write a jump instruction out right after the copied instructions.
        // The jump is a relative near jump to the 8th byte of LoadLibraryExW.
        PBYTE pbCode = (PBYTE) real_LoadLibraryExW + 7;
        // Write opcode for jump near and move (write) pointer forward.
        *(pbCode++) = 0xe9;
        // Write out the address to jump to using a double-word pointer. That
        // way, the compiler stores it in the correct little-endian order.
        PDWORD pvdwAddr = (PDWORD) pbCode;
        // Write out address - the +3 = -4 +7 (for the offset into the function)
        *pvdwAddr = (DWORD) LoadLibraryExW - (DWORD) pbCode + 3;
        // Move (write) pointer forward the length of the address.
        pbCode += 4;
        DWORD dwOld, dwTemp;
        // Set the page with LoadLibraryExW to writeable.
        VirtualProtect((LPVOID) LoadLibraryExW, 4096,
            PAGE_EXECUTE_READWRITE, &dwOld);
        // Write out the jump.
        pbCode = (PBYTE) LoadLibraryExW;
        // Write opcode for a jump near at the beginning of LoadLibraryExW.
        *((PBYTE) LoadLibraryExW) = 0xe9;
        // Compiler gymnastics to move forward by *1* byte and not 4 to get
        // the exact address at which to write the target address for the jump.
        pvdwAddr = (PDWORD) (pbCode + 1);
        dwAddr = (DWORD) pvdwAddr;
        // Write the address.
        *pvdwAddr = (DWORD) imposter_LoadLibraryExW - (DWORD) LoadLibraryExW - 5;
        // Set the old protection back. This is very important for some Win32
        // functions. They refuse to work with writeable protection enabled.
        VirtualProtect((LPVOID) LoadLibraryExW, 4096, dwOld, &dwTemp);
        break;
    }
    }
    return TRUE;
}

Listing Two

#include "stdafx.h"
#include <windows.h>
#include <stdio.h>
#include <string.h>

/* This program uses one of the simplest injection techniques out there. It
   utilizes the fact that the parameters and calling convention for
   LoadLibrary are the same as for the thread function that is supplied to
   CreateThread/CreateRemoteThread. It uses that API to call LoadLibrary in
   the target process and load the desired DLL. */
int main(int argc, char* argv[])
{
    DWORD dwTemp;
    LPVOID pvDllName;

    if (argc < 3)
    {
        printf("Usage: inject commandline dllname.dll\n");
        return 0;
    }

    // Set up the required structures and start the process.
    PROCESS_INFORMATION pi = {0};
    STARTUPINFO si = {0};
    si.cb = sizeof(si);
    if (!CreateProcess(NULL, argv[1], NULL, NULL, false, NULL,
                       NULL, NULL, &si, &pi))
        goto error;

    // Allocate memory for the name of the DLL to be loaded (+1 for the
    // terminating NUL).
    if (!(pvDllName = VirtualAllocEx(pi.hProcess, NULL, strlen(argv[2]) + 1,
                                     MEM_COMMIT, PAGE_EXECUTE_READWRITE)))
        goto error;

    // Write out the name of the target DLL.
    if (!WriteProcessMemory(pi.hProcess, pvDllName, argv[2],
                            strlen(argv[2]) + 1, &dwTemp))
        goto error;

    // Technically, this executes LoadLibrary in the target process with the
    // name of the DLL as the first parameter. This relies on the fact that
    // kernel32.dll will NOT be relocated. Assuming that it isn't, the
    // address of LoadLibraryA in the target process is the same as ours.
    if (!CreateRemoteThread(pi.hProcess, NULL, NULL,
            (LPTHREAD_START_ROUTINE) LoadLibraryA, pvDllName, NULL, &dwTemp))
        goto error;
    return 0;

error:
    if (pi.hProcess)
        TerminateProcess(pi.hProcess, 0);
    printf("Error in injection!\n");
    return -1;
}

DDJ
