
The Sun Network File System

Martin Herbort, Martin Karlsch
Communication Networks Seminar WS 03/04
Hasso-Plattner-Institute for Software Systems Engineering
Postbox 90 04 60, 14440 Potsdam, Germany
{martin.herbort, martin.karlsch}@student.hpi.uni-potsdam.de

1 Introduction
2 The Sun Network File System
  2.1 Participating Protocols
  2.2 Implementation Details
    2.2.1 Implementing NFS through VFS
    2.2.2 NFS Operation Example
  2.3 Statelessness and Performance
  2.4 Alternatives
    2.4.1 RFS
    2.4.2 AFS
    2.4.3 OSF/DFS
    2.4.4 SMB
  2.5 Outlook on NFS v4
  2.6 Evaluation

Abstract

We will introduce you to the Sun Network File System (NFS). Before getting into the implementation details, we will outline the benefits of network transparency, explain the possibilities it opens up to the user and discuss how NFS enables this idiom. We then classify the NFS protocol with respect to the common ISO/OSI reference model and have a look at both the enabling technologies and important NFS-based services. Originally, NFS was developed for UNIX-based machines, so we provide more detailed information on the implementation in UNIX operating systems. As NFS is a stateless protocol, some issues arise that will also be discussed, including security and performance questions. Additionally, we briefly characterise and evaluate some alternatives to NFS. (mh,mk)

Keywords: AFS, DFS, Mount, network transparency, NFS, NIS, NLM, RFS, RPC, SMB, VFS.

1 Introduction

Working in a networked environment creates the need for everyone to access data from everywhere within that particular network. Typically, a networked environment consists of at least tens (if not hundreds) of computers that run under different operating systems and that differ in their technical equipment and in the purposes they are intended to be used for. It is clear that the probability of changes within this system depends on its size. But the user expects the system as a whole to appear unchanged every time he uses it. This also means that the file system structure he accesses must depend neither on where the data is really located, nor on the computer architecture. That is, he wants the network to be transparent.

The Sun Network File System (NFS) is a distributed file system that provides the functionality for transparent access to remote computers [2, p.93pp]. It lets the user mount remote file systems as if they were stored on the local machine. NFS is a file access protocol. Instead of transferring a required file to a local disk, as is necessary for file transfer protocols like FTP, data can be viewed and manipulated in place, i.e. on the remote machine effectively containing that file.

NFS mainly relies on the Sun Remote Procedure Call (RPC) protocol [1, p.109]. This implies a client-server relationship between participating computers. Machines that export their file systems to the network are called NFS servers. Machines that mount (import) file systems exported by NFS servers are called NFS clients [3, p.5]. A machine can be NFS server and NFS client at the same time. On a computer running NFS, remote file systems cannot be distinguished from local ones [1, p.1].

Every operation on a file system mounted via NFS is resolved to a set of RPC calls. We will introduce RPC and explain other underlying protocols in a later paragraph (§2.1). The operations defined by the NFS protocol “can be divided into file, directory, symbolic link and special device operations” [2, p.120]. Due to different file system semantics on different computer architectures, most NFS implementations only work with a subset of these operations. On the other hand, interoperability between any two implementations is a must (see below as well as §2.1 and §2.5).

A considerable part of this paper deals with the implementation of NFS (2.2). Although version 3 (from June 1995) is not the most recently developed version, it is the basis for this elaboration. Version 4, from April 2003, is still rarely used (e.g. the implementation in the upcoming Linux kernel 2.6 is experimental).

For client-server scenarios, it is important whether clients and/or servers have to keep state when using a particular protocol. NFS is called a stateless protocol because a particular NFS operation in general does not depend on preceding or succeeding NFS operations [2, p.122]. The term “in general” indicates that there are some stateful operations. In fact, caching of recently accessed data and locking of files are such stateful operations [3, p.3, p.9]. But all operations that build the core functionality (i.e. READ and WRITE) are kept stateless. There is a paragraph dedicated to the statelessness of NFS (2.3). It covers the problems that arise as well as how the NFS protocol deals with them.

As described more precisely in the evaluation paragraph (2.6), we conclude that NFS is a good and easy-to-use solution to achieve network transparency within Local Area Networks (LANs) that do not differ much with respect to the system architectures used. In highly heterogeneous networks, it is difficult to reach interoperability. This is because different client implementations of a stateless protocol like NFS may make different assumptions about server capabilities. Furthermore, some operations that are not supported natively may need to be “emulated”. The broadly used NFS version 3 suffers from bad performance when operating via the Internet. Higher overall performance is one of the design goals of NFS version 4 [4, p.9]. Although mass storage is cheap today, it is worth noting that NFS can also be used to realise clients without built-in mass storage (diskless clients).

Some figures show aspects of NFS operation using FMC modelling techniques. Although it is assumed that the reader of this paper is familiar with the fundamental concepts of FMC, we refer to [19] for further information. (mh)

2 The Sun Network File System

2.1 Participating Protocols

To classify the protocols described below, the following figure shows the NFS protocol stack related to the ISO/OSI layers 1 to 7.

The Sun Network File System itself is an application layer protocol. As mentioned above, NFS mainly relies on the Sun RPC (Sun Microsystems Remote Procedure Call) protocol. It is also referred to as ONC/RPC [2, p.150]. RPC is used as the carrier of requests to a server, where they are resolved to procedures (for details on RPC technology see the appropriate elaboration). NFS is implemented as a set of RPC procedures, each of which works on either an object of type “file” (e.g. reading, writing files, changing file attributes) or of type “file system” (e.g. file lookups, static information on a file system). As NFS depends on high throughput, the Sun designers decided to create NFS as an RPC protocol. This is because RPC was considered simple and highly performant [5, p.6]. RPC corresponds to the session layer within the ISO/OSI reference model.

Interoperability across operating system borders was a requirement, too. Different architectures potentially use different byte orders (big/little endian). Also, architecture-specific compilers could behave differently when assigning data to memory segments. Thus, there was the need for a standardized data presentation format. XDR (eXternal Data Representation) is such a format. As the name already indicates, it corresponds to the presentation layer within the ISO/OSI reference model. It standardizes the way data is encoded. XDR was initially developed by Sun Microsystems and is used by NFS on the presentation layer [2, p.13]. XDR data formats are specified in their own description language, the “Remote Procedure Call Language” (RPCL). All data types used by NFS and all procedures it provides are described in RPCL. This description language resembles the C programming language, which facilitates the comprehension of the NFS procedures, their parameters and their return values. (mh)
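To give an impression of what this standardization means in practice, the following minimal sketch shows how two basic XDR encodings could be produced by hand: an unsigned integer is always encoded as four bytes in big-endian order, and variable-length opaque data (such as a filehandle) is prefixed with its length and padded to a multiple of four bytes. The function names are our own; real implementations use an XDR library.

#include <stdint.h>
#include <string.h>

/* Encode an unsigned 32-bit integer in XDR: four bytes, big-endian,
 * independent of the host byte order. */
static void xdr_encode_uint32(unsigned char *buf, uint32_t value)
{
    buf[0] = (unsigned char)(value >> 24);
    buf[1] = (unsigned char)(value >> 16);
    buf[2] = (unsigned char)(value >> 8);
    buf[3] = (unsigned char)(value);
}

/* Encode variable-length opaque data (e.g. a filehandle): a length
 * prefix followed by the payload, zero-padded to a multiple of 4 bytes.
 * Returns the number of bytes written. */
static size_t xdr_encode_opaque(unsigned char *buf, const void *data, uint32_t len)
{
    size_t padded = (len + 3u) & ~3u;

    xdr_encode_uint32(buf, len);
    memcpy(buf + 4, data, len);
    memset(buf + 4 + len, 0, padded - len);
    return 4 + padded;
}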

Figure 1: NFS protocol stack

After we have described how RPC and XDR work, we will now discuss the most important functions defined by the NFS core protocol. They have to be offered and made callable through RPC by a server which implements the NFS protocol. For a detailed description refer to [3].

• NULL: Actually the simplest function. It does nothing and is only used for testing server response and measuring the time needed for that.

• GET/SETATTR: Used for getting/setting attributes of a file system object.

• LOOKUP: Searches a directory for a given filename and returns a corresponding file handle when found.

• ACCESS: Determines the access privileges a user has to a specified file system object. (Introduced in NFS v3 to make handling of systems which implement file security through Access Control Lists easier [6, p.2].)

• CREATE: This procedure is used for creating a regular file.

• MKDIR: Is used for creating directories.

• RMDIR: Represents the counterpart to MKDIR which means it removes the directory which is passed as an argument.

• READ/WRITE: If a client wants to read from or write to a file, these functions are used. They have to be called with a valid filehandle. For a read operation, the offset in the file and the number of bytes to read have to be passed to the function. As a result, the read data is returned. The write operation additionally needs the actual data to write as an argument and returns whether the operation succeeded or failed.

• COMMIT: Commits data which is still in the server cache to stable storage. (Also introduced in NFS v3; it makes safe asynchronous writes possible, see 2.2.)

• REMOVE: If the client wants to delete a file on the server, it invokes the REMOVE procedure with the name of the file to delete.

• RENAME: As the name already implies, this function is used for renaming files and directories.

• FSSTAT: When the client needs to retrieve volatile information about the file system state, this function should be used.

• FSINFO: For non-volatile information about the file system state or general information about the NFS v3 server implementation, a client can call this procedure. (We use the terms procedure and function interchangeably and do not differentiate between their meanings as is done, for example, in the context of programming languages.)

Additionally, there exist functions for creating and removing hard and soft (symbolic) links and functions like READDIRPLUS, which combine other operations. (READDIRPLUS was introduced in NFS v3 to eliminate the LOOKUP calls when scanning a directory; it returns attributes and file handles at the same time, see [3].)

Every operation has to return an appropriate error message if the operation fails. For example, imagine a REMOVE call issued on a non-existent file: the server would return a “file not found” constant as the result of the RPC. If a server cannot implement a function, it is also possible to return a “not implemented” message. This could happen, for example, on file systems which do not support symbolic links when SYMLINK is called.

Another important criterion which has to be met by the functions is that they are idempotent. That means an RPC can be repeated as many times as the client wants without causing any harm, e.g. damaging the file system. The return values should also stay the same. Idempotency will be explained in detail in 2.3. (mk)

As we will show in the next part, accessing data over NFS requires the clients to obtain an initial filehandle. For this, the Mount protocol is used. E.g. on UNIX systems, the command “mount jurassic:/export/test” is used on clients to obtain an NFS filehandle for the directory “/export/test” on computer ”jurassic”. The Mount protocol is one of many services that may run on a UNIX system and provides certain file system functionality to its clients. Every service is assigned to a port. This mapping typically is dynamic. That is, a portmapper service assigns a free port number to a service just when it starts up. This allows a more flexible use of the fixed set of ports. In that case, the portmapper is the only well-known service and has port number 111. The following figure shows the process of obtaining the initial filehandle described above. Time flows from top to bottom. The directory /export/data is the exported NFS file system. Hence, the initial filehandle is the filehandle representing this directory.
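As a small illustration of the port lookup step, the following sketch asks a portmapper for the ports of the Mount and NFS services using the traditional Sun RPC library. The program numbers 100005 (Mount) and 100003 (NFS) are the registered ONC RPC program numbers; the server address is a placeholder, and header names and library availability vary between systems.

#include <rpc/rpc.h>
#include <rpc/pmap_clnt.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct sockaddr_in srv;

    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port = htons(111);                         /* well-known portmapper port */
    inet_pton(AF_INET, "192.0.2.10", &srv.sin_addr);   /* example server address */

    /* Ask the portmapper which ports the Mount (100005) and NFS (100003)
     * services were assigned; 0 is returned if a service is not registered. */
    unsigned short mount_port = pmap_getport(&srv, 100005, 3, IPPROTO_UDP);
    unsigned short nfs_port   = pmap_getport(&srv, 100003, 3, IPPROTO_UDP);

    printf("mountd at port %u, nfsd at port %u\n", mount_port, nfs_port);
    return 0;
}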

For bigger networks, it is difficult to keep access rights consistent, because several clients from different domains may have the same name. The NIS (Network Information Service) protocol, also known as YP (Yellow Pages) protocol, is another application layer protocol that is used for UID/GID mapping consistency [5, p.231] and naming of files. When using NIS-based security, many clients connect to a single database to gain access rights for NFS file systems.

Figure 2: Obtaining the initial filehandle. Source: [5, p.256].

A fundamental feature of nearly every file system is concurrent access to a single file. To avoid data inconsistencies, the idea of file locking is used. File locking algorithms manage multiple concurrent accesses to single files. NFS version 3 does not implement file locking itself. Instead, file locking within NFS is based on another application layer protocol, the NFS Lock Manager protocol (NLM) [1, p.194]. Like the NFS protocol itself, NLM is defined as a set of RPC procedures. NLM supports advisory locking. That is, any reader or writer of a file should acquire a lock on that file, but this is not enforced. An application following this rule is called cooperating, otherwise it is called noncooperating. Windows operating systems typically support only mandatory locking, i.e. every write or read system call implicitly results in a sequence of lock – read/write – unlock. That is, Windows applications are always cooperating.

“The UNIX operating system has a tradition of supporting only advisory locking” [5, p.281]. As NFS was originally designed for UNIX systems, NLM implements an advisory locking rule. Hence, there is the possibility of data corruption, e.g. if a file locked by a (cooperating) Windows application is updated by a (noncooperating) UNIX application [5, p.281]. There are exclusive locks, also called write locks, preventing any other access to the locked file, and nonexclusive locks, also called read locks, that permit several clients to each hold their own nonexclusive lock on the same file.
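The following minimal sketch shows what a cooperating UNIX application looks like: it explicitly acquires an advisory lock via fcntl() before touching the file. On an NFS-mounted file, the kernel client typically forwards such a request to the server via NLM. The path is only an example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/nfs/a.txt", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct flock lk = { 0 };
    lk.l_type = F_WRLCK;      /* exclusive (write) lock */
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;             /* 0 means: lock the whole file */

    if (fcntl(fd, F_SETLKW, &lk) < 0) {   /* block until the lock is granted */
        perror("fcntl");
        return 1;
    }

    /* ... read or write the file while holding the lock ... */

    lk.l_type = F_UNLCK;                  /* release the lock again */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}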

Locking, like caching, is a stateful mechanism. For more detailed information about NFS and statelessness see §2.3. To provide correct file system semantics, NFS servers must know which clients hold which locks. On the other hand, the clients must know on which servers they hold locks for which files. There are two typical scenarios for which a file locking protocol must provide solutions [5, p.278].

The first one is loss of server state. An NFS server running the NLM protocol maintains a data structure for records representing locks that have been granted to clients. “For performance reasons, this state is generally maintained in volatile storage, that is, the server’s memory” [5, p.278]. See §2.3 for details. The following remarks require the table of monitored hosts to be stored on stable storage.

In case of a server-side crash/reboot, all clients that held locks on that particular server have to be notified about the crash or reboot. As the server has a table of associated clients (monitored hosts), it can easily inform them about the new situation, because the table of monitored hosts is stored on stable storage. This notification is done by the Network Status Monitor.

The second one of the scenarios mentioned above is loss of client state. When a client crashes/reboots all locks it held (maybe on multiple servers) have to be freed, so that no file is unnecessarily locked. Therefore, if the crashed client recovers, its status monitor notifies every server in the table of associated servers (monitored hosts).

After a loss of server state, there is a defined time period called grace period during which only requests to reestablish locks are granted [1, p.194]. As requests for new locks are never granted during this period, the requests for reestablishing locks implicitly have a higher priority. The reason for this is described in the following:

Assume client A holds an exclusive lock on file x at server S. S crashes and recovers. S’s status monitor notifies all clients in the list of monitored hosts about the reboot, but meanwhile, client B successfully requests a nonexclusive lock on file x. Now, A is notified about the reboot and tries to reclaim its lock on file x. This fails due to B’s read lock. Although A originally possessed the lock earlier than B, it cannot reclaim it. To avoid this unfair behavior, a grace period is used. (mh,mk)

2.2 Implementation Details

We will now focus on the implementation of NFS and the issues that arise. When you try to implement a protocol like NFS, you have to consider a certain number of issues, e.g.: How is compatibility with older versions achieved? How does integration into the file system of the operating system work? How are server crashes handled on the client? These are by far not all the questions which need to be answered.

The easiest problem to solve is multiple version support. The RPC protocol has support for versioning a service [3]. Server and client will use the highest version both sides support. The implementer should provide full backwards compatibility where possible. Because NFS v3 was designed with good compatibility to NFS v2 in mind, multiple version support can be added at very low cost [6]. (NFS Version 3 only defines a revision to NFS Version 2; it does not provide a new model. Because of this, NFS Version 3 resembles NFS Version 2 in design assumptions, file system and consistency model, and method of recovering from server crashes [6, chapter 4].)

The next problem is permission checking. It has to be checked whether the client has the rights to access a file or directory or to perform other operations. NFS does not provide a security schema of its own. It has to rely on standard operating system protection mechanisms (AUTH_UNIX, AUTH_DES, AUTH_KERB). The main problem with UNIX-style credential checking is that server and client have to share a common database of usernames and groups (to be technically correct: they have to map a username on server and client to the same UID and a group to the same GID), or the server has to offer another way of username mapping. Also, with a stateless service like NFS, the permission to read or write a file must be checked on every RPC; every RPC contains the user’s UID and a list of GIDs the user belongs to. This does not preserve the standard UNIX semantics, because normally a check is only performed once, on opening. The implementers have to work around these problems.
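For illustration, the credential that travels with every request under the AUTH_UNIX flavour can be pictured as the following structure. It is a simplified C rendering of the fields defined in the ONC RPC specification, not code taken from an actual implementation.

#include <stdint.h>

/* Simplified rendering of the AUTH_UNIX (AUTH_SYS) credential carried in
 * every RPC: the caller's identity as seen by the client's operating system. */
struct auth_unix_cred {
    uint32_t stamp;              /* arbitrary id generated by the caller */
    char     machinename[255];   /* name of the client machine */
    uint32_t uid;                /* effective user id of the caller */
    uint32_t gid;                /* effective group id of the caller */
    uint32_t gids_len;           /* number of supplementary group ids */
    uint32_t gids[16];           /* up to 16 supplementary group ids */
};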

On most operating systems, a particular user (on UNIX, the user with UID 0) has access to all files, no matter what permissions and ownership they have.

This superuser permission may not be allowed on the server, since anyone who can become superuser on his client could gain access to all remote files [3, p.99]. Extra mappings must be provided here. Therefore, NFS servers typically map UID 0 to UID -2, which turns our superuser into a nobody user [5, p.232]. Typically, there is a file “/etc/exports” that contains information on access rights for the clients.
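A hypothetical excerpt from such an exports file might look as follows. The syntax shown is the one used by Linux exportfs; other UNIX variants use slightly different option names, and all paths and hostnames are made up for illustration.

# /etc/exports (Linux exportfs syntax; illustrative entries only)
/export/data    client1(rw,root_squash)   # read-write, map UID 0 to the anonymous user
/export/public  *.example.com(ro)         # read-only for all hosts in a domain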

Also, the RPCs themselves have some sort of security checking. Every RPC is validated before it is executed. This is done in the RPC layer and cannot be influenced by NFS. In [3] and [8], the different available security flavours are explained.

One of the main reasons for the development of NFS v3 was the need for better performance. The major bottleneck was caused by the inefficient write procedures. Because of the statelessness of the protocol (see 2.3) and the definition of the WRITE RPC in NFS v2, the client had to wait until the data was written to stable storage before it could perform other actions (the synchronous write throughput problem [6]). Normally, a client request may only be answered after it has been fulfilled. For example, consider a WRITE RPC: after the server has received the call, it has to flush the data from the client to stable storage before answering the request. To improve performance, NFS v3 introduces asynchronous writes accompanied by the COMMIT procedure. The COMMIT call is needed to retain the statelessness. After an asynchronous write call, the server returns control to the client immediately and caches the data to write. If the client decides to close the file (as the protocol is inherently stateless and no close operation exists, “close” refers in this case only to the state kept on the client side), it has to send a COMMIT. This ensures that everything is stored safely. The server must not reply until a safe state is reached. Asynchronous writes are most effective for large files. A client can send many WRITE requests and then send a single COMMIT to flush the entire file to disk when it closes the file. This allows the server to do a single large write, which most file systems handle much more efficiently than a series of small writes. For very large files, the server can flush data in the background so that most of it will already be on disk when the COMMIT request arrives [6, chapter 4.3.2]. It is important to mention that the server and the clients, respectively, are free to support asynchronous writes or not, for example in the case that the client is not able to buffer enough data to support server crash recovery with asynchronous writes, which can happen because of memory constraints. The client can always enforce synchronous writes with a special flag passed to the WRITE requests. Also, the server is allowed to flush the data immediately. The behaviour depends on the implementation.

The concept of committed writes is also consistent with the design philosophy of simple servers and powerful clients behind NFS. As the protocol is stateless, the client has to keep an in-memory image of uncommitted data. If no such image were kept, the data would be lost if the server crashed before a successful commit. To enforce data consistency, each reply to WRITE and COMMIT has a write verifier attached [6]. The verifier is a simple number which has to be checked by the client on a COMMIT request. (The number is a unique value which has to be changed by the server after a crash; servers normally use their boot time as write verifier, which is unique on every boot.) All numbers returned by the WRITE calls have to be equal to the number returned by the COMMIT request. If that is not the case, the client has to assume that the server has crashed during one of the last operations, and all uncommitted data has to be resent. The client retains an image of all uncommitted data, which makes resending possible. (mk)

Another problem to consider is that a single computer can be NFS client and server at the same time. This may lead to a problem called mountpoint crossing. When a client walks along the hierarchy of a mounted NFS file system (exported by server A) using LOOKUP requests, it could pass a mountpoint that server A has another file system mounted on (e.g. exported by server B). When the client needs a filehandle for a file within that “exported exported” file system, server A could simply transmit the filehandle (related to server B) that it uses to access this file. But the client will continue contacting A for file access, although the filehandle it obtained from A is related to server B. An NFS server cannot distinguish reliably between these two types of filehandles [5, p.228]. The filehandle passed to the client cannot be tagged with additional data, because NFS version 3 filehandles, although flexible, are limited in size; therefore, such “tagging” will not work indefinitely. Alternatively, server A could maintain a filehandle mapping table that assigns the appropriate server to each filehandle appearing in a client’s request. This strategy implies additional state information that has to be stored, recovered and transferred in particular scenarios (e.g. a server crash). Additionally, the client itself can easily mount directories from server B, provided it has sufficient access rights. Therefore, the problem of mountpoint crossing is solved by mounting the exported file system of server B to the client’s local file system.

When a client exports a file system to a server and that server in return exports a file system to the client, the exported file hierarchies can be nested one into the other. That is called a namespace cycle. To avoid infinite file hierarchy traversal, NFS servers hide those parts of an exported file system from a client that point back to the local file system of that particular client.

Not directly an implementation detail, but worth mentioning, is the possibility of creating networks with diskless clients. On some operating systems (including SunOS 4.0 and Solaris), diskless clients are supported through NFS [1, p.132pp]. Diskless clients are computers without a built-in mass storage device. To provide this functionality, the operating system’s memory manager must do its work in a way that allows a mapping from the content of the swap device to files. Furthermore, an NFS file system must be mountable as root directory. Setting up and configuring a diskless client requires several steps that depend on processor and platform architecture. Diskless clients need a server connection to be able to work. That is, the server must contain a copy of the appropriate operating system for every CPU type of the clients. Additionally, platform-specific executables must be held on the server for every platform the clients are based on. (mh)


2.2.1 Implementing NFS through VFS

On UNIX-based operating systems, the NFS protocol is implemented with the help of VFS (Virtual File System, sometimes also Virtual File System Switch [11]). It was introduced by Sun around 1985 [12] and first implemented in SunOS and BSD [13]. Today, a VFS-like file system layer is implemented by most UNIX kernels and also by the most popular UNIX clone, Linux [13]. The purpose of VFS is to make other file systems (non-native to the system) appear as a single one [1, p.113]. A single interface is offered to accommodate diverse file system types cleanly [14, p.2]. All file system specific functions and dependencies are hidden.

On UNIX systems there is a single interface to operate on files. It is named the syscall layer and serves as the hook point for VFS. It translates the system calls to the appropriate calls for the underlying file system. The most important fact is that the UNIX file system semantics are preserved whether you operate on ext3, NFS or even FAT. (On primitive file systems like FAT, not all operations can be supported natively, e.g. symbolic links, but it is still possible to use FAT via VFS.) Every system which implements the VFS interface can easily be plugged into the UNIX file hierarchy.

With the help of VFS, a two-level transparency is added. Level one is transparency towards the user and the system: it appears that everything is attached to the local file system, and the real nature/location of the data is hidden. As mentioned before, this is achieved by the generic set of operations which are offered and described later.

Level two is the abstraction from the underlying architecture of the NFS server, achieved by a special file system object, the vnode [12]. It hides the actual physical representation of the data.

VFS distinguishes between two types of operations. Actions that operate on the file system, such as getting the amount of free space, are called VFS operations, and actions that operate on files or directories, e.g. creating a directory or writing to a file, are called vnode operations. The vnode interface translates the operation to the underlying file system call. The implementation is responsible for this translation, such as returning the timestamp of a file in the VFS data format.

For every open file or directory a vnode is created. Open means in this case “in active use”. For example, a vnode of an open local file points to an inode. (Inodes are the base entities of the UNIX file system. The difference between a vnode and an inode is where it is located and when it is valid: inodes are located on disk and are always valid, because they contain information that is always needed, such as ownership and protection, whereas vnodes are located in the operating system’s memory and only exist while a file is open. However, just one vnode exists for every physical file that is opened.) Every vnode contains a set of attributes and operations which can be applied to the node. The most important attributes and operations are:

Generic operations:

• vop_getattr: retrieves an attribute.
• vop_setattr: sets an attribute.

Directory only operations:

• vop_mkdir: creates a directory.
• vop_rmdir: removes a directory.
• vop_create: creates a file.
• vop_remove: removes a file.
• vop_symlink: creates a symbolic link.

File only operations:

• vop_getpages: reads bytes from a file.
• vop_putpages: writes bytes to a file.

Attributes:

• owneruserID: identification number of the owning user.
• ownergroupID: identification number of the owning group.
• filesize: size of the file (for directories, the filesize is zero).
• accesstime: time at which the last access to the file/directory occurred.
• modifytime: time at which the file/directory was changed the last time.

A valid NFS implementation that is to be used as a VFS plug-in has to implement most of the vnode operations. In Figure 3 the relationship between NFS and VFS is shown. There are many existing implementations of the vnode/VFS interface [12, 13].

Figure 3: VFS-NFS relationship on a UNIX system.

From the preceding descriptions, it is fairly clear how the basic UNIX system calls map into NFS RPC calls. It is important to note that the NFS RPC protocol and the vnode interface are two different things. The vnode interface defines a set of operating system services that are used to access all file systems, NFS or local. Vnodes simply generalize the interface to file objects. There are many routines in the vnode interface that correspond directly to procedures in the NFS protocol, but the vnode interface also contains implementations of operating system services such as mapping file blocks and buffer cache management [1, p.111]. The NFS RPC protocol is a specific realization of the vnode interface.
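To make the idea of the vnode interface more concrete, the following sketch shows how an operations table with function pointers might be declared and filled by an NFS client. The structure and names are illustrative only; real kernels define many more operations and use different names.

#include <errno.h>

/* Simplified stand-ins for the kernel objects discussed above. */
struct vnode { int dummy; };   /* one per file or directory in active use */
struct vattr { int dummy; };   /* attribute container (owner, size, times, ...) */

/* A table of function pointers: each file system plugs in its own
 * implementations, and the syscall layer calls through this table. */
struct vnodeops {
    int (*vop_getattr)(struct vnode *vp, struct vattr *va);
    int (*vop_lookup)(struct vnode *dvp, const char *name, struct vnode **out);
    int (*vop_remove)(struct vnode *dvp, const char *name);
    /* ... further operations elided ... */
};

/* For NFS, each entry would translate into the corresponding RPC. */
static int nfs_getattr(struct vnode *vp, struct vattr *va)
{
    (void)vp; (void)va;
    return -ENOSYS;   /* placeholder: a real client issues a GETATTR RPC here */
}

static int nfs_lookup(struct vnode *dvp, const char *name, struct vnode **out)
{
    (void)dvp; (void)name; (void)out;
    return -ENOSYS;   /* placeholder: a real client issues a LOOKUP RPC here */
}

static int nfs_remove(struct vnode *dvp, const char *name)
{
    (void)dvp; (void)name;
    return -ENOSYS;   /* placeholder: a real client issues a REMOVE RPC here */
}

static const struct vnodeops nfs_vnodeops = {
    .vop_getattr = nfs_getattr,
    .vop_lookup  = nfs_lookup,
    .vop_remove  = nfs_remove,
};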

Because VFS is specific to UNIX, we have to mention that a similar way exists to integrate NFS into Windows NT or higher. Most NFS implementations for Windows are able to map an NFS directory to a drive letter selected by the user. Because the NTFS file system supports soft and hard symbolic links, it is also possible to integrate a remote directory seamlessly into the Windows file system [15]. To integrate NFS into Windows transparently, a mapping of UNIX user IDs to Windows user accounts also has to occur. Windows uses Access Control Lists for permission checking. ACLs allow a more fine-grained permission control than the UNIX rwx schema. For a good explanation we refer to [15]. With ACLs, the rwx schema can easily be emulated on Windows NFS servers and clients. As the NFS v3 protocol does not support ACLs, Windows users are restricted to an emulated UNIX rwx schema.

An annoying fact is that only commercial NFS implementations for Windows exist [16]. Because these implementations are proprietary, we cannot provide a discussion of their internal functionality. (mk)

2.2.2 NFS Operation Example

We will now provide an example of how a dialogue between an NFS server and a client may look. After we have covered the NFS protocol and some implementation details, this should give you a good understanding of how NFS works.

Assume we are on a UNIX system and execute the following sequence of commands.

mount ourserver:/export/vol0 /mnt
dd if=/mnt/home/data bs=32k count=1 of=/dev/null

The mount command will mount a remote file system located on the server ourserver. Then the first 32 KByte of the remote file data are read.

This will result in the following RPC sequence:

→ PORTMAP C GETPORT (MOUNT)
← PORTMAP R GETPORT
→ MOUNT C Null
← MOUNT R Null
→ MOUNT C Mount /export/vol0
← MOUNT R Mount OK
→ PORTMAP C GETPORT (NFS)
← PORTMAP R GETPORT port=2049
→ NULL
← NULL
→ FSINFO FH=0222
← FSINFO OK
→ GETATTR FH=0222
← GETATTR OK
→ LOOKUP FH=0222 home
← LOOKUP OK FH=ED4B
→ LOOKUP FH=ED4B data
← LOOKUP OK FH=0223
→ ACCESS FH=0223 (read)
← ACCESS OK (read)
→ READ FH=0223 at 0 for 32768
← READ OK (32768 bytes)

At the beginning, the portmapper negotiates a port for the Mount protocol. After the port is obtained, the Mount protocol mounts /export/vol0. Now a port for the core NFS protocol is retrieved. Over the communication port, the first called procedure is NULL. The file system info and the attributes are now ascertained. Finally, the directory home is looked up and then the file data, too. In the following ACCESS call the rights are checked, and with READ the first 32 KByte are read. (mk)

2.3 Statelessness and Performance

Statelessness in a client-server architecture means that each request from a client to a server is stand-alone, that is, no information on the clients’ state has to be kept by the server [5, p.87pp]. The NFS protocol can be called stateless insofar as it does not require keeping any state. But for performance reasons, most client and server implementations keep some state information.

One of the benefits of the statelessness of the NFS protocol is simple server recovery. If the server crashes while communicating with a client, the client will just have to retransmit the last request. If NFS were a completely stateful protocol, the server’s state would have to be rebuilt or operations requested by clients would simply have to fail.

Statelessness becomes difficult because NFS, for example, does not specify OPEN or CLOSE semantics on the server. As it is possible for POSIX-compliant clients to remove open files, there is the possibility of a certain type of data inconsistency called the Last Close problem [5, p.227]. This is solved as follows:

Assume there are two processes, represented by process IDs 1234 and 3421, on one NFS client working on file a.txt. Both 1234 and 3421 have opened a.txt. Now, 1234 removes a.txt. This remove effectively is a rename operation that follows certain naming conventions. 3421 can continue working on a.txt because the filehandle for a.txt is still valid. After the last reference to a.txt has been deleted, the NFS client code physically removes the renamed version of a.txt. Figure 4 illustrates the Last Close problem.

Figure 4: The Last Close Problem: Removing opened files may be allowed by POSIX compliant clients. Source: [5, p.227].
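The local POSIX behaviour that the NFS client has to emulate can be demonstrated with a few lines of C: a file stays readable through an already open descriptor even after it has been removed. On an NFS mount, the client achieves the same effect by renaming the file to a hidden name (commonly a name starting with “.nfs”) and removing it only after the last close; the file name below is just an example.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "a.txt";                 /* example file */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    write(fd, "hello\n", 6);
    unlink(path);                               /* "remove" the file while it is still open */

    char buf[16];
    lseek(fd, 0, SEEK_SET);
    ssize_t n = read(fd, buf, sizeof buf);      /* still succeeds and returns 6 bytes */
    printf("read %zd bytes after unlink\n", n);

    close(fd);                                  /* only now is the storage released */
    return 0;
}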

Another aspect of statelessness is the idea of idempotent operations. “An idempotent operation is one that returns the same result if it is repeated” [5, p.88]. Consider a READ request that is repeated with the same arguments: it always returns the same result. This is clear because no data is changed. But what about a REMOVE request? Assume that one such request was executed correctly by the server, but its response to the client does not arrive due to packet loss. The request would time out on the client. Then, the client would retransmit the REMOVE request. Now, everything works correctly, but the client would get an “operation failed” message, because the file mentioned in the initial request no longer exists. To avoid such behavior, NFS servers implement a duplicate request cache [5, p.235pp]. (Strictly speaking, such caching is not compatible with the notion of statelessness.) Every RPC call has a unique transaction ID. The cache’s entries are such RPC calls indexed by their transaction IDs. The entries also include the initially generated reply. The server has to follow a specified caching policy:

If the cache entry indicates a request currently being processed, nothing particular has to be done.

If the cache entry indicates that the server replied to the request recently, the request is ignored. It is assumed, that the request and the reply crossed each other on the network. Furthermore, the client will retransmit the request again, if that assumption was wrong.

If the cache entry indicates that the server replied, but not recently, the cached reply is sent.

When using a duplicate request cache, even REMOVE requests are idempotent. As mentioned earlier, the NFS protocol does not require the server to keep that form of state. But by doing so, client implementations are kept less complicated than they would be if they had to handle duplicate requests themselves.
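A minimal sketch of the lookup step of such a duplicate request cache, following the three-case policy described above, could look as follows. The structure, sizes and the notion of “recent” are our own simplifications, not taken from a real server.

#include <stddef.h>
#include <stdint.h>
#include <time.h>

enum drc_state { DRC_FREE, DRC_IN_PROGRESS, DRC_DONE };

struct drc_entry {
    uint32_t       xid;          /* RPC transaction ID */
    enum drc_state state;
    time_t         replied_at;   /* when the cached reply was sent */
    unsigned char  reply[512];   /* the reply that was generated for this xid */
    size_t         reply_len;
};

#define DRC_SLOTS      1024
#define RECENT_SECONDS 2         /* arbitrary "recent" threshold for this sketch */

static struct drc_entry cache[DRC_SLOTS];

/* Returns -1 if the request is new and must be executed, 0 if it should be
 * ignored (in progress, or replied to recently), and 1 if the cached reply
 * should be retransmitted. */
int drc_check(uint32_t xid, const unsigned char **reply, size_t *len)
{
    struct drc_entry *e = &cache[xid % DRC_SLOTS];

    if (e->state == DRC_FREE || e->xid != xid) {
        e->xid = xid;                         /* new request: claim a slot */
        e->state = DRC_IN_PROGRESS;
        return -1;
    }
    if (e->state == DRC_IN_PROGRESS)
        return 0;                             /* already being processed */
    if (time(NULL) - e->replied_at < RECENT_SECONDS)
        return 0;                             /* replied recently: request and reply
                                                 probably crossed on the network */
    *reply = e->reply;                        /* replied, but not recently: resend */
    *len = e->reply_len;
    return 1;
}

/* After executing a new request, the server would store the generated reply
 * in the entry, set its state to DRC_DONE and record replied_at. */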

As NFS servers have to keep only little state on their clients, it is easy to set up a highly available network of NFS servers (HA-NFS, high-availability NFS [5, p.89pp]). Every server can resume another’s work, provided all state information (caches, locks) can be transferred quickly. This requires the duplicate request cache and the file locking information to be stored on stable (secondary) storage. As mentioned in §2.1, the NFS Lock Manager generally holds that information in volatile memory. This does not meet the requirements of HA-NFS servers. But “rather than rely on stable storage for recording lock state (and affecting server performance), a more practical scheme is to handle the failover as a fast crash and recovery [to the standby server] so that clients will reclaim their locks” [5, p.90].

The overall performance of NFS is influenced by a number of factors. Of course, the hardware components (e.g. memory, CPU, network interfaces) of NFS servers and clients play a considerable role. But, when benchmarking NFS, the most important factor is the operation mix (workload) of the clients. Figure 5 shows typical workloads.

Figure 5: Typical NFS workloads. Source: [5, p.418].

The type and size of the caches have a great impact on the performance. User B’s workload shows that the ratio of READ requests is much smaller than in A’s workload. This may indicate a bigger cache on B’s machine.

There are several benchmarks for NFS file systems. The SFS 2.0 (System File Server) is the official benchmark for NFS version 3 [5, p.430]. It was developed by SPEC (Standard Performance Evaluation Corporation). It is also used to determine the performance of NFS cluster servers. In general, the most common NFS benchmarks are based on a specified set of NFS operations to test e.g. server load vs. response time.

NFS was originally designed for high-bandwidth and low-latency networks like LANs. As UDP (User Datagram Protocol, connectionless) performs well within such an environment, it was chosen as the transport layer protocol instead of TCP (Transmission Control Protocol, connection oriented). WANs (Wide Area Networks) like the Internet have a low bandwidth and high latency. That is one of the main reasons why the Internet is based on TCP. Typically, TCP clients and servers monitor the link quality and adjust the transmission rate. UDP communication partners send packets of the same size even if the network is congested (see the elaboration “Congestion Control Algorithms” for details on dealing with congested networks). TCP increases the overall performance if it is used for wide area networks, which are typically “lossy”, that is, single packets often get lost. If single packets get lost using a TCP connection, only these specific packets need to be retransmitted rather than the whole operation. This means a performance improvement. The impact on NFS if TCP is used at the transport layer highly depends on the workload, that is, the frequency of operations of different types. Thus, a disadvantageous workload may be responsible for bad performance if NFS is used over TCP. TCP is an inherently stateful protocol, so using stateless NFS over stateful TCP seems to be a contradiction. But remember that TCP’s need for keeping state is hidden from higher level protocols like NFS because of the layered OSI architecture.

For WANs it is recommended to use NFS over TCP, see e.g. [1, p.384]. As TCP implies some overhead when NFS is mainly used in LANs (Local Area Networks), and because specific NFS client or server implementations may not support TCP, NFS over UDP is still state of the art.

The requirement for high-speed networks is typical for all file access protocols. Therefore, using NFS over a 56 kbps modem connection is not very effective; the usage of a file transfer protocol would be more appropriate. For clients with a very limited network connection, it is easier to transfer a file once, modify it and transfer it back to the file transfer server. The reason for this is that the overhead implied by file access protocol requests is no longer negligible due to the small bandwidth.

WebNFS is an extension to the NFS protocol that facilitates using NFS file systems over the Internet and improves the performance. Instead of connecting to a portmapper to determine the port for the Mount daemon that will then transmit an initial filehandle, WebNFS servers provide a public filehandle. Servers that implement the WebNFS extensions mark directories as public, associating them with the public filehandle. Clients specify an NFS URL to connect to exported resources.

E.g. nfs://gamma:2010/x/y indicates the resource y in the directory x of server gamma, whose NFS server application listens to port 2010. WebNFS implementations typically use TCP on transport layer because the transmission is more reliable compared to UDP and because TCP is “handled more kindly by network firewalls” [5, p.440]. Transmission overhead is reduced because the WebNFS protocol defines a multicomponent LOOKUP request, that is, a LOOKUP request can contain a complete pathname instead of just one part. But even WebNFS cannot completely solve performance issues when using NFS over TCP: NFS depends on other protocols that may not have well-known ports, making their usage difficult because of the firewall’s port policy. Figure 6 illustrates this situation. In NFS version 4, the port for NFS is the only port to know when clients want to access remote file systems via NFS. The reason for this is that NFS version 4 itself implements features like locking and mounting. This is shown in the figure by the separate client-server system at the bottom.

Figure 6: Connecting to an NFS server through a firewall (FMC static structure).

NFS version 4 integrates much functionality that, in version 3, is implemented in other protocols. Therefore, NFS version 4 will also increase performance when used over the Internet (see §2.5 for more details).

Other steps taken to speed up NFS version 3 follow the read ahead and write behind strategy. These strategies rely on asynchronous requests. Clients that begin to read a file tend to read the whole file. To speed up server response time, a read-ahead strategy is used. The server assumes that the client will later request another part of the file and caches the whole file content. Succeeding read requests can be answered more quickly.

Corresponding to this, a write-behind strategy is used to speed up server response time after write requests. Without write-behind, write requests are synchronous: the client blocks until the data is written to the server and the operation returns. Waiting for the disk to complete the write operation takes long, and therefore write requests are expensive. When using write-behind, those requests are asynchronous, i.e. the operation immediately returns to the client. The server will flush the data to disk later. The disadvantage is that, when the client sends more NFS requests than the server can handle, requests dropped by the server must be retransmitted. This implies lower throughput in heavy-load situations.

2.4 Alternatives

2.4.1 RFS

In the 1980s, the Remote File Sharing (RFS) protocol was a competitor to NFS [5, p.361pp]. It was part of AT&T’s UNIX System V Release 3 and aimed to “preserve full UNIX file system semantics for remote files” [5, p.363]. Therefore, it only supports UNIX clients, limiting the operation of RFS to pure UNIX networks. This limitation and the fact that RFS is a stateful protocol are the reason that problems like the Last Close problem (see §2.3), which NFS cannot solve cleanly, do not occur. RFS, like NFS version 4, directly supports file locking. The remote access is extended to devices like tape drives. The main difference to NFS is that RFS is based on a remote system call paradigm. Every system call on the client (open, close, read, write, …) is sent via RPC to the server’s system call interface. There is no set of specified procedures like in NFS. As UNIX system calls depend on the context of the user process, a context that corresponds to the original context on the client has to be set up on the server. This results in “poor performance” [5, p.4]. When mounting an NFS-exported file system, the server name and pathname must be provided. RFS uses a name server to advertise a file system with a resource name. Therefore, when mounting an RFS file system, only the resource name is necessary. An example:

NFS: mount server:/home/jane /mnt
RFS: mount -d HOMEJANE /mnt

Traversing file system structures may cross into an RFS file system. Then, the remaining part of the pathname is sent to the RFS server as a whole. The evaluation of such a multi-component pathname is difficult because, e.g. the usage of “..” may lead back into the client’s local file system. Therefore, NFS evaluates pathnames one-component-at-a-time. This requires a new NFS request for each component of the pathname, but local paths are detected more reliably.
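The component-by-component lookup that NFS performs can be sketched as follows. The filehandle type and the nfs_lookup() helper are placeholders standing in for the real LOOKUP RPC; the handle values and the path are examples.

#include <stdio.h>
#include <string.h>

typedef unsigned int fh_t;     /* greatly simplified filehandle */

/* Placeholder for the real LOOKUP RPC: maps (directory handle, name) to a handle. */
static int nfs_lookup(fh_t dir, const char *name, fh_t *out)
{
    printf("LOOKUP FH=%u %s\n", dir, name);
    *out = dir + 1;            /* fake result, just for the demonstration */
    return 0;
}

/* Resolve a path one component at a time, issuing one LOOKUP per component. */
static int resolve(fh_t root, const char *path, fh_t *out)
{
    char buf[256];
    fh_t cur = root;

    strncpy(buf, path, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    for (char *comp = strtok(buf, "/"); comp != NULL; comp = strtok(NULL, "/")) {
        if (nfs_lookup(cur, comp, &cur) != 0)
            return -1;
    }
    *out = cur;
    return 0;
}

int main(void)
{
    fh_t fh;
    resolve(0x0222, "home/data", &fh);
    return 0;
}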

RFS uses only rudimentary security algorithms: There are tables for UID/GID mapping and password-protected server connects. NFS makes use of the Network Information Service (NIS, see §2.1) and with version 4 integrates extended security flavors like Kerberos version 5.(mh)

2.4.2 AFS

AFS (Andrew File System) is another alternative to NFS [5, p.364pp]. It was originally designed to distribute file systems across the campus of Carnegie Mellon University. It is based on a proprietary RPC mechanism called Rx. AFS, like NFS, supports UNIX file system semantics. Clients access files in a global namespace, as in RFS. Within NFS, global namespaces require the usage of an additional naming service, typically NIS (see above). One of AFS’s main features is its aggressive caching policy. All data a client accesses is transferred to a cache on the client, including attributes and symbolic links. Therefore, AFS servers scale well when the number of clients increases heavily. “The Andrew [file] system is targeted to span over 5000 workstations” [18, p.550]. When data changes on the server, a callback message is sent to all clients that hold a cached copy of that data, informing them of this inconsistency. A client that writes its modified cached data back to the server overwrites the changes of any other client before it. This is similar to the lost update problem known from database systems. AFS always uses the integrated Kerberos security system to authenticate users and ACLs (Access Control Lists) to authorize file access to authenticated users. NFS version 3 “hardly provides any security systems itself, but instead implements standardized interfaces that allow different existing security systems to be used, such as, for example, Kerberos” [17, p.644]. AFS is the origin of the Coda (Constant data availability) file system. That file system allows clients to be temporarily disconnected from the file servers without a loss of accessibility to the files [17, p.605]. This functionality is provided by the caching strategies described above. Especially mobile users can benefit from this support [17, p.575]. (mh)

2.4.3 OSF/DFS

The DFS file system, as a commercial implementation of AFS, is an enhancement to AFS [5, p.369]. It has been developed by the Open Software Foundation (OSF) as part of the Distributed Computing Environment (DCE). DFS addresses the problem of cache inconsistency with a server-based token scheme. A token manager grants tokens to clients that want to perform file operations of different types. Tokens represent access modes to data. Tokens are defined by classes and types; e.g. a token from class data can be of write or read type. Tokens of different classes do not conflict, tokens of the same class do. When a client wants to write to a file, it tries to acquire a data write token. If another client already holds such a token, there is a conflict. The token manager detects those conflicts and notifies all clients with conflicting tokens (revocation request). They will flush the data back to the server before sending a response to the server’s message. Tokens are associated with a lifetime value. When it expires, the server does not have to send a revocation request to the owning client, because the client knows that it has to write the data back in time, thus avoiding unnecessary network traffic. As in NFS, there exists a grace period after a server crash during which only requests for reclaiming tokens are allowed. Any request for new tokens is denied.

As DFS is directly derived from AFS, its advantages and disadvantages persist within DFS.(mh)

2.4.4 SMB

SMB (Server Message Block) is a file access protocol native to DOS and Windows operating systems. One of its design goals was the preservation of DOS/FAT file system semantics. There is an enhanced version of SMB called CIFS (Common Internet File System). The CIFS file system is native to most of the Windows series of operating systems. Therefore, setting up and administering file sharing in a Windows network is much easier with CIFS than with NFS. The Samba server is a program suite that includes CIFS-based software for UNIX servers. It provides file and print services to PC clients. To achieve this, it is sufficient to install Samba on a UNIX server, rather than to install software on multiple clients, as is necessary with PCNFSD. PCNFSD is a client-side extension to NFS that provides authentication and printing functionality especially to DOS clients. (mh)

2.5 Outlook on NFS v4

In this paper we mainly described NFS up to version 3. The reason is that version 3 is far more widespread and used than the new version 4. While implementations of version 3 exist for every major operating system, that is not the case for the new version. Actually, only an experimental implementation for FreeBSD is available, which is based on RFC 3530 [4]. A working draft of the version 4 protocol specification has also been submitted to the Internet Engineering Steering Group for consideration as a Proposed Standard [7]. Version 4 is inherently different from all previous versions. The problems which are addressed are:

• The improvement of the functionality over the Internet. The handling of firewalls should be improved, the performance in the case of low bandwidth and high latency should be improved, and servers should scale better to large numbers of clients.

• A stronger security checking should be provided. It should be ensured that the client fulfils at least a server-defined minimal security contract.

• A better multi-platform support should be added. NFS v3 was relatively closely tied to UNIX semantics.

How are these goals achieved? The biggest similarities with previous versions are that version 4 is still based on Remote Procedure Calls and on the External Data Representation mechanisms. Everything else has drastically changed. To satisfy the growing need for security, the RPCSEC_GSS framework [8] is used to extend the basic RPC security. With the use of RPCSEC_GSS, various mechanisms are provided to offer authentication, integrity, and privacy to the NFS version 4 protocol. Kerberos V5 is used as described in [9] to provide one security framework, primarily for intranets. Because Kerberos does not work well over the Internet [4, chapter 11.2.1], the LIPKEY mechanism described in [10] is also offered. With the use of RPCSEC_GSS, other mechanisms may also be specified and used for NFS version 4 security [7, chapter 1.4.1]. Because NFS is based on RPCs, the best way to achieve better security is to make RPC itself safer. This is achieved by the use of the mentioned RPCSEC_GSS framework.

One of the major changes directly to NFS is the removal of ancillary protocols. In NFS Versions 2 and 3, the Mount protocol was used to obtain the initial file handle, while file locking was supported via the Network Lock Manager protocol. NFS Version 4 is a single protocol that uses a well-defined port, which, coupled with the use of TCP, allows NFS to easily transit firewalls and enables support for the Internet. As in WebNFS, the use of initialized file handles obviates the need for a separate Mount protocol [3]. Locking has been fully integrated into the protocol, which was also required to enable mandatory locking. The NFS v4 locking uses a new concept which adds significant state (and concomitant error recovery complexity) to the protocol [7] and will be described separately later. This may be the main reason the stateless concept of the previous versions was dropped. The new version is stateful!

As one major goal was to increase speed over the Internet, the number of RPCs per operation had to be reduced. As shown in 2.2.2, NFS version 3 needs many calls to perform a simple operation. NFS version 4 therefore introduces the COMPOUND operation: related operations are grouped into a single RPC, and the results of the grouped operations are also returned as a group from the COMPOUND call. As a result, the number of calls needed is drastically reduced, but the COMPOUND call also creates new problems that must be solved by the implementers. The grouped operations are no longer atomic, so no assumptions about atomicity can be made if conflicting operations are grouped in one COMPOUND call. The RFC suggests grouping only related calls.
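To picture the difference, the following Python sketch contrasts the one-RPC-per-step style of version 3 with a single COMPOUND request; the operation names appear in RFC 3530, but the send_rpc helper and the argument encoding are invented for this illustration:

    # Hypothetical sketch of grouping NFS v4 operations into one COMPOUND RPC.
    # send_rpc() stands in for a real ONC RPC call; it only prints what would
    # be sent over the wire.

    def send_rpc(procedure, payload):
        print(f"RPC {procedure}: {payload}")

    def compound(*operations):
        """Pack several operations into a single COMPOUND request."""
        send_rpc("COMPOUND", list(operations))

    # NFS v3 style: one RPC per step (after a separate Mount protocol exchange).
    for op in [("LOOKUP", "docs"), ("LOOKUP", "report.txt"), ("READ", (0, 4096))]:
        send_rpc(*op)

    # NFS v4 style: the same work expressed as a single round trip.
    compound(("PUTROOTFH", None),
             ("LOOKUP", "docs"),
             ("LOOKUP", "report.txt"),
             ("OPEN", "read"),
             ("READ", (0, 4096)))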

As NFS version 4 is stateful, a logical consequence was to introduce calls for opening and closing files, namely OPEN and CLOSE. Accompanying these procedures, the concept of delegation was added to the protocol. If only a single client accesses a file, opening and closing are delegated completely to that client. If more than one client is involved, state changes are propagated to the clients via callbacks. How callbacks and delegation work exactly is explained in [4, 7].
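The following Python sketch illustrates the idea of delegation under our own simplifying assumptions; the classes, method names and the callback call are invented, and only the behaviour (delegate while a single client uses the file, recall via a server-to-client callback on conflicting access) follows the protocol description:

    # Hypothetical sketch: granting and recalling a delegation.

    class Client:
        def __init__(self, name):
            self.name = name
        def delegation_granted(self, path):
            print(f"{self.name}: may handle OPEN/CLOSE for {path} locally")
        def delegation_recalled(self, path):
            print(f"{self.name}: flushing cached state for {path}")

    class Server:
        def __init__(self):
            self.delegations = {}            # path -> client holding a delegation

        def open(self, client, path):
            if path not in self.delegations:
                # First opener: hand control of the file's open/close state over.
                self.delegations[path] = client
                client.delegation_granted(path)
            else:
                # Conflicting access: recall the delegation via callback first.
                holder = self.delegations.pop(path)
                holder.delegation_recalled(path)

    server = Server()
    alice, bob = Client("alice"), Client("bob")
    server.open(alice, "/export/report.txt")   # delegation granted to alice
    server.open(bob, "/export/report.txt")     # callback recalls it from alice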

To improve the locking capabilities and their security, the Lock Manager was integrated directly into the NFS core protocol and enriched with new, needed features. File locking is now lease based. A lease is a time-bounded grant of control of the state of a file, through a lock or delegation, from the server to the client.


During a lease interval a server may not grant conflicting control to another client. A lease confers on the client the right to assume that a lock granted by the server will remain valid for a fixed (server-specified) interval, subject to renewal by the client. The client is responsible for contacting the server to refresh the lease in order to maintain the lock [7, chapter 9.1]. All lease expirations are treated as an error in the communication or in one of its participants. If a client fails to renew a lease, the lock is removed and can be acquired by another client. If the server crashes, it waits after reboot for a specified grace interval to allow all clients that claim locks to renew their leases; only after this interval ends can new locks be acquired. The new locking system also includes mandatory locking. The reason is to increase the interoperability of NFS: Microsoft's Windows operating systems support mandatory locking as an important feature, but not all UNIX versions do. Mandatory means that I/O operations on locked files will block until the lock is released (a record lock) - a clear improvement over the advisory locking the NLM provided, which allowed synchronisation between cooperating applications but did not block other applications that wrote to the same file.
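From the client's point of view, the lease mechanism can be pictured with the following Python sketch; the lease length, the renew loop and the method names are our own assumptions (real clients renew leases implicitly through their regular NFS v4 traffic):

    # Hypothetical sketch of lease-based locking as seen by a client.

    import time

    LEASE_SECONDS = 90                       # assumed server-specified interval

    class LockLease:
        def __init__(self):
            self.expires_at = time.monotonic() + LEASE_SECONDS

        def renew(self):
            # Contact the server before the lease runs out; otherwise the lock
            # may be removed and handed to another client.
            self.expires_at = time.monotonic() + LEASE_SECONDS

        def still_valid(self):
            return time.monotonic() < self.expires_at

    lease = LockLease()
    for _ in range(3):
        time.sleep(1)                        # shortened stand-in for "well before expiry"
        if lease.still_valid():
            lease.renew()                    # keeps the lock alive
        else:
            raise RuntimeError("lease expired: lock lost, must be re-acquired")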

A similar new concept is the share reservation. It can be used to deny other clients the right to open a file. We refer to [4] for more information.

The next improvement to mention is the support of Access Control Lists based on the Windows NT model [16]. ACLs allow much more fine-grained control over files and directories and the rights a user has to access them. For a detailed explanation of ACLs see [15].
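To give a flavour of the model, the sketch below encodes a few access control entries in Python; the field names (type, flags, mask, who) correspond to the ACE structure described for NFS v4, while the dictionary representation, the example principals and the strongly simplified evaluation function are our own assumptions:

    # Hypothetical sketch of NFS v4/NT-style access control entries (ACEs).

    READ_DATA, WRITE_DATA, EXECUTE = 0x01, 0x02, 0x20    # subset of the mask bits

    acl = [
        {"type": "ALLOW", "flags": [], "mask": READ_DATA | WRITE_DATA,
         "who": "alice@example.org"},
        {"type": "DENY",  "flags": [], "mask": WRITE_DATA,
         "who": "staff@example.org"},
        {"type": "ALLOW", "flags": [], "mask": READ_DATA,
         "who": "EVERYONE@"},
    ]

    def access_allowed(acl, who, groups, wanted):
        # Simplified: the first ACE whose principal matches and whose mask
        # covers the requested bit decides; the real evaluation accumulates
        # granted bits over several ACEs.
        for ace in acl:
            if ace["who"] in (who, "EVERYONE@") or ace["who"] in groups:
                if ace["mask"] & wanted:
                    return ace["type"] == "ALLOW"
        return False

    print(access_allowed(acl, "bob@example.org", ["staff@example.org"], WRITE_DATA))  # False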

In versions 2 and 3 of NFS, users and groups were identified by a 32-bit integer value. This scheme is replaced by a string representation composed of the user or group name and a domain name, i.e. a name of the form user@domain. This removes the need to develop a new user/group registry by leveraging the global domain name registry and delegating the user/group naming task to it. The advantage of this solution is clear: no separate mapping between user names and representing integer values is required, and client and server do not have to agree on an identical assignment of these values.
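A minimal sketch of the difference between the two identification schemes (the helper names and the tiny account table are invented; real systems use an identity-mapping service for this translation):

    # Hypothetical sketch: numeric vs. name-based identity on the wire.

    # NFS v2/v3 style: both ends must agree that 1042 denotes the same account.
    uid_on_client = 1042
    uid_on_server = 1042                     # breaks silently if the assignment differs

    # NFS v4 style: the wire format carries a self-describing string.
    def to_wire_identity(local_user, domain):
        return f"{local_user}@{domain}"

    def from_wire_identity(identity, local_accounts):
        user, _, _domain = identity.partition("@")
        return local_accounts.get(user)      # map back to whatever UID is local

    server_accounts = {"alice": 507}         # hypothetical local account database
    wire = to_wire_identity("alice", "example.org")
    print(wire, "->", from_wire_identity(wire, server_accounts))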

To make NFS v4 more firewall friendly (versions 2 and 3 could not be used across a firewall without great effort), the only option for the underlying communication layer is TCP/IP; NFS v4 no longer works over the UDP protocol. This is a logical consequence: UDP is stateless, as versions 2 and 3 are, while version 4 is stateful, as TCP is, so TCP is much more appropriate. Another step towards better firewall support is the already mentioned elimination of the portmapper: the NFS server now runs on a mandatory, well-known port, 2049.

As you can see, NFS v4 comes with many new features and not much is left from the previous versions. This has a good and a bad side. NFS v4 will be faster and easier to use over WANs because of the newly introduced COMPOUND RPC, which reduces traffic drastically, and because of the better firewall


support. Security is also improved by hardening RPC itself. The bad side is that the effort needed to implement the protocol has increased dramatically. This implies that more bugs can and will be made during implementation, and it will take some time until the implementations reach a stable state. Another issue is the incompatibility with older versions: administrators have to relearn most of what they know about the usage of NFS, so a migration to the new version can be very costly in larger or enterprise-scale networks.(mk)

2.6 Evaluation

We think NFS is a stable and good solution for small up to large-scale local area networks. With NFS, a set of independent file systems can be made to appear as a single one without great effort. As stated in 2.5, stable implementations exist for the most important operating systems. Since the NFS v3 RFC [3] was submitted in June 1995, enough time has passed for all of the available implementations to reach a very reliable and stable state. The tight integration into UNIX operating systems through VFS, as described in 2.2.1, also adds much value: all network-related details are completely hidden from the user by this approach.

Two of the major drawbacks are the lack of speed and the firewall problem when NFS is used in a WAN. We explained in 2.3 what causes these problems; they were two of the reasons for the development of NFS v4. Other problems addressed by version 4 are described in the following.

Administrators have to consider user and group management, which can become very hard if the network grows larger and no solution like NIS is used.

Today there is also an increased need for security. As NFS v3 has no security management of its own, it has to rely on the safety of its underlying protocols and on the operating system's security mechanisms. A flaw in these protocols makes NFS vulnerable, too. This has to be considered in scenarios where security is critical.

One reason why we think NFS is a good solution is its support of multiple platforms; however, a couple of incompatibilities are introduced when a network consists of many different machines. We described earlier that NFS works with smart clients and simple servers, which sometimes leads to different behaviour across different client implementations. The authors noted a speed decrease when many architectures and clients are mixed in a single NFS network; this is not the case if only one architecture and client implementation is in use.

All of these problems are addressed in the new revision of the protocol. But as stated in 2.5, NFS v4 is still under development and cannot be used in critical environments where industrial-strength stability is needed. The main reason is the complexity of the new protocol, which is much higher than that of the previous versions. It will still take some time until NFS v4 can fully replace version 3, and the incompatibility between the versions makes it even worse. If some functionality offered by the upcoming version 4, such as mandatory locking, is already needed today, a couple of stable


alternatives to NFS exist, as described in 2.4.

Our final recommendation is that NFS v3 is a good solution for local area networks up to enterprise scale. The participating machines should at least use the same client and server software, and the servers should be backed by a NIS server or an equivalent user management system.(mk)

Bibliography

[1] Hal Stern, Mike Eisler, Ricardo Labiaga; Managing NFS and NIS; 2nd ed., 2001, O'Reilly

[2] Hal Stern; NFS und NIS: Managing von UNIX-Netzwerken; 1st ed., 1993, O'Reilly

[3] B. Callaghan, B. Pawlowski, P. Staubach; NFS Version 3 Protocol Specification: RFC 1813; June 1995, Sun Microsystems Inc.

[4] S. Shepler, B. Callaghan, D. Robinson; Network File System (NFS) version 4 Protocol: RFC 3530; April 2003, Sun Microsystems Inc.

[5] B. Callaghan; NFS Illustrated; 2nd ed., 2000, Addison-Wesley

[6] B. Pawlowski, C. Juszczak, P. Staubach, C. Smith, D. Lebel, D. Hitz; NFS Version 3 Design and Implementation; Proceedings of the USENIX Summer 1994 Technical Conference; http://www.netapp.com/ftp/NFSv3_Rev_3.pdf

[7] B. Pawlowski, S. Shepler, C. Beame, B. Callaghan, M. Eisler, D. Noveck, D. Robinson, R. Thurlow; The NFS Version 4 Protocol; Proceedings of the 2nd International SANE Conference; http://www.nluug.nl/events/sane2000/papers/pawlowski.pdf

[8] M. Eisler, A. Chiu, L. Ling; RPCSEC_GSS Protocol Specification: RFC 2203; September 1997

[9] J. Linn; The Kerberos Version 5 GSS-API Mechanism: RFC 1964; June 1996, OpenVision Technologies

[10] M. Eisler; LIPKEY - A Low Infrastructure Public Key Mechanism Using SPKM: RFC 2847; June 2000, The Internet Society

[11] Michael K. Johnson; A tour of the Linux VFS; 1996, Red Hat; http://www.tldp.org/LDP/khg/HyperNews/get/fs/vfstour.html

[12] S. R. Kleiman; Vnodes: An Architecture for Multiple File System Types in Sun UNIX; 1986, USENIX Summer

[13] Peter J. Braam; Presentation on the Linux Virtual File System; April 1998, Coda Documentation Project; http://www.coda.cs.cmu.edu/doc/talks/linuxvfs/

[14] Duke System and Architecture; Sun's NFS & VFS; http://kos.enix.org/pub/vfssun.pdf

[15] David A. Solomon, Mark Russinovich; Inside Microsoft Windows 2000 (Microsoft Programming Series); 3rd ed. (Book & CD-ROM), 2001, Microsoft Press

[16] Christopher Sedore; Operating Systems: Integrating Unix And Windows NT Using NFS; Syracuse University; http://www.networkcomputing.com/608/608work1.html

[17] A. S. Tanenbaum, M. van Steen; Distributed Systems, Principles and Paradigms; 2002, Prentice Hall

[18] A. Silberschatz, P. B. Galvin; Operating Systems Concepts; 4th ed., 1994, Addison-Wesley

[19] FMC Modelling Concepts website; http://fmc.hpi.uni-potsdam.de; 2003, Hasso-Plattner-Institute