
SAIKAT SARKAR

TABLE OF CONTENTS


SL.NO.  TOPIC                               PAPER CODE

01      Basics of .NET                      61

02      E-Commerce                          62

03      Advanced Computer Networks          63

04      Computer Ethics and Cyber Laws      64


Subject: Basics of .NET

Subject Code: BSIT - 61

Assignment TA (Compulsory)

1) What does the .NET Framework comprise?

ANS: The .NET Framework is Microsoft's platform for building applications that have visually stunning user experiences, seamless and secure communication, and the ability to model a range of business processes. The .NET Framework consists of:

Common Language Runtime - provides an abstraction layer over the operating system

Base Class Libraries - pre-built code for common low-level programming tasks

Development frameworks and technologies - reusable, customizable solutions for larger programming tasks

2) Which are the platform technologies supported by .NET framework?

ANS: The platform technologies mainly supported by .NET framework are ADO.NET, internet technologies and interface designing.

3) How is Windows programming different from .NET programming?

ANS: In Windows programming the application program calls the Windows API functions directly. The application runs on the Windows environment, i.e. the operating system itself. These types of applications are called unmanaged or unsafe applications.

In .NET programming the application program calls .NET Base Class Library functions, which communicate with the operating system. The application runs in the .NET runtime environment. These types of applications are called managed or safe applications. The .NET runtime starts code execution, manages threads, provides services, manages memory, etc. The .NET Base Classes are fully object oriented. They provide all the functionality of the traditional Windows API along with functionality in new areas such as database access, Internet connectivity and web services.

4) What is the function of CTS? Explain the classification of types in CTS with a diagram.

ANS: The Common Type System (CTS) performs the following functions:


Establishes a common framework that enables cross language integration, type safety and high performance code execution.

Provides an object-oriented model.

Defines rules that a language must follow, so that different languages can interact with one another.

The classification of types in CTS: types are divided into value types (built-in primitive types, structures and enumerations) and reference types (classes, interfaces, arrays and delegates).

5) What are assemblies? What are static and dynamic assemblies?

ANS: Assemblies are the building blocks of .NET Framework applications; they form the fundamental unit of deployment, version control, reuse, activation scoping, and security permissions. An assembly is a collection of types and resources that are built to work together and form a logical unit of functionality. An assembly provides the common language runtime with the information it needs to be aware of type implementations. To the runtime, a type does not exist outside the context of an assembly. Static assemblies can include .NET Framework types (interfaces and classes), as well as resources for the assembly (bitmaps, JPEG files, resource files, and so on). Static assemblies are stored on disk in PE files. You can also use the .NET Framework to create dynamic assemblies, which are run directly from memory and are not saved to disk before execution. You can save dynamic assemblies to disk after they have executed.
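To illustrate the difference, the following is a minimal sketch (not from the original text) that creates a dynamic assembly in memory using the .NET Framework Reflection.Emit API and then persists it to disk; the assembly, module and type names are made up for the example.

using System;
using System.Reflection;
using System.Reflection.Emit;

class DynamicAssemblyDemo
{
    static void Main()
    {
        // Define a dynamic assembly that initially lives only in memory.
        AssemblyName name = new AssemblyName("MyDynamicAssembly");
        AssemblyBuilder asmBuilder =
            AppDomain.CurrentDomain.DefineDynamicAssembly(name, AssemblyBuilderAccess.RunAndSave);
        ModuleBuilder modBuilder =
            asmBuilder.DefineDynamicModule("MainModule", "MyDynamicAssembly.dll");

        // Emit a trivial public type into the dynamic assembly and use it immediately.
        TypeBuilder typeBuilder = modBuilder.DefineType("HelloType", TypeAttributes.Public);
        Type helloType = typeBuilder.CreateType();
        Console.WriteLine("Created {0} in {1}", helloType.FullName, asmBuilder.GetName().Name);

        // A dynamic assembly can be saved to disk after it has executed.
        asmBuilder.Save("MyDynamicAssembly.dll");
    }
}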

6) Explain the general structure of a C# program.


ANS: C# programs can consist of one or more files. Each file can contain one or more namespaces. A namespace can contain types such as classes, structs, interfaces, enumerations, and delegates, in addition to other namespaces. The following is the skeleton of a C# program that contains all of these elements.

// A skeleton of a C# program
using System;

namespace MyNamespace1
{
    class MyClass1 { }
    struct MyStruct { }
    interface IMyInterface { }
    delegate int MyDelegate();
    enum MyEnum { }

    namespace MyNamespace2 { }

    class MyClass2
    {
        public static void Main(string[] args) { }
    }
}

7) How do namespaces and types in C# have unique names? Give examples.

ANS: Namespaces are C# program elements designed to help you organize your programs. They also provide assistance in avoiding name clashes between two sets of code. Implementing namespaces in your own code is a good habit because it is likely to save you from problems later when you want to reuse some of your code. For example, if you created a class named Console, you would need to put it in your own namespace to ensure that there wasn't any confusion about when the System.Console class should be used or when your class should be used. Generally, it would be a bad idea to create a class named Console, but in many cases your classes will be named the same as classes in either the .NET Framework Class Library or a third-party library, and namespaces help you avoid the problems that identical class names would cause. Namespaces don't correspond to file or directory names. If naming directories and files to correspond to namespaces helps you organize your code, then you may do so, but it is not required. For example: MyNamespace1, MyNamespace2, etc.
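A short sketch (added here as an illustration, not part of the original answer) of how namespaces keep identically named types apart; the namespace and class names below are invented for the example.

using System;

namespace MyCompany.Tools
{
    // A class that happens to share its name with System.Console.
    class Console
    {
        public static void Show(string message)
        {
            // Fully qualify System.Console to avoid a clash with this class.
            System.Console.WriteLine("MyCompany.Tools.Console: " + message);
        }
    }
}

namespace MyCompany.App
{
    class Program
    {
        static void Main()
        {
            // The namespace makes it unambiguous which Console is meant.
            MyCompany.Tools.Console.Show("namespaces avoid name clashes");
            System.Console.WriteLine("System.Console is still available");
        }
    }
}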

8) What is a delegate? What is the use of it? Give an example.

ANS: A delegate is a type that references a method. Once a delegate is assigned a method, it behaves exactly like that method. The delegate method can be used like any other method, with parameters and a return value. A delegate is extremely important for C# as it is one of the four entities that can be placed in a namespace. This makes it shareable among classes. Delegates are fully object oriented as they entirely enclose or encapsulate an object instance and a method. A delegate defines a class and extends System.Delegate. It can call any method as long as the method's signature matches the delegate's. This makes delegates ideal for anonymous invocation. The method's signature only includes the return type and the parameter list. Even if two delegates have the same parameter and return types, that is, they share the same signature, they are still considered different delegate types. For example: public delegate int PerformCalculation(int x, int y);
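To make the declaration above concrete, here is a small usage sketch added for illustration: the PerformCalculation delegate is assigned two different methods with matching signatures (Add and Multiply are names made up for this example).

using System;

class DelegateDemo
{
    // Delegate type matching any method that takes two ints and returns an int.
    public delegate int PerformCalculation(int x, int y);

    static int Add(int x, int y) { return x + y; }
    static int Multiply(int x, int y) { return x * y; }

    static void Main()
    {
        // Assign a method to the delegate and invoke it like an ordinary method.
        PerformCalculation calc = Add;
        Console.WriteLine(calc(3, 4));   // 7

        // The same delegate variable can reference another method with a matching signature.
        calc = Multiply;
        Console.WriteLine(calc(3, 4));   // 12
    }
}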

9) Write a program to demonstrate use of enums in C#?

ANS: Example program that uses enums (C#):

using System;

class Program
{
    enum Importance
    {
        None, Trivial, Regular, Important, Critical
    };

    static void Main()
    {
        // 1. Assign an enum value.
        Importance i1 = Importance.Critical;

        // 2. Compare enum values.
        if (i1 == Importance.Trivial)
        {
            Console.WriteLine("Not true");
        }
        else if (i1 == Importance.Critical)
        {
            Console.WriteLine("True");
        }
    }
}

Output of the program:
True

10) What is the use of attributes in C# programs?

ANS: The advantage of using attributes resides in the fact that the information that it contains is inserted into the assembly. This information can then be consumed at various times for all sorts of purposes:

An attribute can be consumed by the compiler. The System.ObsoleteAttribute attribute that we have just described is a good example of how an attribute is used by the compiler. Certain standard attributes which are only destined for the compiler are not stored in the assembly. For example, the System.SerializableAttribute attribute does not directly mark a type but rather tells the compiler that the type can be serialized. Consequently, the compiler sets certain flags on the concerned type which will be consumed by the CLR during execution. Such attributes are also named pseudo-attributes.

An attribute can be consumed by the CLR during execution. For example the .NET Framework offers the System.ThreadStaticAttribute attribute. When a static field is marked with this attribute the CLR makes sure that during the execution, there is only one version of this field per thread.

An attribute can be consumed by a debugger during execution. Hence, the System.Diagnostics.DebuggerDisplayAttribute attribute allows personalizing the display of an element of the code (the state of an object for example) during debugging.

An attribute can be consumed by a tool, for example, the .NET framework offers the System.Runtime.InteropServices.ComVisibleAttribute attribute. When a class is marked with this attribute, the tlbexp.exe tool generates a file which will allow this class to be consumed as if it was a COM object.


An attribute can be consumed by your own code during execution by using the reflection mechanism to access the information. For example, it can be interesting to use such attributes to validate the value of fields in your classes. Such a field must be within a certain range. Another reference field must not be null. A string field can be at most 100 characters. Because of the reflection mechanism, it is easy to write code to validate the state of any marked fields. A little later, we will show you such an example where you can consume attributes by your own code.

An attribute can be consumed by a user who analyses an assembly with a tool such as ildasm.exe or Reflector. Hence you could imagine an attribute which would associate a character string with an element of your code. Since this string is contained in the assembly, it is possible to consult these comments without needing to access the source code.
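The following sketch (an illustration added here, not taken from the original text) shows the reflection scenario described above: a hypothetical MaxLengthAttribute marks a string field, and reflection is used at run time to validate the field's value.

using System;
using System.Reflection;

// Hypothetical attribute recording the maximum allowed length of a string field.
[AttributeUsage(AttributeTargets.Field)]
class MaxLengthAttribute : Attribute
{
    public int Length { get; private set; }
    public MaxLengthAttribute(int length) { Length = length; }
}

class Customer
{
    [MaxLength(100)]
    public string Name = "Saikat";
}

class AttributeDemo
{
    static void Main()
    {
        var customer = new Customer();

        // Use reflection to find marked fields and validate their current values.
        foreach (FieldInfo field in typeof(Customer).GetFields())
        {
            var attr = (MaxLengthAttribute)Attribute.GetCustomAttribute(field, typeof(MaxLengthAttribute));
            if (attr != null)
            {
                string value = (string)field.GetValue(customer);
                bool valid = value != null && value.Length <= attr.Length;
                Console.WriteLine("{0} is {1}", field.Name, valid ? "valid" : "invalid");
            }
        }
    }
}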


Subject: E-Commerce

Subject Code: BSIT - 62

Assignment TA (Compulsory)

1. What are the categories of operations under E-Commerce? Explain.

ANS: The following categories of operations come under e-commerce: Transactions between a supplier/shopkeeper and a buyer, or between two companies, over a public network such as a service provider network (like an ISP). With suitable encryption of data and security for the transaction, the entire operation of selling/buying and settlement of accounts can be automated.

Transactions with the trading partners or between the officers of the company located at different locations.

Information gathering needed for market research.

Information processing for decision making at different levels of management.

Information manipulation for operations and supply chain management.

Maintenance of records needed for legal purposes, including taxation, legal suits etc.

Transactions for information distribution to different retailers, customers, etc., including advertising, sales and marketing.

2. What is the role of encryption in E-Commerce? Explain.

ANS: Messaging services offer solutions for communicating non-formatted (unstructured) data – letters, memos, reports – as well as formatted (structured) data such as purchase orders, shipping notices, and invoices. Messaging supports both synchronous (immediate) and asynchronous (delayed) message delivery and processing. It is not associated with any particular communication protocol. No preprocessing is necessary, although there is an increasing need for programs to interpret the message. Messaging is well suited for both client-server and peer-to-peer computing models. The main disadvantages of messaging are the new types of applications it enables – which appear to be more complex, especially to traditional programmers – and the jungle of standards it involves. Also, security, privacy, and confidentiality through data encryption and authentication techniques are important issues that need to be resolved.

3. Explain the architecture framework of electronic commerce.

ANS: The electronic commerce application architecture consists of six layers of functionality, or services: (1) applications; (2) brokerage services, data or transaction management; (3) interface and support layers; (4) secure messaging, security, and electronic document interchange; (5) middleware and structured document interchange; and (6) network infrastructure and basic communications services.

Electronic Commerce Application Services: The application services layer of e-commerce will be comprised of existing and future applications built on the innate architecture. Three distinct classes of electronic commerce applications can be distinguished: customer-to-business, business-to-business, and intra-organization.

Information Brokerage and Management: The information brokerage and management layer provides service integration through the notion of information brokerages, the development of which is necessitated by increasing information resource fragmentation. We use the notion of information brokerage to represent an intermediary who provides service integration between customers and information providers, given some constraint such as a low price, fast service, or profit maximization for a client. Information brokerage does more than just searching. It addresses the issue of adding value to the information that is retrieved. For instance, in foreign exchange trading, information is retrieved about the latest currency exchange rates in order to hedge currency holdings to minimize risk and maximize profit. With multiple transactions being the norm in the real world, service integration becomes critical.

Interface and Support Services: Interface and support services will provide interfaces for electronic commerce applications such as interactive catalogs and will support directory services – functions necessary for information search and access. Interactive catalogs are the customized interface to consumer applications such as home shopping. An interactive catalog is an extension of the paper-based catalog and incorporates additional features such as sophisticated graphics and video to make the advertising more attractive.

Secure Messaging and Structured Document Interchange Services: The importance of the fourth layer, secured messaging, is clear. Broadly defined, messaging is the software that sits between the network infrastructure and the clients or electronic commerce applications, masking the peculiarities of the environment. In general, messaging products are not applications that solve problems; they are more enablers of the applications that solve problems. Messaging services offer solutions for communicating non-formatted (unstructured) data – letters, memos, reports – as well as formatted (structured) data such as purchase orders, shipping notices, and invoices. Messaging supports both synchronous (immediate) and asynchronous (delayed) message delivery and processing. It is not associated with any particular communication protocol. No preprocessing is necessary, although there is an increasing need for programs to interpret the message. Messaging is well suited for both client-server and peer-to-peer computing models.

Middleware Services: Middleware is a relatively new concept that emerged only recently. With the growth of networks, client-server technology, and all other forms of communication between unlike platforms, the problems of getting all the pieces to work together grew. In simple terms, middleware is the ultimate mediator between diverse software programs that enables them to talk to one another. Another reason for middleware is the computing shift from application-centric to data-centric. To achieve data-centric computing, middleware services focus on three elements: transparency, transaction security and management, and distributed object management and services.

Transparency: Transparency implies that users should be unaware that they are accessing multiple systems. Transparency is essential for dealing with higher-level issues than the physical media and interconnection that the underlying network infrastructure is in charge of. The ideal picture is one of a "virtual" network: a collection of work-group, departmental, enterprise, and inter-enterprise LANs that appears to the end user or client application to be a seamless and easily accessed whole. Transparency is accomplished using middleware that facilitates a distributed computing environment. The goal is for applications to send a request to the middleware layer, which then satisfies the request any way it can, using remote information.

4. List the OMC's [Order Management Cycle] generic steps.

ANS: The OMC [Order Management Cycle] has the following generic steps: (i) Order Planning and Order Generation, (ii) Cost Estimation and Pricing, (iii) Order Receipt and Entry, (iv) Order Selection and Prioritization, (v) Order Scheduling, (vi) Order Fulfillment and Delivery, (vii) Order Billing and Account/Payment Management, and (viii) Post-sales Service.


5. Explain mercantile models from the merchant’s perspective?

ANS: The order-to-delivery cycle from the merchant's perspective has been managed with an eye toward standardization and cost. To achieve a better understanding, it is necessary to examine the order management cycle (OMC) that encapsulates the more traditional order-to-delivery cycle. The OMC has the following generic steps.

i. Order Planning and Order Generation: The business process begins long before an actual order is placed by the customer. The first step is order planning, which leads into order generation. Orders are generated in a number of ways in the e-commerce environment. The sales force broadcasts ads (direct marketing), sends personalized e-mail to customers (cold calls), or creates a WWW page.

ii. Cost Estimation and Pricing: Pricing is the bridge between customer needs and company capabilities. Pricing at the individual order level depends on understanding the value to the customer that is generated by each order, evaluating the cost of filling each order, and instituting a system that enables the company to price each order based on its value and cost. Although order-based pricing is difficult work that requires meticulous thinking and deliberate execution, the potential for greater profits makes it worth the effort.

iii. Order Receipt and Entry: After an acceptable price quote, the customer enters the order receipt and entry phase of the OMC. Traditionally, this was under the purview of departments variously titled customer service, order entry, the inside sales desk, or customer liaison. These departments are staffed by customer service representatives, usually either very experienced, long-term employees or totally inexperienced trainees. In either case, these representatives are in constant contact with customers.

iv. Order Selection and Prioritization: Customer service representatives are also often responsible for choosing which orders to accept and which to decline. In fact, not all customer orders are created equal; some are simply better for the business than others. Another frequently ignored issue concerns the importance of order selection and prioritization. Companies that put effort into order selection and link it to their business strategy stand to make more money.

v. Order Scheduling: During the order scheduling phase the prioritized orders get slotted into an actual production or operational sequence. This task is difficult because the different functional departments – sales, marketing, customer service, operations, or production – may have conflicting goals. Communication between the functions is often nonexistent, with customer service reporting to sales and physically separated from production scheduling, which reports to manufacturing or operations. The result is a lack of interdepartmental coordination.

vi. Order Fulfillment and Delivery: During the order fulfillment and delivery phase the actual provision of the product or service is made. While the details vary from industry to industry, in almost every company this step has become increasingly complex. Often, order fulfillment involves multiple functions and locations. The more complicated the task, the more coordination is required across the organization.

vii. Order Billing and Account/Payment Management: After the order has been fulfilled and delivered, billing is typically handled by the finance staff, who view their job as getting the bill out efficiently and collecting quickly.

viii. Post-sales Service: This phase plays an increasingly important role in all elements of a company's profit equation: customer value, price, and cost. Depending on the specifics of the business, it can include such elements as physical installation of a product, repair and maintenance, customer training, and equipment upgrading and disposal. Because of the information conveyed and intimacy involved, post-sales service can affect customer satisfaction and company profitability for years.

6. What are the three types of electronic tokens? Explain.

ANS: Electronic tokens are of three types:

Cash or real-time: Transactions are settled with the exchange of electronic currency. An example of an on-line currency exchange is electronic cash (e-cash).

Debit or prepaid: Users pay in advance for the privilege of getting information. Examples of prepaid payment mechanisms are smart cards and electronic purses that store electronic money.

Credit or postpaid: The server authenticates the customer and verifies with the bank that funds are adequate before the purchase. Examples of postpaid mechanisms are credit/debit cards and electronic checks.

7. What is e-cash? Give the properties of e-cash.

ANS: Electronic cash (e-cash) is a new concept in on-line payment systems because it combines computerized convenience with security and privacy that improve on paper cash. Its versatility opens up a host of new markets and applications. E-cash presents some interesting characteristics that should make it an attractive alternative for payment over the Internet. E-cash focuses on replacing cash as the principal payment vehicle in consumer-oriented electronic payments.

The predominance of cash indicates an opportunity for innovative business practice that revamps the purchasing process where consumers are heavy users of cash. To really displace cash, electronic payment systems need to have some qualities of cash that current credit and debit cards lack. For example, cash is negotiable, meaning it can be given or traded to someone else. Cash is legal tender, meaning the payee is obligated to take it. Cash is a bearer instrument, meaning that possession is prima facie proof of ownership. Also, cash can be held and used by anyone, even those who don't have a bank account, and cash places no risk on the part of the acceptor that the medium of exchange may not be good.

Properties of e-cash: E-cash must have the following four properties: monetary value, interoperability, retrievability, and security.

E-cash must have a monetary value; it must be backed by either cash (currency), bank-authorized credit, or a bank-certified cashier’s check. When e-cash created by one bank is accepted by others, reconciliation must occur without any problems. Stated another way, e-cash without proper bank certification carries the risk that when deposited, it might be returned for insufficient funds. E-cash must be interoperable – that is, exchangeable as payment for other e-cash, paper cash, goods or services, lines of credit, deposits in banking accounts, bank notes or obligations, electronic benefits transfers, and the like. E-cash must be storable and retrievable. The cash could be stored on a remote computer’s memory, in smart cards, or in other easily transported standard or special-purpose devices. Because it might be easy to create counterfeit cash that is stored in a computer, it might be preferable to store cash on a dedicated device that cannot be altered. This device should have a suitable interface to facilitate personal authentication using passwords or other means and a display so that the user can view the card’s contents. E-cash should not be easy to copy or tamper with while being exchanged; this includes preventing or detecting duplication and double-spending. Counterfeiting poses a particular problem, since a counterfeiter may, in the Internet environment, be anywhere in the world and consequently be difficult to catch without appropriate international agreements. Detection is essential in order to audit whether prevention is working. Then there is the tricky issue of double spending (DFN88). For instance, one could use e-cash simultaneously to buy something in Japan, India, and England. Preventing double-spending from occurring is extremely difficult if multiple banks are involved in the transaction. For this reason, most systems rely on post-fact detection and punishment.


Subject: Advanced Computer Networking

Subject Code: BSIT – 63

Assignment TA (Compulsory)

1. What is DNS? Why is DNS required? What is the basis for choosing a domain for an organization?

ANS: DNS, the Domain Name System, is a distributed hierarchical naming system for computers, services, or any resource connected to the Internet or a private network. It associates various information with the domain names assigned to each of the participants. DNS is required because domain names are alphabetic and therefore easier to remember, while the Internet is really based on IP addresses. Every time we use a domain name, a DNS service must translate the name into the corresponding IP address. For example, the domain name www.example.com might translate to 198.105.232.4. As the basis for choosing a domain for an organization: attaching random names to IP addresses and managing them is nontrivial, so a structured approach is needed.

The best way is to employ a hierarchy similar to the postal addressing system: Country, State, District, Taluk, City, Street.

The Internet is divided into about 200 domains at the top level.

Each top-level domain is further divided into sub-domains.

Each sub-domain is further divided into one or more levels of sub-domains.

Top-level domains can be split into two major classes:

Generic – generic domain names include com, int, mil, gov, org, net, edu; biz, info, name (recent additions, November 2000); aero, coop, museum (new ones).

Country – each country has one entry, e.g. in, ae, us, jp.

Top-level domain names should be unambiguous and non-contentious.
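As a small illustration of what a resolver does, the sketch below (added here, in the document's C# context) asks the configured DNS service to translate a host name into IP addresses; www.example.com is just a placeholder name.

using System;
using System.Net;

class DnsLookupDemo
{
    static void Main()
    {
        // Ask the configured DNS service to translate a domain name into IP addresses.
        IPAddress[] addresses = Dns.GetHostAddresses("www.example.com");

        foreach (IPAddress address in addresses)
        {
            Console.WriteLine("www.example.com -> {0}", address);
        }
    }
}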

2. What are the different components of the Internet cloud? How is the WWW connected with the Internet cloud? Explain.


ANS: A cloud client consists of computer hardware and/or computer software that relies on cloud computing for application delivery, or that is specifically designed for delivery of cloud services and that, in either case, is essentially useless without it. Examples include some computers, phones and other devices, operating systems and browsers. Cloud application services, or "Software as a Service (SaaS)", deliver software as a service over the Internet, eliminating the need to install and run the application on the customer's own computers and simplifying maintenance and support. Key characteristics include:

Network-based access to, and management of, commercially available (i.e., not custom) software

Activities that are managed from central locations rather than at each customer's site, enabling customers to access applications remotely via the Web

Application delivery that typically is closer to a one-to-many model (single instance, multi-tenant architecture) than to a one-to-one model, including architecture, pricing, partnering, and management characteristics

Centralized feature updating, which obviates the need for downloadable patches and upgrades.

Cloud platform services or "Platform as a Service (PaaS)" deliver a computing platform and/or solution stack as a service, often consuming cloud infrastructure and sustaining cloud applications. It facilitates deployment of applications without the cost and complexity of buying and managing the underlying hardware and software layers. Cloud infrastructure services or "Infrastructure as a Service (IaaS)" delivers computer infrastructure, typically a platform virtualization environment as a service. Rather than purchasing servers, software, data center space or network equipment, clients instead buy those resources as a fully outsourced service. The service is typically billed on a utility computing basis and amount of resources consumed (and therefore the cost) will typically reflect the level of activity. It is an evolution of virtual private server offerings. The servers layer consists of computer hardware and/or computer software products that are specifically designed for the delivery of cloud services, including multi-core processors, cloud-specific operating systems and combined offerings.

The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope that are linked by a broad array of electronic and optical networking technologies. The Internet carries a vast array of information resources and services, most notably the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail. Most traditional communications media, such as telephone and television services, are reshaped or redefined using the technologies of the Internet, giving rise to services such as Voice over Internet Protocol (VoIP) and IPTV. Newspaper publishing has been reshaped into Web sites, blogging, and web feeds. The Internet has enabled or accelerated the creation of new forms of human interactions through instant messaging, Internet forums, and social networking sites.

3. What are the advantages of good routing protocol? Explain one of the routing protocols in detail.

ANS: The main objective of the network layer is to deliver packets to their destination. The delivery of packets is often accomplished using either a connection-oriented or a connectionless network service. In a connection-oriented approach, the network layer protocol first makes a connection with the network layer protocol at the remote site before sending a packet. When the connection is established, a sequence of packets from the same source to the same destination can be sent one after another. In this case, there is a relationship between packets. They are sent on the same path where they follow each other. A packet is logically connected to the packet traveling before it and to the packet traveling after it. When all packets of a message have been delivered, the connection is terminated. In a connection-oriented approach, the decision about the route of a sequence of packets with the same source and destination addresses can be made only once, when the connection is established. The network device will not compute the route again and again for each arriving packet. In a connectionless situation, the network protocol treats each packet independently, with each packet having no relationship to any other packet. The packets in a message may not travel the same path to their destination. The Internet Protocol (IP) is a connectionless protocol. It handles each packet transfer in a separate way. This means each packet may travel through different networks before reaching its destination network. Thus the packets move through heterogeneous networks using the connectionless IP protocol.

DIRECT AND INDIRECT ROUTING: There exist two approaches for the final delivery of IP packets. In direct delivery, the final destination of the packet is a host connected to the same physical network as the deliverer. Direct delivery occurs when the source and destination of the packet are located on the same physical network, or when the delivery is between the last router and the destination host. The sender can easily determine if the delivery is direct. It can extract the network address of the destination (masking all the bits of the host address) and compare this address with the addresses of the networks to which it is connected. If a match is found, the delivery is direct. In direct delivery, the sender uses the destination IP address to find the destination physical address. The IP software then delivers the destination IP address along with the destination physical address to the data link layer for actual delivery. In practice a protocol called the Address Resolution Protocol (ARP) dynamically maps an IP address to the corresponding physical address. Note that the IP address is a four-byte code whereas the physical address is a six-byte code. The physical address is also called the MAC address, Ethernet address or hardware address. When the network part of the IP address does not match the network address to which the host is connected, the packet is delivered indirectly. In an indirect delivery, the packet goes from router to router until it reaches the one connected to the same physical network as its final destination. Note that a delivery always involves one direct delivery and zero or more indirect deliveries. Note also that the last delivery is always a direct delivery. In an indirect delivery, the sender uses the destination IP address and a routing table to find the IP address of the next router to which the packet should be delivered. The sender then uses the ARP protocol to find the physical address of the next router. Note that in direct delivery, the address mapping is between the IP address of the final destination and the physical address of the final destination. In an indirect delivery, the address mapping is between the IP address of the next router and the physical address of the next router. Routing tables are used in the routers. The routing table contains the list of IP addresses of neighboring routers. When a router has received a packet to be forwarded, it looks at this table to find the route to the final destination. However, this simple solution is impractical today in an internetwork such as the Internet because the number of entries in the routing table makes table lookups inefficient. Several techniques can make the size of the routing table manageable and handle such issues as security.
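The masking step described above can be sketched as follows (an illustrative example added here, with made-up addresses): the sender ANDs both its own address and the destination address with the subnet mask and compares the resulting network addresses to decide between direct and indirect delivery.

using System;
using System.Net;

class DeliveryCheck
{
    // Compute the network address by ANDing the IP address with the subnet mask.
    static IPAddress NetworkAddress(IPAddress ip, IPAddress mask)
    {
        byte[] ipBytes = ip.GetAddressBytes();
        byte[] maskBytes = mask.GetAddressBytes();
        byte[] network = new byte[ipBytes.Length];
        for (int i = 0; i < ipBytes.Length; i++)
            network[i] = (byte)(ipBytes[i] & maskBytes[i]);
        return new IPAddress(network);
    }

    static void Main()
    {
        // Illustrative addresses only.
        IPAddress host = IPAddress.Parse("192.168.1.10");        // sender's address
        IPAddress destination = IPAddress.Parse("192.168.1.99"); // packet's destination
        IPAddress mask = IPAddress.Parse("255.255.255.0");

        bool direct = NetworkAddress(host, mask).Equals(NetworkAddress(destination, mask));
        Console.WriteLine(direct ? "Direct delivery" : "Indirect delivery (send to a router)");
    }
}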

4. What is streaming? Give some examples of streaming. What are the challenges in designing multimedia networking?

ANS: Streaming: In a streaming stored audio/video application, a client begins playout of the audio/video a few seconds after it begins receiving the file from the server. This means that the client will be playing out audio/video from one location in the file while it is receiving later parts of the file from the server. This technique, known as streaming, avoids having to download the entire file (and incurring a potentially long delay) before beginning playout. There are many streaming multimedia products, such as RealPlayer, QuickTime and Media Player.

Examples are streaming stored audio/video, streaming live audio/video, and real-time interactive audio/video.

Packet Loss: Consider one of the UDP segments generated by our Internet phone application. The UDP segment is encapsulated in an IP datagram. As the datagram wanders through the network, it passes through buffers (that is, queues) in the routers in order to access outbound links. It is possible that one or more of the buffers in the route from sender to receiver is full and cannot admit the IP datagram. In this case, the IP datagram is discarded, never to arrive at the receiving application.


End-to-End Delay: End-to-end delay is the accumulation of transmission, processing, and queuing delays in routers; propagation delays in the links; and end-system processing delays. For highly interactive audio applications, such as Internet phone, keeping the end-to-end delay small is essential for the application to remain usable.

5. What is the purpose of E-mail? What are the tools provided in E-mail? Mention different E-mail service providers and their special features.

ANS: Electronic mail is the most widely used tool in the present world for fast and reliable communication. It is based on RFC 822.

E-mail system supports five basic functions.

1) Composition: Helps in creating messages and answers; it supports many functions, such as inserting the recipient's address after extracting it from the original message when replying.

2) Transfer: Causes movement of the message to the destination. Connection establishment and passage of the message are done here.

3) Reporting: Involves reporting back to the originator what happened to the e-mail – whether it was delivered, lost or abandoned.

4) Displaying: Involves invoking tools to display incoming messages, including attachments.

Ex: Adobe Reader to read a PDF file attachment.

5) Disposition: Involves reading, discarding, saving, replying, forwarding, etc.

Additional features of E-mail system

Forwarding: forward email to another email ID

Mail box: storing/retrieving email

Mailing list: Send copies to the entire email list.

Other functions: CC: carbon copy

BCC: Blind copy

High priority

Popular E-mail service providers include Yahoo! Mail, Gmail, Hotmail, AOL, etc.
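As a rough illustration of composition and transfer (including CC and BCC), here is a sketch added here using the .NET SmtpClient class; the addresses and the server name smtp.example.com are placeholders, and a real SMTP server with credentials would be needed to actually send mail.

using System.Net.Mail;

class EmailDemo
{
    static void Main()
    {
        // Composition: build the message, including CC and BCC copies.
        var message = new MailMessage("alice@example.com", "bob@example.com")
        {
            Subject = "Assignment submission",
            Body = "Please find the assignment attached."
        };
        message.CC.Add("carol@example.com");   // carbon copy
        message.Bcc.Add("dave@example.com");   // blind carbon copy

        // Transfer: hand the message to an SMTP server for delivery.
        using (var client = new SmtpClient("smtp.example.com"))
        {
            client.Send(message);
        }
    }
}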

6. How does UBL work? Explain the various steps of server side operation. Give an example.


ANS: XML is only the foundation on which additional standards can be defined to achieve the goal of true interoperability. The Universal Business Language (UBL) initiative is the next step in achieving this goal. The UBL effort addresses this problem by building on the work of the ebXML initiative. ebXML is a joint project of UN/CEFACT, the world body responsible for international Electronic Data Interchange (EDI), and the Organization for the Advancement of Structured Information Standards (OASIS), a nonprofit consortium dedicated to the open development of XML languages. UBL is organized as an OASIS Technical Committee to guarantee a rigorous, open process for the standardization of the XML business language. The development of UBL within OASIS also helps ensure a fit with other essential ebXML specifications.

Server Side Operation: Upon receiving a request for a URL, the server side performs the following operations.

Accepts a TCP connection from a client.

Get the name of the file requested.

Get the file from the disk.

Return the file to the client.

Release the TCP connection

The problem with this design is that a disk access is needed for every request.

A SCSI disk has an access time of about 5 ms, so it permits about 200 disk accesses per second.

The rate is even lower if the files requested are large.

To overcome this, the web server maintains a large cache which holds the 'n' most recently used files. Whenever a request comes in, the server first looks in the cache and responds from it if possible.

To make the server faster, multithreading is adopted.

One such design combines several ideas in one server. The server has a front-end module and k processing modules (threads). The processing modules have access to the cache. The front-end module accepts an incoming request and passes it to one of the processing modules. The processing module checks the cache and responds if the file is there; otherwise it performs a disk access, caches the file and also sends the file to the client. At any instant of time t, out of the k modules, k - x modules may be free to take requests while x modules may be in the queue waiting for disk access or cache search. If the number of disks is increased, the speed can be increased further.


[Diagram: incoming requests pass through a front end to k processing modules (threads) that share a cache and produce the outgoing replies.]

Each processing module does the following:

1. Resolve the name of the Web page requested, e.g. http://www.cisco.com.

2. If there is no file name in the URL, the default is index.html.

3. Perform access control on the client: check to see if there are any restrictions.

4. Perform access control on the web page: check access restrictions on the page itself.

5. Check the cache.

6. Fetch the requested page.

7. Determine the MIME type.

8. Take care of miscellaneous odds and ends (e.g., building a user profile).

9. Return the reply to the client.

10. Make an entry in the server log.

If too many requests come in each second, the CPU will not be able to handle the processing load, irrespective of the number of disks used in parallel. The solution is to add more machines, with replicated disks. This is called a server farm. A front end still accepts incoming requests and sprays them over the CPUs rather than over multiple threads, to reduce the load on each machine. The individual machines are again multithreaded with multiple disks.

Note that the cache is local to each machine, and that the TCP connection should terminate at the processing node and not at the front end.
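The cached, multithreaded design described above can be sketched roughly as follows (an illustrative toy example added here, not the actual server code): a front end queues incoming file requests, k processing-module threads consume them, and a shared cache avoids repeated disk reads.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

class MiniWebServerSketch
{
    // Shared cache of recently requested files (file name -> contents).
    static readonly ConcurrentDictionary<string, string> Cache =
        new ConcurrentDictionary<string, string>();

    // Front end: queue of incoming requests handed to the processing modules.
    static readonly BlockingCollection<string> Requests = new BlockingCollection<string>();

    static void ProcessingModule()
    {
        foreach (string fileName in Requests.GetConsumingEnumerable())
        {
            // Look in the cache first; only go to disk on a miss.
            string content = Cache.GetOrAdd(fileName, name =>
                File.Exists(name) ? File.ReadAllText(name) : "404 Not Found");
            Console.WriteLine("Reply for {0}: {1} bytes", fileName, content.Length);
        }
    }

    static void Main()
    {
        const int k = 4; // number of processing modules (threads)
        var modules = new List<Task>();
        for (int i = 0; i < k; i++)
            modules.Add(Task.Run((Action)ProcessingModule));

        // The front end simply sprays incoming requests over the processing modules.
        Requests.Add("index.html");
        Requests.Add("index.html"); // the second request is served from the cache
        Requests.CompleteAdding();

        Task.WaitAll(modules.ToArray());
    }
}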

7. What are the criteria considered to develop a routing protocol? Explain the OSPF routing protocol in detail.

ANS: There exist two approaches for the final delivery of IP packets. In direct delivery, the final destination of the packet is a host connected to the same physical network as the deliverer. Direct delivery occurs when the source and destination of the packet are located on the same physical network, or when the delivery is between the last router and the destination host. The sender can easily determine if the delivery is direct. It can extract the network address of the destination (masking all the bits of the host address) and compare this address with the addresses of the networks to which it is connected. If a match is found, the delivery is direct. In direct delivery, the sender uses the destination IP address to find the destination physical address. The IP software then delivers the destination IP address along with the destination physical address to the data link layer for actual delivery. In practice a protocol called the Address Resolution Protocol (ARP) dynamically maps an IP address to the corresponding physical address. Note that the IP address is a four-byte code whereas the physical address is a six-byte code. The physical address is also called the MAC address, Ethernet address or hardware address. When the network part of the IP address does not match the network address to which the host is connected, the packet is delivered indirectly. In an indirect delivery, the packet goes from router to router until it reaches the one connected to the same physical network as its final destination.

Note that a delivery always involves one direct delivery but zero or more indirect deliveries.

Note also that the last delivery is always a direct delivery. In an indirect delivery, the sender uses the destination IP address and a routing table to find the IP address of the next router to which the packet should be delivered.

The sender then uses the ARP protocol to find the physical address of the next router. Note that in direct delivery, the address mapping is between the IP address of the final destination and the physical address of the final destination.

In an indirect delivery, the address mapping is between the IP address of the next router and the physical address of the next router.

Routing tables are used in the routers. The routing table contains the list of IP addresses of neighboring routers. When a router has received a packet to be forwarded, it looks at this table to find the route to the final destination. However, this simple solution is impractical today in an internetwork such as the Internet because the number of entries in the routing table makes table lookups inefficient. Several techniques can make the size of the routing table manageable and handle such issues as security.

OPEN SHORTEST PATH FIRST (OSPF): Open Shortest Path First (OSPF) is a routing protocol developed for Internet Protocol (IP) networks by the Interior Gateway Protocol (IGP) working group of the Internet Engineering Task Force (IETF). The working group was formed in 1988 to design an IGP based on the Shortest Path First (SPF) algorithm for use in the Internet. Similar to the Interior Gateway Routing Protocol (IGRP), OSPF was created because, in the mid-1980s, the Routing Information Protocol (RIP) was increasingly incapable of serving large, heterogeneous internetworks. This section examines the OSPF routing environment, the underlying routing algorithm, and general protocol components. OSPF was derived from several research efforts, including Bolt, Beranek and Newman's (BBN's) SPF algorithm developed in 1978 for the ARPANET (a landmark packet-switching network developed in the early 1970s by BBN), Dr. Radia Perlman's research on fault-tolerant broadcasting of routing information (1988), BBN's work on area routing (1986), and an early version of OSI's Intermediate System-to-Intermediate System (IS-IS) routing protocol. OSPF has two primary characteristics. The first is that the protocol is open, which means that it is in the public domain. The OSPF specification is published as Request for Comments (RFC) 1247. The second principal characteristic is that OSPF is based on the SPF algorithm, which is sometimes referred to as the Dijkstra algorithm, named for the person credited with its creation. OSPF is a link-state routing protocol that calls for the sending of link-state advertisements (LSAs) to all other routers within the same hierarchical area. Information on attached interfaces, metrics used, and other variables is included in OSPF LSAs. As OSPF routers accumulate link-state information, they use the SPF algorithm to calculate the shortest path to each node. As a link-state routing protocol, OSPF contrasts with RIP and IGRP, which are distance-vector routing protocols. Routers running a distance-vector algorithm send all or a portion of their routing tables in routing-update messages to their neighbors.
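Since OSPF's route computation is based on the SPF (Dijkstra) algorithm, the following small sketch (added here for illustration, with a made-up three-node topology) shows how shortest-path costs are computed from link-state information.

using System;
using System.Collections.Generic;

class ShortestPathFirst
{
    // Dijkstra's SPF algorithm on a small link-state style graph.
    // Graph: node -> list of (neighbour, link cost). Names and costs are illustrative.
    static Dictionary<string, int> Dijkstra(
        Dictionary<string, List<Tuple<string, int>>> graph, string source)
    {
        var dist = new Dictionary<string, int>();
        var visited = new HashSet<string>();
        foreach (var node in graph.Keys) dist[node] = int.MaxValue;
        dist[source] = 0;

        while (visited.Count < graph.Count)
        {
            // Pick the unvisited node with the smallest known distance.
            string current = null;
            foreach (var kv in dist)
                if (!visited.Contains(kv.Key) && (current == null || kv.Value < dist[current]))
                    current = kv.Key;
            if (current == null || dist[current] == int.MaxValue) break;
            visited.Add(current);

            // Relax the links of the chosen node.
            foreach (var edge in graph[current])
                if (dist[current] + edge.Item2 < dist[edge.Item1])
                    dist[edge.Item1] = dist[current] + edge.Item2;
        }
        return dist;
    }

    static void Main()
    {
        var graph = new Dictionary<string, List<Tuple<string, int>>>
        {
            { "A", new List<Tuple<string, int>> { Tuple.Create("B", 1), Tuple.Create("C", 4) } },
            { "B", new List<Tuple<string, int>> { Tuple.Create("A", 1), Tuple.Create("C", 2) } },
            { "C", new List<Tuple<string, int>> { Tuple.Create("A", 4), Tuple.Create("B", 2) } },
        };

        foreach (var kv in Dijkstra(graph, "A"))
            Console.WriteLine("Shortest path cost A -> {0} = {1}", kv.Key, kv.Value);
    }
}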

8. Why is BGP needed? Explain how BGP is used in place of an IGP.

ANS: The Border Gateway Protocol (BGP) is the protocol backing the core routing decisions on the Internet. It maintains a table of IP networks or 'prefixes' which designate network reachability among autonomous systems (AS). It is described as a path vector protocol. BGP does not use traditional Interior Gateway Protocol (IGP) metrics, but makes routing decisions based on path, network policies and/or rule sets. For this reason, it is more appropriately termed a reachability protocol rather than a routing protocol. BGP was created to replace the Exterior Gateway Protocol (EGP) routing protocol to allow fully decentralized routing, in order to allow the removal of the NSFNet Internet backbone network. This allowed the Internet to become a truly decentralized system. Since 1994, version four of BGP has been in use on the Internet. All previous versions are now obsolete. The major enhancement in version 4 was support of Classless Inter-Domain Routing and the use of route aggregation to decrease the size of routing tables. Since January 2006, version 4 is codified in RFC 4271, which went through more than 20 drafts based on the earlier RFC 1771 version 4. The RFC 4271 version corrected a number of errors, clarified ambiguities and brought the RFC much closer to industry practices. Most Internet users do not use BGP directly. Since most Internet service providers must use BGP to establish routing between one another (especially if they are multihomed), it is one of the most important protocols of the Internet. Compare this with Signaling System 7 (SS7), which is the inter-provider core call setup protocol on the PSTN. Very large private IP networks use BGP internally. An example would be the joining of a number of large Open Shortest Path First (OSPF) networks where OSPF by itself would not scale to size. Another reason to use BGP is multihoming a network for better redundancy, either to multiple access points of a single ISP (RFC 1998) or to multiple ISPs. BGP neighbors, or peers, are established by manual configuration between routers to create a TCP session on port 179. A BGP speaker will periodically send 19-byte keep-alive messages to maintain the connection (every 60 seconds by default). Among routing protocols, BGP is unique in using TCP as its transport protocol. When BGP is running inside an autonomous system (AS), it is referred to as Internal BGP (IBGP or Interior Border Gateway Protocol). When it runs between autonomous systems, it is called External BGP (EBGP or Exterior Border Gateway Protocol). Routers on the boundary of one AS exchanging information with another AS are called border or edge routers. In the Cisco operating system, IBGP routes have an administrative distance of 200, which is less preferred than either external BGP or any interior routing protocol. Other router implementations also prefer EBGP to IGPs, and IGPs to IBGP.


Subject: Computer Ethics and Cyber Laws

Subject Code: BSIT – 64

Assignment TA (Compulsory)

1. How is computer ethics defined? Explain its evolution. What is computer crime? Explain different computer crimes.

ANS: Computer ethics is the analysis of the nature and social impact of computer technology and the formulation and justification of policies for the ethical use of such technology. Computer ethics examines the ethical issues surrounding computer usage and the connection between ethics and technology. It includes consideration of both personal and social policies for the ethical use of computer technology. The goal is to understand the impact of computing technology upon human values, minimize the damage that technology can do to human values, and identify ways to use computer technology to advance human values. The term computer ethics was coined in the mid 1970s by Walter Maner to refer to that field of applied professional ethics dealing with ethical problems aggravated, transformed or created by computer technology (James H. Moor, 1997). The computer revolution is occurring in two stages. The first stage was that of "technology introduction", in which computer technology was developed and refined. The second stage is one of "technological permeation", in which technology gets integrated into everyday human activities. Thus the evolution of computer ethics is tied to a wide range of philosophical theories and methodologies, rooted in the understanding of the technological revolution from introduction to permeation. In the era of computer "viruses" and spying by "hacking", computer security is a topic of concern in the field of computer ethics. Computer crime can be looked at from five different aspects:

1. Privacy and confidentiality

2. Integrity – Ensuring that data and programs are not modified without proper authority

3. Unimpaired service

4. Consistency: Ensuring that data and behavior we see today will be the same tomorrow.

5. Controlling access to resources.


Malicious kinds of software, or programmed threats, provide a significant challenge to computer security. These include "viruses", which cannot run on their own but are inserted into other computer programs; "worms", which are software programmed to move from machine to machine across networks and can consist of parts of themselves running on different machines; "Trojan horses", which appear to be one sort of program but actually do damage behind the scenes; "logic bombs", which check for particular favorable conditions and then execute when such conditions arise; and "bacteria" or "rabbits", which are programs that multiply rapidly and fill up the computer's memory. Computer crimes are normally committed by personnel who have permission to use the system. A "hacker" is one who enters the system without authorization. It is done to steal data, commit vandalism or merely explore the system to see how it works and what it contains. However, every act of hacking is harmful, as it involves unauthorized entry into a system and thus in fact trespassing into a person's private domain. Even if the hacker did indeed make no changes, a computer's owner must run through a costly and time-consuming investigation of the compromised system (Spafford, 1992).

2. What are the professional responsibilities? Discuss the impact of globalization on computer ethics.

ANS: Computer professionals have specialized knowledge and have positions with authority and access to various forms of confidential information. Hence they are in a position to significantly impact various aspects of people’s lives. Along with such power to change the world comes the duty to exercise the power responsibly (Gotterbarn, 2001). Computer professionals find themselves in a variety of professional relationships with other people (Johnson, 1994) including:

Employer – employee; Client – professional; Professional – professional; Society – professional.

These relationships involve a diversity of interests, and sometimes these interests can come into conflict with each other. Responsible computer professionals, therefore, will be aware of possible conflicts of interest and try to avoid them. Professional organizations like the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) have established codes of ethics, curriculum guidelines and accreditation requirements to help computer professionals understand and manage ethical responsibilities. They have also adopted Codes of Ethics for their members which include "general moral imperatives" such as avoiding harm to others and being honest and trustworthy, and "specific professional responsibilities" like acquiring and maintaining professional competence and knowing and respecting existing laws pertaining to professional work. The IEEE Code of Ethics includes such principles as avoiding real or perceived conflicts of interest whenever possible. Computer ethics is rapidly evolving into a broader and even more important field, which might reasonably be called "global information ethics". Global networks like the Internet, and especially the World Wide Web, are connecting people all over the globe. Efforts are on to develop mutually agreed standards of conduct and to advance and defend human values. Some of the global issues being debated are:

Global laws: Over two hundred countries are already interconnected by the internet. Given this situation, what is the effect and impact of the law of one particular country on the rest of the world? Issues regarding freedom of speech, protection of intellectual property, invasion of privacy vary from country to country. The framing of common laws pertaining to such issues to ensure compliance by all the countries is one of the foremost questions being debated.

Global cyber business: Technology is growing rapidly to enable electronic privacy and security on the internet so that international business transactions can be conducted safely. With such advanced technology in place, there will be a rapid expansion of global cyber business. Nations with a technological infrastructure already in place will enjoy rapid economic growth, while the rest of the world lags behind. This disparity in levels of technology will fuel a political and economic fallout, which could further widen the gap between the rich and the poor.

Global education: Inexpensive access to the global information net for the rich and the poor alike is necessary for everyone to have access to daily news from a free press, free texts, documents, and the political, religious and social practices of peoples everywhere. However, the impact of this sudden and global education on different communities, cultures and religious practices is likely to be profound. The impact on lesser-known universities would be felt as older, well-established universities begin offering degrees and knowledge modules over the internet.

3. What do you mean by professional ethics? Explain the code of ethics. What are the basic features of the Internet? What is cyber crime? What is electronic warfare?


ANS: Professional ethics: A code fulfilling its function will change the approach of many to the internet and provide a counter-pressure against the tendency to behave unethically. It would also help many to realize that their behavior may be unethical. The standards expected and the realization of one's conduct will make them pause and think about the consequences of their actions. This is summed up in the preamble to the Software Engineering Code of Ethics and Professional Practice: "These principles should influence internet developers/users to consider broadly who is affected by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer. In all these judgments, concern for the health, safety and welfare of the public is primary; that is, 'public interest' is central to the code." (Donald Gotterbarn, 1994).

Code of ethics: Codes of ethics are more aspirational. They are mission statements emphasizing professional objectives and vision. The degree of enforcement possible depends on the type of code. A code of ethics, being primarily aspirational, uses no more than light coercion. Violations of codes of conduct generally carry sanctions ranging from a warning to exclusion from the professional bodies. Violations of codes of practice may lead to legal action on the grounds of malpractice or negligence. The type of code used to guide behavior thus affects the type of enforcement.

The hierarchy of codes parallels the three levels of ethical obligations owed by professionals. The first level identified is a set of ethical values, such as integrity and justice, which professionals share with other human beings by virtue of their shared humanity. Code statements at this level are statements of aspiration that provide vision and objectives. The second level obliges professionals to more challenging obligations than those required at the first level. At the second level, by virtue of their role as professionals and their special skills, they owe a higher degree of care to those affected by their work. Every type of professional shares this second level of ethical obligation. Code statements at this level express the obligations of all professionals and professional attitudes. They do not describe specific behavioral details, but they clearly indicate professional responsibilities. The third and deepest level comprises several obligations that derive directly from elements unique to the particular professional practice. Code elements at this level assert more specific behavioral responsibilities that are more closely related to the state of the art within the particular profession. The range of statements runs from general aspirational statements to specific and measurable requirements. A professional code of ethics needs to address all three of these levels.

Cyber crime: In an influential research work, Prof. Ulrich Sieber observed that: “the vulnerability of today’s information society in view of computer crimes is still not sufficiently realized: Businesses, administrations and society depend to a high degree on the efficiency and security of modern information technology. In the business community, for example, most of the monetary transactions are administered by computers in form of deposit money. Electronic commerce depends on safe systems for money transactions in computer networks. A company’s entire production frequently depends on the functioning of its data-processing system. Many businesses store their most valuable company secrets electronically. Marine, air, and space control systems, as well as medical supervision, rely to a great extent on modern computer systems. Computers and the internet also play an increasing role in the education and leisure of minors. International computer networks are the nerves of the economy, the public sector and society. The security of these computer and communication systems and their protection against computer crime is therefore of essential importance.”

Electronic warfare: In the meantime, the possibilities of computer manipulation have also been recognized in the military sector. “Strategic Information Warfare” has become a form of potential warfare of its own. This type of warfare is primarily directed at paralyzing or manipulating the adversary’s computer systems. The dependency of military systems on modern information systems became evident in 1995, when a “tiger team” of the US Air Force succeeded in sending seven ships of the US Navy to a wrong destination through manipulations via computer networks. Thus, broadly speaking, the following kinds of offences are recognized by the respective nation states in their legislation. The list is by no means to be construed as exhaustive, but only illustrative.

3. Brief the evolution of Computer Ethics. What are the three levels of computer ethics? Explain

ANS: The computer revolution is occurring in two stages. The first stage was that of “technology introduction”, in which computer technology was developed and refined. The second stage is that of “technological permeation”, in which the technology gets integrated into everyday human activities. Thus the evolution of computer ethics is tied to a wide range of philosophical theories and methodologies, and is rooted in the understanding of the technological revolution from introduction to permeation.

In the 1940s and 1950s, computer ethics as a field of study had its roots in the new field of research called “cybernetics”, the science of information feedback systems, undertaken by Professor Norbert Wiener. The concepts of cybernetics led Wiener to draw some remarkable ethical conclusions about the technology that is now called information and communication technology. He foresaw revolutionary social and ethical consequences with the advance of technology. In his view, the integration of computer technology into society would eventually constitute the remaking of society, which he termed the “second industrial revolution”. He predicted that workers would have to adjust to radical changes in the workplace, governments would need to establish new laws and regulations, industries would need to create new policies and practices, professional organizations would need to develop new codes of conduct for their members, sociologists and psychologists would need to study and understand new social and psychological phenomena, and philosophers would need to rethink and redefine old social and ethical concepts.

In the 1960s, Donn Parker of SRI International began to examine the unethical and illegal uses of computers by computer professionals. He published “Rules of Ethics in Information Processing” and headed the development of the first code of professional conduct for the Association for Computing Machinery. The 1970s saw Walter Maner coin the term “Computer Ethics” to refer to the field of inquiry dealing with ethical problems aggravated or transformed by computer technology. He disseminated his Starter Kit in Computer Ethics, which contained curriculum materials and guidelines to develop and teach computer ethics. By the 1980s, a number of social and ethical consequences of information technology were becoming public issues in America and Europe. Issues like computer-enabled crime, disasters caused by computer failures, and invasions of privacy through computer databases became the order of the day. This led to an explosion of activity in the field of computer ethics. The 1990s heralded the beginning of the second generation of computer ethics. Past experience helped to build and elaborate the conceptual foundation while developing the frameworks within which practical action can occur, thus reducing the unforeseen effects of information technology application.

Computer ethics questions can be raised and studied at various levels. Each level is vital to the overall goal of protecting and advancing human values.

On the most basic level, computer ethics tries to sensitize people to the fact that computer technology has social and ethical consequences. Newspapers, magazines and TV news programs have highlighted the topic of computer ethics by reporting on events relating to computer viruses, software ownership lawsuits, computer-aided bank robbery, computer malfunctions, computerized weapons, etc. These articles have helped to sensitize the public at large to the fact that computer technology can threaten human values as well as advance them. The second level consists of someone who takes an interest in computer ethics cases, collects examples, clarifies them, looks for similarities and differences, reads related works, attends relevant events to make preliminary assessments and, after comparing them, suggests possible analyses. The third level of computer ethics, referred to as ‘theoretical’ computer ethics, applies scholarly theories from philosophy, social science, law, etc. to computer ethics cases and concepts in order to deepen the understanding of issues. All three levels of analysis are important to the goal of advancing and defending human values. (James H. Moor, 1997)

4. What are the codes of ethics? Compare their significance with the present-day scenario.

ANS: Professional societies have used the codes to serve the following functions:

1. Inspiration: Codes function as an inspiration or “positive stimulus” for ethical conduct. Codes also serve to inspire confidence in the customer or user.

2. Guidance: Historically, there has been a transition away from regulatory codes designed to penalize divergent behavior and internal dissent, towards codes which help someone determine a course of action through moral judgement.

3. Education: Codes serve to educate both prospective and existing members of the profession about their shared commitment to undertake a certain quality of work and their responsibility for the well-being of the customer and user of the developed product. Codes also serve to educate managers of groups of professionals, and those who make rules and laws related to the profession, about expected behavior. Codes also indirectly educate the public at large about what the professionals consider to be minimally acceptable ethical practice in the field, even if practiced by a non-professional.

4. Support for positive action: Codes provide a level of support for the professional who decides to take positive action. They provide an environment in which it will be easier than it would otherwise be to resist pressure to do what the professional would rather not do. The code can be used as a counter-pressure against the urging by others to have the professional act in ways inconsistent with an ethical pattern of behavior.

5. Deterrence and discipline: Codes can serve as a formal basis for action against a professional. The code defines a reasonable expectation for all practitioners. A failure to meet these expectations could serve as a formal basis in some organizations to revoke membership or suspend a license to practice.

6. Enhance the profession’s public image: Codes serve to educate multiple constituencies about the ethical obligations and responsibilities of the professional. They educate professionals about what they should expect from themselves and what they should expect from their colleagues. Codes also serve to educate society about its rights, and about what society has a right to expect from the practicing professional.

A code fulfilling its function will change the approach of many to the internet and provide a counter-pressure against the tendency to behave unethically. It would also help many to realize that their behavior may be unethical. The standards expected, and the realization of one’s own conduct, will make them pause and think about the consequences of their actions. This is summed up in the preamble to the Software Engineering Code of Ethics and Professional Practice: “These principles should influence internet developers / users to consider broadly who is affected by their work; to examine if they and their colleagues are treating other human beings with due respect; to consider how the public, if reasonably well informed, would view their decisions; to analyze how the least empowered will be affected by their decisions; and to consider whether their acts would be judged worthy of the ideal professional working as a software engineer. In all these judgments, concern for the health, safety and welfare of the public is primary; that is, ‘public interest’ is central to the code.” (Donald Gotterbarn, 1994).

5. State and discuss the primary assumptions of a legal system. What are the basic features of the Internet? Explain.

ANS: Any legal system is premised upon the following primary assumptions as a foundation. They are:

a) Sovereignty
b) Territorial Enforcement
c) Notion of Property
d) Real Relationships
e) Paper Based Transactions

a) Sovereignty: Law-making power is a matter of sovereign prerogative. As a result, the writ of sovereign authority runs throughout wherever sovereign power exercises authority. Beyond its authority, which is always attributed to determinate geographical boundaries, the sovereign cannot regulate a subject matter through legal intervention. However, in the cyber context, geography is a matter of history, in the sense that barriers in terms of distance and geographical boundaries do not make much sense.

b) Enforcement: Any law, in the real-world context, can only be subjected to predetermined territorial enforcement. In other words, only the territory over which the sovereign authority exercises power without any qualification or impediment will be able to enforce the law. However, this proposition carries some exceptions. It is a normal practice in criminal law that the sovereign authority enjoys extra-territorial jurisdiction as well. That is, even if the crime is committed beyond the limits of the territory, the sovereign authority will be able to initiate prosecution, provided the custody of the person is obtained. Towards this end, it is a normal practice to invoke extradition proceedings (which reflect a mutual understanding and undertaking between nations to co-operate with each other in cases where crimes are committed). However, a serious impediment in this respect is that the proceedings must comply with the principle of ‘double criminality’. This means that, in both countries, the alleged act must have been criminalized. In the context of cyber law, there are only twelve countries in the world where relevant laws have been enacted. When it comes to civil law, say in the case of international contracts, pertinent principles of Private International Law are invoked to address these issues. In the cyber context, however, territory does not hold any meaning. Connectivity without any constraint is the strength of the cyber world.

c) Notion of Property: The prevailing premise of the legal response (though of late subjected to marginal change) considers ‘property’ as tangible and physical. With the advent of intellectual property, undoubtedly, this concept or understanding of ‘property’ has undergone change. In the cyber context, ‘property’ in the form of digitized services or goods poses serious challenges to this legal understanding. Similarly, ‘domain names’ raise fundamental questions vis-à-vis the legal understanding of what constitutes ‘property’.

d) Real Relationships: Quite often, the legal response considers relationships which are real-world oriented. In view of the connectivity, pace and accuracy of transmission in the cyber context, these relationships acquire the unique distinction of a virtual character. In the broad ambit of trade and commerce, it is the commercial transaction in the form of contracts which constitutes the foundation of the legal relationship. Hence, if the relationships are virtual, what should be the premise of contract law, which is basically facilitating in nature? Even with regard to other activities, which are potentially vulnerable to proscription, what kind of legal regulation is required to be structured?

e) Paper Based Transactions: The obtaining legal response considers and encourages people to create and constitute legally binding relationships on the basis of paper-based transactions. No doubt the definition of ‘document’ under Section 3 of the Indian Evidence Act, 1872 takes within its fold material other than paper as well, yet in popular practice only paper-based transactions are covered. In the cyber context, however, it is the digital or electronic record which forms the basis of electronic transactions. As a result, transactions will be conducted on the basis of electronic records.

In the light of these seemingly non-applicable foundations, the legal system originating from a particular sovereign country has to face complex challenges in formalizing the structure of its legal response. However, the inherent complexity did not deter select countries from making an attempt in this regard. From the obtaining patterns it can be understood that a substantial number of these countries have apparently considered the following benchmarks in structuring the relevant legal response: application of the existing laws, duly modified to suit the medium of the cyber context, with an appropriate regulatory authority monitoring the process and adjudicating the rights and liabilities of the respective stakeholders; and respective legislations enacted by the concerned sovereign states with a deliberate attempt to encourage and facilitate international co-operation to enforce these laws.

The Internet Protocol (IP) provides for ‘telepresence’, or geographically extended sharing of scattered resources. An internet user may employ her internet link to access computers, retrieve information, or control various types of apparatus from around the world. These electronic connections are entirely transparent to the user. Access to internet resources is provided via a system of request and reply: when an online user attempts to access information or services on the network, his/her local computer requests such access from the remote server computer where the addressee is housed. The remote machine may grant or deny the request based on its programmed criteria; only if the request is granted does the server tender the information to the user’s machine. These features make available a vast array of interconnected information, including computerized, digitalized text, graphics and sound. A crop of private internet access providers has developed to offer network access and facilities to customers outside the research community. Consequently, although the academic and scientific research community remains an important part of the internet community as a whole, private and commercial traffic has become a dominant force in the development and growth of the ‘electronic frontiers’. In particular, the network offers novel opportunities for transactions involving information-based goods and services.
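To make the request/reply mechanism described above concrete, the following is a minimal, purely illustrative C# sketch (it is not part of the original text): a client program requests a placeholder resource (https://example.com/) from a remote server, and the server's reply indicates whether access was granted or denied. It assumes a reasonably recent .NET runtime where the HttpClient class and an asynchronous Main method are supported.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RequestReplyDemo
{
    static async Task Main()
    {
        // The user's local machine prepares to send a request to a remote server.
        using (HttpClient client = new HttpClient())
        {
            // Request access to a (placeholder) resource on the remote machine.
            HttpResponseMessage response = await client.GetAsync("https://example.com/");

            // The reply's status code shows whether the server granted or denied the request.
            Console.WriteLine("Server replied: " + (int)response.StatusCode + " " + response.ReasonPhrase);

            if (response.IsSuccessStatusCode)
            {
                // Only when the request is granted does the server tender the information.
                string body = await response.Content.ReadAsStringAsync();
                Console.WriteLine("Access granted; received " + body.Length + " characters.");
            }
            else
            {
                Console.WriteLine("Access denied or resource unavailable.");
            }
        }
    }
}

Running such a program against any public web server shows, on a very small scale, the grant-or-deny behavior on which the larger discussion of online transactions rests.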
