

Government Information Quarterly 29 (2012) 597–607


Benchmarking the digital divide using a multi-level outranking framework: Evidence from EBRD countries of operation

Marijana Petrović a,⁎, Nataša Bojković a, Ivan Anić b, Dalibor Petrović a

a Faculty of Transport and Traffic Engineering, University of Belgrade, Serbia
b Faculty of Mathematics, University of Belgrade, Serbia

⁎ Corresponding author at: Jurija Gagarina 206/74, Belgrade, Serbia.
E-mail addresses: [email protected] (M. Petrović), [email protected] (N. Bojković), [email protected] (I. Anić), [email protected] (D. Petrović).

0740-624X/$ – see front matter © 2012 Elsevier Inc. All rights reserved. doi:10.1016/j.giq.2012.05.008

Article info

Available online 27 July 2012

Keywords: Digital divide; Benchmarking; Multi-level outranking; ELECTRE; EBRD countries

Abstract

The paper proposes an innovative procedure for benchmarking the digital divide. The study demonstrates the potential of an outranking approach as an alternative to the commonly used ranking models based on Composite Indices (CIs). To fulfill the objectives of the benchmarking process, an ELECTRE-based (ELimination Et Choix Traduisant la REalité; Elimination and Choice Corresponding to Reality) multi-level outranking (ELECTRE MLO) method is developed. The proposed approach improves on previous methods of benchmarking the digital divide in two ways: by classifying countries into hierarchical levels of performance and by identifying corresponding benchmarks for less successful ones. The method is applied to 29 EBRD (European Bank for Reconstruction and Development) countries of operation. The results are visualized in the form of a relation tree, thus providing clear insights for policy-makers regarding how countries stand relative to each other.

© 2012 Elsevier Inc. All rights reserved.

1. Introduction

The intensive development of information communication technologies (ICTs) not only has created powerful opportunities for social and economic development, but it has also introduced new divisions and inequalities seen as the digital divide. There are different levels and perspectives for assessments of the digital divide (Bertot, 2003; Billon, Marco, & Lera-Lopez, 2009; Chen & Wellman, 2004; Cilan, Bolat, & Coşkun, 2009; Norris, 2001; Sciadas, 2004, etc.). In this paper, we focus on cross-country benchmarking the digital divide by exploring ICT readiness, intensity and ICT capability or skills, which conforms to the OECD definition of the digital divide (OECD 2001, p. 5).

There are two main objectives for a cross-country benchmarking of the digital divide. The first is to examine where countries stand relative to each other; the second is to examine the exemplars of the best practices: these are benchmarks. It is important to stress that there is a difference between the terms benchmarking and benchmark. While benchmarking is a specific comparison process (Spendolini, 1992), the term benchmark refers to a reference point for measurement, something that serves as a best-in-class standard (Baker, 2009; McNair & Leibried, 1994; Noce & McKeown, 2008; Petrović, Gospić, Pejčić-Tarle, & Bogojević, 2011). Jansen, De Vries, and Van Schaik (2010) argued that the purpose of benchmarking is not simply the comparison itself, but that benchmarking is a means to an end of learning from each other to improve. In keeping with this, the ideas behind the benchmarking of the digital divide in our study are to understand how countries are positioned relative to each other and to find suitable benchmark countries among the more successful ones. The aim is not only to determine the 'best practice', but also to find the 'relevant practice' ('corresponding benchmark countries') as well.

As to practical implications of benchmarking the digital divide, the inevitable issues are the preconditions and possibilities of policy transfer. The point is that the differences between countries (e.g., geographical, social, economic, political, or ideological) can call into question the contributions that the benchmarking process can make to policy making. Moreover, learning from each other in a policy context has not yet been sufficiently elaborated, as is extensively discussed in the literature (Bauer, 2010; Dolowitz, 2003; Dolowitz & Marsh, 1996, 2000; Evans, 2004; James & Lodge, 2003; Lundvall & Tomlinson, 2001; Rose, 1991). Bauer (2010, p. 19) made a useful observation, stressing that experiences in other leading countries may illustrate the "adjacent possible" of what can be accomplished rather than provide specific executable goals for another country. In a process of sharing experiences (or learning from each other), different outcomes are possible; policy transfer ranges from copying to simple inspiration — an idea stimulating fresh thinking that helps address a policy problem in the adopting country (Evans, 2004, p. 37). In the area of communications policy, inspiration is a frequent form of cross-national learning (Bauer, 2010, p. 11). In other words, one must be aware that the benchmarking itself, although thoroughly addressing performance evaluation, cannot assure the actual implementation of policy ideas.

Our study does not explicitly explore whether an actual policy transfer between countries can be achieved, but it does not ignore this issue. Namely, the position of this study is that the selection of appropriate benchmarking partners (countries in the benchmarking process) significantly contributes to the possibility of learning from each other, regardless of the differences. Following both this idea and the premise that international organizations encourage the exchange of ideas among their members (Rose, 1991), we decided to focus on the EBRD's countries of operation. Since these countries share the same policy goals and tend to have similar legislative frameworks, they may also experience similar implementation problems. The gaps between countries' performance indicators can identify fields of action, while the EBRD can serve as an institutional support for cross-national learning.

Regarding the methodological support for benchmarking the digital divide, there are two questions of interest: the selection of the appropriate indicators and the analytical tools suitable for their integration into an overall score. The most common approach for performance evaluation and comparison is the construction of Composite Indices (CIs) based on international ICT data sources.

This study promotes the idea of introducing alternative analytical support for the benchmarking of the digital divide, based on an outranking approach. The strength of the proposed approach lies in its ability to provide more information on the countries' current status regarding the digital divide, beyond a simple ranking according to ICT adoption. By establishing hierarchical pair-wise relations between countries, we can identify several classes (i.e., performance levels). Based on these relations, we can also track possible corresponding benchmarks for the outperformed countries.

The paper is organized as follows: a brief literature review is presented in the next section. The proposed approach is described in Section 3, and the subsequent section contains its application in a case study. The findings and a discussion of the results, including policy implications and limitations, are the topic of Section 5, which is followed by concluding remarks.

2. Background and context of this study

2.1. Digital divide within and between countries

The directions for research in the assessment of the digital divide are manifold. Sciadas (2004, p. 4) distinguishes three research domains of interest to policy makers: the magnitude of the digital divide, its evolution – whether it is closing or widening over time – and at what speed this is occurring. Billon et al. (2009, p. 599) expanded this list with studies on the determinants of ICT diffusion. Norris (2001, p. 1–29) offered a tiered model of the digital divide. One of these tiers defines the international aspect of the divide as the discrepancy of ICT access between developed and developing nations, while the other tier defines the divide in local, socioeconomic terms, referring to the information haves and have-nots within a given country or community. Sciadas (2004, p. 4) and Chen and Wellman (2004) distinguished two main approaches to the assessment of the digital divide in the literature: one examines divides internal to a country, and the other involves cross-country comparisons. Similarly, Cilan et al. (2009, p. 99) defined two primary dimensions of the digital divide: domestic and international.

The measurement of internal country digital divides focuses on the level of ICT access and use to highlight the gaps between groups of people, whether these people are grouped by socio-economic status, geographic location or other characteristics. These divides are investigated as differences in access to and use of ICT as a consequence of wealth (e.g., James, 2009), education and age (e.g., Gauld, Goldfinch, & Horsburgh, 2010), gender (e.g., Jackson et al., 2008), race (e.g., Hoffman, Novak, & Schlosser, 2000), culture (e.g., Erumban & Jong, 2006), or rural habitats (e.g., Akca, Sayili, & Esengun, 2007).

Cross-country comparisons of the digital divide, which are in the focus of our paper, rely on performance evaluations based on comparative rankings. The new model of governance in the EU (the "open method of coordination"—OMC) gave an impetus to this approach by promoting benchmarking as a policy tool (Papaioannou, 2007). The aim was to replace the regulatory pressures with cooperation and an exchange of experience through cross-national learning (Bauer, 2010). This shift affected ICT policy as well, and cross-country benchmarking became an indispensable part of information society agendas in the EU. Non-EU countries (at the time) also relied on benchmarking within the eEurope+ and eSEE Initiatives. The commitment to benchmarking the digital divide became global after the World Summit on the Information Society (WSIS). Soon after, many international organizations engaged in the process of devising suitable indicators and analytical support for tracking the progress of countries towards an information society, i.e., cross-national benchmarking of the digital divide.

2.2. Methodological challenges of benchmarking the digital divide across countries

There are two main challenges in benchmarking countries in the domain of the digital divide. One is to define the appropriate indicator set that will acknowledge the multidimensionality of the digital divide and extend beyond access to technology (Bertot, 2003). The other is to find a way to use them in cross-country comparisons.

2.2.1. Indicators for the assessment of the digital divide

The selection of an adequate indicator set is a separate issue of the digital divide assessment that is important for both policy makers and statistical agencies (Selhofer & Mayringer, 2001). Many efforts have been and still are being made to develop internationally comparable and reliable ICT statistics for measuring the information society.

However, the problem is that the more indicators that are used, the lower the number of countries that can be included in the analysis. Vicente and Lopez (2006) see this as a trade-off between breadth and depth in the selection of indicators. The consequence is that many international comparisons often use proxy indicators. For example, the skills indicators proposed by the International Telecommunications Union (ITU) in the latest publication on measuring the information society (ITU, 2010) rely on proxy indicators such as the Adult literacy rate. However, it can be argued that indicators on computer and language skills would be much more appropriate in a policy context (Evans & Yen, 2005; Ferro, Helbig, & Gil-Garcia, 2011).

2.2.2. Analytical support for benchmarking the digital divide across countries

The most widely used approach to measure and analyze the digital divide is to construct a composite index using indicators. Various indices have been developed by international organizations, as well as by scholars and practitioners.

The ITU launched several indices. The first one was the Mobile/Internet Index, followed by the Digital Access Index (DAI), the Digital Opportunity Index (DOI) and the ICT Opportunity Index (ICT-OI). The most recent, called the ICT Development Index (IDI), was developed by merging the DOI and ICT-OI. The IDI consists of 11 indicators grouped into three sub-indices: access, use, and skills (ITU, 2010). In addition to the ITU, other international organizations addressed the question of benchmarking countries based on composite indices. The UNDP developed the Technology Achievement Index (TAI); ORBICOM, the International Network of UNESCO Chairs in Communications, proposed the Infostate Index; the World Economic Forum published the Network Readiness Index (NRI); the United Nations Conference on Trade and Development released the ICT Diffusion Index.

Many scholars have also dealt with the question of a single ICT index. One of the first attempts was made by Ricci (2000), who constructed an 'ICT adoption scale' (ICTAS), an aggregation of elementary indicators, for digital technologies in EU countries. Selhofer and Mayringer (2001) developed a methodology for benchmarking the development of the information society in European countries. They addressed the identification of different dimensions of digitalization and the development of composite indices. Some methodological limits of their study are discussed by Corrocher and Ordanini (2002), who proposed a synthetic index of digitalization as a framework for benchmarking the digital divide across countries. They addressed the multidimensionality of the digital divide by combining multiple socio-economic factors (markets, diffusion, infrastructures, human resources, competitiveness, and competition) into a single measure. To measure social inequalities in the adoption of ICT in the EU, Selhofer and Husing (2002) developed the Digital Divide Index (DDIX). The DDIX consists of two sets of variables: independent (socio-economic) factors (gender, age, income and education) and dependent (ICT) factors (computer and internet users, both in general and at home). Philipa, Hamdi, Lorraine, and Ruffing (2003) reviewed and evaluated the extant works to measure ICT development using different sources (UNDP, UNIDO, OECD, and ITU) and developed the ICT Diffusion Index. The Index combines quantitative connectivity and access indices and qualitative variables for policy indicators. The framework was used for a benchmarking of the diffusion of ICT capabilities across 160–200 countries over the 1995–2001 period. Archibugi and Coco (2004) developed the Indicator of Technological Capabilities — ArCo, which comprises indicators of three main dimensions of ICT development: the creation of technology, technological infrastructure and the development of human skills. ArCo was calculated for 162 developed and developing countries. Al-mutawkkil, Heshmati, and Hwang (2009) combined different indicators for telecommunications and broadcasting into the Telecommunication Index (TI) and compared it with ArCo, the DAI and the Human Development Index. The main argument of their study was that a parametric index approach may be preferable to non-parametric indices. Hanafizadeh, Saghaei, and Hanafizadeh (2009) argued that the most important factor for measuring and analyzing the divide between countries is ICT infrastructure and access; thus, they devised an index for the cross-country analysis of ICT infrastructure and access. They compared their results with well-known indices such as the DAI, the DOI, and the NRI. Emrouznejad, Cabanda, and Gholami (2010) derived an alternative to the ICT-OI, but their only innovation was in the domain of aggregation: they used the same indicators that the ITU used for the ICT-OI and aggregated them with the DEA (Data Envelopment Analysis). The result was another index highly correlated with the ICT-OI.

Most, though not all, authors use composite indices in the assessment of a cross-country digital divide. There are a number of studies that exploit different techniques, such as Quantile Regression (Dewan, Ganley, & Kraemer, 2005), Factor and Cluster analysis (Vicente & Lopez, 2006, 2010; Trkman, Jerman-Blazic, & Turk, 2008), Multivariate Analysis of Variance (Cilan et al., 2009), and Canonical Correlation Analysis (Billon et al., 2009). However, all of these studies were primarily focused on examining the determinants and causes of the digital divide across countries. Dewan et al. (2005) examined a panel of 40 countries from 1985 to 2001 and distinguished between the factors that are widening the divide and those that serve to narrow the divide. Vicente and Lopez (2006) identified two factors for the development of the information society in the EU-15: the first is related to ICT infrastructure and diffusion, and the second is related to e-government and internet access costs. In the later study (Vicente & López, 2010), they focused on factors of broadband diffusion in Eastern European countries and found that cross-country differences can be explained by ICT investments and market competition. Trkman et al. (2008) discovered three underlying factors that can explain the differences in broadband development between EU-25 countries in 2004: (1) enablers and means; (2) usage of information services; and (3) the ICT sector environment. Cilan et al. (2009) examined which information society indicators within the coverage of the study affected membership in the EU. Billon et al. (2009) studied the relationships between several ICTs and a set of explanatory variables, such as economic, demographic and institutional variables.

To conclude, the benchmarking of the digital divide is dominated by the CIs approach, resulting in single numerical values for the countries' performances. Benchmarking is a process in which countries evaluate their performance to learn from each other. This raises an important policy issue: how useful would ranking countries according to CIs values be, i.e., how can these findings inform policy-makers on delivering best practices? The quantitative gaps, expressed in terms of CIs values, do not provide enough information on substantial differences between countries. The ranking of countries according to CIs implies that every better-ranked country is a potential benchmark ('relevant practice') for the outperformed ones, which may lead to unrealistic policy conclusions and unachievable policy targets. Conversely, alternative approaches based on various econometric techniques, although providing better insights about countries' performances and the underlying causality, lack guidance with regard to the exemplars of best practices. With an increased focus on the need for revealing suitable benchmarks, an alternative method is proposed in this study.

2.3. The purpose and scope of the study

The main objective of our study is to develop a sophisticated benchmarking scheme that can be used as a policy tool. The study has two main positions and, therefore, two research 'directions'. The first is that classifying countries into performance groups is a more sensible approach than CIs rankings (Vicente & Lopez, 2006; Wegman & Oppe, 2010). The second is that the question of what country 'to look up to' needs to be addressed more thoroughly, and instead of 'learning from the best,' we focus on tracking corresponding benchmark countries — a concept known as 'stepwise benchmarking' (Estrada, Song, Kim, Namn, & Kang, 2009) or seeking an 'adjacent possible' (Bauer, 2010).

To address these questions, we developed an innovative procedure for benchmarking the digital divide, as described in the following section. An empirical example is provided for the EBRD countries using ITU indicators that constitute the IDI (the ICT Development Index developed by the ITU). We selected these indicators because they are recommended by the ITU as a result of years of work on developing an adequate list of indicators for measuring the information society. In addition, the IDI is the latest global index for measuring the digital divide, which made it suitable for our comparison. As we discussed above, some of these indicators are questionable (the skills indicators), but we did not problematize them in this research, as our main goal was to create an alternative methodology for benchmarking the digital divide and compare the results with the IDI values. To ensure comparability, it was necessary to use the same indicator set.

3. Methodology

3.1. The outranking approach for the assessment of the digital divide

The main argument of this paper is that more comprehensive comparisons between countries are needed to identify meaningful references in the benchmarking process. We decided on an outranking approach, which has yet to be used in the field of digital divide assessment. The suitability of this decision aid tool lies in the construction of binary relationships between alternatives/countries. These relationships indicate not only the dominance of one alternative over another but also the notion of incomparability as well, which is an equally important feature.

For the purpose of benchmarking, we developed a novel ELECTRE MLO (ELECTRE Multi-Level Outranking) method, which offers a completely new ranking scheme. Unlike the basic ELECTRE I method (Roy, 1968), which classifies the alternatives/countries as good or bad (core/non-core countries), the ELECTRE MLO computes hierarchical levels of performance. The mathematical formalization of the ELECTRE MLO method is built into the ELECTRE program code, and specifically designed software ("ELECTRE MLO") was developed for the calculation.

In addition, one adaptation of the ELECTRE is introduced. Respecting the fact that the skills indicators (used in this study and described in Section 4.1) are the proxy ones, a variable Discordance index is proposed. Specifically, the threshold value for the Discordance index is higher for the indicators belonging to the access and use domain and lower for the skills indicators. This means that the model favors performance in the access and use domain, which is also in accordance with the ITU guidance on the importance of digital divide indicators (ITU, 2010).

3.2. ELECTRE Multi-Level Outranking approach — mathematical formalization

The aim of the proposed ELECTRE MLO approach is to produce a specific grouping of the selected countries, based on the dominance relationships between them. The goal of the concept is not only to create a hierarchical positioning of the countries according to their performance levels, but to also identify those that are comparable with respect to ICT adoption (and potential for adoption) and those that are not. In the benchmarking process, this approach has a practical use: countries can track their corresponding benchmarks, i.e., find 'relevant practice exemplars' more easily. The formalization of the idea is described in the following paragraph.

As in ELECTRE I, the outranking relations are used to obtain the subset of the most preferable alternatives, here seen as countries and denoted as A1, A2, …, An (core subset). To test the relationships among the alternatives, Concordance and Discordance indices and threshold values are used. The Concordance index Cij is defined by

$$C_{ij} = \sum_{k \in K_{ij}^{+}} \omega_k \Big/ \sum_{k \in K_{ij}^{+} \cup K_{ij}^{-}} \omega_k \quad \text{(see note 1)},$$

while the Discordance index dij is defined as

$$d_{ij} = \max_{k \in K_{ij}^{-}} \frac{a_{jk} - a_{ik}}{|I_k|} \quad \text{(see note 2)}.$$

Higher values of the Concordance index and lower values of the Discordance index imply an acceptance of the hypothesis that 'Ai is better than Aj'. An alternative outranks another alternative overall if its Concordance index lies above a selected threshold value (denoted p) and its Discordance index lies below the threshold value (denoted q) (see note 3). For every pair of alternatives (countries), Concordance and Discordance indices can be computed, thus forming Concordance and Discordance matrices.

Note 1: Cij is the Concordance index for the hypothesis that 'Ai is better than Aj'; ωk is the weight of the kth criterion; m is the number of criteria (K1, K2, …, Km); and the sets Kij+, Kij− and Kij= identify the criteria where the performance score of the alternative Ai is higher (Kij+), lower (Kij−) or equal (Kij=) to the performance score of the alternative Aj. This index differs from the Concordance index of the original ELECTRE I method. It is modified as introduced by Anić and Larichev (1996) and is used here to ensure the strict order relations between alternatives that allow the construction of ELECTRE MLO.

Note 2: dij is the Discordance index for the hypothesis that 'Ai is better than Aj'; ajk and aik are the respective scores of alternatives Aj and Ai with respect to criterion k; and |Ik| is the scaled-score range of criterion k.

Note 3: The alternative Ai outranks the alternative Aj if, for threshold values 0 < q < p < 1, it fulfills Cij ≥ p and dij ≤ q, while the opposite (Cji ≥ p, dji ≤ q) is not simultaneously fulfilled.

To develop the ELECTRE MLO, the outranking/dominance relations are further used to define qualitative performance levels. For this purpose, an innovative procedure is devised, as described in the following paragraph.

The logic behind the performance-level construction lies in its specific iterative procedure, which involves the repeated identification of preferable countries. Namely, in the first stage, the countries in the core subset are identified. These are the first-level countries, and they are not out-dominated. Subsequently, the countries from the core subset are excluded and the method is applied again to the remaining countries. The new core subset consists of the second-level countries. These countries are outperformed only by some of the countries from the first level. The procedure is repeated until every country has been assigned to a level. The number of iterations corresponds to the number of levels. The mathematical expression of the construction of the levels is described as follows.
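To make the pairwise test concrete, the following minimal Python sketch computes the Concordance and Discordance indices and the outranking test exactly as defined above. It is only an illustration of the formulas, not the authors' "ELECTRE MLO" software; the function names, the toy data and the tie-handling convention are ours.

```python
# Illustrative sketch of the pairwise ELECTRE test described above (not the authors' software).
# scores[a][k] : performance score of alternative a on criterion k
# weights[k]   : criterion weight (omega_k)
# ranges[k]    : scaled-score range |I_k| of criterion k

def concordance(scores, weights, i, j):
    """C_ij: weight of criteria where Ai is better, over the weight of all non-tied criteria."""
    better = sum(w for k, w in weights.items() if scores[i][k] > scores[j][k])
    worse = sum(w for k, w in weights.items() if scores[i][k] < scores[j][k])
    # Handling of the all-tied case is a convention choice, not specified in the paper.
    return better / (better + worse) if (better + worse) > 0 else 1.0

def discordance(scores, ranges, i, j):
    """d_ij: largest normalized gap on the criteria where Ai is worse than Aj."""
    gaps = [(scores[j][k] - scores[i][k]) / r
            for k, r in ranges.items() if scores[j][k] > scores[i][k]]
    return max(gaps) if gaps else 0.0

def outranks(scores, weights, ranges, i, j, p=0.80, q=0.50):
    """Ai outranks Aj if C_ij >= p and d_ij <= q while the reverse pair does not pass
    the same test (p and q default to the values used later in Section 4.2)."""
    forward = concordance(scores, weights, i, j) >= p and discordance(scores, ranges, i, j) <= q
    reverse = concordance(scores, weights, j, i) >= p and discordance(scores, ranges, j, i) <= q
    return forward and not reverse

# Toy example with four criteria and two hypothetical countries:
weights = {"FTL": 0.105, "MCS": 0.105, "IU": 0.105, "GTE": 0.053}
ranges = {"FTL": 9, "MCS": 9, "IU": 9, "GTE": 18}
scores = {"A": {"FTL": 6, "MCS": 7, "IU": 6, "GTE": 4},
          "B": {"FTL": 4, "MCS": 5, "IU": 4, "GTE": 8}}
print(outranks(scores, weights, ranges, "A", "B"))  # True: A wins enough weight and no large gap
```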

The definition of the levels is inductive. The first level L1 consists of the alternatives from the core subset. An alternative Ai that is outside the core subset belongs to a level Lk if the condition in Eq. (1) is fulfilled:

$$\max \{\, t \mid \exists A_j \in L_t,\ A_i \prec A_j \,\} = k - 1 . \qquad (1)$$

In other words, an alternative (country) Ai will appear at level k if there is an alternative Aj at level k−1 such that Ai ≺ Aj, while such an alternative does not exist at the previous (higher) levels.
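Read operationally, Eq. (1) is a peeling procedure: repeatedly extract the alternatives that are not outranked by any remaining alternative, record them as the next level, and remove them. A minimal sketch of that loop follows; it assumes the pairwise outranking relation has already been computed (for example with the functions sketched above), and the names and toy relation are illustrative.

```python
# Sketch of the level construction behind Eq. (1): peel off successive core subsets.
# outranking: dict mapping (i, j) -> True if alternative i outranks alternative j.

def assign_levels(alternatives, outranking):
    """Return {alternative: level}, where level 1 is the core subset."""
    levels = {}
    remaining = set(alternatives)
    level = 1
    while remaining:
        # Core of the remaining set: alternatives not outranked by any other remaining alternative.
        core = {a for a in remaining
                if not any(outranking.get((b, a), False) for b in remaining if b != a)}
        if not core:
            # A cycle in the relation would leave no core; treat the rest as one level to avoid looping.
            core = set(remaining)
        for a in core:
            levels[a] = level
        remaining -= core
        level += 1
    return levels

# Toy relation (hypothetical): EST outranks HUN, HUN outranks POL; SVN is not outranked.
toy = {("EST", "HUN"): True, ("HUN", "POL"): True}
levels = assign_levels(["EST", "SVN", "HUN", "POL"], toy)
print(levels["EST"], levels["SVN"], levels["HUN"], levels["POL"])  # 1 1 2 3
```

The number of iterations of the loop equals the number of levels in the resulting relation tree, matching the description above.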

This improvement to the outranking relations (i.e., the introduction of the levels in a qualitative way) has several advantages over the classical determination of the levels based on a quantitative evaluation.

First, the proposed concept avoids the concerns about classification to a certain level in situations where the index values (or other quantitative measures) are close.

Another significant feature is related to the improvement of an alternative. In a quantitative evaluation, the knowledge about which alternatives perform better does not allow decision-makers to select a path to follow. Specifically, although an alternative Aj may perform better than Ai, it may be that Ai is still dramatically better according to a certain number of criteria, i.e., these alternatives have overly different development paths; hence, they are incomparable.

The ELECTRE MLO groups the countries into hierarchical levels, constituting a relation tree. The obtained relation tree allows decision-makers to observe any alternative (country) that is not in the core and identify possible paths on a route to the core. In the benchmarking process, this is an efficient way to obtain potential development policies from which the decision-maker can choose.

4. An empirical study—EBRD countries of operation

Our empirical study covers 29 EBRD countries of operation. The EBRD is a financial institution established with the mission of providing assistance in the transition towards an open, market-oriented economy. Affiliation with the EBRD is a basis for cooperation on issues of common interest such as ICT development. These countries jointly participate in cross-country studies and projects financed by the EBRD. Moreover, the latest EBRD report on telecommunications (EBRD, 2008) calls for the sharing of experience between countries at different performance levels.

4.1. Indicator set

For the purpose of this study, the most recent set of indicators recommended for benchmarking the digital divide is adopted. These are also the indicators that were used to construct the ICT Development Index—IDI (ITU, 2010). The indicator set consists of 11 indicators covering: ICT access (Fixed telephone lines per 100 inhabitants—FTL; Mobile cellular telephone subscriptions per 100 inhabitants—MCS; International internet bandwidth (bit/s) per internet user—IIB; Proportion of households with a computer—PHC; Proportion of households with internet access at home—PHI); ICT use (Internet users per 100 inhabitants—IU; Fixed broadband internet subscribers per 100 inhabitants—FBB; Mobile broadband subscriptions per 100 inhabitants—MBB); and skills (Adult literacy rate—ALR; Gross secondary enrolment ratio—GSE; Gross tertiary enrolment ratio—GTE). The indicators and their raw values for our sample of countries are presented in Table 1. All data refer to 2008, which is the latest available at the time of this writing.

Table 1
Indicators and raw values for EBRD countries of operation.

Country Abr. FTL MCS IIB PHC PHI IU FBB MBB GSE GTE ALR
Albania ALB 10.90 99.90 1956 12.00 8.80 23.90 2.00 0.00 82.00 36.70 99.00
Armenia ARM 20.30 100.00 1257 9.50 6.10 6.20 0.20 0.00 88.10 36.00 99.50
Azerbaijan AZE 15.00 75.00 4189 14.60 13.90 28.00 0.70 0.00 105.60 15.80 99.50
Belarus BLR 38.40 84.00 2332 28.50 15.60 32.10 4.90 0.00 89.30 70.60 99.70
Bosnia and Herzeg. BIH 27.30 84.30 1778 28.30 12.60 34.70 5.00 0.00 90.20 34.30 96.70
Bulgaria BGR 28.80 138.30 108,449 28.60 25.30 34.90 11.10 16.80 107.10 51.90 98.30
Croatia HRV 42.50 133.00 31,488 52.90 45.30 50.60 11.90 20.70 94.90 49.60 98.70
Czech Republic CZE 21.90 133.50 35,146 52.40 45.90 58.40 17.10 13.10 94.90 59.30 99.00
Estonia EST 37.10 188.20 191,418 59.60 58.10 66.20 23.70 14.90 100.40 65.60 99.80
Georgia GEO 14.30 64.00 5358 15.40 3.00 23.80 2.20 9.50 90.00 34.30 99.50
Hungary HUN 30.90 122.10 10,216 58.80 48.40 58.70 17.50 15.50 96.00 72.80 98.90
Kazakhstan KAZ 22.30 96.10 6444 18.40 17.00 11.00 4.30 0.00 92.00 46.90 99.60
Kyrgyzstan KGZ 9.10 62.70 702 2.50 2.00 15.70 0.10 0.00 86.00 44.30 99.60
Latvia LVA 28.50 98.90 16,692 56.70 52.80 60.60 8.90 6.40 119.20 69.60 99.80
Lithuania LTU 23.60 151.20 17,927 52.00 50.90 55.00 17.80 3.50 98.50 79.20 99.70
Moldova MDA 30.70 66.70 6087 8.00 6.00 23.40 3.20 5.20 83.10 39.90 99.20
Mongolia MNG 6.20 37.80 994 14.00 2.90 12.50 0.60 3.00 95.10 49.80 97.30
Montenegro MNE 27.26 a 118.10 2679 20.00 16.70 47.20 10.00 8.30 88.10 29.80 97.00
Poland POL 25.50 115.30 9352 58.90 47.60 49.00 12.60 15.80 99.20 68.80 99.30
Romania ROM 23.60 114.50 31,640 37.80 30.40 29.00 11.70 21.60 88.00 65.80 97.60
Russia RUS 31.80 141.10 4712 40.00 30.00 32.00 6.60 0.60 82.30 77.10 99.50
Serbia SRB 31.40 97.80 10,037 35.80 27.60 33.50 4.60 7.60 88.50 47.80 97.00
Slovak Republic SVK 20.30 102.20 42,492 63.20 58.30 66.00 11.20 10.50 93.70 54.70 99.00
Slovenia SVN 50.10 102.00 18,963 65.10 58.90 55.90 21.20 26.30 91.10 89.80 99.70
Tajikistan TJK 4.20 53.70 516 2.00 0.10 8.80 0.10 0.70 84.40 20.20 99.60
TFYR Macedonia TFRM 22.40 122.60 8472 45.60 29.40 41.50 8.90 0.50 84.30 37.60 97.00
Turkmenistan TKM 9.50 22.50 3414 7.00 4.00 1.50 0.10 0.00 102.30 21.70 99.50
Ukraine UKR 28.70 121.10 5477 21.20 10.30 10.60 3.50 1.80 94.40 79.40 99.70
Uzbekistan UZB 6.80 46.80 334 3.20 0.90 9.10 0.20 0.50 104.90 9.20 96.90

a Data provided by Montenegro's Agency for Electronic Communications and Postal Services (ITU offered an estimated value — ITU, 2010, p. 104).

Based on the indicator values, the performance scores are assigned, applying a nine-step scoring scale (Table 2). The application of a finer/expanded scale of scores decreases the values of the discordance indices, leading to more dominance relations, as discussed in Bojković, Anić, and Pejčić-Tarle (2010). The scores are determined in relation to the ideal value, i.e., the empirical maximum determined by the ITU (ITU, 2010, p. 97). Thus, a more objective status of the countries is obtained than if a relative maximum had been adopted. The countries that perform closer to the ideal values will have higher scores.
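The exact binning behind the scoring scale is not spelled out in the text, so the short sketch below only illustrates the idea of scoring a raw value relative to an ideal (empirical maximum); the proportional mapping, the function name and the example ideal value are assumptions and will not necessarily reproduce every score in Table 2.

```python
import math

def step_score(value, ideal, steps=9):
    """Map a raw indicator value onto a 1..steps scale relative to an ideal (empirical maximum).
    The proportional binning is an assumption; the paper only states that scores are assigned
    in relation to the ITU ideal values, with an 18-step range used for the skills indicators."""
    ratio = min(max(value / ideal, 0.0), 1.0)  # clip to [0, 1]
    return max(1, math.ceil(ratio * steps))

# Hypothetical ideal of 60 fixed telephone lines per 100 inhabitants (illustrative, not the ITU figure):
print(step_score(30.9, 60))    # Hungary's FTL value of 30.9 maps to 5 under this assumed scale
print(step_score(188.2, 150))  # values above the ideal are capped at the top score (here 9)
```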

4.2. Weights and thresholds

According to the relevance of the indicators, as determined by the ITU, the access and use indicator groups are equally weighted, while skills are weighted less. Within the groups, the indicators are equally weighted (Table 2). The value of the concordance threshold reflects the decision-maker's risk aversion. This means that a country may claim to be better performing if it outperforms another one on a number of criteria of sufficient importance. In our example, the value of the threshold p was set at 0.80, which is the sum of the criteria weights reflecting the desired sufficient importance. In other words, only if the sum of the criteria weights, where one country performs better than another, is greater than 80% of the total weight of all of the criteria can dominance be declared. A lower value of the parameter p means that the decision-maker is willing to accept greater risk when asserting that a country is preferable.

The discordance threshold value, parameter q, was set at 0.50. This parameter determines the tolerance of inferior performance. A country will not be considered as 'better performing' if, under any criteria where a country's performance level is worse, the difference in scores is five or more.
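As a quick numerical illustration of these thresholds (using the normalized weights from Table 2 and a hypothetical pair of countries), the concordance test works out as follows.

```python
# Hypothetical pair: country X scores higher than country Y on all eight access/use
# indicators (weight 0.105 each) and lower on the three skills indicators (weight 0.053 each).
# Tied criteria are excluded from the concordance index, so:
better = 8 * 0.105                 # total weight where X is better  -> 0.840
worse = 3 * 0.053                  # total weight where X is worse   -> 0.159
c_xy = better / (better + worse)
print(round(c_xy, 3))              # 0.841 >= p = 0.80, so the concordance condition holds
# X is declared better overall only if, in addition, none of its score gaps on the skills
# indicators reaches the discordance threshold q = 0.50 of the criterion's range.
```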

5. Results and discussion

Based on the Concordance and Discordance indices (see Appendix A) and on the threshold values, and following the level construction procedure (described in Section 3.2), the relation tree is obtained and presented (Fig. 1). Countries are graphically interpreted as nodes. There are two main constituent elements of the relation tree: hierarchical levels and dominance relations. As shown by the relation tree, the selected countries are classified into eight different levels, i.e., performance groups, organized in a descending manner (the first level consists of the best performing countries — Estonia and Slovenia). The dominance relations are illustrated as arrows. An arrow directed towards a country (node) indicates that this country is dominated. Where there are no arrows connecting nodes at nearby levels, the alternatives (countries) are incomparable. The practical significance (meaning) of this feature will be discussed in greater depth in the paragraph that follows.

5.1. The relation tree as a benchmarking support tool

From a benchmarking perspective, the performance levels (presented on the relation tree; see Fig. 1) interpret the relative ranks of the countries, and the arrows indicate the corresponding benchmark countries. ELECTRE MLO distinguishes the countries from a nearby, better-performing level that dominate the observed one and thus helps the countries that are on the lower hierarchical levels to find their corresponding benchmarks. The benchmark countries drawn from the nearby performance level are those with the smallest performance gap compared to the outperformed one; hence the epithet 'corresponding.' For example, Serbia may aspire to the performance of Poland because a dominance relation exists between them. Some countries can choose among several corresponding benchmarks (for instance, Kyrgyzstan may consider four countries: Albania, Azerbaijan, Georgia and Moldova).

Table 2
Criteria and alternatives with associated scores.

FTL MCS IIB PHC PHI IU FBB MBB GSE GTE ALR
w 4 4 4 4 4 4 4 4 2 2 2
w* 0.105 0.105 0.105 0.105 0.105 0.105 0.105 0.105 0.053 0.053 0.053
Ik 1–9 1–9 1–9 1–9 1–9 1–9 1–9 1–9 1–18 1–18 1–18
|Ik| 9 9 9 9 9 9 9 9 18 18 18
ALB 2 6 1 2 1 3 1 1 8 4 9
ARM 4 6 1 1 1 1 1 1 8 4 9
AZE 3 4 1 2 2 3 1 1 9 2 9
BLR 6 5 1 3 2 3 1 1 9 7 9
BIH 5 5 1 3 2 4 1 1 9 4 9
BGR 5 8 9 3 3 4 2 2 9 5 9
HRV 7 8 3 5 5 5 2 2 9 5 9
CZE 4 8 4 5 5 6 3 2 9 6 9
EST 6 9 9 6 6 6 4 2 9 6 9
GEO 3 4 1 2 1 3 1 1 9 4 9
HUN 5 7 1 6 5 6 3 2 9 7 9
KAZ 4 6 1 2 2 1 1 1 9 5 9
KGZ 2 4 1 1 1 2 1 1 8 4 9
LVA 5 6 2 6 5 6 2 1 9 7 9
LTU 4 9 2 5 5 5 3 1 9 8 9
MDA 5 4 1 1 1 3 1 1 8 4 9
MNG 1 3 1 2 1 2 1 1 9 5 9
MNE 5 7 1 2 2 5 2 1 8 3 9
POL 4 7 1 6 5 5 2 2 9 7 9
ROM 4 7 3 4 3 3 2 2 8 6 9
RUS 5 8 1 4 3 3 1 1 8 7 9
SRB 5 6 1 4 3 4 1 1 8 5 9
SVK 4 6 4 6 6 6 2 1 9 5 9
SVN 8 6 2 6 6 6 4 3 9 9 9
TJK 1 3 1 1 1 1 1 1 8 2 9
TFRM 4 7 1 5 3 4 2 1 8 4 9
TKM 2 2 1 1 1 1 1 1 9 2 9
UKR 5 7 1 2 1 1 1 1 9 8 9
UZB 2 3 1 1 1 1 1 1 9 1 9

* Normalized weight of the kth criterion (w).

Fig. 1. Relation tree.

An important issue of interest is the incomparability relations between countries at adjacent hierarchical performance levels (where there are no arrows connecting nodes, i.e., no dominance relations between the countries). In these cases, there is not enough evidence to assign dominance. To illustrate this, let us, for example, look at Poland — a third-level country. There are a total of seven countries at the adjacent level above. The method favors Hungary as the benchmark (BM) because, analytically, only the Hungarian Concordance index is above the threshold. This is not the case for the remaining six countries because they have worse performance than Poland in at least one digital divide criterion, and thus, they are not eligible to be declared suitable BMs. However, this need not be interpreted so strictly. The very fact that these countries are positioned at the level above implies that they need not be a priori rejected. In certain circumstances, they can be the subject of further consideration (for example, if the decision-maker is not exclusive about the threshold value for the Concordance index). However, the method highlights Hungary as the first choice for Poland to look up to.

5.1.1. ELECTRE MLO vs. IDI rankings

Another subject of interest is an examination of the correspondence of the obtained results with the IDI rankings. In general, there is a significant degree of correspondence between hierarchical status and quantitatively expressed performance — the IDI values (Table 3).

Nevertheless, some differences between the IDI and ELECTRE MLO performance evaluations can be noted from Table 3. The first is when the IDI ranks do not match the ELECTRE MLO performance level (Poland and Romania are lower than expected, according to the IDI; see Table 3). This may be a result of a country having unbalanced performance. In that case, the ELECTRE MLO will give more respect to the pronounced poor performance on a particular indicator. In the IDI rankings, this observation might be neglected as a result of a compensatory effect. Hence, a country that in the 'International internet bandwidth (IIB)' domain achieves only 9352 Kbit/s of an ideal reference value of 100,000 Kbit/s, for example, would not be better ranked than a country that surpasses this value (as in the case of Poland relative to Bulgaria). From a policy perspective, this is important because IIB is a precondition for the advanced services of the information society (e.g., e-learning, Haßler & Jackson, 2010).

Table 3
IDI values and obtained performance levels for EBRD countries of operation.

Level (ELECTRE MLO)  Countries and IDI values
I     EST 6.46; SLO 6.41
II    HUN 5.64; LTU 5.55; HRV 5.53; CZE 5.45; SVK 5.38; LVA 5.28; BGR 4.87
III   POL 5.29; RUS 4.57
IV    ROM 4.73; MNE 4.54 a; TFRM 4.32; SRB 4.23; BLR 4.07; UKR 3.87
V     BIH 3.65; KAZ 3.47
VI    MDA 3.37; GEO 3.22; AZE 3.18; ALB 3.12; ARM 2.94
VII   MNG 2.71; KGZ 2.65
VIII  TKM 2.38; TJK 2.25; UZB 2.25

a The comparability of ELECTRE MLO and IDI for Montenegro is restricted because, instead of the estimated value for FTL used by the ITU, we used data from the National Regulatory Authority of MNE.

Another important difference is the way that ELECTRE MLO treats a case where a country exhibits specific development patterns, such as pronounced good performance on some indicator. In this case, a country may be highly ranked, but not dominate others. An example is Bulgaria, which is highly positioned (by both ELECTRE MLO and IDI), but cannot serve as a corresponding benchmark for any country from the third level. This is because Bulgaria shows less balanced performance among the chosen indicators — high scores for mobile penetration and internet bandwidth and low scores in the broadband domain. Furthermore, in 2008, Bulgaria showed a significant increase in the value of its internet bandwidth indicator. Therefore, it is advisable to investigate its performance in previous years, using the same procedure. This may eliminate the misleading effects of time cut analysis and unexpected trend volatility. All of this brings Bulgaria into question as a benchmark country, i.e., a 'good practice' exemplar.

To summarize, the results from ELECTRE MLO show that a better performing country is one that achieves better performance in a sufficient number of criteria and achieves a more balanced performance, meaning that a better performing country would not have markedly worse performance on any criteria. In this way, the method respects the multi-dimensional nature of the digital divide (DD). Moreover, ELECTRE MLO addresses the issue of benchmark countries more thoroughly. It practically operationalizes the "learning from the similar" approach (Wegman & Oppe, 2010; Bauer, 2010), as it suggests looking for corresponding benchmarks on an adjacent upper level. This kind of relation cannot be detected with linear-order, CI-based benchmarking. In other words, unlike conventional quantitative methods of ranking, in ELECTRE MLO, a country that is at a higher level is not necessarily a suitable benchmark country (a 'good practice' exemplar) for countries at lower levels. However, it is up to policy-makers to decide how high to set the desired outcomes (i.e., what development path to follow).

5.1.2. Setting a development strategy — an evolutionary vs. a revolutionary development path

An important implication for planners and practitioners is that each relation between countries can point to a development option. We can observe that countries differ in the number of incoming/outgoing relations, and therefore in possible development options. The essence of the hierarchical preorder is to establish gradual development paths. This means that policy-makers may learn from the countries at the adjacent, better level of performance (this can be seen as an 'evolutionary' development path). However, depending on the extant circumstances (countries' development strategies and priorities), the decision-maker may opt for more ambitious goals, i.e., a 'revolutionary' development path. This is done by looking for corresponding benchmark countries on the higher levels (to skip some intermediate levels).

If we again take Poland as an illustrative example, we can see that Hungary, as its corresponding benchmark, achieves better performance in three of eleven criteria (in the domains of Fixed telephone lines, Internet users and Fixed broadband internet subscribers), while they have similar (equally rated) performance in all of the other criteria. However, we may also consider countries from the top level as potential BMs for Poland. In terms of the number of better performing domains, both Slovenia and Estonia record significantly better performance than Poland. Moreover, unlike Hungary, which has slightly better indicator values, the top-level countries have significantly better performance in most of the digital divide domains.

This leads us to the question of a suitable BM. Namely, the method does not explicitly favor top-level countries (those that are better in most/all indicators) to be suitable BMs for any lower-level ones. Rather, it promotes stepwise improvements from following a suggested path — the 'stepwise benchmarking' concept (as introduced in Section 2.3). However, the policy-maker still may choose how to set the development strategy; a policy-maker can choose between more ambitious and more achievable policy goals.

5.2. Practical implications of the study

In general, this research contributes to the classification of a selected group of countries (the EBRD countries of operation) into hierarchical levels of performance. Hence, this study provides an additional assessment that may complement other evaluations in the digital divide domain (such as the variety of composite indices).

We will reveal more specific practical implications of this research by examining one of the lowest-level countries, Tajikistan. First, policy-makers or other stakeholders from Tajikistan can gain insights about the country's position relative to its policy (EBRD) and regional partners (Commonwealth of Independent States); they can monitor both the country's actual performance and its progress. Thereupon, policy makers from Tajikistan may decide to attempt to match the achievements of some of the neighboring countries (in terms of performance), such as Turkmenistan, Uzbekistan, Mongolia or Kyrgyzstan. According to the IDI ranking, all of these countries are higher ranked and thus represent possible benchmarks for Tajikistan. Our research, however, eliminates the countries with performances that are not definitively better, in this case, Turkmenistan and Uzbekistan (the same elimination principle is applied among countries at the adjacent performance level, as shown in the discussion on Poland). Thus, a policy-maker from Tajikistan receives a 'finer' selection of potential benchmark countries. Nevertheless, as is the case with the practical implications of benchmarking, the final choice depends on the preferences of policy-makers. For example, in the case of Tajikistan, the policy-makers still may prefer to exchange experience with Kyrgyzstan rather than with Mongolia because Kyrgyzstan is a neighboring country and a member of the Commonwealth of Independent States like Tajikistan, and it is also more similar with regard to population and some economic indicators. Finally, for the less developed countries, the obtained performance levels may be seen as a measure of ambition in pursuing policy goals; policy-makers can gradually increase policy targets ("climbing up the relation tree level by level") or skip some levels and attempt to achieve more.

5.3. Theoretical implications and the added value of the ELECTRE MLO

The main contribution that this research makes to the body of knowledge in the field of DD is a novel method (ELECTRE MLO) that offers an entirely new ranking scheme suitable for the purposes of BM. Unlike other studies concerning alternatives for globally recommended indices, which resulted in additional index-based rankings (such as Emrouznejad et al., 2010; Hanafizadeh et al., 2009), ELECTRE MLO is the only alternative approach that deeply addresses the comparability between countries and reveals corresponding BMs. The question is what makes this method a "decent" alternative to the commonly used CIs approach.

In comparing this method to CIs, it is necessary to point out that in both cases we observe a certain disadvantage. Namely, a country that registers high/top results for just one or two indicators may be relatively highly ranked. This is due to a compensative effect that occurs in the construction of the index. This appears in our approach as well. But unlike the CIs, the ELECTRE MLO provides additional pair-wise relations between countries at nearby hierarchical levels. These relations ultimately reveal countries that act as 'outliers' in this way. Technically speaking, our method does not favor these countries as the ones to look up to. Thus, the method respects the multidimensionality of digital divide phenomena and allows countries that show not only better but also more balanced performance to be seen as preferable. This is the significant difference from the existing approaches used in benchmarking the digital divide.

The method is applied to the EBRD's countries of operation, but it can be used as a practical tool elsewhere, with the recommendation that it be applied to an internationally coordinated group of countries (benchmarking partners). The supporting software for the proposed ELECTRE MLO is flexible enough to accept more indicators/criteria. This method provides a policy-maker from any country an alternative to a number of existing indices to assess the digital divide.

5.4. Limitations of the study and future research

Some limitations of the study need to be discussed to understand its scope of application and generalization.

ELECTRE MLO can indicate how well a country is doing relative to others, but not relative to a theoretical maximum. This especially affects the top-rated countries, in this case Estonia and Slovenia, as the method does not provide the big picture on their position, i.e., how well they stand globally.

Some software constraints should also be mentioned. If many countries are involved, or if many countries share the same performance level, the relation tree may not be sufficiently visible. This requires further development of the software package. The other solution is to develop new formats to visualize the results. This could be the subject of further research.

Finally, policy makers must be aware of the fact that every mathematical modeling of complex policy issues (such as the digital divide) is a kind of simplification, and therefore the results must be considered with caution. Additionally, the ELECTRE MLO operates with arbitrarily imposed significance threshold values, based on user preferences. If the value of the threshold p increases (meaning stricter conditions to declare the dominance of one country over another), then some countries will appear at the adjacent level above, and vice versa. For example, if p is set to 0.85, the first two levels remain the same, but some lower-level countries (e.g., Romania, Montenegro and Ukraine) will "climb" to the next level. Increasing the other threshold (q), which reflects tolerance of inferior performance, may induce new dominance relations. For example, if q is set to 0.7, Croatia will outperform Bulgaria and "move it off" from the second level, while other levels remain the same. To conclude, the relation tree is to some extent affected by the decision-makers' preferences. Therefore, another topic for future development of the ELECTRE MLO method could be the determination of sensitivity ranges for the concordance and discordance thresholds in which the hierarchical status of the countries remains unchanged.

6. Conclusion

The issues pertaining to benchmarking the digital divide are manifold, including its scope, the selection of a representative set of indicators and supporting analytical tools, and the integration of the results into a policy analysis.

To give due respect to both benchmarking itself and the multidimensionality of the digital divide, a new analytical tool is developed as an alternative to the commonly used CIs. The aim is to go beyond searching for a mythical ‘best practice’ and to discover the ‘relevant practice’ for achieving countries' own distinct objectives.

The suggested procedure enables the following: 1) the classification of countries into performance groups arranged in hierarchical levels and 2) the identification of corresponding benchmarks for outperformed countries. Unlike the CI methodology, where countries are put into classes according to arbitrarily imposed limit values, our approach proposes an eliminatory iterative procedure to specify hierarchical levels of performance.
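As a rough illustration of such an eliminatory, level-by-level classification, the following Python sketch repeatedly extracts the countries that are not outranked by any country still under consideration and assigns them to the next level. It is a simplified sketch of the general idea only, starting from an already established outranking relation (here a small hypothetical one), and is not the full ELECTRE MLO procedure.

# Simplified, hypothetical sketch of an eliminatory level-extraction loop.
# `outranking` maps each country to the set of countries it outranks; a level
# consists of the countries not outranked by any country still remaining.

def extract_levels(outranking):
    remaining = set(outranking)
    levels = []
    while remaining:
        dominated = {b for a in remaining for b in outranking[a] if b in remaining}
        level = remaining - dominated
        if not level:          # guard against cycles in the relation
            level = set(remaining)
        levels.append(sorted(level))
        remaining -= level
    return levels

# Tiny hypothetical relation (not the study's results).
relation = {
    "EST": {"HUN", "HRV"},
    "HUN": {"BIH"},
    "HRV": {"BIH"},
    "BIH": set(),
}
print(extract_levels(relation))  # [['EST'], ['HRV', 'HUN'], ['BIH']]

Countries at a given level can then look to the countries at the level above that outrank them as their corresponding benchmarks.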

The methodology is applied to EBRD countries of operation. According to the selected digital divide indicators, the most preferred countries (Estonia and Slovenia) are distinguished. The remaining countries are classified according to their performance into seven further levels, constituting a relation tree. The results are compared with IDI values, and a fair degree of similarity has been detected. In this sense (i.e., in terms of performance evaluation), both CIs and an outranking approach are valid. However, the proposed approach deeply addresses the comparability between countries and refines the selection of suitable benchmark countries for the outperformed ones. The outcome, in the form of a relation tree, allows policy-makers to identify development options by looking for benchmark countries at higher hierarchical levels.

The new ranking scheme provided by the proposed ELECTRE MLO gives more meaningful signposts for policy-makers, making this approach a worthy alternative to the existing procedures for benchmarking the digital divide.

Acknowledgment

This paper is part of the “Critical infrastructure management for sustainable development in postal, communication and railway sector of Republic of Serbia” project, funded by the Ministry of Education and Science of the Republic of Serbia, project number TR36022.

Appendix A. Concordance and discordance matrices

Table A1. Concordance index.

ALB ARM AZE BLR BIH BGR HRV CZE EST GEO HUN KAZ KGZ LVA LTU MDA MNG MNE POL ROM RUS SRB SVK SVN TJK TFRM TKM UKR UZB

ALB 0.67 0.38 0.20 0.18 0.00 0.00 0.00 0.00 0.40 0.00 0.25 1.00 0.00 0.00 0.67 0.75 0.09 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.88 0.25 0.88
ARM 0.33 0.42 0.17 0.18 0.00 0.00 0.00 0.00 0.44 0.00 0.00 0.67 0.00 0.00 0.33 0.40 0.08 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.83 0.00 0.83
AZE 0.63 0.58 0.00 0.00 0.00 0.00 0.00 0.00 0.67 0.00 0.29 0.90 0.00 0.00 0.63 0.89 0.10 0.00 0.06 0.10 0.08 0.00 0.00 1.00 0.07 1.00 0.44 1.00
BLR 0.80 0.83 1.00 0.60 0.20 0.06 0.18 0.07 1.00 0.14 0.78 1.00 0.14 0.13 1.00 1.00 0.50 0.14 0.25 0.33 0.33 0.20 0.00 1.00 0.29 1.00 0.73 1.00
BIH 0.82 0.82 1.00 0.40 0.00 0.00 0.12 0.00 1.00 0.00 0.67 1.00 0.00 0.13 1.00 0.91 0.40 0.13 0.28 0.30 0.13 0.13 0.00 1.00 0.27 1.00 0.67 1.00
BGR 1.00 1.00 1.00 0.80 1.00 0.20 0.31 0.00 1.00 0.31 1.00 1.00 0.46 0.35 1.00 1.00 0.86 0.46 0.75 0.75 0.82 0.57 0.24 1.00 0.83 1.00 0.93 1.00
HRV 1.00 1.00 1.00 0.94 1.00 0.80 0.22 0.13 1.00 0.46 1.00 1.00 0.62 0.55 1.00 1.00 1.00 0.67 0.92 0.94 1.00 0.43 0.24 1.00 1.00 1.00 0.94 1.00
CZE 1.00 1.00 1.00 0.82 0.88 0.69 0.78 0.00 1.00 0.44 1.00 1.00 0.62 0.67 0.89 1.00 0.89 0.73 1.00 0.81 0.89 0.64 0.27 1.00 1.00 1.00 0.82 1.00
EST 1.00 1.00 1.00 0.93 1.00 1.00 0.87 1.00 1.00 0.91 1.00 1.00 0.92 0.93 1.00 1.00 1.00 0.92 1.00 0.94 1.00 1.00 0.44 1.00 1.00 1.00 0.94 1.00
GEO 0.60 0.56 0.33 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.22 1.00 0.00 0.00 0.60 0.86 0.17 0.00 0.06 0.10 0.08 0.00 0.00 1.00 0.08 1.00 0.29 1.00
HUN 1.00 1.00 1.00 0.86 1.00 0.69 0.54 0.56 0.09 1.00 1.00 1.00 0.75 0.62 1.00 1.00 1.00 1.00 0.86 0.85 1.00 0.69 0.15 1.00 1.00 1.00 0.91 1.00
KAZ 0.75 1.00 0.71 0.22 0.33 0.00 0.00 0.00 0.00 0.78 0.00 0.83 0.00 0.00 0.67 0.75 0.20 0.00 0.06 0.08 0.11 0.00 0.00 1.00 0.17 1.00 0.29 1.00
KGZ 0.00 0.33 0.10 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.17 0.00 0.00 0.00 0.50 0.08 0.00 0.00 0.00 0.00 0.00 0.00 1.00 0.00 0.83 0.20 0.83
LVA 1.00 1.00 1.00 0.86 1.00 0.54 0.38 0.38 0.08 1.00 0.25 1.00 1.00 0.55 1.00 1.00 0.83 0.60 0.63 0.85 1.00 0.43 0.00 1.00 0.86 1.00 0.77 1.00
LTU 1.00 1.00 1.00 0.87 0.87 0.65 0.45 0.33 0.07 1.00 0.38 1.00 1.00 0.45 0.88 1.00 0.86 0.64 0.75 0.88 0.88 0.38 0.13 1.00 1.00 1.00 0.86 1.00
MDA 0.33 0.67 0.38 0.00 0.00 0.00 0.00 0.11 0.00 0.40 0.00 0.33 1.00 0.00 0.13 0.60 0.09 0.13 0.13 0.00 0.00 0.13 0.00 1.00 0.17 0.88 0.25 0.88
MNG 0.25 0.60 0.11 0.00 0.09 0.00 0.00 0.00 0.00 0.14 0.00 0.25 0.50 0.00 0.00 0.40 0.17 0.00 0.06 0.08 0.09 0.00 0.00 1.00 0.14 0.78 0.29 0.71
MNE 0.91 0.92 0.90 0.50 0.60 0.14 0.00 0.11 0.00 0.83 0.00 0.80 0.92 0.17 0.14 0.91 0.83 0.20 0.31 0.36 0.55 0.29 0.11 1.00 0.44 0.93 0.75 0.93
POL 1.00 1.00 1.00 0.86 0.87 0.54 0.33 0.27 0.08 1.00 0.00 1.00 1.00 0.40 0.36 0.88 1.00 0.80 0.80 0.73 0.88 0.45 0.13 1.00 1.00 1.00 0.77 1.00
ROM 1.00 1.00 0.94 0.75 0.72 0.25 0.08 0.00 0.00 0.94 0.14 0.94 1.00 0.38 0.25 0.87 0.94 0.69 0.20 0.55 0.69 0.36 0.22 1.00 0.56 0.94 0.75 0.94
RUS 1.00 1.00 0.90 0.67 0.70 0.25 0.06 0.19 0.06 0.90 0.15 0.92 1.00 0.15 0.13 1.00 0.92 0.64 0.27 0.45 0.60 0.31 0.11 1.00 0.45 0.92 0.80 0.92
SRB 1.00 1.00 0.92 0.67 0.88 0.18 0.00 0.11 0.00 0.92 0.00 0.89 1.00 0.00 0.13 1.00 0.91 0.45 0.13 0.31 0.40 0.15 0.00 1.00 0.33 0.92 0.60 0.92
SVK 1.00 1.00 1.00 0.80 0.87 0.43 0.57 0.36 0.00 1.00 0.31 1.00 1.00 0.57 0.62 0.88 1.00 0.71 0.55 0.64 0.69 0.85 0.22 1.00 0.83 1.00 0.67 1.00
SVN 1.00 1.00 1.00 1.00 1.00 0.76 0.76 0.73 0.56 1.00 0.85 1.00 1.00 1.00 0.87 1.00 1.00 0.89 0.87 0.78 0.89 1.00 0.78 1.00 0.89 1.00 0.88 1.00
TJK 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.40 0.00 0.25
TFRM 1.00 1.00 0.93 0.71 0.73 0.17 0.00 0.00 0.00 0.92 0.00 0.83 1.00 0.14 0.00 0.83 0.86 0.56 0.00 0.44 0.55 0.67 0.17 0.11 1.00 0.93 0.67 0.93
TKM 0.13 0.17 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.17 0.00 0.00 0.13 0.22 0.07 0.00 0.06 0.08 0.08 0.00 0.00 0.60 0.07 0.00 0.33
UKR 0.75 1.00 0.56 0.27 0.33 0.07 0.06 0.18 0.06 0.71 0.09 0.71 0.80 0.23 0.14 0.75 0.71 0.25 0.23 0.25 0.20 0.40 0.33 0.12 1.00 0.33 1.00 1.00
UZB 0.13 0.17 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.17 0.00 0.00 0.13 0.29 0.07 0.00 0.06 0.08 0.08 0.00 0.00 0.75 0.07 0.67 0.00


Table A2. Discordance index.

ALB ARM AZE BLR BIH BGR HRV CZE EST GEO HUN KAZ KGZ LVA LTU MDA MNG MNE POL ROM RUS SRB SVK SVN TJK TFRM TKM UKR UZB

ALB 0.22 0.11 0.44 0.33 0.89 0.56 0.44 0.89 0.11 0.44 0.22 0.00 0.44 0.44 0.33 0.06 0.33 0.44 0.22 0.33 0.33 0.56 0.67 0.00 0.33 0.06 0.33 0.06
ARM 0.22 0.22 0.22 0.33 0.89 0.44 0.56 0.89 0.22 0.56 0.11 0.11 0.56 0.44 0.22 0.11 0.44 0.56 0.33 0.33 0.33 0.56 0.56 0.00 0.44 0.06 0.22 0.06
AZE 0.22 0.22 0.33 0.22 0.89 0.44 0.44 0.89 0.11 0.44 0.22 0.11 0.44 0.56 0.22 0.17 0.33 0.44 0.33 0.44 0.22 0.44 0.56 0.00 0.33 0.00 0.33 0.00
BLR 0.11 0.11 0.00 0.11 0.89 0.33 0.33 0.89 0.00 0.33 0.11 0.00 0.33 0.44 0.00 0.00 0.22 0.33 0.22 0.33 0.11 0.44 0.44 0.00 0.22 0.00 0.22 0.00
BIH 0.11 0.11 0.00 0.17 0.89 0.33 0.33 0.89 0.00 0.33 0.11 0.00 0.33 0.44 0.00 0.06 0.22 0.33 0.22 0.33 0.11 0.44 0.44 0.00 0.22 0.00 0.22 0.00
BGR 0.00 0.00 0.00 0.11 0.00 0.22 0.22 0.33 0.00 0.33 0.00 0.00 0.33 0.22 0.00 0.00 0.11 0.33 0.11 0.11 0.11 0.33 0.33 0.00 0.22 0.00 0.17 0.00
HRV 0.00 0.00 0.00 0.11 0.00 0.67 0.11 0.67 0.00 0.11 0.00 0.00 0.11 0.17 0.00 0.00 0.00 0.11 0.06 0.11 0.00 0.11 0.22 0.00 0.00 0.00 0.17 0.00
CZE 0.00 0.00 0.00 0.22 0.11 0.56 0.33 0.56 0.00 0.11 0.00 0.00 0.11 0.11 0.11 0.00 0.11 0.11 0.00 0.11 0.11 0.11 0.44 0.00 0.00 0.00 0.11 0.00
EST 0.00 0.00 0.00 0.06 0.00 0.00 0.11 0.00 0.00 0.06 0.00 0.00 0.06 0.11 0.00 0.00 0.00 0.06 0.00 0.06 0.00 0.00 0.22 0.00 0.00 0.00 0.11 0.00
GEO 0.22 0.22 0.11 0.33 0.22 0.89 0.44 0.44 0.89 0.44 0.22 0.00 0.44 0.56 0.22 0.06 0.33 0.44 0.33 0.44 0.22 0.56 0.56 0.00 0.33 0.00 0.33 0.00
HUN 0.00 0.00 0.00 0.11 0.00 0.89 0.22 0.33 0.89 0.00 0.00 0.00 0.11 0.22 0.00 0.00 0.00 0.00 0.22 0.11 0.00 0.33 0.33 0.00 0.00 0.00 0.06 0.00
KAZ 0.22 0.00 0.22 0.22 0.33 0.89 0.44 0.56 0.89 0.22 0.56 0.11 0.56 0.44 0.22 0.11 0.44 0.44 0.22 0.22 0.33 0.56 0.56 0.00 0.33 0.00 0.17 0.00
KGZ 0.22 0.22 0.11 0.44 0.33 0.89 0.56 0.44 0.89 0.11 0.56 0.22 0.56 0.56 0.33 0.11 0.33 0.56 0.33 0.44 0.33 0.56 0.67 0.00 0.44 0.06 0.33 0.06
LVA 0.00 0.00 0.00 0.11 0.00 0.78 0.22 0.22 0.78 0.00 0.11 0.00 0.00 0.33 0.00 0.00 0.11 0.11 0.11 0.22 0.00 0.22 0.33 0.00 0.11 0.00 0.11 0.00
LTU 0.00 0.00 0.00 0.22 0.11 0.78 0.33 0.22 0.78 0.00 0.11 0.00 0.00 0.11 0.11 0.00 0.11 0.11 0.11 0.11 0.11 0.22 0.44 0.00 0.00 0.00 0.11 0.00
MDA 0.22 0.22 0.11 0.22 0.22 0.89 0.44 0.44 0.89 0.11 0.56 0.22 0.00 0.56 0.56 0.11 0.33 0.56 0.33 0.44 0.33 0.56 0.56 0.00 0.44 0.06 0.33 0.06
MNG 0.33 0.33 0.22 0.56 0.44 0.89 0.67 0.56 0.89 0.22 0.44 0.33 0.11 0.44 0.67 0.44 0.44 0.44 0.44 0.56 0.44 0.56 0.78 0.00 0.44 0.11 0.44 0.11
MNE 0.06 0.06 0.06 0.22 0.11 0.89 0.33 0.33 0.89 0.06 0.44 0.11 0.06 0.44 0.33 0.06 0.11 0.44 0.22 0.22 0.22 0.44 0.44 0.00 0.33 0.06 0.28 0.06
POL 0.00 0.00 0.00 0.22 0.11 0.89 0.33 0.33 0.89 0.00 0.11 0.00 0.00 0.11 0.22 0.11 0.00 0.11 0.22 0.11 0.11 0.33 0.44 0.00 0.00 0.00 0.11 0.00
ROM 0.00 0.00 0.06 0.22 0.11 0.67 0.33 0.33 0.67 0.06 0.33 0.06 0.00 0.33 0.22 0.11 0.06 0.22 0.22 0.11 0.11 0.33 0.44 0.00 0.11 0.06 0.11 0.06
RUS 0.00 0.00 0.06 0.11 0.11 0.89 0.22 0.33 0.89 0.06 0.33 0.06 0.00 0.33 0.22 0.00 0.06 0.22 0.22 0.22 0.11 0.33 0.33 0.00 0.11 0.06 0.06 0.06
SRB 0.00 0.00 0.06 0.11 0.06 0.89 0.22 0.33 0.89 0.06 0.22 0.06 0.00 0.22 0.33 0.00 0.06 0.11 0.22 0.22 0.22 0.33 0.33 0.00 0.11 0.06 0.17 0.06
SVK 0.00 0.00 0.00 0.22 0.11 0.56 0.33 0.22 0.56 0.00 0.11 0.00 0.00 0.11 0.33 0.11 0.00 0.11 0.11 0.11 0.22 0.11 0.44 0.00 0.11 0.00 0.17 0.00
SVN 0.00 0.00 0.00 0.00 0.00 0.78 0.22 0.22 0.78 0.00 0.11 0.00 0.00 0.00 0.33 0.00 0.00 0.11 0.11 0.11 0.22 0.00 0.22 0.00 0.11 0.00 0.11 0.00
TJK 0.33 0.33 0.22 0.56 0.44 0.89 0.67 0.56 0.89 0.22 0.56 0.33 0.11 0.56 0.67 0.44 0.17 0.44 0.56 0.44 0.56 0.44 0.56 0.78 0.44 0.11 0.44 0.11
TFRM 0.00 0.00 0.06 0.22 0.11 0.89 0.33 0.33 0.89 0.06 0.22 0.06 0.00 0.22 0.22 0.11 0.06 0.11 0.22 0.22 0.17 0.11 0.33 0.44 0.00 0.06 0.22 0.06
TKM 0.44 0.44 0.22 0.44 0.33 0.89 0.67 0.67 0.89 0.22 0.56 0.44 0.22 0.56 0.78 0.33 0.17 0.56 0.56 0.56 0.67 0.44 0.56 0.67 0.11 0.56 0.56 0.11
UKR 0.22 0.00 0.22 0.22 0.33 0.89 0.44 0.56 0.89 0.22 0.56 0.11 0.11 0.56 0.44 0.22 0.11 0.44 0.44 0.22 0.22 0.33 0.56 0.56 0.00 0.33 0.00 0.00
UZB 0.33 0.33 0.22 0.44 0.33 0.89 0.56 0.56 0.89 0.22 0.56 0.33 0.17 0.56 0.67 0.33 0.22 0.44 0.56 0.44 0.56 0.33 0.56 0.67 0.06 0.44 0.06 0.44



References

Akca, H., Sayili, M., & Esengun, K. (2007). Challenge of rural people to reduce digital divide in the globalized world: Theory and practice. Government Information Quarterly, 24(2), 404–413.

Al-mutawkkil, A., Heshmati, A., & Hwang, A. J. (2009). Development of telecommunication and broadcasting infrastructure indices at the global level. Telecommunications Policy, 33(3–4), 176–199.

Anić, I., & Larichev, O. (1996). The ELECTRE method and the problem of acyclic relation between alternatives. Automation and Remote Control, 8, 108–118 (in Russian).

Archibugi, D., & Coco, A. (2004). A new indicator of technological capabilities for developed and developing countries (ArCo). World Development, 32(4), 629–654.

Baker, D. (2009). Advancing e-government performance in the United States through enhanced usability benchmarks. Government Information Quarterly, 26(1), 82–88.

Bauer, M. J. (2010). Learning from each other: Promises and pitfalls of benchmarking in communications policy. Info, 12(6), 8–20.

Bertot, J. C. (2003). The multiple dimensions of the digital divide: More than the technology ‘haves’ and ‘have nots’. Government Information Quarterly, 20(2), 185–191.

Billon, M., Marco, R., & Lera-Lopez, F. (2009). Disparities in ICT adoption: A multidimensional approach to study the cross-country digital divide. Telecommunications Policy, 33(10–11), 596–610.

Bojković, N., Anić, I., & Pejčić-Tarle, S. (2010). One solution for cross-country transport-sustainability evaluation using a modified ELECTRE method. Ecological Economics, 69(5), 1176–1186.

Chen, W., & Wellman, B. (2004). The global digital divide — Within and between countries. IT & Society, 1(7), 39–45.

Cilan, C. A., Bolat, B. A., & Coşkun, A. E. (2009). Analyzing digital divide within and between member and candidate countries of European Union. Government Information Quarterly, 26(1), 98–105.

Corrocher, N., & Ordanini, A. (2002). Measuring the digital divide: A framework for the analysis of cross-country differences. Journal of Information Technology, 17(1), 9–19.

Dewan, S., Ganley, D., & Kraemer, L. K. (2005). Across the digital divide: A cross-country multi-technology analysis of the determinants of IT penetration. Journal of the Association for Information Systems, 6(12), 409–432.

Dolowitz, D. (2003). A policy-maker's guide to policy transfer. The Political Quarterly, 74(1), 101–108.

Dolowitz, D., & Marsh, D. (1996). Who learns what from whom: A review of the policy transfer literature. Political Studies, 44(2), 343–357.

Dolowitz, D., & Marsh, D. (2000). Learning from abroad: The role of policy transfer in contemporary policy-making. Governance, 13(1), 5–24.

EBRD (2008). Comparative assessment of the telecommunications sector in the transition countries, EBRD telecommunications sector assessment report. url: http://www.ebrd.com/downloads/legal/telecomms/report.pdf (Retrieved in January, 2011).

Emrouznejad, A., Cabanda, E., & Gholami, R. (2010). An alternative measure of the ICT-Opportunity Index. Information Management, 47(4), 246–254.

Erumban, A. A., & Jong, B. S. (2006). Cross country differences in ICT adoption: A consequence of culture? Journal of World Business, 41(4), 302–314.

Estrada, A. S., Song, S. H., Kim, A. Y., Namn, H. S., & Kang, C. S. (2009). A method of stepwise benchmarking for inefficient DMUs based on the proximity-based target selection. Expert Systems with Applications, 36(9), 11595–11604.

Evans, M. (Ed.). (2004). Policy transfer in a global perspective. Ashgate Publishing, Ltd.

Evans, D., & Yen, C. D. (2005). E-government: An analysis for implementation: Framework for understanding cultural and social impact. Government Information Quarterly, 22(3), 354–373.

Ferro, E., Helbig, C. N., & Gil-Garcia, J. R. (2011). The role of IT literacy in defining digital divide policy needs. Government Information Quarterly, 28(1), 3–10.

Gauld, R., Goldfinch, S., & Horsburgh, S. (2010). Do they want it? Do they use it? The ‘Demand-side’ of e-government in Australia and New Zealand. Government Information Quarterly, 27(2), 177–186.

Hanafizadeh, M. R., Saghaei, A., & Hanafizadeh, P. (2009). An index for cross-country analysis of ICT infrastructure and access. Telecommunications Policy, 33(7), 385–405.

Haßler, B., & Jackson, A. M. (2010). Bridging the bandwidth gap: Open educational resources and the digital divide. IEEE Transactions on Learning Technologies, 3(2), 110–115.

Hoffman, D. L., Novak, T. P., & Schlosser, A. E. (2000). The evolution of the digital divide: How gaps in internet access may impact electronic commerce. Journal of Computer-Mediated Communication, 5(3) (url: http://jcmc.indiana.edu/vol5/issue3/hoffman.html (Retrieved in June, 2011)).

ITU (2010). Measuring the information society, International Telecommunications Union (ITU) report. url: http://www.itu.int/ITU-D/ict/publications/idi/2010/Material/MIS_2010_without_annex_4-e.pdf (Retrieved in November, 2010).

Jackson, A. L., Zhao, Y., Kolenic, A., III, Hiram Fitzgerald, E. H., Harold, R., & Von Eye, A. (2008). Race, gender, and information technology use: The new digital divide. Cyber Psychology & Behavior, 11(4), 437–442.

James, J. (2009). From the relative to the absolute digital divide in developing countries. Technological Forecasting and Social Change, 76(8), 1124–1129.

James, O., & Lodge, M. (2003). The limitations of ‘policy transfer’ and ‘lesson drawing’ for public policy research. Political Studies Review, 1(2), 179–193.

Jansen, J., De Vries, S., & Van Schaik, P. (2010). The contextual benchmark method: Benchmarking e-government services. Government Information Quarterly, 27(3), 213–219.

Lundvall, B. A., & Tomlinson, M. (2001). Learning-by-comparing: reflections on the use and abuse of international benchmarking. In G. Sweeney (Ed.), Innovation, economic progress and quality of life (pp. 203–231). Cheltenham: Edward Elgar.

McNair, C. J., & Leibried, K. H. L. (1994). Benchmarking: A tool for continuous improvement. Indianapolis, IN: John Wiley & Sons, Inc.

Noce, A., & McKeown, L. (2008). A new benchmark for internet use: A logistic modeling of factors influencing internet use in Canada. Government Information Quarterly, 25(3), 462–476.

Norris, P. (2001). Digital divide: Civic engagement, information poverty, and the internet worldwide. Cambridge: Cambridge University Press.

OECD (2001). Understanding the digital divide. Organization for Economic Cooperation and Development (OECD) (url: http://www.oecd.org/dataoecd/38/57/1888451.pdf (Retrieved in November, 2010)).

Papaioannou, T. (2007). Policy benchmarking: A tool of democracy or a tool of authoritarianism? Benchmarking: An International Journal, 14(4), 497–516.

Petrović, M., Gospić, N., Pejčić-Tarle, S., & Bogojević, D. (2011). Benchmarking telecommunications in developing countries: A three-dimensional approach. Scientific Research and Essays, 6(4), 729–737.

Philipa, B., Hamdi, M., Lorraine, M., & Ruffing, L. (2003). Information and communication technology development indices. United Nations publication (UNCTAD/ITE/IPC/2003/1, url: http://www.unctad.org/en/docs/iteipc20031_en.pdf (Retrieved in December, 2010)).

Ricci, A. (2000). Measuring information society dynamics of European data on usage of information and communication technologies in Europe since 1995. Telematics and Informatics, 17(1–2), 141–167.

Rose, R. (1991). What is lesson-drawing? Journal of Public Policy, 11(1), 3–30.

Roy, B. (1968). Classement et choix en présence de points de vue multiples (la méthode ELECTRE). Revue Française d'Informatique et de Recherche Opérationnelle, 8, 57–75.

Sciadas, G. (2004). International benchmarking for the information society. ITU-KADO digital bridges symposium. Busan, Republic of Korea: International Telecommunication Union (ITU) (url: http://www.itu.int/osg/spu/ni/digitalbridges/docs/background/BDB-intl-indices.pdf (Retrieved in June, 2010)).

Selhofer, H., & Husing, T. (2002). The Digital Divide Index — A measure of social inequalities in the adoption of ICT. Proceedings of the Xth European Conference on Information Systems ECIS 2002 — Information systems and the future of the digital economy, Gdansk, Poland, 2002 (url: http://www.empirica.com/publikationen/documents/2002/Huesing_Selhofer_DDIX_2002.pdf (Retrieved in November, 2010)).

Selhofer, H., & Mayringer, H. (2001). Benchmarking the information society. Development in European countries. Communications and Strategies, 43, 17–55.

Spendolini, M. J. (1992). The benchmarking book. New York: American Management Association.

Trkman, P., Jerman-Blazic, B., & Turk, T. (2008). Factors of broadband development and the design of a strategic policy framework. Telecommunications Policy, 32(2), 101–115.

Vicente, M. R., & Lopez, A. J. (2006). A multivariate framework for the analysis of the digital divide: Evidence for the European Union-15. Information Management, 43(6), 756–766.

Vicente, M. R., & López, A. J. (2010). What drives broadband diffusion? Evidence from Eastern Europe. Applied Economics Letters, 17(1), 51–54.

Wegman, F., & Oppe, S. (2010). Benchmarking road safety performances of countries. Safety Science, 48(9), 1203–1211.

Dr Marijana Petrović is an assistant professor in the Department of Management, Organization and Economy, Faculty of Transport and Traffic Engineering, at the University of Belgrade, Serbia. She is the author or coauthor of a number of scientific papers. Her research area is quality management and policy in transport and communications, especially policy modeling in telecommunications.

Dr Nataša Bojković is an assistant professor with the Faculty of Transport and Traffic Engineering, at the University of Belgrade, Serbia. She is the author or coauthor of a number of papers in peer-reviewed international and national journals, including invited papers as well as conference proceedings. Her research area is management and policy in transport and communications, especially cross-national comparisons.

Dr Ivan Anić is a teaching and research assistant in mathematical analysis in the Faculty of Mathematics at the University of Belgrade, Serbia. His area of research is multi-criteria decision making with applications. He especially deals with outranking methods and supporting software solutions. He has published a number of scientific papers in this area and in mathematical analysis.

Dalibor Petrović is a Ph.D. candidate and a teaching and research assistant in the Department of Social Sciences of the Faculty of Transport and Traffic Engineering at the University of Belgrade, Serbia. He holds a Master's degree in Sociology from the University of Belgrade. Since the beginning of his academic career, he has been interested in the study of the social aspects of internet use and is the author of the first sociological study of the social aspects of internet use in Serbia, titled “Internet and new forms of sociability”.