Contents

Welcome to the Arcati Mainframe Yearbook 2010
Mainframe continuity planning 2.0
Leveraging existing mainframe investments in modernization projects
Lowering mainframe TCO through zIIP specialty engine exploitation
SNA mainframe security – because SNA isn’t hacked, instead it is infiltrated
TurboHercules and disaster recovery – an innovative approach to mainframe outage and business continuity
Software migrations: past, present, and future
The 2010 Mainframe User Survey – an analysis of the profile, plans, and priorities of mainframe users
Vendor Directory – vendors, consultants and service providers in the z/OS and OS/390 environment
A media guide for IBM mainframers – information resources, publications and user groups for the z/OS environment
Glossary of Terminology – definitions of some mainframe-related terms
Technical information – hardware tables (z10); mainframe hardware timeline 1952-2010; mainframe operating system development
There’s an old Chinese curse that says, “may you live in interesting times”, and 2009 was quite an interesting year. If 2008 was the year of recession, then 2009 was the year for spotting glimmers of recovery – some of which were no more than mirages and some of which hinted at better times to come in the global economy.
For many mainframe users the most interesting new product in 2009 was not from IBM but from NEON Enterprise Software and was called zPrime. Why the excitement? Well, IBM and many independent software vendors (ISVs) generally charge users for software by the amount of General Purpose Processor (GPP) capacity they use. IBM also sells specialty processors, which are available for workloads like Linux, Java (think WebSphere), and DB2. For a typical mainframe site, processing work in a specialty processor saves money because it is not using the chargeable GPPs – and there’s an added benefit that it can save an organization money by putting off the need for an expensive processor upgrade. The controversial zPrime software, according to NEON, allows users to run up to 50% of their workload on specialty processors. So in effect a zIIP doesn’t just run DB2 work, it can also run IMS, CICS, TSO/ISPF, batch, and so on.

On the plus side this offers the strong possibility of bringing down the total cost of ownership of the mainframe, which is always good for users (though there is some debate about the achievable level of savings). On the negative side, IBM and ISVs may opt to limit the use of zPrime through licence changes (particularly for those users who are tied into Enterprise Licence Agreements) if specialty processors are used for workloads other than those intended. IBM has already issued letters about customers violating agreements if they use zPrime, and NEON has sued IBM. It will be interesting to see what happens next. Arcati’s Mainframe Market Information Service will be monitoring zPrime developments closely.
Another continuing trend in 2009 has been the encouragement of young IT professionals into mainframe computing. IBM has its Academic Initiative, which was introduced in 2004. This runs at universities in the USA, UK, and Europe. Similarly, CA is working with universities, starting in the Czech Republic, to provide mainframes that students can use for specific training modules. These and other initiatives will ensure an on-going supply of qualified COBOL and Assembler programmers. Having young, well-trained programmers ensures the future of mainframes – particularly in the light of the generally accepted fact that mainframe expertise is typically provided by an ageing (and gradually retiring) workforce.
For IBM, 2009 saw little in the way of acquisition, and I guess the recession must be to blame for that in many ways. 15 January saw IBM acquire the e-mail service assets of Outblaze. On 5 May it got Exeros for its data discovery software. There were two acquisitions at the end of July – Ounce Labs for its source code analysis, and SPSS for its statistical analysis software. In September IBM got its hands on RedPill Solutions for its analytics and optimization software. The SPSS acquisition makes IBM a big player in the Business Intelligence field and may perhaps cause SAS Software a few sleepless nights, and it may cause some other software vendors – Accelrys Software, Angoss, FICO, KXEN, Portrait Software, and SAP – to form partnerships or acquire their competition.
In terms of software developments, 2009 saw the announcement of z/OS Version 1.11, and the release of IMS Version 11 and CICS Transaction Server Version 4.1. In June, IBM announced ‘Cobra’, the codename for DB2 9.7 for LUW. In October DB2 pureScale was announced. DB2 pureScale is a database cluster solution for Online Transaction Processing (OLTP) workloads. In 2009 there were big birthdays for CICS and COBOL at 40 and 50 respectively. IMS turned 41 while DB2 is still a relative youngster at 26 and Java is only 14!
The use of Eclipse has grown in recent months. Eclipse, you’ll remember, started life as an IBM Canada project. It was developed by Object Technology International (OTI) as a Java-based replacement for the Smalltalk-based VisualAge family of IDE products. The Eclipse Foundation was created in January 2004. It’s a multi-language software development environment comprising an IDE and a plug-in system to extend it. As well as IBM championing its use, CA InterTest Batch and CA InterTest for CICS now feature a Graphical User Interface based on the Eclipse Platform. And Compuware announced Xpediter/Eclipse 2.0, saying that it helps new developers become productive more quickly by moving away from the traditional green-screen interface and providing a modernized point-and-click environment. I’m sure further Eclipse-related announcements will be made by other software vendors in the months ahead.

The Arcati Mainframe Yearbook 2010

Publisher: Mark Lillycrop
Editor: Trevor Eddolls
Contributors: DataDirect, Anura Gurugé, Reginald Harbeck, William Olders, OpenMainframe.org, Mark Wilson

All company and product names mentioned in this publication remain the property of their respective owners.

This Yearbook is the copyright of Arcati Limited, and may not be reproduced or distributed in whole or in part without the permission of the owner. A licence for internal e-mail or intranet distribution may be obtained from the publisher. Please contact Arcati for details.
Looking forward, the next-generation mainframe hardware (z11 or zNext) is likely to be the biggest news in the months ahead. According to the Mainframe Market Monitor, the rumours are that the system has been delayed from the end of 2010 to early 2011, but users should start planning now by building scope for an upgrade into new contracts. When the new system does arrive, there is a very good chance that we will see some significant changes both in the way that Unix, Linux and Windows apps are supported on the System z and in the way that software charges are handled, particularly for the largest users.
These are very complex and demanding times for mainframe users, and we hope that the information and research provided in these pages will help to ease the load a little. As always we are extremely grateful to the sponsors and advertisers who continue to make this complimentary publication possible. We wish a happy and prosperous 2010 to all our readers!
Mainframe continuity planning 2.0
Reginald Harbeck from CA’s Mainframe Business Unit takes a fresh look at mainframe continuity planning and the important new changes that must be taken into consideration.
When the original Mainframe Continuity Planning whitepaper was published in 2005, it was intended to kick off the discussion about what to do to ensure generational continuity for mainframe environments by reviewing the relevant context and facts and suggesting four approaches that were available to respond to it.
Since then, some important new changes have occurred in the world of large-scale business IT, which primarily involve the same organizations as those that use mainframes. These new realities have led to the opportunity to revisit this important issue and offer a substantial update from a 2010 perspective.
Specifically, the new circumstances that have emerged since 2005 include:

• The economic tsunami that flooded world markets in the fall of 2008
• The related delay in retirement of the current generation of mainframers
• The reassignment of experienced non-mainframe IT personnel to handle recently vacated mainframe roles
• Important academic initiatives to encourage and enable mainframe careers
• A shrinking IT workforce with increasing non-mainframe requirements
• The initial emergence of a new generation of mainframers
• The ever-increasing manageability of the mainframe, due to initiatives such as CA’s Mainframe 2.0
• The imminent tipping point of mainframe Linux into a full production-active platform, due, in part, to the emerging importance of enterprise virtualization and cloud computing
This article looks at each circumstance in detail and provides options for ensuring mainframe continuity accordingly.
The Economic Tsunami of 2008

They say every cloud has a silver lining, but that may not be much comfort to those whose golden sunshine has been obscured.
In this case, the darkness was the apparent near-collapse of the world economy. Many people suffered, and continue to suffer, from a loss of their jobs and retirement savings. What silver lining could come from such a disaster?
From governments to boardrooms and executives of the largest organizations on earth (who, coincidentally, are generally the same ones using mainframes) all the way to the front line, the message is becoming loud and clear: do not just live for the day. Look down the road, and count both the costs and the value.
This is particularly relevant given the increasing importance of regulatory compliance and corporate governance. Of course, the associated costs may be seen as burdensome given the downturn, but the costs of irresponsibility have already been seen. A focus on delivering business value through protection of data and provable compliance with relevant laws and regulations is both cost effective and essential to staying in business.
A retrenchment to scrupulous business practices must necessarily extend to technology decisions, including those concerning moving between platforms and building new applications. Consequently, organizations may increasingly do careful cost-benefit analyses of their platform choices, which often have the result of highlighting the value of centralized, quality computing – particularly as provided by the mainframe.
Delayed retirement of mainframers

A common lament among mainframers at or near retirement age is, “my 401K is now a 201K!”. This reduction in retirement savings is not limited to the USA either – rather, it is a global issue. The potential loss of some medical coverage upon retirement is a related issue.
To top it all off, the most experienced mainframers have a genuine feeling of being valued by their organizations, as they are encouraged to postpone retirement, and sometimes even given incentives such as flexible schedules.
Of course, none of this will dissuade someone who is determined to retire – especially if they are planning to come back to their previous job as a high-priced consultant days after their retirement party. Still, between these returners and those who choose to put in some career overtime, the precipitous drop in available mainframe expertise that many predicted has not yet happened.
The good news is this may give you an “extension” of the implicit deadline for getting your next generation of mainframe management in place. The bad news is, as the economy improves, these same people may retire, creating an even more sudden wave than originally predicted.
Reassignment of non-mainframe IT to mainframe roles

Back when we were all preparing for Y2K, there were a number of remediation techniques in common use. In some cases, applications were rewritten and data structures changed to have four-digit years, thus preventing Y2K-type problems from recurring for the next 8,000-plus years.
In other cases, stop-gap approaches such as “windowing” were employed, whereby applications interpreted low double-digit year values as being in the 2000s rather than the 1900s. As long as the applications and/or their data were regularly renewed or reinterpreted in a sliding fashion to keep up with the progress of time, this was also a reasonably sustainable approach.
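The windowing technique can be sketched in a few lines; the pivot and look-ahead values below are illustrative, not taken from any particular remediation project:

```python
def interpret_year(two_digit_year, pivot=50):
    """Expand a two-digit year using a 100-year window.

    Values below the pivot are read as 20xx, the rest as 19xx.
    """
    if two_digit_year < pivot:
        return 2000 + two_digit_year
    return 1900 + two_digit_year


def sliding_pivot(current_year, lookahead=20):
    """Recompute the pivot each year so the window 'slides' with time,
    accepting dates up to `lookahead` years into the future."""
    return (current_year + lookahead) % 100


# With a 2010 pivot of 30, '25' reads as 2025 but '87' is still 1987.
print(interpret_year(25, sliding_pivot(2010)))  # 2025
print(interpret_year(87, sliding_pivot(2010)))  # 1987
```

This is also why the approach is only sustainable “in a sliding fashion”: a pivot that is never recomputed eventually gets overtaken by real dates.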
Similar to this windowing approach, many organizations have begun moving experienced non-mainframe personnel into mainframe positions being vacated by retirees. These people require less training due to their experience in computing and corporate cultures, though they still require some specific mainframe ramping-up.
As long as these replacement people do not retire too soon and are able to effectively pick up their responsibilities (see the section on mainframe manageability below), this appears to be a reasonably effective part of a larger strategy, especially for organizations moving more of their work to the mainframe, freeing up positions from non-mainframe platforms.
Academic initiatives

Many colleges and universities teach computer science, but few have kept up a mainframe track until recently. This has changed with the collaboration of mainframe hardware and software vendors with various post-secondary institutions.
Today, there are more and more students learning about mainframe technology, either as part of their diploma or degree, or as part of a certificate program.
While this is not sufficient to fill the looming gap by itself, it is a very valuable approach to planning for the future, and should be a good source of some portion of the new generation required – particularly when there is geographical synergy between the institutions and large organizations that employ mainframe technologists.
Shrinking IT workforce

While academic institutions are turning out more mainframers, they are still graduating fewer IT personnel in total.
This is a problem, since non-mainframe platforms may require a linear growth in support staff requirements commensurate with a linear growth in capacity.
But it is also an advantage for the mainframe, which has a small support staff requirement to begin with, and an even smaller and shrinking incremental staffing need as capacity grows. In other words, while the problem of keeping the mainframe properly staffed is an imminent concern, it is also solvable. This may end up being more of a concern for other platforms.
Emergence of a new generation

It may not be a stretch to say that there would be no mainframe, or mainframe culture, without SHARE. Founded in 1955, this important user group has been a key source of input into the creation and ongoing evolution of the mainframe and the large IT enterprise in general.
So, when the zNextGen project in SHARE was founded soon after the original Mainframe Continuity Planning whitepaper was published, this project became an instant bellwether for the future of the mainframe workforce.
And the news is good: starting with a handful of new mainframers, zNextGen has grown to over 550 members and has become an important on-ramp for new mainframers to get involved with SHARE and other mainframe cultural and technical learning and advancement.
Of course, if anything, this supports the contention that a new generation is needed, by enabling such a generation and demonstrating that it exists and is growing.
Increasing manageability of the mainframe

Of all the good news about the mainframe, this may be the best. Yes, the demographics are moving in the right direction. Yes, we have been given a bit of a reprieve before the wave of mass retirements hits. But even better, new mainframers can now expect to be effective in their roles far earlier in their careers as they grow into their new responsibilities.
An important example of this is CA’s Mainframe 2.0 strategy, which, among other things, entails the creation of the CA Mainframe Software Manager (CA MSM) and something currently called CA Project Flow.
CA MSM handles the details of acquiring, installing, and maintaining mainframe software such as CA’s, using a browser-based interface – enabling mainframe neophytes to be effective at that task before they have even gotten used to using a 3270 interface. In fact, statistics CA has published suggest that a new mainframer may take upwards of ten times as long manually installing or maintaining a product using 3270 and SMP/E as it takes using CA MSM.
By May of 2011, CA is planning to offer advanced configuration services as part of CA MSM, including an interface with IBM’s Health Checker so that recommendations about how to get more value from CA products can be made. These will, in turn, result in automated improvements generated by the simple push of a button that accepts the recommendations.
Building beyond this, CA is also developing a product that, at the time of this writing, is code-named “Project Flow”. Beginning in May of 2010 with a DBA interface, Project Flow will provide a revolutionary, role-based intelligent management workspace enabling greater productivity of the current-generation mainframer in a way that fosters the training of, and knowledge transfer to, the next generation of mainframe IT staff. Project Flow shifts the emphasis from a product-centric delivery to a role-centered model.
Mainframe Linux, Virtualization and Cloud Computing

If mainframe Linux has taken a decade to become an “overnight success” then virtualization has taken three. And z/VM, once written off by some as all but dead, has claimed its rightful role at the forefront of production-quality OS virtualization, enabling hundreds, thousands, or more concurrent images to run on a single mainframe.
And, of course, most of those new images are Linux.
This makes the mainframe, with its low overall cost, economies of scale, reliability, and security, the ideal destination to port established non-mainframe applications, as well as develop new ones.
Taking a step into the future, the promises of Cloud Computing, with the ability to dynamically employ arbitrary computing capacity for production purposes, seem ideally suited for this environment.
But why is this relevant to the discussion of the future of the mainframe workforce? The answer: because it is an opportunity for a new segment of computing professionals to work on the mainframe as new workloads begin to arrive with greater volume and frequency.
That means experienced Linux people, of course. It can also mean people who are familiar with z/VM. In any case, it means more people, which means the mainframe is alive and well and growing, and that it is therefore even more important to get a new generation in place to manage it.
The options

So the context has not so much changed as evolved. One thing that has not changed is the need to plan for a successful future, particularly given these new factors.
In the original Mainframe Continuity Planning whitepaper, four alternative courses of action were suggested:

• Migrating to non-mainframe platforms
• Outsourcing mainframe activities
• Maintaining existing mainframe resources via expert consultants
• Maintaining existing mainframe resources via an in-house continuity strategy.
While each of these continues to be relevant, today’s large business IT organizations need to look beyond maintaining a functional environment to supporting and enhancing the business value that IT supports.
For this reason, rather than rehashing the strengths and weaknesses of each of these options, the following steps represent more of a call to action for any organization large enough to have a mainframe, as they move forward from a business value perspective.
Step 1: know thyself

One of the oldest and most famous aphorisms in western literature comes from the Delphic Oracle of ancient Greece: know thyself.
To put it another way, you cannot get from point A to point B if you do not realize that you are starting at point A. In the world of large-scale IT, that means understanding where all your IT money is going, how much each platform actually costs, and what business-enabling work it does.
This, of course, is not easy. For example, many organizations believe that most or all of their new applications have been exclusively developed on non-mainframe platforms over the past decade or two. It turns out that this is usually an illusion perpetuated by the fact that the mainframe almost never fails. The clue: on those rare occasions when a mainframe actually stopped working, most or all critical production “non-mainframe” applications suddenly stopped functioning as well.
Still, it is essential: every cent you spend on IT is for the value that it brings the business, so not knowing the actual value of your current environment is a recipe for making disastrous decisions about the future.
Step 2: look down the road

In a famous Calvin and Hobbes cartoon, Calvin asserts to his companion Hobbes that his motto is “Live for the moment” because “You could step into the road tomorrow and – wham – get hit by a cement truck!” Hobbes replies that his motto is “Look down the road.”
But of course, avoiding disaster is never enough. If you are not pursuing success that supports your organization’s business needs and opportunities, you are just an overhead branch waiting to be trimmed. Where is your business today, where is it going in the future, and how can IT support it and enable it most cost effectively?
Sometimes the answer to this may lead to a shift or consolidation of platforms. It is certainly likely to lead to reorganization at some point. And it must lead to a sustainable staffing model that positions you for a successful future.
One key matter to keep in mind as you plan: the more you grow your mainframe environment, the fewer additional staff you will require, enabling a spectacular economy of scale that just keeps getting better.
Step 3: get the right resources

Where should your applications be running? How much additional value can you get by moving them? Who should be managing them and the platforms they run on?
Whether these questions lead to a reinvestment in the mainframe or some other platform, to outsourcing or redoubling your mainframe staff, standing still is not an option. It is time to act.
If you have done your homework and you know where you are and where you are going, this step should, in some ways, “build itself,” as you ensure you have the right people, platforms, applications, and management software to move to the future.
Again: do not leap without looking. What are the real costs of the hardware, people, applications, and management software? That includes knowing that you have people you can trust using tools that enable them to effectively meet your business needs and opportunities.
Step 4: engage!

It turns out that the mainframe was never the problem. The question has always been how to move your business forward. But, in many cases, your mainframe, with the right resources supporting it, may be exactly what you need to make it so.
By focusing on a business-enabling outcome, staffing, platform choice, and management tool matters become enablers instead of problems.
So why wait another minute pondering what to do about the mainframe workforce? Focus on results, and let the resources follow.
Conclusion

The mainframe workforce issue continues to be important and needs to be managed. However, the options and opportunities just keep getting better, as the mainframe becomes a more and more attractive platform for large-scale business IT.
By taking the time to think through your business-enabling IT needs and opportunities, planning for the future, and taking action to move forward, you have the opportunity to move in sync with the business that is paying for your IT department.
Reg Harbeck is CA's Product Management Director for Mainframe Strategy. In the more than two decades since he received his Bachelor's Degree in Computer Science he has worked with operating systems, networks, security, and applications on mainframes, UNIX, Linux, Windows, and other platforms. Reg has been with CA for a dozen years, during which time he has travelled to every continent where there are mainframes and met with and presented to IT management and technical audiences, including at Gartner, IBM zSeries, CMG, SHARE, GSE, CA World and ManageTech user conferences. He is also on the SHARE Board of Directors. Reg is the published author of many whitepapers, articles and blog entries which are available online, and was also responsible for CA's Releasing Latent Value and Mainframe 2.0 books, both published in 2009.
IBM made absolutely sure you knew their latest announcement was from them by calling it IBM Smart Business Development and Test on IBM Cloud. This gives customers a free public cloud beta for software development. Get in early, because the beta will be open and free until general availability (sometime early in 2010).
DataDirect takes a detailed look at DataDirect Shadow Version 7.1.3 – Web services performance benchmarks.
BUSINESS CASE: MAINFRAME TCO AND zIIP

For many large organizations, IBM System z mainframes are an essential component of their enterprise. The platform’s robust database management systems, security, and transaction processing capabilities have contributed to the mainframe playing a unique role in today’s modern IT architectures. However, three critical factors have emerged over the last ten years that present unique challenges to the mainframe:
• Agility – the ease with which older legacy applications and data can be integrated with new applications
• Diminishing skills – the decline in the availability of mainframe development and operational skill sets
• Total cost of ownership (TCO) – mainframe hardware, software, and processing costs, and increasing licence fees from mainframe independent software vendors.
Organizations continue to assess the role of the mainframe in the context of these challenges. Service-oriented integration approaches such as Web services and Data services are enabling greater infrastructure agility by allowing organizations to transform legacy data and applications into industry-standard artefacts that are more easily re-used and recombined into new business services. Mainframe integration tools have evolved from green screens to point-and-click integrated development environments (IDEs) that provide an intuitive bridge between COBOL and Java or .NET. The remaining challenge has been costs. Even with the improvements in hardware performance and energy consumption, the mainframe is still viewed as a high-cost platform. This has driven organizations that are dependent on the mainframe to explore strategies for lowering mainframe TCO. For some this might mean migrating off the platform in total or in a staged process.
Recognizing this trend, IBM has added new hardware features to handle specialized workloads. These specialty engines differ from the traditional General Purpose Processors (GPPs) in that their processing capacities are not included in calculating the overall speed and capacity of the mainframe, typically measured in MIPS (Millions of Instructions Per Second), which is a common mechanism for determining software licensing charges. Using these engines to handle eligible workloads is seen as a significant new means of lowering mainframe TCO. This article will examine the System z Integrated Information Processor (zIIP) specialty engine, and how it can be used in combination with DataDirect Shadow to lower the processing costs of mainframe SOA workloads, reduce annual MIPS consumption, and delay costly upgrades.
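The economics of offload can be sketched with a toy model. The MIPS count, offload fraction, and per-MIPS rate below are entirely hypothetical, and real MIPS-based pricing is tiered and contract-specific:

```python
def chargeable_mips(total_mips, offload_fraction):
    """MIPS remaining on General Purpose Processors after zIIP offload.

    Work redirected to a zIIP does not count toward the GPP capacity
    that MIPS-based software licence charges are computed from.
    """
    return total_mips * (1.0 - offload_fraction)


def annual_licence_cost(gpp_mips, cost_per_mips):
    """Simple linear MIPS-based charge model (real pricing is tiered)."""
    return gpp_mips * cost_per_mips


# Hypothetical example: a 1,000-MIPS workload, 30% zIIP-eligible,
# at $2,000 per MIPS per year in software charges.
before = annual_licence_cost(1000, 2000)
after = annual_licence_cost(chargeable_mips(1000, 0.30), 2000)
print(f"Annual saving: ${before - after:,.0f}")  # Annual saving: $600,000
```

The same arithmetic explains the upgrade-deferral benefit mentioned in the editorial: offloaded work also frees GPP headroom that would otherwise force a capacity purchase.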
The transformation of mainframe assets into Web services to support integration with an SOA requires additional processing. This additional processing is justified given the benefits of interoperability afforded to the mainframe by an SOA. This processing uses CPU to unmarshal and marshal the XML which is necessary to communicate with the underlying mainframe asset and to return the response, to authenticate and authorize access to mainframe assets, and to provide operational management capabilities.
Levels of mainframe CPU consumption from SOA initiatives are expected to increase as mainframe SOA adoption matures and the number and complexity of Web services on the mainframe grows. Therefore, scalability and processing efficiency are important considerations for achieving your mainframe SOA goals.
Product descriptions

DataDirect Shadow V7 is mainframe middleware software that deploys as a unified foundation architecture designed to reduce the complexity of integrating mainframe data, business logic, and screens with new Java, .NET, or Web applications. Included in Shadow is a comprehensive platform for mainframe SOA enablement – Shadow z/Services – which provides bi-directional Web services capabilities (publishing and consumption), as well as orchestration using BPEL 2.0. Shadow V7 includes unique, patent-pending technology allowing customers to divert processing-intensive Web services workloads away from the mainframe General Purpose Processor, to significantly reduce the costs of integration. The result: with Shadow z/Services V7, your SOA initiatives can:
• Increase throughput
• Lower CPU consumption
• Increase your control over mainframe cost of ownership.
Use of Shadow does not require a zIIP to be present. However, one is required to fully receive the benefits described. If you do not have a zIIP, but are interested in exploring the benefits of Shadow’s unique zIIP exploitation, an audit can be performed to determine workloads within your mainframe that are eligible for specialty engine offload.
Performance analysis

The performance analysis of DataDirect Shadow z/Services V7.1.3 (Shadow z/Services V7) was conducted at the DataDirect Technologies mainframe products development lab in Sugar Land, Texas during the month of January 2009. Testing was performed in a controlled environment using Shadow z/Services V6.1 and V7.1.3. Your results may vary due to environmental factors including but not limited to: the hardware and software configuration, test scenario configuration, concurrent workloads, and the monitoring techniques used to collect performance data.
Tests were run for each of the major mainframe Web services provider integration types: Business Logic Integration (BLI), Screen Logic Integration (SLI), and Data Logic Integration (DLI).
Business Logic Integration (BLI) – the Web service invokes an existing COBOL COMMAREA program. The program has no inputs. Output is populated into 800 separate fields within the program and returned in the output COMMAREA for a total output COMMAREA length of 32K. Shadow z/Services builds the resulting SOAP response and returns it over HTTP.
Screen Logic Integration (SLI) – the Web service invokes an existing CICS terminal-oriented transaction. Input is a 2-byte company name and 3-byte flight number. The SLI navigates through 20 CICS screens and returns a flight destination field of 3 bytes. Shadow z/Services builds a SOAP response and returns it over HTTP.
Data Logic Integration (DLI) – the Web service executes the SQL statement “SELECT ID, NAME, DEPT, JOB, YEARS, SALARY, COMM FROM Q.STAFF”, which returns 7 columns of numeric, char, and varchar data totaling 34 bytes per row, fetching 35 rows. Shadow z/Services converts the result to a SOAP response over HTTP.
Data collection for the test consisted of gathering transaction counts from Parasoft SOAtest; all other data, including CPU time and zIIP CPU time, was taken from the Shadow z/Services interval SMF/Logging records recorded for each Web service operation.
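From these raw measurements, the throughput and efficiency metrics discussed below can be derived directly. The following is a minimal sketch of that arithmetic, using hypothetical numbers rather than the lab's actual data:

```python
# Sketch: deriving throughput and per-transaction CPU efficiency from raw
# benchmark measurements. All figures here are hypothetical illustrations,
# not DataDirect's published lab numbers.

def derive_metrics(transaction_count, elapsed_seconds,
                   total_cpu_seconds, ziip_cpu_seconds):
    """Return TPS, CPU time per transaction, and the zIIP offload rate."""
    tps = transaction_count / elapsed_seconds
    cpu_per_txn = total_cpu_seconds / transaction_count
    offload_rate = ziip_cpu_seconds / total_cpu_seconds
    return tps, cpu_per_txn, offload_rate

# Example: 60,000 transactions over a 5-minute measurement interval
tps, cpu_per_txn, offload = derive_metrics(
    transaction_count=60_000,
    elapsed_seconds=300,
    total_cpu_seconds=90.0,   # GP + zIIP CPU from interval SMF records
    ziip_cpu_seconds=72.0,    # portion redirected to the specialty engine
)
print(f"{tps:.0f} TPS, {cpu_per_txn * 1000:.1f} ms CPU/txn, {offload:.0%} on zIIP")
```

The same three derived numbers (TPS, CPU per transaction, offload rate) are what the benchmark figures that follow report.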
zIIP exploitation benchmarks

It is common for organizations to address increased demand for throughput by adding hardware (in a distributed environment) or capacity (in the mainframe environment). In reality, adding hardware to increase processing capacity has its limits. Organizations that have taken this approach must deal with the additional costs of maintaining data centres: specifically, power usage, floor space, and labour costs. These issues are fueling market demand for virtualization and consolidation, two areas that are mainframe strengths, which in turn is increasing demand for the platform. However, moving SOA processing to the mainframe has its costs too, the most significant being the increase in licensing and maintenance fees for mainframe software. Most mainframe software fees are tied to the capacity of the mainframe, which is typically measured in MIPS. As the volume of Web services on the mainframe grows, so does CPU consumption, and therefore the capacity of the mainframe must be managed to balance the additional processing needs against the IT budget.
As Figure 1 shows, Shadow z/Services V7 provides across-the-board increases in Transactions per Second (TPS) versus Shadow V6.1. BLI showed the greatest increase, at more than 300%, with an average improvement of 175% across the BLI, SLI, and DLI services.
Understanding processing efficiency, as measured by the amount of CPU time required to process each transaction, gives your organization a key piece of information for modelling MIPS capacity requirements and planning capital expenditures for the mainframe. Shadow z/Services V7 helps reduce your MIPS growth for processing SOA workloads by dramatically improving the efficiency with which these workloads are processed.
A case in point is illustrated in Figure 2. On average, Shadow V6.1 consumed 430% more CPU per transaction than V7 for the same workload. This provides a significant CPU dividend to organizations using prior releases of Shadow z/Services, and allows organizations just now adopting SOA to better manage their MIPS growth as these initiatives mature. The efficiency lets you re-allocate those MIPS to new workloads or absorb increased MIPS requirements from existing workloads.
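Percentage comparisons like this are easier to use in capacity planning once converted into an absolute ratio. The sketch below reads the claim as the old release consuming 430% more CPU per transaction than the new one; the baseline figure is an assumed illustration, not a measured value:

```python
# Sketch: translating a per-transaction CPU improvement into freed capacity.
# The 5.3 ms baseline is an assumed illustration, not a measured value.

def cpu_dividend(old_cpu_ms, percent_more_than_new):
    """Given the old release's per-transaction CPU time and how much more it
    consumes than the new release, return the new per-transaction CPU time
    and the fraction of CPU freed for other workloads."""
    ratio = 1 + percent_more_than_new / 100        # old / new
    new_cpu_ms = old_cpu_ms / ratio
    freed_fraction = 1 - new_cpu_ms / old_cpu_ms
    return new_cpu_ms, freed_fraction

new_ms, freed = cpu_dividend(old_cpu_ms=5.3, percent_more_than_new=430)
print(f"{new_ms:.1f} ms/txn, {freed:.0%} of the old CPU cost freed")
```

In other words, "430% more" corresponds to a ratio of 5.3:1, so roughly four-fifths of the old per-transaction CPU cost becomes available for other work.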
So how does Shadow z/Services V7 increase throughput while decreasing per-transaction CPU consumption? It does so through numerous optimizations for security, operational diagnostics, and access to the back-end assets, and through broad exploitation of the IBM mainframe specialty engine: the System z Integrated Information Processor, commonly referred to as the zIIP.
Shadow V7 includes a patent-pending mainframe threading model called the Logical Dispatchable Unit (LDU). Most, if not all, mainframe middleware uses Task Control Block (TCB) threads to handle integration functions. This poses a problem when attempting to access a zIIP specialty engine, which requires a Service Request Block (SRB) thread. Shadow’s LDU is a hybrid threading model that dynamically switches between SRB and TCB modes for optimized performance and zIIP access.
Unlike other products that cannot exploit the zIIP, or do so only under narrow circumstances, the LDU enables almost all necessary processing, such as XML un-marshalling and marshalling, security processing, as well as Shadow-exclusive value-
z/Services and assuming an 80% SOA offload rate (see Figure 3), you can delay that upgrade until the end of year 3, a full year later. In addition, the total MIPS required are on average 8% less due to the efficiencies of Shadow z/Services. Considering that the fully loaded cost of each MIPS can run into the thousands of dollars, the savings provided by Shadow z/Services are significant and provide a measurable way to reduce the Total Cost of Ownership (TCO) for the mainframe.
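The deferral arithmetic behind an upgrade scenario like this can be modelled in a few lines. All inputs below (starting load, growth rate, capacity ceiling, offload rate) are illustrative assumptions rather than the article's actual figures, and the model simplifies by treating the whole growing workload as zIIP-eligible:

```python
# Sketch: modelling when a growing SOA workload forces a capacity upgrade,
# with and without offloading a fraction of it to a zIIP. All inputs are
# illustrative assumptions, not figures from the article's scenario.

def upgrade_year(start_mips, annual_growth, capacity_mips,
                 offload_rate=0.0, horizon_years=10):
    """Return the first year in which GP MIPS demand exceeds capacity."""
    for year in range(1, horizon_years + 1):
        demand = start_mips * (1 + annual_growth) ** year
        gp_demand = demand * (1 - offload_rate)  # zIIP-eligible work leaves the GPs
        if gp_demand > capacity_mips:
            return year
    return None  # no upgrade needed within the planning horizon

without_ziip = upgrade_year(start_mips=400, annual_growth=0.30,
                            capacity_mips=600)
with_ziip = upgrade_year(start_mips=400, annual_growth=0.30,
                         capacity_mips=600, offload_rate=0.80)
print(f"upgrade in year {without_ziip} without offload, "
      f"year {with_ziip} with 80% offload")
```

Even with made-up inputs, the shape of the result is the point: moving a large fraction of the growth onto a specialty engine pushes the General Purpose Processor upgrade years further out.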
Summary

Mainframe infrastructures of today bear little resemblance to the rigid, monolithic systems of the past. Industry standards and integration tools have matured to provide increased flexibility and intuitive integrated development environments that simplify reusing legacy data sources in new application development initiatives. As this article has shown, utilizing zIIP specialty engines as part of your SOA enablement strategy can significantly lower mainframe TCO. However, without software to exploit the zIIP across multiple data and application environments for Web services, SOA, or data connectivity, its true potential cannot be realized. DataDirect Shadow is a single unified platform, built exclusively for integration with IBM System z mainframes. It uses patent-pending technology to deliver broad exploitation of zIIP engines, enabling customers to deliver business systems that involve the System z platform in a timely manner and with a material reduction in overall TCO. DataDirect Shadow provides a new benchmark in mainframe integration, one that not only allows the mainframe to be a full participant within an SOA, but extends the value of the platform to the business by enabling it to take on new workloads at dramatically reduced cost.

The 2009 Guide Share Europe National Conference was held on 4th and 5th November at Whittlebury Hall. This pulled together lots of mainframers, who were very interesting to talk to – including three young lads who are mainframe apprentices! – plus numerous excellent speakers. There were also a number of vendors in the exhibition area who were keen to chat and pass on information about their new products, which was also very informative. While management may feel that a couple of days out of the office must mean IT staff are simply enjoying themselves, the truth is these conferences do much to share information and keep people abreast of trends and new developments. Many thanks to the organizers for setting up such an excellent event, and to Mark Wilson, who was conference manager for this year's conference.

Environment configuration information

Hardware
IBM z9 mainframe
• CPU Model 2096 R07
• Physical CPUs: 4
  o 2 Central Processors running at approximately 87.43 MIPS each
  o 1 System z9 Integrated Information Processor (zIIP) running at approximately 476.5 MIPS
  o 1 System z Application Assist Processor (zAAP) running at approximately 476.5 MIPS
• LPARs, one of which is a Coupling Facility
  o One LPAR inactive
  o Test system set with an initial weight of 60 and all LPARs capped
• 8GB Real Storage partitioned across 5 LPARs
Regardless of the size, scope, and maturity of your SOA, and whether you have a zIIP engine or not, DataDirect is ready to help you realize the full potential of SOA on System z.
DataDirect Technologies is the software industry’s only comprehensive provider of software for connecting the world’s most critical business applications to data and services, running on any platform, using proven and emerging standards.
DataDirect Technologies is an operating company of Progress Software Corporation (NASDAQ: PRGS). For more information, visit www.datadirect.com, write to 3005 Carrington Mill Blvd, Suite 400, Morrisville, NC 27560, or phone 919-461-4200 or 800-876-3101.
SNA mainframe security – because SNA isn’t hacked, instead it is infiltrated

Anura Gurugé explains that hacking is for sport, infiltration is for gain, and that Internet firewalls are not designed to recognize and intercept SNA-specific, SNA-based threats to SNA mainframe applications. It is as simple as that.
That IBM’s z/Center of Excellence would, in 2008, publish a 47-page manual entitled ‘SECURING AN SNA ENVIRONMENT FOR THE 21ST CENTURY’ should have been a huge red flag.
SNA mainframe applications are by no means immune to being compromised, and being compromised badly. Maintaining an ultra-secure, fully encrypted IP network, replete with state-of-the-art firewalls with intrusion detection, and insisting on ‘clean’ workstations running the best ‘anti-threat’ technology that money can buy, does not, alack, mean that your mainframe SNA/APPN applications are safe from infiltration. Nearly all of the current techniques being used to successfully infiltrate SNA mainframe applications are SNA-specific and SNA-based, with many being programmatic and artfully designed to interact with VTAM on a peer-to-peer basis.
Internet firewalls, with their IP orientation, are not equipped to deal with these ‘SNA Application Layer’ threats. These threats also invariably go undetected by intrusion detectors, including the z/OS IDS, because they do not exhibit rogue characteristics. The current SNA threats are not from bored teenagers hacking for a ‘hit,’ but from seasoned professionals expertly infiltrating SNA applications for financial, political, or espionage gain.
IBM, in addition to publishing the 47-page manual mentioned at the start, also delivered an ‘SNA Security Considerations’ session at the March 2009 SHARE Conference in Austin [Session 3612]. That IBM in 2008 and 2009 is actively talking about SNA security should really be interpreted as more than a red flag; it is a clarion call. Other than about the dangers of SNA-over-IP (à la tn3270), unless measures such as SSL were being used, I do not recall IBM talking about SNA application security per se ten years ago, or fifteen years ago.

Figure 1: APPN/EE mainframe firewall and VTAM definition scrubbing from SDS
That was because SNA mainframe applications, primarily due to their reliance on physically secure private networks, were on the whole relatively immune to being compromised. But that has changed, and IBM, both in its manual and in the SHARE presentation, points this out – upfront. SNA applications used to be secure, but not anymore. As for dealing with this new challenge, IBM strongly recommends capitalizing on as many layers of policy-based security as possible. The APPN/EE Firewall and VTAM definition scrubbing product available from SDS is indeed a policy-based solution, designed to analyze and verify the validity of all SNA/APPN logon sequences via the intelligent and incisive application of sophisticated, context-sensitive APPN/EE-specific policies. These provide an immediate, demonstrable deterrent to some of the threats identified by IBM and are consistent with the recommendations made by IBM.
Figure 2: SDS’s policy-based APPN/EE Firewall and VTAM definition scrubbing solution adds a much-needed layer of SNA/APPN logon-specific security to protect against mainframe infiltration.

At this juncture it is worth taking a minute to elaborate as to why IT departments, including most data center professionals, are not au fait with the current vulnerabilities of SNA mainframe applications. Simply put, there is close to zero publicity about SNA application compromise – which is what makes IBM’s current initiatives both noteworthy and laudable. There are four primary reasons why we do not hear about SNA applications being compromised. These are:
1 Enterprises that are compromised (which are invariably Fortune 1000 corporations or government agencies) do not, for very understandable reasons, want to tell the world that their mission-critical applications and databases were breached – particularly since in most cases they cannot accurately quantify the amount of ‘assets’ misappropriated and the volume of sensitive information exposed.

2 There really are no dedicated, independent watchdog organizations that monitor and publicize SNA vulnerabilities, in marked contrast to all the groups and individuals that track threats to workstation software.

3 Those infiltrating SNA applications, who are professionals engaged in what they perceive as a business endeavor, have no desire whatsoever to publicize their exploits – not only are they committing a crime, but the longer they can go without being detected, the more they can ‘steal.’

4 Given the expert degree of stealth involved, many enterprises never realize that they have been compromised – or, worse still, that they continue to be compromised.
So, that is the challenge facing us. Your mainframe SNA applications may have already been compromised. There could be data being siphoned off or rogue transactions being discreetly inserted even as you read this. Have you ever discovered, with a sickening thud in your stomach, that there was spyware on your PC? While we all now have decent ‘anti-spyware’ software on our PCs, the same is not the case when it comes to SNA mainframe applications – unless you decide, forthwith, to implement an APPN/EE firewall behind VTAM and install software that will scrub your VTAM definitions to eradicate known vulnerabilities.
Unexpectedness and complacency set the stage

When confronted with the list of the currently known SNA mainframe threat scenarios, it is easy to fall into the trap of saying: “Ah! That should never happen. You can fix that by doing this, that, and the other.” This instinctive, knee-jerk reaction itself captures the challenge facing us. Yes, most likely, most of these vulnerabilities could have been fixed at their source when that SNA application was first implemented. But what we do not know, unless we specifically test for it today using the known attack methodologies, is whether the necessary safeguards are actually in place. Practice versus theory.
Take the saying about closing the barn door after the horse has fled. In this situation, the challenge facing us right now is determining not only whether that barn door is currently open or closed, but whether it was properly installed in the first place to prevent the horse from being able to open it. To make matters worse, it is possible that this barn was never built to house animals as strong and smart as horses. I think you catch my drift.
A customer who recently implemented the APPN/EE Firewall discovered, to their chagrin, that because of some wrongly coded RACF parameters, the administrator of a peer-connected Unix system could create userIDs and passwords for an SNA application. Now, we all know that this particular vulnerability should never have come to be. But it happened, as accidents do – and it was only discovered because the APPN/EE Firewall was able to automatically detect the incorrectly set parameters. IP-oriented firewalls cannot detect these vulnerabilities.
That SNA pre-Web was perceived to be invincible is now proving to be the major cause of its current vulnerabilities. Many of today’s SNA mainframe applications were conceived in an era when security threats were very different to what they are today.
It is just as with air travel.
In the early 1970s, during the dawn of SNA, when it came to air travel there were no metal detectors, X-ray machines, or security checks. You never had to show an ID. You could travel under assumed names. Things are, obviously, very different today. The problem when it comes to SNA mainframe applications, however, is that we still haven’t got around to installing all of the metal detectors, let alone the ID verification, as we should – and as we must. Think of the APPN/EE Firewall as that much-needed metal detector and the VTAM definition scrubber as a powerful, automated ID verifier for SNA mainframe applications.
The bottom line here is that so many of today’s SNA mainframe applications were developed and implemented well before mainframe infiltration became a highly profitable, big-stakes business. Consequently, these applications still have many unplugged vulnerabilities. The problem is that today we have dedicated professionals, in some cases sponsored by what IBM politely refers to as ‘unorthodox governments,’ hell-bent on gainfully exploiting these vulnerabilities – to your cost.
SDS has the expertise and a ‘Security Probing’ product that can be used initially to determine your level of exposure to the currently successful SNA infiltration techniques. Given the sensitivity of what is being tested, this initial SNA mainframe security audit is best done as discreetly as possible, in a tightly controlled, self-contained environment. Thus SDS can help you conduct this audit on a test LPAR to avoid exposing your mission-critical production systems to even a trial infiltration.
The key here is to talk to SDS, ASAP.
As with any serious security threat, whether it is to a mainframe, to Windows XP, or to the homeland, it is best not to divulge all of the exact details as to how the threat works. Publicizing vulnerabilities could attract other ‘vultures’ to the scene. That is very much the case here. SDS, a long-time proponent of mainframe business and a trusted IBM Partner, has made a conscious decision to keep secret the exact details of the known SNA mainframe vulnerabilities.
Bona fide SNA mainframe customers have to sign a non-disclosure agreement (NDA) so that the necessary confidentiality can be maintained for the mutual benefit of the entire mainframe community. Given the insidiousness of the attacks, and the nature of the professionals involved, this has to be a concerted community effort: us ‘mainframers’ protecting our beloved mainframes and maintaining their reputation for integrity. Hence the discretion. Hence the caution.
Consequently, in this White Paper, I too have to be somewhat vague in how I describe these threats. We do not want this White Paper to provide the unscrupulous with ideas as to how to infiltrate SNA applications. I will tell you enough to convince you that the threat is real. Then you need to work with SDS to put together a security audit plan to see where you stand.
Fifteen known SNA mainframe vulnerabilities

SDS, working in conjunction with a strategic business partner who has been proactive in SNA mainframe security since 1995, has documented fifteen discrete techniques that are currently being used, very successfully, to infiltrate unprotected SNA mainframes. Precise details of how these attacks work, with network diagrams, data flows, and references to the relevant VTAM ‘features,’ will be made available to you once SDS has established your credentials.
The fifteen documented threats fall into the following high-level categories:
1 SAF spoofing, whether it be RACF, ACF2, or Top Secret, whereby a mainframe SAF is led to believe that incoming rogue user logons have been authenticated by a ‘trusted’ partner SAF, by inserting relevant credentials obtained through a technique known as ‘BIND-scanning.’

2 SSCP or CP spoofing, where a VTAM control point, due to inadequate definition controls, accepts rogue session initiation requests from an external system masquerading as an authenticated peer node.

3 Locating an old fixed-function terminal (e.g., a printer) defined to VTAM as a Type 1 Node and replacing it with a programmatic Type 2.1 Node (e.g., a PC with Comm. Server) that can send a peer-to-peer session initiate, in the hope that the existing VTAM definitions will accept and process the initiate, thus providing a potential path into VTAM that can be further exploited.

4 Seeking ‘holes’ in SAF definitions that permit logons from peer systems to be accepted on the assumption that the logon was authenticated by a trusted SAF in the peer system, even though it was not – and then capitalizing on this unauthenticated access by trying to obtain administrative rights on the target mainframe.

5 An SNA-specific variant of the active, eavesdropping-based ‘Man-in-the-Middle’ (MITM) attacks that have been used on the Internet, where the attacker gets between two bona fide SNA systems and intercepts all the data flowing between the two.

6 A variant of the ‘Man-in-the-Middle’ (MITM) attack scenario in which specific SNA applications (or session partners) are spoofed, as opposed to the entire system (per the configuration shown in Figure 3).
7 A variant of phishing, with elements of denial-of-service (DoS), whereby terminal users, frustrated by not being able to log on to a desired system, end up typing userIDs and passwords into bogus logon screens in the hope of establishing a connection.

Figure 3: The basic notion of a ‘Man-in-the-Middle’ attack in the context of SNA
8 ‘Session forwarding,’ another variant of the ‘Man-in-the-Middle,’ this time exploiting the dynamic resource search capabilities of APPN/HPR, given that the protocols used in these searches carry a wealth of ‘address’ information pertaining to both the source and the destination in their headers.

9 Attempted session capturing with queued, pending session initiation requests – awaiting a session termination that might activate the queued request.

10 ‘BIND-scanning,’ mentioned in #1 above, an SNA/APPN-specific variant of the oft-practiced ‘port scanning’ attack used in IP networks to programmatically locate open, unsecured host ports – with infiltrators, in the case of SNA, intercepting and analyzing as many BINDs as they can find to determine whether a target SNA mainframe application may accept rogue BINDs with a less secure ‘mode entry’ profile.

11 Expert and precise use of denial-of-service (DoS) against one or both ends of a networked system, in the hope that the added load on the systems might expose infiltration opportunities; e.g., vulnerabilities caused by buffer overflow, or credentials exposed in an obscure, rarely seen error message to do with the unexpected session rejections precipitated by the DoS.
12 NetID spoofing, with the aid of Type 2.1 node emulation software (e.g., Microsoft’s Host Integration Server (HIS) or IBM’s Communications Server), in some instances exploiting Wi-Fi wireless connections, whereby a rogue system masquerades as a valid node within a known network.

Figure 4: Quintessential, ‘nothing-but-text’ logon screens such as this from SNA mainframe applications, devoid of any logos, graphics, or topical news stories, can be easily replicated by professional infiltrators to mount very effective phishing attacks that harvest valid userIDs and passwords.
13 Sophisticated, SNA application-specific phishing, sometimes by actually tapping into the SNA side of a data center resident tn3270(E) server, to programmatically harvest large volumes of valid userIDs and passwords that can then be used to infiltrate applications without any push-back from SAF.

14 Session hijacking, by monitoring session timeout intervals, faking a timeout at the remote end, and diverting a still-active mainframe session to a rogue end point (typically a PC with Type 2(.1) node terminal emulation software).

15 Mainframe application replication (à la PC virus schemes), typically in conjunction with session hijacking as described in #14 above, then masquerading as a bona fide application, interacting with real end users, and trying to submit bogus transactions to other applications on a trusted, peer basis.
Some background to the attack scenarios

Exactly how a professional infiltrator will unleash these attacks against a specific mainframe environment will obviously depend on numerous criteria and will vary from one target to another. Invariably, the attacks are painstakingly customized for each system targeted to best exploit the perceived (or known) security vulnerabilities of that system. It is also important to keep in mind that the SNA mainframe application-specific attacks might not be included within the opening rounds of offensives launched against your system.
As IBM and others repeatedly point out, your IP network, despite all of the ‘batten-down-the-hatches’ technology available, may still be your weakest security link. Then there is always tn3270(E) and FTP – particularly if you are not taking the necessary precautions. The attacks may be launched as part of a well-laid-out campaign spanning weeks, if not months, starting with the IP network.
Once infiltrated, the malicious activity, like spyware on a PC, could be ongoing for protracted periods of time. Take a successful phishing attack as an example. Once the perpetrators harvest a batch of valid userIDs/passwords, they can use these to repeatedly gain access to the target application without in any way rousing suspicion. If the goal is surreptitious eavesdropping for commercial or political espionage, this activity could go on for a long time – until the password comes due for renewal.
[These valid-credential-based rogue logons are likely to be detected by the policy-based APPN/EE Firewall, which will spot any changes in ‘incoming characteristics’ or ‘logon behavior’ (e.g., changes in logon times). The APPN/EE Firewall can also facilitate powerful session auditing by generating SMF records journaling each session start and session end. This SNA/APPN session-specific auditing, combined with a suitable script, will enable you to quickly establish if there are any incongruous logons that should be further investigated. It is SNA/APPN-specific auditing capabilities such as these that will, in future, let you sleep soundly at night without recurring nightmares about infiltrators.]
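A ‘suitable script’ of this kind could be as simple as the following sketch, which scans session-audit records for logons at unusual hours. The record layout and field names here are entirely hypothetical; real SMF records would first need to be extracted into a readable form:

```python
# Sketch: flagging incongruous logons from SNA/APPN session-audit data.
# The record format and field names are hypothetical illustrations; an
# actual SMF dump would need its own extraction step beforehand.

from datetime import datetime

def unusual_logons(records, workday_start=7, workday_end=19):
    """Flag session starts that fall outside normal working hours."""
    flagged = []
    for rec in records:
        start = datetime.fromisoformat(rec["session_start"])
        if not (workday_start <= start.hour < workday_end):
            flagged.append((rec["userid"], rec["origin_lu"], start))
    return flagged

# Two hypothetical audit records: one daytime logon, one at 03:47
audit = [
    {"userid": "PAYROLL1", "origin_lu": "NYCLU001",
     "session_start": "2009-11-02T09:14:00"},
    {"userid": "PAYROLL1", "origin_lu": "NYCLU001",
     "session_start": "2009-11-03T03:47:00"},
]
for userid, lu, when in unusual_logons(audit):
    print(f"investigate: {userid} from {lu} at {when}")
```

A production version would, of course, compare against each user's own historical logon pattern rather than a fixed working-hours window, which is closer to the behavioral checks the firewall itself performs.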
Professional infiltrators will invariably try to pick the easier ‘locks’ first. So they are likely to start with the IP network, the tn3270(E) server, and any other Web-to-host ‘gateways’ you may have. In some cases they might need to tap into the IP network to obtain the userIDs/passwords, NetIDs, BIND parameters, or screen formats they need in order to launch an SNA mainframe attack. In some instances they might combine network infiltration with a mainframe infiltration method to maximize their haul.
There is also a possibility that multiple categories of SNA mainframe attacks may be launched against the same mainframe, serially or in parallel. What is crucial to note is that these attacks, in marked contrast to what is often the case with attacks on Windows, are not being done for sport, publicity, or vandalism. Given that they are done for gain, the attacks will be carefully planned, very deliberate in their nature, and well orchestrated when it comes to execution. There are unlikely to be any warning signs beforehand. These professionals do not launch exploratory probes against the target system; they have test systems that they use for that type of testing and staging. Consequently, intrusion detection systems typically do not notice any suspicious activity prior to an attack. Hence the need for an intelligent, incisive, policy-driven APPN/EE Firewall (with auxiliary SMF-based session auditing) to try to intercept these very insidious attacks.
These professionals also try hard not to leave any traces of their attack. Usually somebody discovers that something is missing or is not right, but most times the experts have to surmise how the attack came to be – was it a session hijack, ‘Man-in-the-Middle’ (MITM), or phishing? Often, it is hard to tell. So the goal, in all cases, is to try to proactively prevent the attacks from taking place by implementing the relevant technology to block all of the currently known attack scenarios. That means closing the newly strengthened barn door, with its heavy-duty lock, before the horse gets any ideas about stretching its legs.
That, unfortunately, is not all

The fifteen malicious infiltration techniques categorized above are, unfortunately, not the sum total of the SNA mainframe application vulnerabilities now being regularly exposed. It can now be divulged that, over the last five years, there were several instances of users (in some cases 3rd-party customers) or company employees accidentally stumbling upon major security loopholes in SNA mainframe applications. In some of these instances the vulnerability had to do with inadequate data access security when it came to the partitioning of sensitive data maintained by a single SNA application.
In one case, two competing financial services corporations, both subscribing to a 3rd-party SNA mainframe application, discovered, to their initial glee, that they could ‘eavesdrop’ on the activities of their competitor – each believing, however, that this was strictly a one-way loophole and that the other was not able to spy on them. Suffice to say things soon turned very ugly when both parties discovered, to their mutual horror, that each had been eavesdropping on the other. But there was unanimity on one crucial issue: they, for obvious reasons, wanted a total embargo on any publicity pertaining to this breach. There is a possibility that this breach may have violated some laws – but a court case was the last thing that anybody wanted.
So yet again, the desire for secrecy when it comes to any and all infractions involving mainframes prevailed. But there is a definite downside to this secrecy. At a disservice to the rest of us, it continues to foster the complacency about SNA mainframe applications being immune to compromise – when, in reality, they are not. If the mainframe world was not such a ‘closed community,’ so to speak, people could genuinely claim that this was indeed a conspiracy!
Then there is the always thorny and sensitive matter of intentional sabotage by disgruntled employees. In this unsavoury arena we are, if you stop and think about it for a second, now in the midst of ‘THE PERFECT STORM’ when it comes to SNA mainframe applications. So many of the enterprises most greatly impacted by the recent mayhem in the global financial sector are by and large blue-chip mainframe customers! We have all heard of the layoffs. We know the names of the once-mighty corporations that have suffered, the mergers that have occurred, and the data centers that have gone dark.
In the case of SNA mainframe applications there is also another pivotal factor that exacerbates matters even further. The knowledge required to detect a potential sabotage is restricted to a very small, specialized group of ‘veterans.’ Most enterprises no longer have many of these 20-plus-year ‘Sys. Progs.’ who really do know all the ins and outs of SNA/APPN and VTAM. Of late, many have opted to retire. Some, unfortunately, got caught in the mayhem. But the bottom line here is that there are a lot of disgruntled data center folks out there, and not enough qualified ‘good cops’ to keep them all at bay. And this, alack, gets worse, though I really don’t want to start getting into specifics here.
We have all heard the stories about the dedicated ‘mainframers’ who have preserved an old rescued mainframe in their basement or garage. But today you don’t need an old mainframe, let alone a decent-sized server, in order to mount a subterfuge attack on corporate data assets while masquerading as an ‘in-network’ peer node. You can run a fairly effective mainframe emulation on a $799 laptop. Plus, in today’s world of APPN/HPR and EE, you don’t need to have an SSCP capability in order to wreak havoc. Just a Type 2.1 node will suffice – and you can even get freeware 2.1 emulations for Windows, Linux, and Unix.
So this is the challenge. The stakes are inordinately high. A lot can be gained, financially and politically, by infiltrating SNA mainframe applications. Hence this is a business – a very lucrative business at that. The ever-increasing disgruntlement within the data center community increases the pool of prospects who can be tempted into discreetly leaving open a ‘back door.’
A call for action for SNA mainframe applications

The best course of action when it comes to further securing your SNA mainframe applications is to follow the adage: ‘forewarned is forearmed.’
The first thing that has to be done, forthwith, is to get a much deeper understanding of the currently documented attack scenarios. What I have described here is but an outline. SDS has presentations and documents with the pertinent technical details, such as the OPNDST statements that may be used, the exact step-by-step process of an attack scenario, and the reference numbers of the IBM SNA programming manuals used by the professional infiltrators to develop their attack software. SDS will also help you identify and establish the different ‘security spheres’ you may need to protect, depending on which categories of attack you are likely to be vulnerable to.
The ‘security spheres,’ in essence, deal with the potential span of an SNA/APPN session. An LU 6.2 session between two SNA/APPN applications running within the same LPAR is likely to be considerably more immune from attack than, say, an LU-LU Session Type 0 that spans two autonomous SNA networks (via, say, SNI or an Extended Border Node). SDS, in conjunction with a strategic technology partner, has identified six ‘security spheres’ in the context of the fifteen attack scenarios described above, with a categorization of which ‘security spheres’ are vulnerable to each of the fifteen types of attack. For example, ‘SSCP or CP spoofing’ (scenario #2) cannot occur if a session is contained within a single LPAR. This is also true for scenarios #1 and #3. But these attack scenarios are possible in the other ‘security spheres,’ including sessions contained within a Parallel Sysplex or an in-house intranet.
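The sphere-versus-scenario categorization lends itself to a simple lookup table. The sketch below encodes only the relationships stated here (scenarios #1 to #3 being impossible within a single LPAR) with an illustrative subset of sphere names; the full six-sphere matrix is SDS material available under NDA:

```python
# Sketch: encoding which attack scenarios apply to which 'security spheres'.
# Only the relationships stated in the text are encoded; the sphere names
# are an illustrative subset, and the full matrix is SDS/partner material.

SPHERES = ["single LPAR", "Parallel Sysplex", "in-house intranet"]

# Scenarios 1-3 (SAF spoofing, SSCP/CP spoofing, terminal replacement)
# require the session to cross an LPAR boundary, so they cannot occur
# when the session is contained within a single LPAR.
BLOCKED_IN_SINGLE_LPAR = {1, 2, 3}

def possible(scenario, sphere):
    """True if the given attack scenario is possible in the given sphere."""
    if sphere == "single LPAR" and scenario in BLOCKED_IN_SINGLE_LPAR:
        return False
    return True  # conservatively assume possible in wider spheres

for scenario in (1, 2, 3, 7):
    print(scenario, possible(scenario, "single LPAR"),
          possible(scenario, "Parallel Sysplex"))
```

The value of such a matrix in practice is prioritization: spheres exposed to more of the fifteen scenarios warrant the earliest auditing attention.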
Once you have contacted SDS and established your bona fides, you can immediately start working with qualified SDS staff to scope out exactly how vulnerable your SNA mainframe applications are – at present. At this juncture, I will remind you of my earlier observation. It is easy, even natural, to look at the list of documented vulnerabilities and claim that there is no way any of these attacks will succeed against your systems. You could very well be right. There is also a possibility, however slim, that your faith and optimism are misplaced. Remember that in today’s society you can never totally rule out the possibility that you have been ‘sold down the river’ by a disgruntled employee. The VTAM definitions that you were sure were as watertight as possible may have been tampered with to permit infiltration.
Given what is at stake, and the insidiousness of the attacks, you really cannot take a chance here. Have you ever done a full-scale, bleeding-bodies-on-the-ground, disaster recovery (DR) drill? So many seasoned IT professionals used to be so confident about the absolute infallibility of their DR scheme – until that first real drill. DR, particularly since 2002, is now on a different plane. We now have to do the same when it comes to SNA mainframe application security.
SDS, as previously mentioned, will also be able to set up realistic probe scenarios that you could use against a test configuration to determine, unequivocally, the vulnerabilities of your SNA systems vis-à-vis the known threats. By this stage you will probably have decided that implementing the APPN/EE Firewall (with its auxiliary SMF-based session auditing), in conjunction with the VTAM definition scrubbing, is imperative – with time being of the essence. This is like buying insurance. It is difficult to calculate the immediate ROI, but we all know that the real returns are going to be HUGE.
The bottom line
That SNA mainframe applications are somehow magically immune to being compromised is now, alack, but a myth. Yes, it is true that SNA mainframe applications used to be secure fifteen to twenty years ago – but that was a different era. In those simpler days, you could fly without having to put your shoes through an X-ray machine, buy goods with a credit card without the vendor having to instantly verify your card with an online terminal, or check into a hotel without having to produce an ID. Things have changed, and in this instance not for the better.
That the transition, in the late 1990s, from mainly private, leased-line networks to IP network technology (with tn3270(E)) created numerous security exposures was well known. Much was done, particularly in terms of powerful, end-to-end encryption and user authentication (e.g., with SSL/TLS), to minimize if not totally eradicate these IP-network-related vulnerabilities – but, obviously, this was totally contingent on enterprises implementing all the necessary safeguards. It is sobering, though, that IBM still cites lack of IP-network security as one of the major threats facing SNA customers.
Since 2008, IBM appears to have noticeably ratcheted up its advocacy when it comes to SNA mainframe application security. That it published a 47-page manual entitled ‘Securing an SNA Environment for the 21st Century’ and presented a session at the March 2009 SHARE Conference on a similar theme is highly noteworthy. Let’s face it: despite its continuing strategic importance to customers, from IBM’s perspective SNA/APPN is in ‘end game’ mode.
That IBM has suddenly started emphasizing SNA security, when there is really no SNA/APPN-specific R&D effort per se, is telling to any seasoned ‘IBM watcher’ – and I will confess that I do fall into those ranks. That is why I agreed to write this White Paper. There is a real and present danger. Given my 35-year association with SNA, it would have been remiss of me not to use my platform, however rickety, to share with you what I have learned of late.
The fifteen attack scenarios described above are plausible and possible. I have seen some of the documentation. Though I would very much like to do so, I cannot share this documentation with you directly. So you have to first talk to SDS. If you are reading this, there is a fairly good chance that you have already had prior interactions with SDS. They are committed to mainframe solutions. They have been serving the mainframe community for a very long time.
The policy-based APPN/EE Firewall (with SMF-based session auditing) and the companion VTAM definition scrubbing definitely plug a hitherto unprotected opening in SNA mainframe security.
Glossary
APPN – A landmark 1986 rework of the original SNA architecture to enable plug-and-play, peer-to-peer networking unhampered by centralized control from a mainframe.

BIND – The SNA/APPN command used to activate an LU-LU session following the successful completion of SNA/APPN session initiation processing.

Comm. Server – IBM’s all-inclusive, multi-platform software bundle that provides a plethora of terminal emulation, Web-to-host, and networking capabilities.

Control Point – SNA/APPN/HPR functionality that performs authorization, directory services, and configuration management.

Denial of Service – An insidious, carefully orchestrated attack on computer systems or networks that overloads their resources with a barrage of requests, either in the hope of discovering overload-induced vulnerabilities within the targets or simply to disrupt the mission-critical activities of an enterprise.

DLSw – A widely used SNA/APPN(/NetBIOS)-over-TCP/IP transport mechanism which, however, unlike EE, does not support SNA COS or routing.

EE – HPR-over-UDP/IP, created by committee and codified in RFC 2353 in 1998, which permits SNA/APPN networking, replete with native COS and routing, across IP networks.

Firewall – Specialized software designed to prevent unauthorized access to a computer system while permitting validated, non-harmful interactions to get through.

HPR – An SNA architecture developed by IBM in the early 1990s to imbue SNA/APPN with dynamic alternate routing, nimble intermediate-node routing, and proactive congestion control in order to make SNA networking more competitive with TCP/IP.

Intrusion Detection System – Intelligent systems that attempt to intercept unauthorized access to computer systems or networks by proactively monitoring suspicious activity based on previously defined policies.

IP – The primary, underlying, connectionless protocol that is the basis for all Internet Protocol Suite-based networking.

LU – SNA’s software interface (or ‘port’) through which end users gain access to the SNA network.

LU 6.2 – SNA’s protocol suite for program-to-program communications.

Man-in-the-Middle – A data-siphoning scheme whereby fraudulent software manages to insert itself, undetected, between two network partners by actively emulating the two partners being deceived.

Node – In SNA, a total unit of network-attachable functionality, realized in software, that is implemented within a device or runs on a computer.

Phishing – A malicious scheme to obtain the credentials necessary to access a secure system by masquerading as that system and fooling people into entering the sought-after credentials.

SAF – System Authorization Facility; e.g., RACF, ACF2, or Top Secret.

SSCP – SNA’s System Services Control Point: in a hierarchical network, typically implemented on a mainframe within VTAM, it is responsible for directory services and configuration management. Now superseded by the peer-to-peer-oriented functionality of APPN/HPR control points.

tn3270(E) – Widely used client-server technology that permits TCP/IP clients to access mainframe-resident SNA applications using 3270 data streams.

Type 2.1 Node – A peer-to-peer-capable node in SNA that has since evolved into the APPN end and network nodes.

VTAM – IBM’s mainframe-based software for implementing an SNA/APPN/HPR node within an LPAR.
Today it is nearly impossible to implement an IP network without some kind of firewall – even Windows has a built-in firewall. Thus, it makes sense to have an APPN/EE-aware firewall, running above VTAM, that can be finely customized with policies specific to your configuration. Hence this recommendation.
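To make the idea of configuration-specific policies concrete, here is a minimal sketch of policy-based session filtering in the spirit of such a firewall. Everything in it is an assumption for illustration: the rule fields (origin LU, partner LU, LU type), the wildcard matching, and the deny-by-default stance are not the actual product's policy language.

```python
# Hypothetical sketch of policy-based SNA session filtering.
# Rule fields and semantics are illustrative assumptions only.
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass
class Rule:
    origin_lu: str   # wildcard pattern, e.g. "PAYROLL*"
    partner_lu: str  # wildcard pattern, e.g. "CICSPROD"
    lu_type: str     # e.g. "6.2", "0", or "*" for any
    allow: bool

def decide(rules, origin_lu, partner_lu, lu_type):
    """First matching rule wins; deny by default (a common firewall convention)."""
    for r in rules:
        if (fnmatch(origin_lu, r.origin_lu)
                and fnmatch(partner_lu, r.partner_lu)
                and r.lu_type in ("*", lu_type)):
            return r.allow
    return False

rules = [
    Rule("PAYROLL*", "CICSPROD", "6.2", allow=True),  # known LU 6.2 pairing
    Rule("*", "CICSPROD", "*", allow=False),          # anything else to prod: deny
]

print(decide(rules, "PAYROLL01", "CICSPROD", "6.2"))  # True
print(decide(rules, "ROGUE01", "CICSPROD", "0"))      # False
```

The deny-by-default choice mirrors the article's argument: in an SNA network, sessions that were never explicitly expected are exactly the ones an infiltrator would establish.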
So the bottom line here is rather simple and straightforward. Contact SDS, and tell them you want to find out more about SNA security.
The ROI of this call, alone, could be great.
Software Diversified Services (SDS) (www.sdsusa.com) is based in Minneapolis, MN, and has been providing premium mainframe solutions to the IBM world since 1982.
Anura Gurugé (www.guruge.com) is an ex-IBMer who was ‘Mr SNA’ for the longest time. His first book, ‘SNA: Theory and Practice’, was published in 1984. He was heavily involved with Token-Ring switching, APPN, Frame Relay, and Web-to-host. These days he is a consultant and a professional writer. His latest critically acclaimed book is ‘Popes and the Tale of Their Names’. He can be contacted at [email protected].
TurboHercules and disaster recovery – an innovative approach to mainframe outage and business continuity
In general, the IBM mainframe computer is a very reliable system. But, despite the best planning, it is still possible that your mainframe could become unavailable. Fire, flood, major power interruptions, etc, can all bring the system to a grinding halt. Most organizations have some sort of business continuity plan, but it usually requires the availability of a second machine, either rented or purchased. A TurboHercules-based system can replace the mainframe for these outages at a significantly lower cost than a rented or purchased mainframe.
Introduction
“Be prepared” applies as much to a good disaster recovery plan as it has ever applied to the Boy Scouts. One aspect of the preparation involves the generation and verification of an up-to-date set of backup tapes. This article explains the TurboHercules mechanisms for keeping your backup tapes and backup machine up-to-date and ready to go in the event of an outage of your mainframe.
The problem
Most mainframe sites create regular backups of the data and programs on their mainframe. The rationale is that, if the mainframe becomes unavailable for any reason, these tapes can be used to load up an alternate machine and continue business. So far, so good.
Unfortunately, many of these same sites never fully test the backup tapes they create, for a variety of reasons (not enough mainframe time available, a full test would disrupt operations, etc). So they are taking the functionality of the backup tapes as a given once they are created. This is a risky approach, since the failure of any one of the tapes in a series could potentially wipe out your ability to restore the entire series. Of course, you could revert to an earlier set of tapes, assuming you have them, but then you have a patchwork image in which some files are more up-to-date than others.
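A back-of-the-envelope calculation shows why this compounds so badly: if a full restore needs every tape in a series, small per-tape failure rates multiply. The failure rate and series length below are illustrative assumptions, not measured tape reliability figures.

```python
# If restoring requires ALL n tapes, and each tape independently has
# probability p of being unreadable, the chance the full series restores
# cleanly is (1 - p)**n. Values of p and n here are purely illustrative.
def series_restore_probability(p: float, n: int) -> float:
    return (1 - p) ** n

# A 40-tape series where each untested tape has a 1% chance of being bad:
print(round(series_restore_probability(0.01, 40), 3))  # ~0.669
```

In other words, under these assumed numbers, roughly one full restore in three would fail – which is exactly why verifying each backup set, rather than trusting it, matters.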
The TurboHercules approach
This problem can be solved by using TurboHercules as your disaster recovery/business continuity solution.
The ancillary mainframe
TurboHercules provides an alternative mainframe that you can run on in the event that your regular mainframe becomes inoperable for any reason (fire, flood, electrical outage, etc). This alternative mainframe has all of the functionality of the mainframe it is configured to back up. All that is needed is an up-to-date copy of the information on the regular mainframe. Once loaded, the TurboHercules alternative can be powered up and IPLed within minutes to provide business continuity.
Keeping your ancillary mainframe up-to-date
Keeping your TurboHercules mainframe up-to-date is a straightforward process.
On your regular mainframe, perform your backups to tape or virtual tape in the usual manner. We will be using an alternate package of software on the TurboHercules machine, not your mainframe itself, to verify the backups. Therefore, you have the ability to perform these backups on a much more regular basis – even hourly, if desired.
Once the backup tapes or tape images are complete, you move them to the TurboHercules system, where they will be verified and loaded into the emulated disk images. Please note that during the entire backup/verify/restore process we will not be running a copy of your mainframe OS. All the work is carried out by a series of stand-alone applications running on the mainframe or on the TurboHercules system. In effect, the TurboHercules system is configured as a cold backup to your mainframe, in compliance with the IBM licensing provisions.
Once the images are loaded into the emulated disks on your TurboHercules system, it is ready for operation. If there is a failure with your regular mainframe, the TurboHercules system can be IPLed and brought online with data that is as up-to-date as your last backup.
As mentioned earlier, the backup/verification/restoration process can easily be automated, especially if you are using a virtual tape library as your mainframe backup destination.
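The automation described above can be sketched as a simple fetch/verify/restore pipeline. This is an assumption-laden illustration: the stage names and the command-line utilities (`fetch-tape-image`, `verify-tape-image`, `restore-to-dasd`) are hypothetical placeholders, since the actual stand-alone tools and transfer mechanism will vary by site.

```python
# Illustrative sketch of automating the backup/verify/restore cycle.
# The utility names invoked below are hypothetical placeholders.
import subprocess

def run_step(name: str, cmd: list[str]) -> None:
    """Run one stage of the cycle; abort the whole cycle if any stage fails."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{name} failed: {result.stderr.strip()}")

def refresh_ancillary(tape_image: str, dasd_image: str) -> None:
    # 1. Fetch the latest tape image produced on the mainframe side.
    run_step("fetch", ["fetch-tape-image", tape_image])
    # 2. Verify the image with a stand-alone checker (no mainframe OS running).
    run_step("verify", ["verify-tape-image", tape_image])
    # 3. Restore it into the emulated DASD the Hercules configuration points at.
    run_step("restore", ["restore-to-dasd", tape_image, dasd_image])
```

A scheduler (cron, or a virtual-tape-library hook) can then call `refresh_ancillary` after every backup run, so the cold-standby DASD images are never older than the last backup.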
The failover
When the “event” occurs and your mainframe is rendered inoperable, you move your communications lines to the TurboHercules system as planned, power on and IPL the TurboHercules machine from its virtual DASD devices, and continue operations. A properly scripted failover could easily be performed from cold in less than an hour or so, depending on the people running the exercise.
Going back
Once the regular mainframe or mainframe environment has been repaired, you will need to reverse the process detailed above. At a convenient time, you perform a complete backup of the TurboHercules system to tape or virtual tape. You then move the communications lines back to the regular mainframe, restore the tape images to the mainframe DASD devices, IPL from the updated DASD devices, and go back to operations as usual.
Summary
Properly sized, a TurboHercules-based system can support a mainframe workload during an outage of the mainframe itself. As such, the TurboHercules package is an attractive alternative to the purchase of a second mainframe, or even to a support contract with a disaster recovery/business continuity provider.
In this article we have outlined one approach that can be used to back up a mainframe system. Each site is different, so please email [email protected] to arrange a proof-of-concept demonstration for your installation.
Finally, for some applications, like the support of a z/Linux-based mainframe system, a TurboHercules-based system can also be used for offline tasks such as program development, testing, and education, thus freeing up valuable mainframe cycles for production.
About TurboHercules
TurboHercules is the first commercial entity exclusively dedicated to Hercules, the well-known open-source software implementation of the IBM mainframe architecture.
Hercules is a computer program which permits anyone to run mainframe software on their own personal computer, without having access to an expensive mainframe. Hercules is particularly useful for software developers, who can use it to create, test, and maintain software designed to run on IBM mainframe computers. Hercules is also useful for training mainframe programmers and operators: students can be given their own mainframe environment on a PC, on which they can experiment freely without any risk of disrupting their company’s production mainframe system.
Hercules is an open-source program, which means that its authors permit anyone to download, use, and customize the program without payment, but without any guarantee of support or warranty. Typical users tend to be enthusiasts who are willing to devote time and effort to learning how to set up the program and make it work.
TurboHercules addresses a different category of user: the corporate or educational institution which simply wants a low-cost platform for running mainframe software, without its staff having to become Hercules experts.
The mission of TurboHercules is to provide commercial-grade support, training, consulting, and custom solutions to enterprises and educational institutions using or evaluating Hercules. TurboHercules takes the most recent stable release of Hercules and prepares installation routines and instructions that simplify its installation. We then provide consulting and support services to customers using Hercules to accomplish real-world tasks. For more information on TurboHercules, visit the web site www.turbohercules.com.
Placement of this white paper in the Arcati Mainframe Yearbook was sponsored by OpenMainframe.org.
IBM’s Information Champions are objective experts. They have no official obligation to IBM. They simply share their opinions and years of experience with others in the field, and their work contributes greatly to the overall success of IBM Information Management. Do you know an exceptional community member who should be recognized as an Information Champion? You can nominate them at www-01.ibm.com/software/data/champion/nomination.html.
Mark Wilson takes a look at old, current, and future trends in software migration strategies.
Introduction
“Software migration” is a term the majority of IT professionals are familiar with, and those in the System z (mainframe) marketplace are no exception. We typically encounter a migration when senior management utter the words, “Reduce costs”. However, there are several other reasons migrations are considered:
• Mergers and acquisitions
• Outsourcing
• Globalization.
As a company changes and the functionality and technology of products evolve, a software solution installed as recently as two years ago may no longer be the best or most cost-effective choice.
Costs are still an issue for System z users, even given all the work that IBM has done to help reduce them. A quick search of the Internet finds:
For four years running, attendees at the Gartner conference listed third-party software costs as the No. 1 inhibitor to mainframe growth, according to Gartner analyst Mike Chuba.
“This is the tail that wags the dog on mainframe procurements,” Chuba said. “In many situations, customers make decisions on what hardware they’re going to buy based on their existing software contracts.”
38% of the mainframers surveyed said third-party software costs were the No. 1 inhibitor to mainframe growth in their organizations.
So, cost is typically the No. 1 driver for performing a software migration. However, beware: it’s not always the most obvious vendors that cause the biggest issues!
Retaining the existing product may risk:
• Higher licence charges
• Increased maintenance overhead
• Vendor support erosion
• Reduced competitive advantage
• Compromised requirements
• Inability to exploit new technology
• Duplication in functionality.
Even when a clear cost case exists for performing a migration, the perceived pain and effort required to migrate to a different solution often dissuades companies from doing so.
But it’s not all bad! Organizations all over the world have totally transformed their software portfolios, made significant cost savings, and driven new working practices to improve efficiency.
Software vendor rationalization
Many organisations have looked at their procurement processes around mainframe software and found that they are dealing with far too many vendors. Having a vast portfolio of products and vendors carries a significant overhead, and rationalizing it has meant some products being removed and others migrated. In the past, the attitude was always ‘only if we have a near-perfect fit’. However, I believe this has changed in recent years, as many organizations start to accept less functionality if the price is right.
The goal here is to reduce the number of vendors and then agree a better financial deal with those left, with the aim of reducing mainframe software costs. The two big winners are likely to be IBM and CA, who are both willing to negotiate. I have seen many cases in the last few years where they have worked with their customers to reduce costs, whilst protecting the investment those customers have made in mainframe technology.
Some of the issues with software reviews and migrations
Largely driven by the software vendor rationalization point above, I feel mainframe IT teams will be asked to make some very hard decisions in 2010, just as they were in 2009.
I have worked on one project recently where we were told to “migrate from ISV X to ISV Y”, without any discussion or debate. The cost savings were so great that the senior management team made the decision without any technical review. The edict from on high was: “Just make it happen”. I believe our mainframe technical teams will be faced with many such challenges in 2010.
It’s always good to look at what software we have installed and ask ourselves some very simple questions:
• Do we really need it?
• Is there a cheaper alternative out there?
• Do we use the software efficiently?
  – Do we need to run it on all LPARs?
• Can we do things differently?
  – Just because we have always done something doesn’t mean it’s right.
• Have we got any duplicate functionality?
• Are there functions in base z/OS that we can use to replace functions in ISV products?
So what is a software migration?
Simple, really! Remove Product A and replace it with Product B. It certainly sounds simple. Well, in some cases it is.
Software migrations fall into the following areas:
Add to the above Quality Assurance and training and you have your typical software migration – see Figure 1.
Software migrations in the past
In the past, software migrations were undertaken only after a detailed technical review had been carried out. No stone would be left unturned to ensure that all the nuances of the current product were understood. The technical teams involved would want to ensure that every function currently utilized could be replaced or in some way emulated by the proposed new product.
The technical team would have a great deal of influence in the decision-making process and in