
Automated Testing with the AutoCAD .NET API


Scott McFarlane Woolpert, Inc.

CP2654

Automated testing, which includes unit testing and integration testing, is one of the cornerstones of the Agile development process. Yet we see it as difficult and expensive, especially for programs written with the AutoCAD .NET API. So what makes it so difficult? What are the obstacles? Can we overcome them? This class will explore the topic of automated testing, specifically for .NET programs written for products based on AutoCAD software. You will learn how good programming practices, such as single responsibility, abstraction, interface development, and dependency inversion, go hand in hand with writing testable code, and how to use these techniques to decouple your code from the AutoCAD API, enabling you to unit test your business logic in Microsoft Visual Studio. We'll also look at a few of the popular testing tools, including the Gallio Automation Platform, which integrates AutoCAD into its test runner.

Learning Objectives

At the end of this class, you will be able to:

Recognize the obstacles to doing software testing, particularly with programs written for AutoCAD

Understand the goals, philosophy and principles of test automation

Appreciate the benefits of test automation

Use the correct terminology: what is a mock, a stub, a fake, a dummy?

Use abstraction and dependency inversion to write testable code

About the Speaker

Scott is a senior software engineer for the Infrastructure service line at Woolpert, Inc. He specializes in custom database applications that use Autodesk software in the AEC, FM, and GIS industries. He has more than 30 years of programming experience, and has been developing for Autodesk platforms since 1986. He is the author of AutoCAD Database Connectivity from Autodesk Press, as well as several articles. Scott has attended every AU, and has been a speaker since 1996. He also served two two-year terms on the AUGI Board of Directors.



My name is Scott McFarlane. If you're like me, your formal training or college degree was focused on some design-related discipline (I am a degreed architect). You probably also had a strong interest in computer programming, but no formal training. Sure, I took my share of computer programming courses in college, with the very useful programming languages of the day, such as Fortran, COBOL, and Assembler. But generally I consider myself self-taught when it comes to software development. I started using LISP within my first month of using AutoCAD. From there I jumped on each AutoCAD API as it was added. Today, I'm pretty much exclusively a C#.NET developer when it comes to AutoCAD.

In recent years, I have become a student of software engineering, software design patterns, and best practices. These practices are the kinds of things I've been doing (in some form or another) for many years, yet these days they are formalized, studied, and documented. Let me stress that I will forever be a student of these practices, continuously learning and adapting my software design style as my knowledge grows. In this class, while I try to offer a better way to solve certain problems, there is likely an even better way that I haven't explored. I always welcome discussion and debate on these subjects, so please feel free to ask questions during the class, track me down at the conference, or contact me after the conference is over.

Obstacles to Testing AutoCAD Code

Automated testing is an important and integral part of the software development process. However, as AutoCAD programmers, we face unique challenges that often discourage or prevent us from writing tests for our programs.

Some of these obstacles include:

Our programs are hosted by acad.exe: An AutoCAD .NET program is compiled into a DLL, which is loaded into AutoCAD's memory space. In our code, we rely on a set of assemblies specifically designed to work with AutoCAD. Once these assemblies are referenced in our code, our program will only work inside a running session of AutoCAD. This makes it impossible to use conventional unit-testing tools inside Visual Studio.

AutoCAD programs typically rely on heavy user interaction: While tools exist to assist in automated user interface testing, this type of testing is very difficult, particularly with a complex program like AutoCAD. While UI testing itself is beyond the scope of this class, we will explore ways to separate the UI-specific code from the business logic code, in order to minimize the amount of UI testing that is necessary.

But even beyond these obstacles, many developers don't write automated tests, primarily because their code is not designed to be testable. Programs written for AutoCAD are often single, monolithic applications that are tightly coupled with the AutoCAD API.

In order to be successful with automated testing, your code must first be written in such a way that it is testable. In order to write testable code, several good programming practices must be followed, including the Single Responsibility Principle (SRP), separation of concerns, and dependency injection. Testing goes hand in hand with these practices: use of these practices results in testable code, and good testing practice drives good design.

Test-Driven Development

One practice related to testing that you hear a lot about is test-driven development, or TDD. TDD is one of the cornerstones of the Agile software development methodology. It is an iterative practice that consists of first writing a (failing) test, then writing just enough code to make the test pass. Then refine or write more tests, and add more code until the tests pass. When TDD is strictly followed, you never write a single line of production code that doesn't already have a test written for it.
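To make the cycle concrete, here is a minimal sketch of one red-green iteration in plain C#. The RadiusClassifier class and its thresholds are hypothetical illustrations; in real projects the test would live in a test framework such as MbUnit or NUnit rather than a Main method.

```csharp
using System;

// Step 2 of the cycle: the class under test, written only AFTER the
// test below existed and failed -- just enough code to make it pass.
public static class RadiusClassifier
{
    public static string Classify(double radius)
    {
        if (radius < 1.0) return "Small";
        if (radius > 10.0) return "Large";
        return "Medium";
    }
}

public static class TddExample
{
    // Step 1 of the cycle: these assertions were written first, and failed
    // to compile because RadiusClassifier.Classify did not yet exist.
    static void AssertEqual(string expected, string actual)
    {
        if (expected != actual)
            throw new Exception("Expected " + expected + " but got " + actual);
    }

    public static void Main()
    {
        AssertEqual("Small", RadiusClassifier.Classify(0.9));
        AssertEqual("Medium", RadiusClassifier.Classify(5.0));
        AssertEqual("Large", RadiusClassifier.Classify(10.1));
        Console.WriteLine("All tests pass");
    }
}
```

Each further requirement (a new size category, say) would start with a new failing assertion, followed by the smallest change to Classify that makes it pass.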

For those of us who have been writing production code for years with little or no automated testing, making the jump into a strict TDD process is extremely difficult. But even if we don't practice TDD, automated testing should not be left to the end of the project. Start writing tests as early as possible, either before you write the code, or at least concurrently with the code. Having tests written early will yield significant benefits, including:

Testing early forces you to write testable code. Rather than writing a bunch of production code and worrying about whether it will be testable, writing your tests first forces your code to be testable without having to think about it.

Tests provide a safety net for refactoring. Refactoring is the process of modifying code to improve design, make the code more readable and reduce duplication, without changing its overall functionality. Tests give you constant feedback during the refactoring process, so you can feel confident that changes made during refactoring havent broken anything.

Types of Testing

The key to successful testing is to design the system using a loosely coupled, component-based architecture. This allows you to test each component in isolation, by substituting fake versions of other dependent components. It also allows you to easily tier the testing process into the following levels:

Unit Tests: These tests verify the behavior of individual classes and methods in isolation from other components. Unit tests are typically written concurrently with the production code being tested, using tools available within the specific development environment (such as Visual Studio). These must be executed continuously as the code evolves to ensure that new functionality does not negatively affect the behavior of existing functionality. And, as mentioned earlier, unit tests also provide a safety net for code refactoring or code modifications due to evolving requirements.

Component Tests: These tests verify the behavior of specific groups of classes or components and how they interact with one another. Component tests can take the form of either automated tests within the development environment, or separate executable scripts that are run outside the development environment.

Integration Tests: These tests verify the system as a whole, within its production environment, or at least within a development environment that duplicates the production environment as closely as possible. Integration tests will usually be separate executable programs so that they can be deployed and run in the production environment.

Acceptance Tests: These tests are performed by the end user, and verify the behavior of the entire system with respect to the software requirements. While some acceptance tests may be automated, there will always be a portion of the acceptance testing that requires some human interaction.

All of the testing approaches described above must occur as early as possible, and as often as feasible, throughout the development process. This allows you to identify and address any issues with minimal impact to the overall schedule and budget of your project.

Goals of Automated Testing

Regardless of which type of automated test you are writing, you should keep in mind the following good practices:

Tests should be fully automated and easy to run: The easier your tests are to run, the more likely you will be to run them, and run them often. They should also run fast enough to provide timely feedback.

Tests should be reliable: Tests are useless if you can't trust them. They should be repeatable, independent, and self-checking. Any data used for an automated test should be isolated per test; shared data between tests can lead to unreliable tests.

Tests should be easy to maintain: You should treat your test code with the same care and attention as your production code. As your code evolves, your tests will evolve as well. If tests are hard to maintain, they will quickly become obsolete and you will stop using them.

Tests should help us understand the system: Tests can be a great source of documentation, and since they are compiled code, they will never fall out of sync with the production code the way code comments tend to do.

The Big Split

Over the past several releases, Autodesk has been working to separate the AutoCAD business logic from the AutoCAD user interface. This effort, which allows Autodesk to share a single codebase for its OS-independent business logic, has been referred to as "The Big Split." In AutoCAD 2013 this work is complete, and we now have a three-tier architecture, as shown in the diagram below.

It is important to understand that each tier is dependent on the tiers below it, but knows nothing about the tiers above it.

At the lowest tier you have the AcDb functionality, which is concerned with the DWG database logic. At this level, you can develop components such as ObjectDBX (object enablers) and those that work with RealDWG.

At the middle tier you have the AcCore functionality, which is concerned with AutoCAD's core business logic. At this level, you can develop components and applications that work with AcCoreConsole (a command-line-only version of AutoCAD) and AutoCAD WS.

These two lower tiers together represent the AutoCAD Core Engine. This is the common platform upon which the components at the top tier (i.e., AutoCAD for Windows, AutoCAD for Mac, AutoCAD WS, and AutoCAD Console) are developed.


So by now you're probably asking, "What is AcCoreConsole, and what can it do for me as a developer?" Think of AcCoreConsole as a command-line-only version of AutoCAD. It is a console application that has nearly all of the core functionality of AutoCAD, but without the heavy graphical user interface, and it launches in less than one second (on my machine, anyway). It is sometimes referred to as a "headless" AutoCAD.

Because of its fast loading time, and the absence of the UI overhead, AcCoreConsole is a perfect tool for running batch operations and scripts. But it also provides the perfect platform for certain types of automated testing.
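As a sketch, a batch run against AcCoreConsole might look like the following. The install path, drawing name, and script name below are placeholders; on my installation the /i switch selects the input drawing and /s the script to run against it, but check the switches on your own machine.

```shell
:: run-batch.bat -- process one drawing with AcCoreConsole
:: (paths and file names below are placeholders)
"C:\Program Files\Autodesk\AutoCAD 2013\accoreconsole.exe" ^
  /i "C:\Drawings\SitePlan.dwg" ^
  /s "C:\Scripts\ChangeCircleColors.scr"
```

The .scr file is an ordinary AutoCAD script, so anything you can script at the command line can be batched this way across an entire folder of drawings.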

Separation of Concerns

So why am I bringing this up? At a high level, the Big Split represents an effort to separate the concerns of database (DWG) access, business logic, and UI. As AutoCAD developers, we can learn a lot from this, and we should seriously consider a similar approach in our own applications.

So how should we separate our code? The very first thing we should do is determine which parts of our code are dependent on AutoCAD, and which parts we can separate out into non-AutoCAD-dependent components.

Then, within those components that are dependent on the AutoCAD API, you should consider further separating the parts that depend on the core tier (AcDbMgd.dll and AcCoreMgd.dll) from those that depend on the UI tier (AcMgd.dll).

Unit Tests vs. Integration Tests

Below is a table that shows some of the key differences between unit tests and integration tests.

Unit Tests                            Integration Tests
------------------------------------  ------------------------------------
Should run without configuration      May require configuration
Do not use external resources         Use external resources
No teardown required                  May require teardown
Run in memory                         May use disk or database
Run in development environment        May require a separate environment
Always repeatable                     Should be repeatable, but may not be
Run often                             Run less often

Unit Tests and AutoCAD

Based on the table above, a true unit test must be able to isolate the code being tested from any external resources, and operate completely in memory, preferably within the IDE (i.e., Visual Studio). By this definition, it is impossible to write unit tests for code that references any of the AutoCAD DLLs. So, to properly unit test your AutoCAD programs, you must be able to isolate the parts you want to test from the parts that truly need the AutoCAD API.

Let's look at a very simple example.

Examine the code below. This is a simple AutoCAD program that changes the color of all the circles in the drawing to a specific color based on the circle radius. Small circles (r < 1.0) are changed to yellow, large circles (r > 10.0) to red, and medium circles to green.

public class Example3
{
    [CommandMethod("CHANGECIRCLECOLORS")]
    public void ChangeCircleColors()
    {
        var document = Application.DocumentManager.MdiActiveDocument;
        var database = document.Database;
        using (var tr = database.TransactionManager.StartTransaction())
        {
            try
            {
                // Open model space
                var blockTable = (BlockTable)tr
                    .GetObject(database.BlockTableId, OpenMode.ForRead);
                var modelSpace = (BlockTableRecord)tr
                    .GetObject(blockTable[BlockTableRecord.ModelSpace], OpenMode.ForRead);

                foreach (ObjectId id in modelSpace)
                {
                    var circle = tr.GetObject(id, OpenMode.ForWrite) as Circle;
                    if (circle == null) continue;

                    if (circle.Radius < 1.0)
                        circle.ColorIndex = 2;   // yellow
                    else if (circle.Radius > 10.0)
                        circle.ColorIndex = 1;   // red
                    else
                        circle.ColorIndex = 3;   // green
                }
                tr.Commit();
            }
            catch (Exception ex)
            {
                document.Editor.WriteMessage(ex.Message);
                tr.Abort();
            }
        }
    }
}
The majority of this code is boilerplate code that you see in any program that manipulates AutoCAD entities in model space. There is really no need to test that code. The part of the code we really want to test is the logic that sets the circle color based on its radius.

So the first step is to extract that code into its own method:

private void SetCircleColor(Circle circle)
{
    if (circle.Radius < 1.0)
    {
        circle.ColorIndex = 2;
    }
    else if (circle.Radius > 10.0)
    {
        circle.ColorIndex = 1;
    }
    else
    {
        circle.ColorIndex = 3;
    }
}

By doing this, we have not only separated our core code from the AutoCAD boilerplate stuff, but we've greatly reduced the number of touch points into the AutoCAD API. If you were to look at SetCircleColor out of the context of AutoCAD, you would see a simple method that takes an argument of type Circle, which has a Radius property (which could be read-only) and a ColorIndex property. You might also wonder what a ColorIndex is, and what the values 1, 2, and 3 represent.

Now that we have AutoCAD out of our heads (for the moment), let's look at the extracted method in a more abstract way:

public class CircleMarker
{
    public static void MarkCircleSize(ICircle circle)
    {
        if (circle.Radius < 1.0)
        {
            circle.MarkAsSmall();
        }
        else if (circle.Radius > 10.0)
        {
            circle.MarkAsLarge();
        }
        else
        {
            circle.MarkAsMedium();
        }
    }
}

First, you'll notice that we've removed the notion of color from the method and replaced it with the more generic "mark" concept. Second, you may notice that we changed the argument type from the concrete Circle class to an interface called ICircle.

public interface ICircle
{
    double Radius { get; }

    void MarkAsSmall();
    void MarkAsLarge();
    void MarkAsMedium();
}

And finally, now that we have all of our business logic completely decoupled from the AutoCAD API, we've put it into a new class called CircleMarker, and we can move it into a separate assembly that does not reference the AutoCAD DLLs. This allows us to write unit tests against our code that can be run inside the Visual Studio IDE.

Below is a simple test that verifies that a circle with a radius of 0.9 will result in a call to MarkAsSmall(). (This test uses NSubstitute, which is a free isolation framework. There are numerous free libraries out there that provide similar functionality.)


[Test]
public void MarkSmallCircle()
{
    // Create a fake implementation of ICircle
    var circle = Substitute.For<ICircle>();

    // Control what is returned by the Radius property
    circle.Radius.Returns(0.9);

    // Exercise the logic being tested
    CircleMarker.MarkCircleSize(circle);

    // Verify that MarkAsSmall was called
    circle.Received().MarkAsSmall();
}

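If you would rather not take a dependency on an isolation framework at all, the same verification can be done with a hand-rolled fake: a small class that implements ICircle and simply records which mark method was called. This is a sketch; ICircle and CircleMarker are repeated here so the example compiles on its own.

```csharp
using System;

public interface ICircle
{
    double Radius { get; }
    void MarkAsSmall();
    void MarkAsLarge();
    void MarkAsMedium();
}

public class CircleMarker
{
    public static void MarkCircleSize(ICircle circle)
    {
        if (circle.Radius < 1.0) circle.MarkAsSmall();
        else if (circle.Radius > 10.0) circle.MarkAsLarge();
        else circle.MarkAsMedium();
    }
}

// Hand-rolled fake: records which mark method was called.
class FakeCircle : ICircle
{
    public double Radius { get; set; }
    public string MarkedAs { get; private set; }

    public void MarkAsSmall()  { MarkedAs = "Small"; }
    public void MarkAsLarge()  { MarkedAs = "Large"; }
    public void MarkAsMedium() { MarkedAs = "Medium"; }
}

public static class CircleMarkerSpec
{
    public static void Main()
    {
        // Same scenario as the framework-based test: radius 0.9 is "small".
        var circle = new FakeCircle { Radius = 0.9 };
        CircleMarker.MarkCircleSize(circle);
        if (circle.MarkedAs != "Small")
            throw new Exception("Expected MarkAsSmall to be called");
        Console.WriteLine("MarkSmallCircle passed");
    }
}
```

The trade-off is maintenance: a framework like NSubstitute generates these fakes for you, while a hand-rolled fake must be updated whenever the interface changes.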
To make it actually work in AutoCAD, we'll need a concrete implementation of ICircle that is part of our AutoCAD-dependent code. Below is an example of this, which simply wraps an already open Circle object.

class MyCircle : ICircle
{
    private readonly Circle _circle;

    public MyCircle(Circle circle)
    {
        _circle = circle;
    }

    public double Radius
    {
        get { return _circle.Radius; }
    }

    public void MarkAsSmall()
    {
        _circle.ColorIndex = 2;
    }

    public void MarkAsLarge()
    {
        _circle.ColorIndex = 1;
    }

    public void MarkAsMedium()
    {
        _circle.ColorIndex = 3;
    }
}

There are at least a dozen different ways we could have designed this ICircle interface, but the point of this exercise is to illustrate an approach that consists of the following thought process:

Separate your core logic from the AutoCAD-specific code by reducing the touch points into the AutoCAD API.

Take a moment to think about your problem out of the context of AutoCAD. What would your program look like if it were written for some other CAD software? Could all or part of your program function as just a console application?

Rather than treating your program as a subordinate of AutoCAD, think of AutoCAD as a subordinate of your program. What role does AutoCAD really play in the context of your application? What touch points into AutoCAD do you really need, and can you use abstraction to decouple your core business logic from those touch points?

This is probably the most important point to take away from this class. The more of your code you can decouple from AutoCAD, the better off you will be. If you can accomplish some level of separation, as this simple example illustrates, you'll have code that is not only easier to test, but you will also realize other significant benefits, including:

Code will be easier to maintain: Mixing your business logic with the AutoCAD database-specific code (transactions, etc.) makes your code harder to read, more tightly coupled with AutoCAD, and thus more difficult to maintain.

New AutoCAD releases will have less impact: If you can, for example, separate 50% of your code into a non-AutoCAD-dependent module, that's 50% of your code that won't change if (and when) changes are made to the AutoCAD API. It also makes it easier to support multiple releases of AutoCAD (or even other platforms) with a single codebase.

Interfaces, Dependency Injection, and Isolation

The key to writing unit-testable code is being able to isolate it from its dependencies. In the example unit test above, I noted the use of NSubstitute, which is one of many isolation frameworks. Sometimes these libraries are referred to as mocking frameworks, but they all basically do the same thing: allow you to easily create fake implementations of the interfaces (or abstract classes) that your component under test depends on.

But this only works if you program to interfaces rather than implementations.

Consider the following simple code example, which might represent some business logic code that uses a data tier, and a logging component.

public class Example1
{
    public void DoTheWork()
    {
        DataRepository dataRepository = new DataRepository();
        Logger logger = new Logger();

        logger.Log("Getting the data");
        DataSet theData = dataRepository.GetSomeData();
        // Do some work with the data...
    }
}


While it is good that we've separated the data access and logging code into their own components, there are still some issues with this example. This method is not only tightly coupled with its dependencies, it is also responsible for their creation. This results in the following issues:

This code is nearly impossible to reuse because it is so tightly coupled with its dependencies.

This code would have to be modified if there is a need to replace one of the dependent components with a new implementation.

This code is impossible to unit test, because it cannot be isolated from its dependencies.

Now examine the following alternative:

public class Example2
{
    private readonly IDataRepository _dataRepository;
    private readonly ILogger _logger;

    public Example2(IDataRepository dataRepository, ILogger logger)
    {
        _dataRepository = dataRepository;
        _logger = logger;
    }

    public void DoTheWork()
    {
        _logger.Log("Getting the data");
        DataSet theData = _dataRepository.GetSomeData();
        // Do some work with the data...
    }
}


Here, we've done two key things: 1) we've introduced abstraction by creating and developing against interfaces rather than specific concrete implementations, and 2) the class is no longer responsible for the creation of its dependencies; they are injected into it via the class constructor.

This illustrates the following key principles and design patterns of software engineering:

Separation of Concerns: This class is now responsible only for the specific job it was designed to do.

Abstraction: By using interfaces, we have established a set of protocols by which the components interact, separate from the classes themselves.

Inversion of Control: The class has relinquished control of the creation and initialization of its dependencies.

Dependency Injection: This pattern is based on Inversion of Control, and describes the way in which an object obtains references to its dependencies.
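To show where the dependencies actually get created, here is a minimal, self-contained sketch of a "composition root" for Example2. The ConsoleLogger and InMemoryRepository classes are hypothetical stand-ins for real implementations, and Example2 is repeated so the sketch compiles on its own.

```csharp
using System;
using System.Data;

public interface ILogger { void Log(string message); }
public interface IDataRepository { DataSet GetSomeData(); }

// Hypothetical concrete implementations.
public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

public class InMemoryRepository : IDataRepository
{
    public DataSet GetSomeData() { return new DataSet("SampleData"); }
}

// Example2 exactly as described above: dependencies injected, never created.
public class Example2
{
    private readonly IDataRepository _dataRepository;
    private readonly ILogger _logger;

    public Example2(IDataRepository dataRepository, ILogger logger)
    {
        _dataRepository = dataRepository;
        _logger = logger;
    }

    public void DoTheWork()
    {
        _logger.Log("Getting the data");
        DataSet theData = _dataRepository.GetSomeData();
        // Do some work with the data...
    }
}

public static class CompositionRoot
{
    public static void Main()
    {
        // The composition root is the ONLY place that knows about concrete
        // types; everything downstream sees only the interfaces.
        var example = new Example2(new InMemoryRepository(), new ConsoleLogger());
        example.DoTheWork();
    }
}
```

A unit test would build Example2 the same way, but pass in fakes (hand-rolled or generated by an isolation framework) instead of the real repository and logger.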

Writing code that can be completely isolated from external dependencies is not easy to do. You must examine your code and really think about where it reaches out into the outside world. Some typical external services that many applications use include:

External databases

Files, disk access (System.IO namespace)

Web services (WebClient)


Exception Handling


In addition, even if you have successfully separated your business logic from your user interface, beware of UI dependencies finding their way into your business logic in the form of things like message boxes or progress bars. Finally, beware of APIs that may not provide a consistent result; DateTime.Now is a classic example. Use abstraction where necessary to ensure you have complete control over all direct and indirect inputs into the code you are testing.
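For example, a dependency on the system clock can be hidden behind an interface so that tests control what "now" means. The IClock, SystemClock, FakeClock, and Greeter names below are hypothetical illustrations, not part of any library.

```csharp
using System;

// Abstracting the system clock so tests can control "now".
public interface IClock
{
    DateTime Now { get; }
}

// Production implementation: a thin wrapper over DateTime.Now.
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}

// Test implementation: returns whatever time the test dictates.
public class FakeClock : IClock
{
    public DateTime Now { get; set; }
}

// A class whose logic depends on the current time, injected via IClock.
public class Greeter
{
    private readonly IClock _clock;

    public Greeter(IClock clock) { _clock = clock; }

    public string Greet()
    {
        return _clock.Now.Hour < 12 ? "Good morning" : "Good afternoon";
    }
}
```

In production you inject SystemClock; in a test you inject a FakeClock set to whatever time the scenario requires, so the test gives the same result at 9 a.m. as it does at 9 p.m.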

Running Automated (Integration) Tests in AutoCAD with Gallio

So far I've been talking about unit testing your business-logic code by separating it from the AutoCAD API through abstraction. At some point you will need to run tests against the code that does depend on the AutoCAD API. One way this can be accomplished is with an open-source test automation platform called Gallio (http://www.gallio.org). Gallio is a very flexible and extensible testing framework that can launch AutoCAD and run your tests inside it. This section will walk you through the steps required to set up a series of automated tests that exercise the full AutoCAD-dependent code for the ChangeCircleColors function shown earlier.

Download Gallio

The first step is to download Gallio from its web site. The download is a ZIP file that you can extract to a location such as Program Files. Note: Before you extract the ZIP file, be sure to check its properties and click "Unblock" if that option is available.

Create a Test Project

Next, add a Class Library project to your solution in Visual Studio. This project will reference the following DLLs:

AcCoreMgd.dll, AcDbMgd.dll (AutoCAD)

Gallio.dll, MbUnit.dll: These can be found in the Gallio bin folder. The default Gallio test runner looks for tests in your DLL that use the MbUnit testing framework.

FluentAssertions.dll: Not required, but I use this instead of the Assert class included with MbUnit. Google it, or pull it into your project with NuGet.

The code for your test will look like this:


[Test]
public void TestMethod1()
{
    var document = Application.DocumentManager.MdiActiveDocument;
    var database = document.Database;
    var id = database.Create<Circle>(circle => circle.Radius = 0.9);

    new Example3().ChangeCircleColors();

    using (var tr = database.TransactionManager.StartTransaction())
    {
        var circle = (Circle) tr.GetObject(id, OpenMode.ForRead);
        circle.ColorIndex.Should().Be(2);
    }
}



Notice the Create method on the database object. This is a handy extension method I wrote to help clean up the test code. Here's what it looks like:

public static class ExtensionMethods
{
    public static ObjectId Create<T>(this Database database, Action<T> action)
        where T : Entity, new()
    {
        using (var tr = database.TransactionManager.StartTransaction())
        {
            try
            {
                // Open model space
                var blockTable = (BlockTable)tr
                    .GetObject(database.BlockTableId, OpenMode.ForRead);
                var modelSpace = (BlockTableRecord)tr
                    .GetObject(blockTable[BlockTableRecord.ModelSpace], OpenMode.ForWrite);

                var obj = new T();
                action(obj);

                var objectId = modelSpace.AppendEntity(obj);
                tr.AddNewlyCreatedDBObject(obj, true);
                tr.Commit();

                return objectId;
            }
            catch (Exception)
            {
                tr.Abort();
                throw;
            }
        }
    }
}

Set up a Batch File to Launch the Test Runner

Add a file called test.bat to your project. It will look something like this:


"C:\Gallio\bin\Gallio.Echo.exe" TestingExample1GallioTest.dll /r:AutoCAD


Your file may look slightly different depending on where you installed Gallio and the name of your test DLL. Be sure to set the properties of test.bat so that it gets copied to your output directory.

Run Your Tests

Finally, build your project, navigate to the project output folder, and launch test.bat. You should see a console window open, then AutoCAD will launch. Your tests will run, and then AutoCAD will close.

The output in the Console window should look something like this:


Conclusion

In this class we've examined some ways that you can run automated tests against your AutoCAD .NET plug-in code. To utilize unit-testing frameworks inside of Visual Studio, we demonstrated an approach in which business logic (the code we really want to test) is separated into its own module that does not reference the AutoCAD DLLs. For full integration testing, we walked through an example of using the Gallio test automation framework.





