
LEARNING INVARIANT TEXTURE

CHARACTERISTICS IN

DYNAMIC ENVIRONMENTS:

A Model evolution approach

Peter W. Pachowicz

P91-4 MLI91-2


LEARNING INVARIANT TEXTURE CHARACTERISTICS IN DYNAMIC ENVIRONMENTS:

A model evolution approach

Peter W. Pachowicz

Center for Artificial Intelligence George Mason University

Fairfax, VA 22030

ABSTRACT

The paper presents an approach to the acquisition of texture models of specific objects under the following assumptions: (1) the system has to recognize objects in a sequence of images, (2) images of the sequence demonstrate the variability of conditions under which objects are perceived (e.g., resolution, lighting, surface positioning), (3) an observer or objects can move, (4) the extraction of texture attributes and training events can be imperfect, and (5) the system has to work autonomously (i.e., without teacher help). In order to recognize textured objects under such assumptions, the system has to adapt to the environment through the evolution of texture models.

We propose to apply an incremental learning methodology to acquire texture descriptions from a sequence of images. The closed-loop system architecture integrates recognition and learning processes, allowing the system to evolve texture models. While the initial acquisition of texture models is driven by a teacher, the evolution of these models is performed over a sequence of images without teacher help. The texture descriptions initially acquired are applied to recognizing and to extracting objects in the next images. The effectiveness of such recognition and object extraction is monitored automatically over time (a sequence of images). When this effectiveness decreases, the system activates learning processes to improve its models. Such improvement is performed by applying an incremental learning methodology. In order to evolve models, the recognition and control systems have to prepare new training examples for the next learning phase, and they have to choose the most suitable evolution strategy.

ACKNOWLEDGEMENT

The author wishes to thank Michael Hieb for his help in the preparation of this report. This research was done in the Artificial Intelligence Center of George Mason University. Research activities of the Center are sponsored in part by the Defense Advanced Research Projects Agency under grant No. N00014-87-K-0874, administered by the Office of Naval Research, in part by the Office of Naval Research under grant No. N00014-K0226, and in part by the Office of Naval Research under grant No. N00014-88-K0397.


1. MOTIVATION AND JUSTIFICATION

1.1. Introduction

Most vision research has focused on recognizing textures under stationary conditions, i.e., for stable lighting conditions, resolution, and surface positioning (Rosenfeld and Davis, 1979). Relatively little has been done on the problem of recognizing textures under dynamic conditions. The problem of recognizing textures under non-stationary conditions arises in many practical situations and is therefore of significant practical importance. For example, consider the following two scenarios that are within the scope of this proposal:

[scenario-1] an observer (e.g., an autonomous land vehicle) is moving through the outdoor environment, recognizing unstructured objects, and

[scenario-2] an observer (e.g., a satellite surveillance system) is tracking a particular textured object that is moving through the environment.

The common aspect of both scenarios is that a system has to recognize objects in a series of images. Images of such a series are affected by the variability (continuous changes) of conditions under which objects are perceived. In order to recognize an object in such a sequence of images, the system has to update its texture model regarding changes in resolution, lighting, and surface position. In this report, we propose an approach to updating texture models through the integration of learning and recognition processes within a closed loop. The proposed approach incorporates the analysis of system recognition effectiveness performed over a sequence of images in order to detect changes in texture occurrences. If the system's recognition effectiveness decreases, then it activates learning processes of model evolution to protect the system's recognition power for the next images. For both scenarios, the system learns initial texture descriptions from teacher-provided examples. Then, the system updates these descriptions automatically without teacher help.

Let us investigate the interesting features of the above scenarios. For the first scenario, the learning processes have to acquire distinctive descriptions of textures (i.e., descriptions that allow one to discriminate different classes among themselves). In the second scenario, the learning process has to acquire a characteristic description (i.e., a description that concentrates on a specific class and regards all other classes as belonging to the background). In the first scenario, the variability of texture properties can be limited and the number of texture classes can be assumed to be finite. In the second scenario, a given texture must be recognized against a rapidly changing background, with the assumption that this particular texture can also change its characteristics. In such a case, the system has to recognize the particular texture among a large number of background textures. The variability of conditions under which an object is perceived is greater when it is moving rapidly. The activation of learning processes performed to update its texture model can then be more frequent. Fortunately, the learning of an object's characteristic description is simpler and faster than learning distinctive characteristics. The application of incremental learning speeds up the learning processes, as well.

Variability of texture requires the development of system capabilities that will reconfigure, tune, and update its models (knowledge) regarding observer/object movement and the variation of sensing conditions (changing resolution, lighting, surface positioning, and overlapping noise). The approach presented in this paper proposes to investigate the adaptability of vision systems through the development and implementation of a model evolution methodology. Specific research topics of the presented approach include:

(1) The application of a symbolic learning methodology performed in an incremental mode to evolve distinctive and characteristic descriptions of texture,


(2) Texture recognition and image segmentation performed for distinctive and characteristic descriptions of texture (and possible integration of supervised and unsupervised image segmentation),

(3) Integration of recognition and learning processes within a closed-loop system working without teacher help,

(4) The development of a model evolution methodology through the unsupervised extraction of new training examples and the application of different evolution modes and strategies.

As indicated by Rosenfeld and his panelists (1986), relatively little research effort has been devoted to high-level vision and to the development of new methodologies useful for this level, when compared with the immense amount of research conducted on low-level vision. A still unexplored area is the integration of low-level vision with artificial intelligence for the acquisition of visual concepts and for manipulating these concepts in order to update them, remove noise, and store them efficiently. Particularly little has been done in the area of applying machine learning methods to the adaptability of vision systems. The problems of applying symbolic machine learning to texture recognition tasks have been barely explored so far. However, they were initiated and illustrated by simple examples almost two decades ago (Michalski, 1973).

Most work on adaptive vision systems is limited, for example, to the improvement of image segmentation (e.g., Hsiao and Sawchuk, 1989, Bhanu, et al., 1989, 1990), and the application of expert systems to the automatic configuration of vision systems (Liedtke and Ender, 1986, Neumann, 1987, Matsuyama, 1989, Niemann, et al., 1990). Relatively less effort is paid to the adaptability of texture recognition systems dealing with the great variability of texture occurrences.

Researchers generally try to avoid the problem of texture variability by applying more powerful methods for the acquisition of texture characteristics; e.g., Markov random fields (Cross and Jain, 1983), Gibbs random fields (Derin and Elliott, 1987), and the spatial frequency spectrum (Weszka, et al., 1976, Liu and Jernigan, 1990). These methods belong to the class of feature-extraction-oriented methods, where extracting relevant features plays a very important role (DuBuf, et al., 1990). The main problem with traditional approaches is that we do not have a universal feature extraction method that works effectively with noisy and imperfect data. When one considers that training data can be noisy and imperfect, one has to agree that the derived descriptions of texture classes can also contain noisy components. The development of highly adapted feature extraction methods does not make sense, because these features would have to be sensitive over a wide range of texture occurrences.

The application of machine learning methodology to the acquisition of texture descriptions gives us an opportunity to manipulate acquired models (knowledge) in order to improve them and match them to testing data. A symbolic machine learning approach also gives an opportunity to acquire hybrid models, i.e., models that include numeric and symbolic attributes. Recently, we illustrated the advantages of such an approach to a texture recognition problem (Pachowicz, 1990, Bala and Pachowicz, 1990). In applying machine learning to the problems of vision system adaptability, we expect to benefit in the following two ways: (i) we seek an increase in the flexibility, effectiveness and autonomy of vision systems, and (ii) we are looking for verification of the applied learning methodologies in order to improve or redesign them specifically for the difficult domain of vision.

1.2. Incremental Learning and Model Evolution

Since the ability to learn classification decisions is fundamental to intelligent behavior, the application of this ability (often performed in incremental mode) is gaining momentum in many practical engineering situations. Practical applications of gradual knowledge acquisition can


move AI out of the purely theoretical domains into the hard and serious engineering domains. Such domains are not limited to autonomous robotics and machine perception (computer vision) but include all aspects of automatic knowledge acquisition, performed especially without teacher help. This kind of knowledge acquisition and modification is called "model evolution" (or "system evolution").

We already have systems that are able to evolve over time. They are based on the principles of control engineering and multi-level control systems (Jamshidi, 1983); however, their evolution is mainly limited to the adjustment of parameters provided by higher control (adaptation) levels with regard to the modification of given a-priori mathematical models. The capability of manipulating system models is given by the tools of artificial intelligence. Models represented, for example, in rule form can be modified by showing the system new facts. Some of the problems and proposed solutions in model evolution are addressed by Goldfarb (1990). Generally speaking, model evolution integrates multi-disciplinary research. Goldfarb integrates so-called "pattern learning" as symbol formation and recognition processes (perception) with artificial intelligence as symbol manipulation processes. He understands system evolution as system adaptation through learning from the environment, where a system's "structure" is modified dynamically during such learning. He focuses, however, on the design of a learning system called a transformation system (that functions as a learning and a recognition system) and on its implementation incorporating a neural net. The presented examples belong to the class of "toy problems", and the dynamic performance of his system (i.e., learning performed in the incremental mode) is still questionable.

In our approach we integrate computer vision with (i) machine learning as a tool for the acquisition and modification of knowledge (models), (ii) pattern recognition for the recognition and extraction of new training examples, and (iii) control engineering for the activation, management and stabilization of learning and recognition processes. This integration is provided to perform system evolution in a dynamic fashion, which is crucial for autonomous intelligent systems. The dynamic performance is provided through the application of incremental learning and a closed-loop system architecture running concept recognition, learning and verification processes.

The incremental learning methodology has already been implemented successfully in several learning programs --- for example, in the AQ series of learning programs (Michalski and Larson, 1978, Michalski and Chilausky, 1989), in the INDUCE-4 program (Bentrup, et al., 1987), in programs built on the basis of the ID learning program (Utgoff, 1989), and in conceptual clustering (Fisher, 1987, Gennari, et al., 1989). It has been proven that incremental learning increases the speed of learning processes; however, it can give slightly more complex models and a little worse recognition effectiveness (Bentrup, et al., 1987, Reinke and Michalski, 1988). The real advantages of incremental learning, such as the increase in learning speed and the capability of modifying system knowledge/models through new facts, indicate that we can apply this technique to the model evolution problem presented in this proposal.

1.3. Variability of Texture Concepts

The characteristics of visual concepts depend on both the method used to characterize them and the variability of external conditions. Circumstances that cause the variability of external conditions can be divided into the following two groups: (1) projection variability (e.g., changing resolution, different surface orientation) and (2) the influence of natural agents (e.g., changing illumination, overlapping noise).

Changing resolution varies both the repetition rate of texture features and texture patterns (grouped features), and the characteristics of a single pattern. The variability of the repetition rate is primarily used (as a positive effect) to determine surface orientation in the 3-D space


(Kanatani and Chou, 1989). However, the influence of changing resolution on the pattern characteristics is highly negative and difficult to predict. An increase in resolution brings new primitives into evidence, while a decrease in resolution blurs texture toward a constant tone, and some pattern elements may no longer be visible. The variability of texture features obtained at different resolutions has been demonstrated by Roan, et al. (1987) and by Unser and Eden (1989). In several cases, the variability of texture features can be approximated and used to predict features for other resolutions; however, the extrapolation of such characteristics can be inefficient.

Changing lighting is caused by the characteristics of the light source (e.g., intensity, light spectrum), its position in relation to the object surface, and irregularities such as shadows or highlights. For example, the variability of a texture concept caused by changing lighting can easily be observed for specific 3-D textures composed of large 3-D material structures. These structures reflect the propagated light differently depending on whether the light source is focused or spread, on the light position, and on the light spectrum (sunrise, noon, sunset). In particular, sharp shadows can create a light subtexture overlapping the physical texture.

Changing surface orientation influences the projection of the 3-D microstructure of a surface onto a 2-D image. Natural texture perceived from different orientations to the texture surface can bring into evidence new texture elements while hiding other elements by partial or full occlusion. For example, a texture composed of mushrooms will be viewed differently from above (circular hats) than from the side (trunks and side-viewed hats). Moreover, an angled projection of a surface involves variability in the resolution. To avoid some of these problems, one can apply an "active observer" approach (Aloimonos and Shulman, 1989).

Other negative influences on recognition effectiveness include noise, imperfect extraction of texture attributes, and the acquisition of imperfect models. Since the training data contains noise and the extraction of attributes is not able to remove this noise completely, the acquired description of a concept will contain noisy components. Matching an imperfect texture model with a testing event increases the possibility of generating an incorrect classification decision. Manipulation of such a model in order to remove some noise as less significant model components can improve recognition effectiveness (Bala and Pachowicz, 1990).

2. PROPOSED SYSTEM ARCHITECTURE

Since we focus on the acquisition and recognition of texture characteristics through the analysis of a sequence of images (i.e., images illustrating objects affected by variable perceptual conditions), one has to create a system that will arrange the cooperation between different modules in performing a given task and that will demonstrate the developed methods through practical experiments with real-world data. Our proposed system architecture, presented in Figure 1, integrates two main modules of texture data analysis: the learning module, which performs the acquisition of texture descriptions, and the recognition module, which utilizes the acquired descriptions in order to segment a 2-D image and to annotate (recognize) object areas.

The system's closed loop organizes the flow of information between the learning and recognition modules in order to create the system's adaptive capabilities. Learned texture models are stored within a Knowledge Base of concept descriptions. This knowledge is utilized by the recognition module to classify unknown texture samples. The recognition module provides the segmentation of images into homogeneous areas, and it annotates texture areas with an object's name. The final result of such segmentation and recognition is analyzed by the modeling and control module. This module collects and processes the recognition rates for each texture class distinguished in a series of images in order to express dynamic changes in the system's


recognition effectiveness. This module chooses new learning data, and activates and controls learning processes when the system's recognition effectiveness for a given class is decreasing over time. It also provides the verification of evolved models. In this way, the loop is closed and the system is capable of adapting to variable texture occurrences without teacher help.

We assume that the system is working with a time series of input images, and that the image analysis (i.e., image segmentation and texture recognition) can be limited to a single image, i.e., without consideration of motion analysis and optical flow. Both the learning and recognition processes can be applied to a single image, as well. The system should react dynamically to changes in recognition effectiveness through the analysis of recognition results for a given sequence of images.
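
The trigger logic of the modeling and control module can be sketched in Python. This is an illustrative sketch, not code from the report: the class name `RecognitionMonitor`, the baseline window, and the tolerated-drop threshold are all assumptions, since the report does not prescribe a specific criterion for activating learning.

```python
class RecognitionMonitor:
    """Tracks per-class recognition rates over an image sequence and
    decides when model evolution should be activated for a class."""

    def __init__(self, drop_threshold=0.10, window=3):
        self.drop_threshold = drop_threshold  # tolerated drop in recognition rate
        self.window = window                  # first images used as the baseline
        self.history = {}                     # class name -> list of rates

    def record(self, class_name, rate):
        """Store the recognition rate observed for a class on one image."""
        self.history.setdefault(class_name, []).append(rate)

    def needs_evolution(self, class_name):
        """Activate learning when the latest rate falls well below the
        baseline estimated from the first `window` images."""
        rates = self.history.get(class_name, [])
        if len(rates) <= self.window:
            return False  # not enough images to estimate a baseline
        baseline = sum(rates[:self.window]) / self.window
        return baseline - rates[-1] > self.drop_threshold

monitor = RecognitionMonitor()
for rate in [0.95, 0.93, 0.94, 0.92, 0.78]:
    monitor.record("grass", rate)
print(monitor.needs_evolution("grass"))  # recognition dropped, so True
```

A real control module would also verify the evolved models before committing them, as described above; the sketch covers only the activation decision.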

[Figure: the Knowledge Base links the learning and recognition modules; an image acquisition and texture-attribute extraction stage supplies attribute vectors and parameters from the image sequence to both.]

Fig. 1 System architecture

3. LEARNING TEXTURE DESCRIPTIONS

The presented methodology assumes that the acquisition of texture descriptions is performed by incorporating a learning-from-examples methodology (Michalski, 1983). For example, the AQ14 learning program can be applied to perform the functions of a learning kernel. This kernel, however, must be associated with control structures responsible for grouping new learning examples and determining the parameters of the learning program. The key research problems related to learning texture descriptions, explained in the next subsections, include:

1. Development of "model evolution modes", incorporating incremental learning and system-provided training examples.

2. Application of model evolution processes in the case of the acquisition of discriminant and characteristic descriptions.

3. Analysis of the learning module and creation of a new engineering-oriented learning tool.


3.1. Acquisition and Optimization of Initial Texture Descriptions

We assume that initial texture descriptions can be acquired, for example, by running the AQ14 learning program with teacher-provided training examples. These training examples must be extracted from texture areas of the first image of a given time sequence of images representing the changes of perceptual conditions along with observer/object displacement. This means that texture areas for the first learning loop must be preclassified by a teacher through an interactive process with an image analysis system. A teacher also has to confirm the segmentation and recognition decisions when the system applies the acquired initial texture models.

The acquired texture descriptions must then be optimized. The optimization process is especially important for the model evolution approach applied to learning invariant texture characteristics. Optimization protects the system against noise accumulation over time, which decreases the system's recognition effectiveness. Such optimization, performed after the acquisition of initial concept descriptions, incorporates techniques developed, investigated and presented in a separate report (Pachowicz and Bala, 1991).
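
One simple form of such optimization is truncation: complexes that cover very few training examples are treated as likely noise and dropped from the description. The sketch below is an assumption for illustration only (the representation of a complex as a dictionary with a `coverage` count is invented here); the actual optimization techniques are detailed in Pachowicz and Bala (1991).

```python
def truncate_description(complexes, min_coverage=3):
    """Keep only complexes whose training-example coverage suggests they
    describe the concept rather than isolated noisy events."""
    return [c for c in complexes if c["coverage"] >= min_coverage]

description = [
    {"bounds": [(0, 10), (5, 15)], "coverage": 40},  # typical complex
    {"bounds": [(90, 92), (1, 2)], "coverage": 1},   # likely covers a noisy event
]
print(len(truncate_description(description)))  # the noisy complex is dropped
```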

3.2. Incremental Modification of Texture Descriptions

The proposed approach to the evolution of a texture model incorporates incremental learning in such a way that new training examples must be provided by the system to modify texture descriptions. The basic modes of such model evolution are presented in this section, while the choice of new training examples is discussed in section 5.2. First of all, we have to explain the generation of the rule description of a class and the classification of testing data. The description of a class is generated by the AQ14 program as a rule whose preconditional part consists of complexes. A complex is represented as a hyper-rectangle in the n-dimensional feature space. Each complex is created over positive examples in such a way that it cannot contain any negative example. The simplest classification procedure measures the distance from testing data to each complex. If a complex covers the testing event (i.e., the event is inside a hyper-rectangle) then we say that such an event is "matched strictly" (i.e., the distance is equal to zero). If the event is not matched strictly by any complex, then we look for the nearest complex. Such matching is called "flexible matching", and the classification decision yields the class to which the nearest complex belongs.
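
The strict and flexible matching just described can be sketched as follows. This is an illustrative sketch under assumed representations (a complex as a list of per-attribute `(low, high)` intervals, Euclidean distance to the hyper-rectangle), not the AQ14 implementation:

```python
def distance_to_complex(event, complex_bounds):
    """Euclidean distance from an event to a hyper-rectangle; zero when the
    event lies inside the complex, i.e., when the event is matched strictly."""
    d2 = 0.0
    for value, (low, high) in zip(event, complex_bounds):
        if value < low:
            d2 += (low - value) ** 2
        elif value > high:
            d2 += (value - high) ** 2
    return d2 ** 0.5

def classify(event, descriptions):
    """Flexible matching: return the class owning the nearest complex.
    A strictly matched event has distance zero and therefore wins."""
    best_class, best_dist = None, float("inf")
    for class_name, complexes in descriptions.items():
        for bounds in complexes:
            d = distance_to_complex(event, bounds)
            if d < best_dist:
                best_class, best_dist = class_name, d
    return best_class, best_dist

descriptions = {
    "A": [[(0, 5), (0, 5)]],
    "B": [[(10, 20), (10, 20)]],
}
print(classify((3, 3), descriptions))  # strictly matched by a complex of "A"
print(classify((8, 8), descriptions))  # not covered; the nearest complex ("B") decides
```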

[Figure: feature-space plots over attributes x1 and x2; misclassified test data lying outside the complexes of class B becomes covered by a complex added to the class description.]

Fig. 2 Model evolution through generalization


The presented approach to model evolution includes the following three evolution modes: (i) through generalization, (ii) through specialization, and (iii) through generalization-with-specialization. The first mode, evolution through generalization, allows for evolving the model of one class only and has no influence on the evolution of the other class descriptions. This situation occurs (see Figure 2) when test data is classified incorrectly and is not matched strictly by any complex. In such a case, the test data is provided to evolve a given class description through generalization; i.e., the class description is extended by a complex covering new positive example(s) that were not classified to this class (but should have been). When a given class is generalized by positive examples, the descriptions of the other classes do not change, because the new learning data is not covered by any complex of the other class descriptions. So, the description of a given class is extended through the feature space.
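
Evolution through generalization can be sketched as follows. This is a simplifying illustration: the representation and the `margin` parameter are assumptions, and a real learning kernel such as AQ14 would also ensure that the added complex covers no negative examples, a check this sketch omits.

```python
def generalize(description, event, margin=1.0):
    """Extend a class description through the feature space by adding a
    complex (a small hyper-rectangle) covering the new positive event."""
    new_complex = [(v - margin, v + margin) for v in event]
    description.append(new_complex)
    return description

class_a = [[(0, 5), (0, 5)]]        # initial description: one complex
generalize(class_a, (9.0, 9.0))     # misclassified positive event, not strictly matched
print(len(class_a))                 # the description now holds two complexes
```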

The second mode of model evolution, evolution through specialization, allows for evolving the model of a class by showing negative examples. Negative examples are examples that were classified to this class incorrectly and were matched by the description of this class through strict matching. In such a case, the class description must be modified by shrinking or removing a complex that covers the testing data incorrectly (see Figure 3). This test data is then provided to the learning system as a negative example for the class that matched the data during the recognition phase. The learning system can modify the description of this class in several ways. One of the methods can eliminate the matching complex if it is very small and less typical for the concept. The complex must be partitioned if it is large or if it is most typical for the concept. One important feature of such specialization of a class description is that the learning system can manipulate one class independently of the other classes, because it shrinks rather than extends the already acquired description.
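
Evolution through specialization can be sketched as below. The small/large criterion, the choice to partition along the widest attribute, and the representation are all simplifying assumptions made for illustration; the report only requires that a complex strictly matching a negative example be removed (if small and less typical) or shrunk (if large or typical).

```python
def specialize(description, negative_event, small_size=2.0, eps=1e-6):
    """Return the description with every complex that strictly matched the
    negative event either removed (if small) or shrunk away from it."""
    result = []
    for bounds in description:
        covers = all(lo <= v <= hi for v, (lo, hi) in zip(negative_event, bounds))
        if not covers:
            result.append(bounds)       # complex untouched
            continue
        sizes = [hi - lo for lo, hi in bounds]
        if max(sizes) <= small_size:
            continue                    # eliminate a very small, less typical complex
        # Partition along the widest attribute, keeping the larger half,
        # which no longer covers the negative example.
        axis = sizes.index(max(sizes))
        lo, hi = bounds[axis]
        v = negative_event[axis]
        kept = (lo, v - eps) if (v - lo) >= (hi - v) else (v + eps, hi)
        new_bounds = list(bounds)
        new_bounds[axis] = kept
        result.append(new_bounds)
    return result

class_b = [[(0.0, 10.0), (0.0, 10.0)]]
class_b = specialize(class_b, (8.0, 5.0))
print(class_b)  # the surviving complex no longer strictly matches (8.0, 5.0)
```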

[Figure: feature-space plots over attributes x1 and x2; a complex of class B that strictly matched a negative example is partitioned (shrunk) or removed.]

Fig. 3 Model evolution through specialization

The third mode of model evolution, evolution through generalization-with-specialization, allows for evolving the models of several classes at the same time. While evolution through generalization and evolution through specialization modify one class description only, the third mode applies the same learning data as positive examples to evolve one class description and as negative examples to specialize the descriptions of the other classes. Let us consider the distribution of two class descriptions in the feature space presented in Figure 4. When the test data is matched incorrectly to one class and this match is strict, one can evolve this class by applying specialization. But the result of such specialization does not guarantee that the test data will be classified correctly to the second class through flexible matching. The reversal of the classification decision, however, is guaranteed when one description is specialized while the second description is generalized at the same time and by the same examples. In such a case, the testing data applied to the learning phase is used to evolve the descriptions of both classes.

[Figure: feature-space plots over attributes x1 and x2; class B is generalized by the same examples that specialize the other class.]

Fig. 4 Model evolution through generalization-with-specialization

The third mode of model evolution is similar to the second mode; however, it provides larger changes in the distribution of class descriptions in the feature space. While the second mode of model evolution only shrinks a class description, the third mode erodes this description through the aggressive extension of the other class descriptions. In this way, we can group the evolution modes into:
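
The guarantee discussed above (that generalizing one description while specializing the other by the same examples reverses the classification decision) can be illustrated with a small self-contained sketch. The helper names and the removal-only specialization are assumptions for brevity; a fuller treatment would shrink large complexes instead of removing them:

```python
def strictly_matches(event, complexes):
    """True when some complex (list of per-attribute intervals) covers the event."""
    return any(all(lo <= v <= hi for v, (lo, hi) in zip(event, bounds))
               for bounds in complexes)

def evolve_both(wrong_class, true_class, event, margin=0.5):
    """Use the same event as a negative example for the class that strictly
    matched it and as a positive example for its true class."""
    # specialize: drop complexes of the wrong class that cover the event
    wrong_class[:] = [b for b in wrong_class
                      if not all(lo <= v <= hi for v, (lo, hi) in zip(event, b))]
    # generalize: extend the true class with a complex covering the event
    true_class.append([(v - margin, v + margin) for v in event])

class_a = [[(0, 10), (0, 10)]]    # strictly (and wrongly) matched the event
class_b = [[(20, 30), (20, 30)]]  # the event's true class
event = (5.0, 5.0)
evolve_both(class_a, class_b, event)
print(strictly_matches(event, class_a), strictly_matches(event, class_b))
# the classification decision on the event is now reversed
```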

(i) modes that evolve a single class description without significant changes in the spatial relation with other class descriptions, and

(ii) modes that evolve several class descriptions through changing their spatial relationship.

Different modes are applicable to different learning situations. Intuitively, considerably smaller changes should be made by less significant new learning data or when less significant complexes are evolved. Larger changes must be made when the new learning data has significant strength, when it evolves the most typical complexes of a class description, or when modified descriptions would overlap. The distinction between the strength of new learning data and the significance of concept components is necessary to create stabilization mechanisms for model evolution. The problem of controlling the evolution modes is then moved to the evaluation of the recognition phase, data interpretation, verification of learning processes, and the choice of new learning examples.

4. TEXTURE RECOGNITION AND IMAGE SEGMENTATION

The second main module of the system architecture (Figure 1) performs recognition and segmentation of texture images into homogeneous areas. Each texture area corresponding to a specific object should be associated with a coefficient indicating recognition effectiveness. This effectiveness is then analyzed by the integrating module of the recognition and learning systems.


4.1. Recognizing Texture Samples

The input to the recognition process (see Figure 5) is a 2-D picture of decreased resolution, in which each sample is represented by a vector of attributes. The applied recognition processes perform inductive assertion that matches concept descriptions with texture samples. Inductive assertion can classify a single sample or a subset of samples belonging to a local moving window. While the classification process executed for distinctive descriptions is based on the application of flexible matching (i.e., minimization of the distance from the test data to the border of concepts), the classification process executed for a characteristic description is based on the application of strict matching (i.e., where a description must cover the test data --- if it does not, the system classifies the test data to the background).

(Figure 5 content: an input picture of attribute vectors is mapped, through classification rules under control, to an output picture of assigned classification decisions, e.g., < CLASS-4 (c=.79) CLASS-50 (c=.71) CLASS-5 (c=.55) >)

The problem with the application of strict matching is that noisy data, and especially data characterizing an evolving concept with fluid boundaries, usually are not matched strictly, because such test data fall outside of the concept boundary. The computation of a distance between test data and a concept does not make sense in a situation where there is only one class (see the second scenario presented in Section 1.1); i.e., an object description occupies a limited space in the feature space, and the background class is the complement of the object class. In such a case, we transform a given characteristic description into a set of extended descriptions with different confidence levels. An extended description is modeled by "growing" the original description of a concept (Figure 6) through the feature space. The degree of such concept growing is controlled by the system's ability to recognize additionally presented tuning data with sufficiently high confidence. The recognition is performed by strictly matching test data with a series of extended descriptions and returning the highest confidence value.
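The growing of a characteristic description into a series of extended descriptions might be sketched as follows (a hedged illustration: the growth step per level and the confidence values .8/.6 mirror Figure 6, but are otherwise assumed rather than tuned):

```python
# Sketch of "growing" a characteristic description: each interval complex is
# widened by a step g, and the grown description is paired with a lower
# confidence. Recognition strictly matches test data against the series and
# returns the highest confidence that still covers the sample.

def grow(description, g):
    """Widen every attribute interval of every complex by g on both sides."""
    return [{a: (lo - g, hi + g) for a, (lo, hi) in c.items()} for c in description]

def extended_descriptions(description, steps=((0, 1.0), (1, 0.8), (2, 0.6))):
    """Build a series of (grown description, confidence) pairs; the growth
    steps and confidence levels here are illustrative values."""
    return [(grow(description, g), p) for g, p in steps]

def match_confidence(extended, sample):
    """Highest confidence whose description strictly covers the sample,
    or 0.0 (background) if none does."""
    for desc, p in extended:  # ordered from tightest (highest confidence)
        if any(all(lo <= sample[a] <= hi for a, (lo, hi) in c.items()) for c in desc):
            return p
    return 0.0
```

Samples just outside the original concept boundary are thus recovered at reduced confidence instead of being pushed into the background class.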

The second problem with texture recognition through matching test samples with rule descriptions of texture classes is that the image data and concept descriptions are large enough to slow down the recognition process. Designing parallel algorithms and a hardware processor for matching both distinctive and characteristic descriptions of texture concepts is therefore crucial to performing the object recognition and image segmentation task.

Fig.5 Recognition of texture samples through matching with distinctive and characteristic descriptions


Characteristic description:

[x1=30..32][x2=27..29] v [x1=33..36][x2=29..32] v [x1=31..34][x2=31..34] v [x1=36..38][x2=33..36] => (p=1.)

Extended descriptions:

[x1=29..33][x2=26..30] v [x1=30..37][x2=28..35] v [x1=35..39][x2=32..37] => (p=.8)

[x1=28..34][x2=25..31] v [x1=29..37][x2=27..34] v [x1=34..40][x2=31..38] => (p=.6)

Fig.6 Object characteristic description and its extended representation that can be applied to match test data (axes x1 and x2; the target description occupies a limited region of the feature space surrounded by the background)

4.2. Image Segmentation and Annotation

Since the recognition process transforms attribute vectors to a list of classification hypotheses, such a "picture" is provided as the input to unify the classification hypotheses in order to segment and annotate texture areas. We present two unification approaches for such input data. The first approach is based on the pyramidal transformation of input classification hypotheses, where the unification is performed hierarchically towards a final set of decisions. In this method, the resolution of the data is decreased when moving up the pyramid. The second method preserves the resolution by propagating data to the neighboring elements.
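The pyramidal unification can be illustrated with a minimal sketch (an assumption made here: 2x2 blocks merged by majority vote; the report does not fix the block size or the voting rule):

```python
# Illustrative sketch of pyramidal unification: classification hypotheses on a
# raster are merged hierarchically, each level replacing a 2x2 block by the
# majority label, so resolution halves while local decisions are unified.

from collections import Counter

def pyramid_level(labels):
    """One pyramidal step: majority vote over non-overlapping 2x2 blocks."""
    h, w = len(labels), len(labels[0])
    out = []
    for i in range(0, h - 1, 2):
        row = []
        for j in range(0, w - 1, 2):
            block = [labels[i][j], labels[i][j + 1],
                     labels[i + 1][j], labels[i + 1][j + 1]]
            row.append(Counter(block).most_common(1)[0][0])
        out.append(row)
    return out
```

Applying the step repeatedly yields the hierarchy of unified decisions at decreasing resolution described above.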

Since the unification can include negative secondary effects such as blurring, a segmented image will not preserve sharp borders between texture areas, which can be important for relatively small objects. The extraction of texture attributes does not itself counteract the blurring effect; thus, this effect can naturally propagate through the consecutive steps of recognition and segmentation. The reduction of the blurring effect can be achieved both through additional filtering applied before unification (Hsiao and Sawchuk, 1989) and through the application of an edge-preserving method. Edge information can be received from other data processing modules working in parallel, such as color, contour, or stereo. A special image can be created that represents only significant edges extracted from other kinds of data. The purpose of such an edge map is to cut the propagation links between neighboring elements of the raster of classification hypotheses. The execution of unification processes will then decrease the blurring effect.
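The edge-map idea --- cutting propagation links between neighboring hypothesis elements --- might look like the following sketch (under stated assumptions: `edge_between` is a hypothetical predicate standing in for the edge map built from contour, color, or stereo data, and propagation is a 4-neighbour majority vote):

```python
# Sketch of edge-preserving propagation: label smoothing by neighbour voting,
# where links crossing a significant edge are cut so that smoothing stops at
# borders between texture areas.

from collections import Counter

def propagate_once(labels, edge_between):
    """Relabel each element by majority vote of itself and its 4-neighbours,
    ignoring neighbours separated by an edge (edge_between(p, q) -> True)."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for i in range(h):
        for j in range(w):
            votes = [labels[i][j]]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and not edge_between((i, j), (ni, nj)):
                    votes.append(labels[ni][nj])
            out[i][j] = Counter(votes).most_common(1)[0][0]
    return out
```

Without edges, an isolated label is flooded by its neighbours (noise removal); with the edge map cutting its links, the same element survives unification --- which is exactly the intended protection of small textured objects.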



5. INTEGRATING RECOGNITION AND LEARNING PROCESSES

In order to build an evolving system, one has to design a feedback loop that provides both the information and the control necessary to modify a system "structure". In our case, the feedback loop links the recognition module with the learning module via the module of modeling and control (Figure 1). The information provided by the recognition system consists of a segmented texture image, along with annotated texture areas and the recognition parameters associated with these areas. The system then performs an analysis of recognition effectiveness for texture areas observed on both the current image and a sequence of previous images. Based on this analysis, the system activates learning processes that evolve texture models, selects new learning data from the image, and provides this data to the learning system.

Since training data is selected by the system itself, this data must represent classes properly. If this data is incorrect, then an evolved model could give worse recognition results. It can happen that a given model is evolved toward another class and finally represents both texture classes. Therefore, the process of correctly evolving object models is one of the most difficult problems to be solved by machine learning. In this section, we present the basic approach to model evolution that incorporates the following processes:

1. The acquisition and analysis of time characteristics of system recognition effectiveness,

2. Automatic selection of new training events,

3. The development of evolution strategies activating and controlling learning processes,

4. Verification of applied model evolution processes.

5.1. Analysis of System Recognition Effectiveness

While the recognition module performs typical static (single-image) recognition and segmentation, the analysis of system recognition and segmentation effectiveness is necessary to decide (i) when incremental learning must be executed, and (ii) which learning strategies must be applied. The simplest time characteristics of system recognition effectiveness can be obtained by collecting recognition parameters for each class of texture represented on a sequence of images. The obtained curve can then be characterized by both static and dynamic parameters. While static parameters represent the current discrimination power of texture models, dynamic parameters project the future trend of the recognition effectiveness for each class of texture. Both static and dynamic characteristics of recognition effectiveness must be considered to initiate model evolution processes and to choose the most suitable evolution strategy (see next section).
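The static and dynamic parameters of the effectiveness curve can be sketched as follows (an illustration only: the report does not prescribe a least-squares trend, a one-step extrapolation horizon, or these threshold values):

```python
# Sketch of the time characteristics: per-class recognition coefficients
# collected over the image sequence give a static parameter (the latest value)
# and a dynamic one (a least-squares slope used to extrapolate the trend);
# learning is triggered when the extrapolation crosses a threshold.

def slope(values):
    """Least-squares slope of equally spaced observations."""
    n = len(values)
    xm = (n - 1) / 2.0
    ym = sum(values) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(values))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def should_evolve(effectiveness, threshold=0.7, horizon=1):
    """Trigger model evolution if the current value or its one-step
    extrapolation falls below the threshold (assumed decision rule)."""
    current = effectiveness[-1]
    predicted = current + horizon * slope(effectiveness)
    return current < threshold or predicted < threshold
```

The extrapolated value lets the system react one image before the drop actually crosses the threshold, which is the intent of the "dynamic parameters" above.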

The second task performed by the analysis of system recognition effectiveness is the verification of an applied learning process. Let us assume that the system decided to activate incremental learning to update a given texture model. When the model is updated, the recognition is repeated and the improvement of recognition effectiveness is compared with the old result. In the case where the recognition effectiveness is increased sufficiently, the system continues recognition processes for the next image. But if the improvement in the recognition effectiveness is not satisfactory, the system has to provide new learning examples and repeat the learning process. The execution of such a scenario is very difficult, and it has to be implemented gradually. We begin with a simple problem in which positive examples are provided to the incremental learning system (to generalize the texture model). We then include a more sophisticated approach that manipulates a texture description in order to specialize it, and


finally integrate an approach that generalizes and specializes texture descriptions at the same time in order to evolve texture models more aggressively.

5.2. Automatic Selection of New Training Events

The evolution of texture models is performed by presenting new training events to the system and executing incremental learning processes. Since the system is initially trained by a teacher indicating image areas representing texture classes on the first image of a sequence, it has to work without the help of a teacher through the next images. This means that learning data for model evolution must be selected by the system itself. If this data is incorrect (i.e., new training events do not properly represent texture classes), then evolved models could give worse recognition results.

Fig.7 Image areas used for the automatic choice of new training events (a segmented texture image with object surface areas and border areas)

Since new training data has a direct influence on the system's evolution capabilities, it must be chosen very carefully from segmented and annotated texture images. We distinguish the following two approaches that can be applied to such an automatic choice of new training data (see Figure 7):

[approach 1] new training events are chosen from an area close to the border between two texture surfaces, and

[approach 2] new training events are chosen from a texture surface area that is not influenced by border area mis-classifications.

The first approach is generally more difficult and involves cooperation between different modules of a computer vision system (e.g., texture, contour, color and stereo). The second approach to the automatic selection of new training events is based on the extraction of training events from a unified area of texture surface that lies away from the border between texture surfaces. The unified texture areas are acquired during segmentation, where classified image elements are linked together in order to indicate these surfaces. In this way, unified texture areas are less noisy than the areas before the application of unification processes. By overlapping the unified image with the image before such unification, the system can extract candidate training events for the next learning step. There can be several classes of such events (see Figure 8); e.g.,


(i) isolated mis-classified texture elements, (ii) mis-classified clusters of texture elements, and (iii) mis-classified spots of a texture.

These mis-classified texture elements can then be used as training events applied to evolve texture descriptions. It is obvious that isolated mis-classified texture elements have less influence on the increase of the recognition effectiveness of a given class, while mis-classified spots of a texture have a greater influence on such an increase in the recognition effectiveness. On the other hand, model evolution incorporating isolated mis-classified texture elements as new training events is safer (the evolution is executed more carefully) than model evolution incorporating mis-classified spots of a texture. Mis-classified spots can easily represent small shapes of another overlapped texture, while isolated mis-classified texture elements have less chance to do so. The choice of new training events must represent a kind of balance between more progressive but risky model evolution, and less progressive but safer model evolution. The choice of new training events suitable to the current state of model evolution should be managed by applying a model evolution strategy.
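The three classes of candidate events can be separated by a simple connected-components sketch (the size threshold distinguishing clusters from spots is an assumption for illustration, not a value taken from the report):

```python
# Sketch of classing mis-classified elements (Figure 8): 4-connected components
# of mis-classified pixels are found inside a unified surface area; size 1 is an
# isolated element, small components are clusters, and large ones are spots
# (risky as training events, since a spot may be a patch of another texture).

def components(mask):
    """4-connected components of True cells; returns a list of cell lists."""
    h, w = len(mask), len(mask[0])
    seen, comps = set(), []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and (i, j) not in seen:
                stack, comp = [(i, j)], []
                seen.add((i, j))
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y+1, x), (y-1, x), (y, x+1), (y, x-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                comps.append(comp)
    return comps

def classify_events(mask, cluster_max=4):
    """Split mis-classified components into the three classes of Figure 8."""
    kinds = {'isolated': [], 'cluster': [], 'spot': []}
    for comp in components(mask):
        if len(comp) == 1:
            kinds['isolated'].append(comp)
        elif len(comp) <= cluster_max:
            kinds['cluster'].append(comp)
        else:
            kinds['spot'].append(comp)
    return kinds
```

A cautious evolution strategy would then draw its training events from the 'isolated' class, a more aggressive one from the 'spot' class.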

Fig.8 Selection of new training events: a texture image after the recognition step is compared with the texture image after segmentation and separation into surface areas, yielding isolated mis-classified texture elements, mis-classified clusters of texture elements, and mis-classified spots of a texture



5.3. Development of Model Evolution Strategies

A model evolution strategy is a schema applied to evolve texture models. This schema integrates the analysis of model recognition effectiveness, the automatic choice of new training events, and the verification of evolved models. An evolution strategy generates a "plan" that must be executed by the learning module. A single node of such a plan indicates the evolution mode to be applied, and provides training events.

Fig.9 A diagram of recognition effectiveness before and after execution of an evolution strategy (case 1: slightly decreasing recognition effectiveness through the sequence of last images; strategy X is initiated when the extrapolated characteristics approach the threshold)

The execution of an evolution strategy depends on the characteristics of system recognition effectiveness. Let us consider the following two situations:

(1) the recognition effectiveness decreases slightly through the sequence of last three images (see Figure 9), and

(2) the recognition effectiveness decreases rapidly (see Figure 10).

In the first case, the applied evolution strategy chooses new training data as isolated mis-classified texture elements. The model evolution is performed by applying an evolution mode depending on the mis-classification characteristics of the new training data. The applied strategy is then verified by applying the evolved models on the same image. Based on this verification, the applied evolution strategy can be modified (to increase/decrease the evolution effectiveness) or accepted to be performed on the next texture image. Such an evolution strategy can decide whether the applied evolution must be performed once or continuously (over the next input images) as long as it is necessary.

In the second case, the system has to switch its evolution strategy in order to apply a more effective one. For example, the system can try to improve its recognition effectiveness in the following way: the system recalls the former texture image, increases its recognition requirements satisfied during the verification stage, and applies a more effective model evolution strategy. Such a strategy can involve the selection of more influential training data (i.e., mis-classified spots of texture elements) and more aggressive modes of model evolution. The



former image and increased recognition requirements cause the system to adapt more closely to this image. Such an adaptation under increased restrictions is used to protect the system against a significant drop in the recognition effectiveness when the models are applied to the next image.
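The switch between the two cases can be summarized in a small control sketch (the strategy names, drop thresholds, and returned fields are all illustrative assumptions, not values from the report):

```python
# Sketch of strategy selection: a slight decline keeps the cautious strategy
# (isolated elements, careful evolution modes); a rapid drop recalls the former
# image, raises the verification requirement, and selects the aggressive one.

def choose_strategy(effectiveness, slight_drop=0.05, rapid_drop=0.15):
    """Map the recent drop in recognition effectiveness to an evolution strategy."""
    drop = max(effectiveness) - effectiveness[-1]
    if drop >= rapid_drop:
        return {'strategy': 'aggressive', 'events': 'spots',
                'recall_former_image': True, 'raise_threshold': True}
    if drop >= slight_drop:
        return {'strategy': 'careful', 'events': 'isolated',
                'recall_former_image': False, 'raise_threshold': False}
    return None  # effectiveness stable: no evolution needed
```

The returned record corresponds to one node of the "plan" generated by an evolution strategy: which evolution mode to apply and which training events to feed it.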

Fig.10 A diagram of recognition effectiveness improved through the application of two evolution strategies, i.e., strategy Y and strategy Z (case 2: rapidly decreasing recognition effectiveness through the sequence of last images; an evolution process is initiated on the recalled former image under a new, increased threshold, and the extrapolated characteristics then follow the expected characteristics)

5.4. Verification of Evolved Models

There is no doubt that the verification of applied processes is necessary in complex and autonomous systems. Such verification is essential both for correct evolution of system models and for the choice and control of system evolution strategies. The verification of evolved models is performed within the system main loop (Figure 1) where evolved models are applied by the recognition module. Evolved models are applied either on the same texture image which was used to modify these models or on a pair of images representing significant changes in external conditions. The goal of verification processes can vary depending on the request generated by an evolution strategy that currently controls evolving processes. Let us consider the following two verification approaches:




(1) statistical approach --- applied to compare the improvement in system recognition effectiveness gained through the application of models evolved under an evolution strategy,

(2) structural approach --- applied to compare the differences in shape of textured objects recognized and segmented in a given image by texture models before and after evolution.

The first approach (which is the simplest one) is based on the statistical measure of recognition effectiveness computed for each class of texture. The system simply has to compare the recognition parameters obtained before and after the applied model evolution. While such a comparison does not involve the analysis of differences in shape, the second approach follows the borders between texture surfaces. Such structural analysis can be useful to analyze the consequences of more progressive but risky model evolution strategies.
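A minimal sketch of the statistical verification approach follows (the per-class acceptance rule and the improvement margin are assumptions made here for illustration):

```python
# Sketch of statistical verification: recognition parameters per texture class
# are compared before and after model evolution, and the evolved models are
# accepted only if every class improves by at least a required margin.

def verify_evolution(before, after, required_gain=0.0):
    """before/after map class name -> recognition effectiveness on the same image.
    Returns (accepted, per-class gains)."""
    gains = {cls: after[cls] - before[cls] for cls in before}
    accepted = all(g >= required_gain for g in gains.values())
    return accepted, gains
```

A rejected evolution would send the system back to select new learning examples and repeat the learning process, as described in Section 5.1.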

6. CONCLUSIONS

This report has presented a machine learning approach to the problem of invariant recognition of texture concepts. The proposed approach develops a model evolution methodology for the creation of a highly autonomous system capable of working without teacher help. The innovation of the presented methodology is in the following areas:

• integration of recognition and learning processes within a vision system,

• organization of recognition and learning processes within a closed loop,

• evolution of initially acquired models through unsupervised extraction of new training examples,

• dynamic adaptation of the system to a new environment and changing perceptual conditions.

Preliminary experiments with the model evolution approach to system adaptability were already carried out incorporating the AQ14 learning program. The results received and the final conclusions were used in the development of the approach presented in this paper.

7. REFERENCES

Aloimonos, Y. and D. Shulman, "Learning Early-Vision Computations", Journal of the Optical Society of America A, pp. 908-919, 1989.

Bala, J.W. and P.W. Pachowicz, "Recognition of Noisy and Imperfect Texture Concepts via Iterative Optimization of Their Rule Descriptions", submitted to Int. J. of Pattern Recognition and Artificial Intelligence, 1990.

Bentrup, J.A., G.J. Mehler and J.D. Riedesel, "INDUCE 4: A Program for Incrementally Learning Structural Descriptions from Examples", UIUCDCS-F-87-958, Computer Science Department, University of Illinois, Urbana, 1987.

Bhanu, B., S. Lee and J. Ming, "Adaptive Image Segmentation Using a Genetic Algorithm", Proc. Image Understanding Workshop, Palo Alto, CA, pp. 1043-1055, 1989.


Bhanu, B., S. Lee and J. Ming, "Self-Optimizing Control System for Adaptive Image Segmentation", Pittsburgh, PA, pp. 583-596, 1990.

Cross, G.R. and A.K. Jain, "Markov Random Field Texture Models", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-5, No. 1, pp. 149-163, 1983.

Derin, H. and H. Elliot, "Modeling and Segmentation of Noisy and Textured Images Using Gibbs Random Fields", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, No. 1, pp. 39-55, 1987.

DuBuf, J.M.H., M. Kardan and M. Spann, "Texture Feature Performance for Image Segmentation", Pattern Recognition, Vol. 23, No. 3-4, pp. 291-309, 1990.

Fisher, D.H., "Knowledge Acquisition Via Incremental Conceptual Clustering", Machine Learning, Vol. 2, pp. 139-172, 1987.

Gennari, J.H., P. Langley and D. Fisher, "Models of Incremental Concept Formation", Artificial Intelligence, Vol. 40, pp. 11-61, 1989.

Goldfarb, L., "On the Foundations of Intelligent Processes - I. An evolving model for pattern learning", Pattern Recognition, Vol. 23, No. 6, pp. 595-616, 1990.

Haralick, R.M., K. Shanmugan and I. Dinstein, "Texture features for image classification", IEEE Trans. Systems, Man and Cybernetics, Vol. SMC-3, pp. 610-621, 1973.

Hawkins, J.K., "Textural Properties for Pattern Recognition", in Picture Processing and Psychopictorics, B.C. Lipkin and A. Rosenfeld (eds.), Academic Press, New York, pp. 347-370, 1970.

Hsiao, J.Y. and A.A. Sawchuk, "Unsupervised Textured Image Segmentation Using Feature Smoothing and Probabilistic Relaxation Techniques", Computer Vision, Graphics and Image Processing, Vol. 48, pp. 1-21, 1989.

Jamshidi, M., Large-Scale Systems: Modeling and Control, North-Holland, 1983.

Kanatani, K.-I. and T.-C. Chou, "Shape from Texture: General Principle", Artificial Intelligence, Vol. 38, pp. 1-48, 1989.

Liedtke, C.-E. and M. Ender, "A Knowledge Based Vision System for the Automated Adaptation to New Scene Contents", Proc. 8th Int. Conf. on Pattern Recognition, Paris, pp. 795-797, 1986.

Liu, S.-S. and M.E. Jernigan, "Texture Analysis and Discrimination in Additive Noise", Computer Vision, Graphics and Image Processing, Vol. 49, pp. 52-67, 1990.

Matsuyama, T., "Expert Systems for Image Processing: Knowledge-Based Composition of Image Processes", Computer Vision, Graphics and Image Processing, Vol. 48, pp. 22-49, 1989.

Michalski, R.S., "AQVAL/1 - Computer Implementation of a Variable-Valued Logic System VL1 and Examples of its Application to Pattern Recognition", Proc. of the First Int. Joint Conf. on Pattern Recognition, pp. 3-17, Washington, DC, 1973.


Michalski, R.S. and J.B. Larson, "Selection of most representative training examples and incremental generation of VL1 hypotheses: the underlying methodology and descriptions of programs ESEL and AQ11", Report 867, Department of Computer Science, University of Illinois, Urbana, 1978.

Michalski, R.S. and R.L. Chilausky, "Learning by Being Told and Learning from Examples: An experimental comparison of the two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis", Policy and Information Systems, Vol. 4, pp. 125-160, 1980.

Michalski, R.S., "A Theory and Methodology of Inductive Learning", in Machine Learning: An Artificial Intelligence Approach, Tioga Publishing, Palo Alto, CA, pp. 83-134, 1983.

Michalski, R.S., I. Mozetic, J. Hong and N. Lavrac, "The AQ15 Inductive Learning System: An Overview and Experiments", ISG 86-23, UIUCDCS-R-86-1260, Department of Computer Science, University of Illinois, Urbana, 1986.

Neumann, B., "Towards Computer Aided Vision System Configuration", in Artificial Intelligence II: Methodology, Systems, Applications, Ph. Jorrand and V. Sgurev (eds), pp. 385-393, Elsevier Pub., 1987.

Niemann, H., H. Bruenig, R. Salzbrunn and S. Schroeder, "A Knowledge-Based Vision System for Industrial Applications", Machine Vision and Applications, Vol. 3, pp. 201-229, 1990.

Pachowicz, P.W., "Low-Level Numerical Characteristics and Inductive Learning Methodology in Texture Recognition", Proc. IEEE International Workshop on Tools for AI, Washington, D.C., pp. 91-98, October 1989.

Pachowicz, P.W., "Local Characteristics of Binary Images and Their Application to the Automatic Control of Low-Level Robot Vision", Computer Vision, Graphics and Image Processing, 1990a (in press).

Pachowicz, P.W., "Integrating Low-Level Features Computation with Inductive Learning Techniques for Texture Recognition", International Journal of Pattern Recognition and Artificial Intelligence, Vol. 4, No. 2, pp. 147-165, 1990b.

Pachowicz, P.W., "Learning-Based Architecture for Robust Recognition of Variable Texture to Navigate in Natural Terrain", Proc. IEEE International Workshop on Intelligent Robots and Systems '90, Tsuchiura, Ibaraki, Japan, pp. 135-142, July 1990c.

Pachowicz, P.W., "Advancing Learning-from-Examples Tools for Intelligent Autonomous Systems: Engineering requirements and proposed solutions", 1991 (in preparation).

Pachowicz, P.W. and J. Bala, "Advancing Texture Recognition through Machine Learning and Concept Optimization: Part 1 - Acquiring optimal concept prototypes for texture classes", 1991 (in preparation).

Reinke, R.E. and R.S. Michalski, "Incremental Learning of Concept Descriptions: A Method and Experimental Results", Machine Intelligence 11, J.E. Hayes, D. Michie and J. Richards (eds), Clarendon Press, Oxford, pp. 263-288, 1988.

Roan, S.J., J.K. Aggarwal and W.N. Martin, "Multiple Resolution Imagery and Texture Analysis", Pattern Recognition, Vol. 20, No. 1, pp. 17-31, 1987.



Rosenfeld, A. and L. Davis, "Image Segmentation and Image Models", Proc. of IEEE, Vol. 67, No. 12, pp. 1764-1772, 1979.

Rosenfeld, A., J. Kender, M. Nagao, L. Uhr, W.B. Thompson, V.A. Kovalevsky, D. Sher and S. Tanimoto, "DIALOG Expert Vision Systems: Some Issues", Computer Vision, Graphics and Image Processing, Vol. 34, pp. 99-117, 1986.

Unser, M. and M. Eden, "Multiresolution Feature Extraction and Selection for Texture Segmentation", IEEE Trans. Pattern Analysis and Mach. Intel., Vol. PAMI-11, No. 7, pp. 717-728, 1989.

Utgoff, P.E., "Incremental Induction of Decision Trees", Machine Learning, Vol. 4, pp. 161-186, 1989.

Weszka, J.S., C.R. Dyer and A. Rosenfeld, "A Comparative Study of Texture Measures for Terrain Classification", IEEE Trans. Systems Man Cybern., Vol. SMC-6, No. 4, pp. 269-285, 1976.
