
The ORT Braude College

7th

Interdisciplinary Research Conference

Book of Abstracts

September 19-20, 2011

Pastoral Kfar Blum Hotel, Upper Galilee

ISBN 978-965-91208-5-7

Biotechnology Engineering

Profiling Expression of Structural and Functional Proteins in

Engineered Muscle Tissue Following Application of

Various Stretching Regimes

Rosa Azhari¹, Iris Bonshtein¹, Ehud Kroll²

¹Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901750, Fax: 972-4-9901839, E-mail: [email protected]

²Faculty of Aerospace Engineering, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel, Tel: 972-4-8293813, Fax: 972-4-8292030, E-mail: [email protected]

Keywords: Tissue engineering, skeletal muscle, scaffolds, electrospinning, mechano-transduction

Mechano-transduction has been demonstrated to play a major role in affecting proliferation

and differentiation of skeletal muscle cells and determining the morphology and properties of

the tissue obtained. Therefore, various mechanical stimulation techniques have been

incorporated as essential components in novel tissue engineering methods developed for

obtaining functional skeletal muscle tissue.

The main objective of this research is to study the effects of applying various stretching

regimes on skeletal muscle cells by profiling the expression of various functional and

structural proteins in the engineered tissue.

Electrospun, hybrid, micro-fibrous scaffolds developed in our group, combining a synthetic

polymer, poly-ε-caprolactone (PCL), with natural connective tissue components (chondroitin

sulfate and gelatin), were used in this study. A computer-controlled stretching machine was

designed to apply various unidirectional and cyclic stretching patterns on cell-seeded

scaffolds, clamped horizontally in a circulating medium bioreactor. Stresses induced in the

constructs were measured continuously. The morphology of tissue obtained was analyzed

using histological methods and the expression of intracellular and extracellular functional and

structural proteins was demonstrated by specific immunostaining and confocal microscopy.

Proteins and RNA were extracted from the constructs using T-PER (Pierce) and the RNeasy

mini kit (Qiagen) respectively.

No significant differences were found between the morphology and characteristics of tissue

produced on non-stretched samples and ones stretched in a slow, gradual manner, with

deformations up to 50% of the original length (stresses up to 6 MPa). On the other hand, the

same deformation applied by abrupt stretches (each 5% of the original length) induced better

alignment of myotubes and higher expression of intracellular and extracellular functional and

structural proteins, including myosin heavy chain, alpha-actinin, desmin, myogenin, MyoD,

fibronectin and integrin. Application of a cyclic pattern of sinusoidal stretches was also

examined. Affymetrix Gene Chip microarrays were used to quantify up-regulation and down-

regulation of gene expression in the tissue, following the change in the mechanical

stimulation regimes. It was shown that different patterns of application of mechanical stimuli

induce altered cell responses, even if the final stresses applied are of the same magnitude.
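
The three stretching regimes described in this abstract (a slow, gradual ramp, abrupt 5% steps, and sinusoidal cycles) can be sketched as simple strain-versus-time profiles. The sketch below is illustrative only, not the authors' control software, and all sampling parameters in it are hypothetical.

```python
import math

def slow_ramp(final_strain=0.50, n_points=100):
    """Gradual, continuous stretch up to final_strain."""
    return [final_strain * i / (n_points - 1) for i in range(n_points)]

def abrupt_steps(final_strain=0.50, step=0.05):
    """Stepwise stretch: jump by `step` until final_strain is reached."""
    n_steps = round(final_strain / step)
    return [step * (i + 1) for i in range(n_steps)]

def sinusoidal(amplitude=0.05, n_cycles=3, points_per_cycle=40):
    """Cyclic sinusoidal stretching about the rest length."""
    return [amplitude * math.sin(2 * math.pi * i / points_per_cycle)
            for i in range(n_cycles * points_per_cycle)]

ramp = slow_ramp()
steps = abrupt_steps()
# Both regimes end at 50% strain; only the path (gradual vs abrupt) differs,
# which is exactly the variable the study isolates.
print(max(ramp), steps[-1], len(steps))
```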

Acknowledgement: This study was supported by grants from the Israeli Science Foundation

(618-08) and the ORT Braude College Research Committee.

Page 3: The ORT Braude College

The ORT Braude College 7th

Interdisciplinary Research Conference

2 | Biotechnology Engineering

Coca-Cola Consumption as a Risk Factor for Experimental

Fatty Liver Disease

Maria Grozovski¹, Michal Maoz², Nimer Assy³

¹Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901895, Fax: 972-4-9901839, E-mail: [email protected]

²Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-72-2463672, Fax: 972-4-9901839, E-mail: [email protected]

³Ziv Medical Center, P.O. Box 1008, Safed 13100, Israel, Tel: 972-4-6828442, Fax: 972-4-6828441, E-mail: [email protected]

Keywords: Coca-Cola, fatty liver, antioxidants, triglycerides, lipid peroxidation

While the rise in non-alcoholic fatty liver disease (NAFLD) parallels the increase in obesity

and diabetes, a significant increase in Coca-Cola consumption in industrialized countries has

also occurred. The increased consumption of Coca-Cola is linked with complications of the

metabolic syndrome.

The aim of the present study is to determine whether there is a relationship between Coca-

Cola consumption and experimental non-alcoholic fatty liver and to evaluate the effects of

aqueous Inula viscosa extract on the hepatic lipid content and oxidative stress parameters in

"Coca-Cola" rats.

Forty-eight male Sprague–Dawley rats, divided into two groups, were studied: Rats on a

standard rat chow diet for 12 weeks (24 rats) and rats on a fructose enriched diet (FED) for

12 weeks (24 rats). Twelve weeks after the initiation of the FED, each diet group was randomly

divided into four treatment groups: the first group remained untreated, the second group was

given Inula viscosa extract only (5.6 mg/kg per day), the third group was given 4 ml regular

Coca-Cola per day and the fourth group was given 2.8 mg/kg of Inula viscosa extract

together with 4 ml of regular Coca-Cola per day for 4 weeks. Hepatic extracts from the rat

livers underwent biochemical assays to determine the levels of cholesterol, triglycerides,

antioxidants, malonic dialdehyde (MDA) and protein C.

Coca-Cola increased hepatic triglyceride, protein C and MDA (+270%, +30% and +99%,

respectively) and decreased hepatic levels of alpha–tocopherol and paraoxonase (PON)

activity (-25% and -44%, respectively) in healthy rats as compared to the control group.

Coca-Cola increased hepatic triglyceride and hepatic cholesterol (+78% and +5%,

respectively), and decreased the hepatic levels of MDA, alpha-tocopherol and paraoxonase

(PON) activity (-13%, -30% and -32%, respectively) in FED rats as compared to untreated

rats. The combination of Inula viscosa extract and Coca-Cola reduced hepatic triglyceride,

hepatic cholesterol, hepatic MDA and hepatic protein C and increased paraoxonase activity

in the same group compared with untreated rats. Coca-Cola consumption is a risk factor for

fatty liver disease and also promotes liver inflammation in FED rats. Administration of

aqueous Inula viscosa extract at a dose of 5.6 mg/kg per day for 4 weeks to rats with

fatty liver disease and to "Coca-Cola" rats may improve hepatic lipid metabolism and cause

other favorable changes in the hepatic oxidative-antioxidative milieu.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.


Molecular Modeling Studies of Curing of Styrene-Free

Unsaturated Polyester Alkyd

Dafna Knani¹, David Alperstein², Iris Mironi-Harpaz³, Moshe Narkis³

¹Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901875, Fax: 972-4-9901839, E-mail: [email protected]

²Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901944, Fax: 972-4-9901886, E-mail: [email protected]

³Faculty of Chemical Engineering, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel, Tel: 972-4-8292937, Fax: 972-4-8295672, E-mail: [email protected]

Keywords: Styrene-free unsaturated polyester, Density Functional Theory (DFT), Monte

Carlo simulation, crosslinking, binding energies, free radicals

Crosslinking of Unsaturated Polyesters (UPs) without styrene is a new process. Thermoset

UPs are usually obtained by the crosslinking of alkyd chains dissolved in an unsaturated

reactive monomeric diluent, usually styrene. UP alkyd chains (without styrene) were

intrinsically cured into a cross-linked resin in the presence of peroxide. Simulation of the

crosslinking reaction was used to define the species involved in the process.

Two commercial software tools were used for the simulation. DMOL3 by Accelrys, a

quantum code based on density functional theory (DFT), was used for the analysis of the

scission reaction species. DMOL3 was applied to the scission reaction in the following way:

1) All possible intermediate species and products of the scission reaction were formulated.

2) Structures of all possible intermediate species were built and optimized using DMOL3

structural optimization.

3) Structures of all possible products were built and optimized using a standard force-field

minimizer.

4) The lowest energy products were taken as the scission reaction products.

The other simulation tool used was "Networks" by Accelrys, which is a polymer crosslinking

simulation tool based on a Monte Carlo procedure. "Networks" was used for the simulation,

characterization and analysis of the forming cross-linked network.

Binding energies for all the possible radicals obtained in various polymer-chain scission

reactions were calculated using DMOL3 software. The results were compared to the binding

energy of the polyester. The binding energy of the polyester was found to be −5409 kcal/mol.

According to the results, it seems that cleavage of the polyester chain occurs preferentially to

yield a particular pair of free radicals (structures shown in the original figure).

Although neither of these radicals is the lowest-energy radical that may be obtained, the

sum of their binding energies is the lowest.

The scission point on the polyester chain where cleavage preferentially occurs was thus

identified using binding energy calculations.
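
The selection logic described above, summing the binding energies of each candidate radical pair and taking the minimum, can be sketched as follows. The candidate scission points and their energies are hypothetical placeholders, since the abstract reports only the chain's total binding energy of −5409 kcal/mol.

```python
# Candidate scission points mapped to the binding energies of the two
# radicals each cleavage would produce (kcal/mol). These labels and
# numbers are hypothetical placeholders, not the DMOL3 results.
candidate_scissions = {
    "ester C-O": (-2710.0, -2705.0),
    "alkyl C-C": (-2712.0, -2690.0),
    "allylic C-C": (-2700.0, -2701.0),
}

def preferred_cleavage(scissions):
    """Scission point whose radical pair has the lowest summed
    binding energy, i.e. the most stable pair of fragments."""
    return min(scissions, key=lambda site: sum(scissions[site]))

site = preferred_cleavage(candidate_scissions)
print(site)  # prints: ester C-O (sum -5415.0, the lowest of the three)
```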


Improving the Efficiency of Wastewater Treatment in

Biochemical Factories

Vyacheslav V. Korostovenko¹, Vera A. Gron², Natalia M. Kaplichenko³,

Ekaterina G. Kaplichenko⁴

¹Doctor of Engineering, Professor, Department of Thermal Engineering and Technosphere Safety in

Mining and Metallurgical Production, Siberian Federal University, 79 Svobodny Ave., Krasnoyarsk,

Russia 660041, Tel. 8-(391) 265-54-63, E-mail: [email protected]

²Cand. Sc., Assistant Professor, Department of Thermal Engineering and Technosphere Safety in

Mining and Metallurgical Production, Siberian Federal University, 79 Svobodny Ave., Krasnoyarsk,

Russia 660041, Tel. 8-(391) 265-54-63

³Senior lecturer, Department of Thermal Engineering and Technosphere Safety in Mining and

Metallurgical Production, Siberian Federal University, 79 Svobodny Ave., Krasnoyarsk, Russia

660041, Tel. 8 (391) 201-30-70, E-mail: [email protected]

⁴Fourth-year student, Siberian Federal University, 79 Svobodny Ave., Krasnoyarsk, Russia 660041

Keywords: Sewage, biochemical factories, microorganisms, organic substances, activated

sludge, purification plants, equipment

Sewage from biochemical production is considered highly polluted. It contains dissolved

organic and mineral substances and a variety of insoluble impurities. It is composed of

sulfates, ammonia nitrogen, chloride, furfural, methanol, phenol, formaldehyde, fat, iron,

phosphates and suspended solids, mainly represented by the lignohumic complex, and it

contains gypsum.

The aim of the study is to develop a wastewater treatment system that allows recycling

industrial water and significantly reduces the load on municipal wastewater purification

plants.

Wastewater pH in hydrolysis factories is 4.5-5.5, so before treatment the water is

neutralized to pH 6.5-8.5 to ensure the efficient activity of microorganisms. To separate

coarse and floating impurities, it was suggested to use two radial sedimentation ponds, 18 m in

diameter, with central discharge of wastewater. Their clarification efficiency is up to 70%. Freed

of suspended solids, the wastewater flows to the wet well and then to the final stage of

treatment. For destruction of organic substances in the liquid waste, it was suggested to use

aerobic biological treatment in a highly efficient filter tank, in which the aeration zone is

combined with the secondary sedimentation tank, with a sludge chamber for recycling sludge

and removal of excess sludge mixture. The filter tank has a high oxidative capacity due to its

large population of microorganisms. The novelty of the idea lies in filtering the sludge mixture

in the aeration tank, with an initial sludge dose of up to 20 g/l, using strainers, so that no more

than 3-4 g/l of suspended solids reach the secondary tank.

The study was conducted at one of the hydrolysis factories. The average flow of wastewater

was 9500 m³/day, the initial TBOD was 4000 mg/l O2, the filter tank diameter was 28 m, and

the sludge zone diameter was 18 m. The filter tank parameters depend on the dose of activated

sludge and its properties, determined by the structure and biochemical oxidation of the pollution.

Biochemical destruction of organic pollutants is influenced by the biocenosis, i.e., the entire

biotic community in the filter tank.

The study suggests a compact, efficient wastewater treatment technology with a purification

rate of 97% and reuse of the treated water with a water-cycle coefficient of 0.85.
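
As a back-of-the-envelope check of the two headline figures, the purification rate and the water-cycle coefficient can be related to the reported flow and TBOD as follows. The outlet TBOD below is the value implied by a 97% purification rate, not a measured number.

```python
def purification_rate(tbod_in, tbod_out):
    """Fraction of TBOD removed across the treatment train."""
    return (tbod_in - tbod_out) / tbod_in

def recycled_flow(total_flow, cycle_coefficient):
    """Volume of treated water returned to the process per day."""
    return total_flow * cycle_coefficient

tbod_in = 4000.0           # mg/l O2, from the abstract
tbod_out = tbod_in * 0.03  # outlet TBOD implied by 97% removal (hypothetical)
flow = 9500.0              # m3/day, from the abstract

print(purification_rate(tbod_in, tbod_out))  # 0.97
print(recycled_flow(flow, 0.85))             # 8075.0 m3/day reusable
```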


Nitrification of Effluent in an Intermittent Bio-Filter System

Isam Sabbah¹,², Nedal Masalha², Katie Baransi², Majida Jiryes¹, Ali Nujedat³

¹Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901750, Fax: 972-4-9901839, E-mail: [email protected]

²The Galilee Society Institute of Applied Research, P.O. Box 437, Shefa-Amr 20200, Israel, Tel: 972-4-9504523, Fax: 972-4-950452, E-mail: [email protected]

³Zuckerberg Institute for Water Research (ZIWR), Ben-Gurion University of the Negev, Sede Boqer Campus; Tel: 972-8-6596832, Fax: 972-8-6596909, E-mail: [email protected]

Keywords: Wastewater, nitrification, tuff, bio-filter

It is widely recognized that increasing stress on freshwater supply places further emphasis on

the development of sustainable, reliable and cost-effective technologies capable of treatment

and reclamation of wastewater for reuse, especially in rural areas.

Reusing treated effluents unrestrictedly in local agriculture, without removing nutrients, is

not acceptable due to the nutrients' negative effects on the environment, such as

eutrophication, contamination of groundwater, ammonia toxicity, etc.

The main objective of this project was to examine Intermittent Bio-Filtration (IBF) as a

simple and cost effective decentralized secondary wastewater treatment process.

The studied IBF systems were fed with effluents of either an upflow anaerobic sludge blanket

system (UASB) or chemically enhanced pre-treatment (CEPT). In addition, factors such as

media type (sand and volcanic tuff), filter depth, and organic and hydraulic loading rates were

tested for improvement of the removal efficiency of organic matter, pathogens and nutrients.

In addition, microbial tests were conducted and included protein, polysaccharide and aerobic

activity assays, RT-PCR and DGGE.

The results show that 93% of the COD after chemically enhanced pre-treatment (CEPT) was

removed by the IBF (tuff), where the hydraulic and organic loading rates were 160 L d⁻¹ m⁻²

and 73 g BOD d⁻¹ m⁻², respectively. However, the COD removal was 88% after the same IBF

when it was fed with the effluents of the UASB; here the hydraulic and organic loading rates

were 85 L d⁻¹ m⁻² and 15 g BOD d⁻¹ m⁻², respectively. The microbial tests showed higher

protein and polysaccharide concentrations throughout the tuff filter's length. In addition, higher

aerobic activity of the tuff bio-filter was seen at all sections of the filter.

The tuff bio-filter provides more stable effluent quality than the sand bio-filter under extreme

fluctuations of the inlet. The fecal removal from treated wastewater by the combined UASB-IBF

(tuff) was higher than by the CEPT-IBF (tuff): the removal was 4 and 3 orders of

magnitude for UASB-IBF and CEPT-IBF, respectively. The average removal of the

phosphate introduced to the tuff bio-filter was 50%, whereas no removal was observed using

the sand bio-filter.

The microbial results were expected, based on the observed higher and more stable removal

activity of tuff rather than sand. Furthermore, the DGGE analysis indicated that the diversity

of ammonia oxidizing bacteria species was higher in the tuff than the sand. This result could

be attributed to the higher surface area and special composition of the tuff.
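
The hydraulic and organic loading rates quoted above follow from the feed flow, the influent BOD concentration and the filter's plan area. The sketch below illustrates this bookkeeping with hypothetical flow, area and BOD values chosen so that the CEPT-IBF figures are reproduced; they are not the study's operating data.

```python
def hydraulic_loading(flow_l_per_d, area_m2):
    """Hydraulic loading rate, L d^-1 m^-2."""
    return flow_l_per_d / area_m2

def organic_loading(flow_l_per_d, bod_mg_per_l, area_m2):
    """Organic loading rate, g BOD d^-1 m^-2 (mg/l * L = mg; /1000 = g)."""
    return flow_l_per_d * bod_mg_per_l / 1000.0 / area_m2

flow = 320.0    # L/day fed to the filter (hypothetical)
area = 2.0      # m^2 of filter plan area (hypothetical)
bod = 456.25    # mg/l influent BOD (hypothetical, chosen to match)

print(hydraulic_loading(flow, area))     # 160.0 L d^-1 m^-2
print(organic_loading(flow, bod, area))  # 73.0 g BOD d^-1 m^-2
```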


Cleaning the Anode Gas Emissions from Tarry Substances in

Electrolytic Aluminum Production

Sergei G. Shakhrai¹, Vyacheslav V. Korostovenko², Natalia M. Kaplichenko³

¹Cand. Sc., Assistant Professor, Department of Thermal Engineering and Technosphere Safety in

Mining and Metallurgical Production, Siberian Federal University, 79 Svobodny Ave., Krasnoyarsk,

Russia 660041, Tel. 8-(391) 2-76-00-47, E-mail: [email protected]

²Doctor of Engineering, Professor, Department of Thermal Engineering and Technosphere Safety in

Mining and Metallurgical Production, Siberian Federal University, 79 Svobodny Ave., Krasnoyarsk,

Russia 660041, Tel. 8-(391) 265-54-63, E-mail: [email protected]

³Senior lecturer, Department of Thermal Engineering and Technosphere Safety in Mining and

Metallurgical Production, Siberian Federal University, 79 Svobodny Ave., Krasnoyarsk, Russia

660041, Tel. 8 (391) 201-30-70, E-mail: [email protected]

Keywords: Aluminum production, anode spike, emissions, reduction of tarry substances.

Aluminum production in top-worked cells with self-baking anodes releases a considerable

amount of contaminants: 9.3 to 19.8 kg of tar per ton of aluminum is ejected into

the environment. Among these, benzopyrene is the most dangerous; its emission is 0.015-

0.032 kg per ton of aluminum. Increasing the current capacity during the last 10-15 years has

increased the cell capacity, but tar emissions have also increased during the process of

pulling spikes directly from anode wells and from the spike surfaces. In particular, during

spike setting, emissions increased by 1.5-1.6 times, to 3.0-4.8 kg per ton of aluminum,

bringing the total to 10.3-21.6 kg, while the overall increase in tar emissions was 10-20%.

The aim of this study is to find technical solutions for capturing tar emissions from anode

spikes extracted for cooling.

Currently, spikes are cooled in open cassettes, which resemble an open "basket" with cells

fixing the spikes in a vertical position. The extracted and cooled spikes are either returned

into the anode or transferred for surface cleaning from scale and dirt. It is considered that an

anode paste layer of 3 mm cakes on the hot part of the spikes if the spikes are not corroded. In

fact, the thickness of the caked anode paste, and accordingly the level of emissions, may

considerably exceed the indicated values because of the high corrosion wear of the spikes,

leading to increased roughness, and because of deformation of the spikes as a result of the high

temperature in the anode. Cooling the spikes with fluids significantly reduces the operation

time and allows capturing all tar emissions, but it has several disadvantages connected

with dangerous concentrations of new compounds, the complexity of coolant disposal, etc.

The problem of capturing tar pollutants during the cooling of anode spikes can be solved by

using the aspirated cassette developed by the authors (Russian Federation patent 68 512).

The cassette is an airtight container with an extraction fan and a filter. Coke grains can be

used as the filtering material and later returned to the anode paste; alternatively, a fibrous

combustible material can be used and disposed of in coke ignition furnaces.

The use of aspirated cassettes would reduce tarry substance emissions by an average of 5%.


Investigating the Apoptotic Pathway in Human Ovarian Cancer

Cells, Following Exposure to an Ethyl Acetate Extract of

Coprinus comatus

Amal Toubi-Rouhana¹,², Solomon P. Wasser², Fuad Fares³

¹Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901907, Fax: 972-4-9901839, E-mail: [email protected]

²Department of Evolutionary and Environmental Biology, University of Haifa, Mount Carmel, Haifa 31905, Israel, Tel: 972-4-8249218, Fax: 972-4-8688828, E-mail: [email protected]

³Department of Human Biology, Faculty of Natural Sciences, University of Haifa, Mount Carmel, Haifa 31905, Israel, Tel: 972-4-8288781, Fax: 972-4-8288763, E-mail: [email protected]

Keywords: Ovarian cancer, Coprinus comatus, medicinal mushrooms, apoptosis

In former studies, an ethyl acetate extract of Coprinus comatus was found to inhibit

proliferation and induce apoptosis in three lines of human ovarian cancer cells (ES-2,

SKOV-3 and SW-626).

The main aims of this research are:

1) To investigate the mechanism of the apoptotic pathway in human ovarian cancer cells,

following exposure to the ethyl acetate extract of C. comatus.

2) To purify the extract and try to identify the active compounds.

SKOV-3 cells were exposed to an effective concentration of the extract (150 µg/ml) for 24 or

48 hrs. Protein concentrations were determined in the lysates of treated and untreated cells

and they were analyzed for caspase-3 and caspase-9 activities using western blots. For

chemical analysis, we used a silica gel column to fractionate the extract. The loaded sample

was washed with a gradient of hexane (100%-0%) and ethyl acetate (0%-100%). Sixteen

fractions were collected and analyzed using TLC. Dimethyl sulfoxide was added to each vial

until a concentration of 100 mg/ml was reached. Several dilutions of these fractions were

tested for their effect on viability of human ovarian cancer cells (ES-2 and SKOV-3). In an

attempt to identify the active compounds, we sent the active fraction for further analysis on

GC-MS and LC-MS.

Both analyzed enzymes were found to be active in the treated cells. The last fraction

collected from the silica gel column was identified as the active fraction. This fraction was

50-100% more active than the original extract. GC-MS analysis revealed the presence of

several fatty acids and their methyl esters in the active extract. The mass spectra obtained

from the peaks on the LC-MS chromatogram were compared to the spectra of active

compounds from mushrooms (based on the literature), but they did not match any known

compound.

Western blot analysis led to the conclusion that the ethyl acetate extract of C. comatus induces

apoptosis in human ovarian cancer cells SKOV-3, through the mitochondrial pathway.

Although there is some evidence in the literature that some fatty acids have an anti-cancer

activity, more analyses are needed to confirm that these are the main active compounds in the

extract.

Acknowledgement: This work was supported by a grant from the Planning and Budgeting

Committee of the Council for Higher Education in Israel and a grant from the Israel Ministry

of Science, Culture and Sport.


Effect of Copper Oxide Nano-Particles on

Cholesterol Levels in the Liver

Iris S. Weitz¹, Nelli Bar-Guy², Maria Grozovski²

¹Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901896, Fax: 972-4-9901839, E-mail: [email protected]

²Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901895, Fax: 972-4-9901839, E-mail: [email protected]

Keywords: Nano-particles, copper oxide, nano-toxicology, cholesterol, liver, statins

Lowering cholesterol in the diet has very little effect on blood cholesterol levels. Thus, using

statins, a family of drugs that inhibit HMG CoA reductase, an enzyme involved in cholesterol

synthesis, may reduce the levels of cholesterol. However, the statins have known side effects

such as gastrointestinal effects, headache and dizziness, unexplained muscle pain, tenderness

and weakness.

Copper oxide (CuO) is an essential nutrient in food supplements (3 mg/day). Copper ions

play an important role in human health. They are necessary for the absorption and utilization

of iron, and aid in the formation of red blood cells. Copper ions also help in proper bone

formation and maintenance, and are needed for the activity of multiple enzymes.

CuO is transported directly to and stored in the liver, the organ where cholesterol synthesis

mainly occurs (80%). Bulk CuO particles are insoluble in water, and CuO bioavailability is

mainly dependent on solubilization in the gastrointestinal system. In earlier work, a

synthetic route was established for the preparation of spherical CuO nano-particles (CuO

NPs) with a narrow size distribution (5 nm), which are readily soluble in water, hence

increasing their bioavailability. The water-soluble CuO NPs were used for direct

functionalization with pravastatin. The pravastatin molecules were self-assembled via ionic

interaction of the carboxylate moiety with the CuO NPs.

Sprague-Dawley rats were subjected to a high cholesterol diet for four weeks, causing

hypercholesterolemia. Then the rats were treated with three different aqueous solutions of

CuO NPs (according to 3 mg/day dose in human food supplements); two solutions contained

statin functionalized CuO NPs and one solution contained pure CuO NPs. A control

experiment was performed by treatment of the rats with soluble pravastatin.

Biochemical analysis revealed that treatment with either bare or functionalized CuO NPs

lowered cholesterol levels in the liver, increased the antioxidant paraoxonase (PON) and

superoxide dismutase (SOD) activities, and increased the enzymatic activities of Lactate

Dehydrogenase (LDH) and Alkaline Phosphatase (ALP). Liver histology supported the

biochemical results described and verified no copper accumulation in the liver tissue.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.


The Potential for Using CuO Nano-Particles as a Wood Preservative

Iris S. Weitz¹, Michal Maoz², Camille Freitag³, Jeff J. Morrell⁴

¹Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901896, Fax: 972-4-9901839, E-mail: [email protected]

²Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901901, Fax: 972-4-9901839, E-mail: [email protected]

³Wood Science and Engineering, Oregon State University, Corvallis, OR 97331, USA, Tel: 541-737-4236, Fax: 541-737-3385, E-mail: [email protected]

⁴Wood Science and Engineering, Oregon State University, Corvallis, OR 97331, USA, Tel: 541-737-4222, Fax: 541-737-3385, E-mail: [email protected]

Keywords: Nano-particles, copper oxide, minimum inhibitory concentration, minimum

fungicidal concentration, Gloeophyllum trabeum, Trametes versicolor, wood preservative

Copper-based biocides for wood protection have been used in solubilized forms, either in

water or as complexes with organic ligands. Micro-dispersed copper systems have the

advantage of reducing the need for a co-solvent. Small spherical nano-particles (5-20 nm) can

more readily move through a wood matrix and penetrate more deeply (10-298 mm) into the

wood layers under treatment, thus reducing the need for pressure impregnation. Metal oxides

exhibit antimicrobial activity: nano-CuO was shown to be over two orders of magnitude

more bioavailable than bulk CuO.

The aim of this research is to examine the fungicidal effects of aqueous solutions of copper

oxide nano-particles (<10 nm diameter) on selected wood-decay fungi.

The antifungal activity of the aqueous solution of copper oxide nano-particles was examined

by exposing wood-decay fungi to media containing various concentrations (0.12, 0.09, 0.06,

0.03 and 0.015%) of the nano-particles. The agar dilution method in mini-agar slants was

used, with Gloeophyllum trabeum or Trametes versicolor inoculated as the test fungi.

A wood block test was also used; pine wafers were treated under vacuum for 20 minutes or, in

a second test, were submerged in the solutions for 2 days, until they sank.

The brownish aqueous suspensions of CuO nano-particles were stable for more than 6 months

at room temperature; however, they rapidly precipitated when exposed to wood. The

nano-copper system was effective against both G. trabeum and T. versicolor in agar at fairly low

levels. Minimum fungicidal concentration (MFC) values were essentially the same as the

minimum inhibitory concentration (MIC) values, suggesting that once the copper affected the

fungus, the effect was lethal. Wood block results, in both impregnation methods, supported

these results.
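
The way MIC and MFC values are read off a dilution series such as the one used here can be sketched as follows; the growth and kill observations per concentration are hypothetical, not the study's data.

```python
def lowest_effective(concentrations, effective_flags):
    """Lowest concentration flagged effective, or None if none are."""
    hits = [c for c, ok in zip(concentrations, effective_flags) if ok]
    return min(hits) if hits else None

concs = [0.12, 0.09, 0.06, 0.03, 0.015]        # % w/v, from the abstract
no_growth = [True, True, True, False, False]   # hypothetical MIC readings
no_regrowth = [True, True, True, False, False] # hypothetical MFC readings

mic = lowest_effective(concs, no_growth)    # lowest conc. with no growth
mfc = lowest_effective(concs, no_regrowth)  # lowest conc. killing the fungus
print(mic, mfc)  # prints: 0.06 0.06 (MFC equal to MIC, as reported)
```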

Nano-copper has potential as a wood protectant, due to aqueous colloidal stability of the

particles which penetrate wood cell walls. Further investigation will be required to

understand the mechanism of the nano-size material and to assess the potential health and

environmental risks.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.

Electrical and Electronic Engineering

Fuel Cells in Braude – Improvements during 2010-2011

Eugenia Bubis¹, Hana Faiger², Pinchas Schechner³

¹Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901905, Fax: 972-4-9580289, E-mail: [email protected]

²Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901901, Fax: 972-4-9901839, E-mail: [email protected]

³Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901905, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: Alkaline Fuel Cell (AFC), glucose, power multiplexing

The objective of the ORT Braude College Fuel Cells Research Group is to develop an

alkaline fuel cell fuelled by glucose. Fuel cells are electrochemical devices that allow the

direct production of electricity from fuels. Their theoretical efficiency reaches up to 83%.

Glucose was selected as a renewable fuel because it has unique properties that make it the

ideal fuel. The suggested cell is unique because it includes a combination of electronic

circuits to overcome inherent disadvantages of glucose as a fuel.
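The 83% figure can be traced to the thermodynamic limit ΔG/ΔH of the cell reaction. As a hedged aside, the values below are the standard-state numbers for the hydrogen-oxygen reaction, the classic case for which 83% is usually quoted; they are illustrative and are not taken from this abstract:

```python
# Theoretical fuel-cell efficiency: usable (Gibbs) energy over total
# reaction enthalpy.  Standard-state values for H2 + 1/2 O2 -> H2O(l);
# these numbers are illustrative assumptions, not data from this study.
delta_g = 237.1   # kJ/mol, Gibbs free energy released
delta_h = 285.8   # kJ/mol, reaction enthalpy (higher heating value)

efficiency = delta_g / delta_h   # ~0.83, i.e. the 83% theoretical limit
```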

The aim of this study is to increase the power of the cell with respect to the following aspects:

1) Temperature;

2) Effect of glucose concentration;

3) Power-supply multiplexing; and

4) Use of super-capacitors.

The Peak Power Density, PPD [W/m²], of the self-designed, homemade AFC fuelled with glucose, using KOH as the electrolyte, was investigated with a self-designed, homemade Automatic Fuel Cell Measurement System. The four goals of the investigation were tested in four dedicated experimental set-ups.

The results are:

1) There is a maximum in the PPD at a temperature of 47 °C. At this temperature we get PPD = 5.4 W/m² versus 0.4 W/m² at a temperature of 20 °C.

2) There is a maximum in the PPD when the glucose concentration is 0.4 M.

3) The use of Power Multiplexing enables the extraction of more energy from the system.

4) Loading a super-capacitor with 4 fuel cells yields an Effective Peak Power Density, EPPD, of approximately 700 W/m².

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee under Grant number 5000.838.1-41.


Temperature Dependence of Effective Mobility

for NMOS Transistor

Radu Florescu1, Nisim Sabag2

1Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901864, Fax: 972-4-9580289, E-mail: [email protected]

2Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901905, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: NMOSFET, threshold voltage, effective mobility, I/V characteristic, BSIM

Simulation Model, temperature dependence

Extensive studies on the experimental determination of effective mobility in NMOSFET have

been carried out by a large number of researchers.

The concept of mobility, resulting from an analysis of stationary transport where carrier velocity is limited by scattering phenomena, has been used in microelectronics as a

measurable factor and as a parameter of analytical models developed to predict device

performance.

For the new CMOS technologies, dramatic reduction of the mobility measured at short gate

length has been observed. Moore's Law states that the number of transistors per integrated

circuit doubles every two years. The current 90 nm generation node produces CMOS devices with Lg of ~50 nm, and an Lg of ~10 nm is projected.

The aim of this study is the development of a new model of Effective Mobility temperature

dependence for NMOSFET.

In our paper, the results of laboratory tests are compared with the theoretical model and with

the semi empirical BSIM4 Simulation Model (Berkeley Short Channel MOSFET Model),

over a wide temperature range, from 25 °C to 125 °C. In order to accurately obtain the mobility for use in an I-V model, a suitable determination of the inversion charge and electrical potential is demonstrated for high-performance, low-power CMOS applications (sub-micron gate length and low SiO2 thickness).

The results are:

1) Numerical simulation is a powerful tool to carry out mobility extraction.

2) The linear model from BSIM4 is not accurate enough, especially for high temperatures.

3) Our theoretical model is better than BSIM, especially for high temperatures.

A more accurate and efficient method than BSIM for modeling temperature dependence in VLSI circuits is needed.
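As a sketch of the kind of temperature dependence such a model captures, phonon-limited effective mobility is commonly approximated by a power law in absolute temperature; the coefficients below are illustrative assumptions, not the fitted values from this work:

```python
def mu_eff(temp_c, mu0=600.0, t0=300.0, k=1.5):
    """Empirical power-law model of effective mobility [cm^2/Vs] vs. temperature.

    mu0 is the mobility at the reference temperature t0 [K]; an exponent
    k ~ 1.5 is typical for phonon scattering.  All values are assumptions.
    """
    t = temp_c + 273.15            # operating temperature in kelvin
    return mu0 * (t / t0) ** (-k)  # mobility degrades as T rises
```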


Supercapacitor: an Alternative for Energy Storage

Radu Florescu1, Rona Sarfaty2

1Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901864, Fax: 972-4-9580289, E-mail: [email protected]

2Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901971, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: Supercapacitor, simulation model, power availability, solar panel, fuel cells

The world's power consumption is approximately 15 terawatts. The harvestable solar power has been estimated at 50 terawatts.

Supercapacitors are common today in solar panel and fuel cell energy storage.

Supercapacitors are components for energy storage, dedicated for applications where both

energy and power density are needed. Even if their energy density is ten times lower than the

energy density of batteries, supercapacitors offer new alternatives for applications where

energy storage is needed.

The aim of the present study is to evaluate the energy storage capability of supercapacitors.

In our paper, we present results from a simulation model of a supercapacitor, in addition to the efficiency of energy storage. The results of laboratory tests are compared with the "two-branch model", and good agreement is demonstrated.
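A minimal numerical sketch of a two-branch equivalent circuit of the kind referred to above (a fast R1-C1 branch in parallel with a slow R2-C2 redistribution branch) can illustrate the model; all parameter values are invented for the example and are not the paper's fitted values:

```python
def simulate_two_branch(current=1.0, t_end=10.0, dt=1e-3,
                        r1=0.05, c1=10.0, r2=50.0, c2=5.0):
    """Charge a supercapacitor modeled as two parallel RC branches with a
    constant current; returns (terminal, fast-branch, slow-branch) voltages."""
    v1 = v2 = 0.0                                 # capacitor voltages
    for _ in range(int(t_end / dt)):
        # terminal voltage satisfying Kirchhoff's current law at this step
        v = (current + v1 / r1 + v2 / r2) / (1.0 / r1 + 1.0 / r2)
        v1 += dt * (v - v1) / r1 / c1             # fast branch charges quickly
        v2 += dt * (v - v2) / r2 / c2             # slow charge redistribution
    return v, v1, v2
```

The slow branch lags the fast one during charging, which is the feature the two-branch model uses to reproduce the charge-redistribution transient seen in laboratory tests.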

Supercapacitors as energy storage systems are recommended for real power injection and

reactive power injection for stabilization of fuel cells or solar energy systems.


Quantum-Dot Cellular Automata Serial Adder for Novel

Architecture of Nano-Computer

Michael Gladshtein

Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-72-2463668, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: Nano-computer architecture, quantum-dot cellular automata, decimal adder,

Johnson-Mobius code

Quantum-dot Cellular Automata (QCA) is a new technology for nano-electronic computers.

Several QCA arithmetic unit designs have been published in technical literature. However, all

these designs use the fundamental information principles inherited from microelectronic

computers such as the binary number system and bit-parallel data transfer/processing.

Presently, at the transition stage from microelectronic technology to nano-technology, the basic computer elements are changing fundamentally. Moreover, the application area of computers is expanding.

Computers process large volumes of decimal information in financial, commercial, and

Internet-based applications, which cannot tolerate errors from converting between decimal

and binary formats.

Hence, alternative principles should be chosen for a novel nano-computer architecture. The feasibility of these principles can be tested by the experimental design of the main functional computer unit, an adder, and examination of its technical parameters.

First of all, it is preferable to use serial data transfer/processing because the cost function and

delay of the computation QCA elements and communication QCA elements are comparable.

The signal propagation through QCA elements is similar to the signal propagation through a

conventional shift register. Secondly, the growing market of computer applications requires the use of binary-coded decimal encoding for direct processing of decimal information without

representation and conversion errors. It is clear that QCA price will be reduced as nano-

technology develops. Besides, there are significant problems in providing high reliability of a

nano-computer due to small device sizes and self-assembly fabrication processes. Therefore,

it is preferable to choose the decimal encoding that supports arithmetic processing by shift

operation and allows detecting and correcting errors.

It is shown that among possible decimal encodings, the Johnson-Mobius code is the most

interesting because it is a biquinary code, which is composed of only 5 bits and supports the

simplest arithmetic processing by two operations: INVERT and TWISTED-RING ROTATE.

Moreover, the code redundancy provides error detection and correction capability. An original serial decimal Johnson-Mobius addition algorithm that provides the shortest input-to-output delay is suggested. The block diagram of the serial decimal Johnson-Mobius adder on

QCA and its full circuit are designed. The required number of QCAs is 1130 and the delay is

10 clock cycles. The simulation results demonstrate that the proposed adder works correctly.
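The code and its two primitive operations can be sketched behaviorally as follows. This is a hedged software model of the 5-bit Johnson-Mobius encoding only, not of the QCA circuit, and the helper names are our own:

```python
N = 5                 # bits per decimal digit in the Johnson-Mobius code
MASK = (1 << N) - 1

def rotate(code):
    """TWISTED-RING ROTATE: shift left, feeding back the inverted MSB."""
    msb = (code >> (N - 1)) & 1
    return ((code << 1) & MASK) | (1 - msb)

def invert(code):
    """INVERT: bitwise complement, which maps digit d to (d + 5) mod 10."""
    return code ^ MASK

# The 10 valid code words, enumerated by rotating from the all-zeros state.
CODES = [0]
for _ in range(9):
    CODES.append(rotate(CODES[-1]))

def encode(digit):
    return CODES[digit]

def decode(code):
    return CODES.index(code)

def add_digit(code, digit):
    """Add a decimal digit by that many twisted-ring rotations (mod 10)."""
    for _ in range(digit):
        code = rotate(code)
    return code
```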

The proposed adder confirms a possibility of novel decimal nano-computer design, which

allows us to avoid both base-conversion errors and machine time losses due to these conversions, as well as to simplify programming languages and compilers.

Acknowledgement: This study was supported by a scholarship "Conversion of hours of teaching to hours of research" from the ORT Braude College Research Committee.


Reliability of Modified Color Edge Detector Supporting Heavily

Compressed JPEG Images Obtained by Inexpensive Cameras

Samuel Kosolapov

Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901905, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: Edge detector, color edge detector, image processing, JPEG compression

Classical Color Edge Detector techniques are basically derived from standard one-color edge

detectors. However, images obtained from inexpensive cameras in "auto" mode (designed

for presenting images to Human observer) are available only after the step of JPEG

compression. Camera noise, automatic camera parameter settings, and JPEG compression,

create a number of artifacts making detection of the color edge position non-reliable,

especially for low resolution images.

The aim of this research is to evaluate the influence of camera parameter settings and JPEG compression on edge quality, and to modify the classical edge detector in order to enable

support for heavily compressed JPEG images obtained from inexpensive cameras.

Model test images emulating defects of JPEG compression were created using MAPLE simulation software. The test images contain only a limited number of colors (referred to later as the Original Color Set, "OCS"). Well-known defects of inexpensive cameras, like uneven field illumination, shot noise, digitization errors and automatic white balance defects, were simulated. Additionally, colors from the OCS were "intentionally distorted" (in accordance with real image appearance). The first modification of the Color Edge Detector was the creation of test images which have dedicated self-calibration color zones (SCCZ). Modified Color Set Colors (MCSC) were extracted from the SCCZ. The second modification was a "vote selection algorithm" using the MCSC instead of the OCS. The quality of the modified color edge detector was evaluated using a special test routine, counting outliers in both relevant and non-relevant zones.
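The vote-selection step is not specified in detail in the abstract; as a hedged illustration of the underlying idea, the sketch below calibrates a palette color from SCCZ pixels and snaps each pixel to its nearest calibrated color, reporting an edge wherever the snapped label changes. All function names are hypothetical:

```python
def calibrate(zone_pixels):
    """Mean RGB of one self-calibration color zone (SCCZ) -> one MCSC color."""
    n = len(zone_pixels)
    return tuple(sum(p[c] for p in zone_pixels) / n for c in range(3))

def nearest(color, palette):
    """Index of the palette color closest to `color` (squared RGB distance)."""
    return min(range(len(palette)),
               key=lambda i: sum((color[c] - palette[i][c]) ** 2
                                 for c in range(3)))

def edge_positions(row, palette):
    """Positions in a pixel row where the snapped color label changes."""
    labels = [nearest(p, palette) for p in row]
    return [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
```

Snapping to a palette measured from the image itself, rather than to the nominal OCS colors, is what makes the detector robust to white-balance drift and JPEG color distortion.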

The modified color edge detector creates significantly fewer outliers than the original one, both for test images and for real images. Most of the outliers were in the region of edges between different colors.

The addition of dedicated regions designed for Color Edge Detector self-calibration makes it possible to use the modified color edge detector even for heavily compressed images grabbed by low-quality cameras. The obtained results may be useful for other general-purpose image processing

algorithms utilizing elements of color recognition.


Characterization of a Non-Linearity and Interrelations

Shmuel Miller

Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901905, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: Non-linearity, spurii, intermodulation, cross-modulation, harmonics, saturation

Any electronic system will eventually reach a non-linear region depending on its input

signals. Non-linearities affect signals in several ways, including the generation of several

types of unwanted spurii. The technical characterization and measurement of non-linear

performance of systems has been vastly treated over the last seventy years, with basic

compression and intercept point parameterization of systems being a common engineering

practice. This topic is becoming relevant these days in multi-standard communication systems that involve the acceptable simultaneous operation of more than one regime in one system.

The aim of the present study is to re-evaluate the non-linearity performance parameters and their inter-dependencies, to develop closed-form expressions for arbitrary orders of harmonics and intermodulation products and, finally, to clarify the implications of industrial measurement methods for non-linear system performance evaluation and suggest new ways for accurate characterization of a memory-less non-linear system.

We employ multiple harmonic input signals that are represented using complex exponentials, develop the response of a non-linear system using a polynomial approximation and, using the common definitions of several performance parameters, derive new inter-relationships amongst them. These results make it possible to clearly establish the implications of additional measurements on the accuracy of the modeled non-linearity.
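As a hedged numerical illustration of the kind of closed-form result involved: a memory-less cubic non-linearity y = a1·x + a3·x³ driven by two equal tones of amplitude A produces third-order intermodulation at 2f1 - f2 with amplitude (3/4)·a3·A³. The coefficients and frequencies below are our own illustrative choices, not values from the paper:

```python
import math

a1, a3, amp = 1.0, 0.1, 1.0          # polynomial coefficients, tone amplitude
f1, f2 = 11.0, 13.0                  # Hz; 2*f1 - f2 = 9 Hz lands on a clean bin
fs, n = 1000, 1000                   # sampling rate and sample count (1 s)

# two-tone input through the memory-less cubic non-linearity
x = [amp * (math.cos(2 * math.pi * f1 * k / fs) +
            math.cos(2 * math.pi * f2 * k / fs)) for k in range(n)]
y = [a1 * v + a3 * v ** 3 for v in x]

def tone_amplitude(sig, f):
    """Cosine amplitude at frequency f via a single-bin DFT projection."""
    return 2.0 / n * sum(sig[k] * math.cos(2 * math.pi * f * k / fs)
                         for k in range(n))

im3 = tone_amplitude(y, 2 * f1 - f2)   # measured IM3 spur at 9 Hz
theory = 0.75 * a3 * amp ** 3          # closed-form: (3/4)*a3*A^3
```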

General closed-form expressions are obtained for the harmonic and intermodulation terms; these are provided for a two-tone input. The inter-relations among harmonics, compression

points and intermodulation terms are obtained, where applicable. The dependence of certain

performance parameters on a subset of the spurii is shown explicitly. The implications of

measurement procedures of improved characterization of the memory-less non-linearity are

analyzed and discussed.

The classical topic of memory-less non-linear systems is re-visited and analyzed. General

expressions for the spurii and performance parameters that are suitable for higher-order

characterization are derived. Several conclusions are drawn that provide insight into measurements that may improve the assessment of system performance. These rely on higher-order

approximations of the non-linear system characteristics.

Acknowledgement: This study was motivated by several questions raised by Mr. Nadav

Nissan Roda in relation to a radio chip operating Wi-Fi and Bluetooth standards

simultaneously.


New Compact Printed Antennas for Medical Applications

Albert Sabban

Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901905, Fax: 972-4-8759111, E-mail: [email protected]

Keywords: Antennas, medical applications, micro strip antennas

The biomedical industry has been growing continuously in recent years. Low-profile, compact antennas are crucial in the development of human bio-medical systems.

This paper presents a study of compact printed antennas for medical applications. Design

considerations, computational results and measured results of several compact micro strip

antennas with high efficiency for medical applications are presented in this paper.

Environmental effects on the antennas' electrical performance are also presented.

A new compact micro strip loaded dipole antenna has been designed to provide horizontal

polarization. The antenna consists of two layers. The first layer consists of FR4 0.25mm

dielectric substrate. The second layer consists of Kapton 0.25mm dielectric substrate. The

substrate thickness determines the antenna bandwidth. However, a thinner substrate provides better flexibility. We also designed a thicker double-layer micro strip loaded

dipole antenna with wider bandwidth. A printed slot antenna provides a vertical polarization.

The proposed antenna is dual polarized. The printed dipole and the slot antenna provide dual

orthogonal polarizations. The antenna dimensions are 4.5x4.5x0.05 cm. The antenna may be attached to the patient's shirt, in the patient's stomach or back area.

Usually in medical applications the distance separating the transmitting and receiving

antennas is less than 2D²/λ, where D is the largest dimension of the antenna. In these

applications the amplitude of the electromagnetic field close to the antenna may be quite strong, but because of its rapid fall-off with distance, it does not radiate energy to large distances; instead, its energy remains trapped in the near region. Thus, the near fields transfer energy only over short distances to the receivers. The receiving and transmitting

antennas are magnetically coupled. Change in the current flow through one wire induces a

voltage across the ends of the other wire through electromagnetic induction. The amount of

inductive coupling between two conductors is measured by their mutual inductance.
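For a sense of scale, the boundary 2D²/λ can be evaluated directly. The operating frequency is not stated in the abstract; 434 MHz, a common medical/ISM band, is assumed here purely for illustration:

```python
# Evaluate the near-field/far-field boundary 2*D**2/lambda quoted above.
# 434 MHz is an assumed frequency, chosen only to make the numbers concrete.
c = 3.0e8                    # speed of light, m/s
f = 434e6                    # assumed operating frequency, Hz
D = 0.045                    # largest antenna dimension, m (4.5 cm)

lam = c / f                  # wavelength, ~0.69 m
boundary = 2 * D ** 2 / lam  # ~6 mm: body-worn links easily fall inside it
```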

The antenna bandwidth is around 10% for VSWR better than 2:1. The antenna beam width is around 100º. The antenna gain is around 2 to 4 dBi. The antennas' S11 results for different

belt thickness, shirt thickness and air spacing between the antennas and human body are

given in this paper. The effect of the antenna location on the human body should be

considered in the antenna design process. If the air spacing between the sensors and the

human body is increased from 0 mm to 5 mm the antenna resonant frequency shifts by 5%.


Vertical Multi-Junction (VMJ) Si Micro-Cells

Rona Sarfaty1, Roni Pozner2, Gideon Segev2, Abraham Kribus3, Yossi Rosenwaks2

1Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901971, Fax: 972-4-9580289, E-mail: [email protected]

2Department of Physical Electronics, Faculty of Engineering, Tel Aviv University, Ramat-Aviv 69978,

Israel, Tel: 972-3-6406248, Fax: 972-3-6423508, E-mail: [email protected]

3School of Mechanical Engineering Faculty of Engineering, Tel Aviv University, Tel Aviv 69978,

Israel, Tel: 972-3-6405924, Fax: 972-3-6407334, E-mail: [email protected]

Keywords: Vertical p-n junction, high voltage and efficiency, pre-patterned macro-pores

Autonomous MEMS require similarly miniaturized power sources. Micropower

environmental energy harvesting generators offer an alternative source of renewable energy.

Photovoltaic cells with high voltage and low current would be very desirable in general and

in particular for light concentrator applications where currents are high and current-

associated losses are a major bottleneck for cell performance. High-voltage silicon Vertical

Multi-Junction (VMJ) cells have been proposed since the 1970s, and have shown the capability to accept high concentration with peak efficiency slightly above 20%. These cells have

been produced by a hybrid process of stacking multiple wafers followed by orthogonal

cutting. We are developing an alternative approach for producing the VMJ structure

monolithically on a single wafer, where the junction geometry can be optimized with greater freedom than in the stacking approach and micro-cells with widths of a few tens of microns can be produced.

The aim of this research is to develop and demonstrate a novel Vertical Multi-Junction

(VMJ) cell which consists of series-connected vertical p-n junctions. The proposed device

offers significant advantages over conventional cells: lower series resistance loss allows

maximization of the efficiency under high concentration, higher voltage and smaller inactive

area loss; decoupling of optical and electronic effects into orthogonal dimensions, allowing

better optimization of junction dimensions.
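The lower series-resistance loss can be seen with a back-of-the-envelope model: at fixed delivered power, splitting the cell into N series junctions multiplies the voltage by N and divides the current by N, so the I²R loss falls as 1/N². All numbers below are illustrative assumptions, not device parameters from this work:

```python
def series_loss_fraction(n_junctions, power=1.0, v_junction=0.6, r_series=0.05):
    """Fraction of delivered power lost in the series resistance when the
    cell is built as n_junctions series-connected vertical junctions."""
    voltage = n_junctions * v_junction      # stack voltage grows with N
    current = power / voltage               # delivered current shrinks with N
    return current ** 2 * r_series / power  # I^2*R loss as a power fraction
```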

We present comprehensive numerical 2D modeling of a Si vertical junction using Synopsys

TCAD Sentaurus device simulator. Preliminary results of the 2D realization process will be presented. We present here a comprehensive analysis, optimization and measurements of a

monolithically structured VMJ cell, taking into account a wide range of realistic parameters

and concentration levels. A large increase in the active layer photoconductivity, usually

negligible in most PV cells, drastically lowers the cell's series resistance under high concentration. As a result, the VMJ device exhibits efficiencies well above 30% for

concentration of around 1000 suns.


The optimal junction dimensions are much smaller than the dimensions used in previous

VMJ cell tests and analyses.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee and from the Israel Ministry of National Infrastructures.


Models in Electronics: How Do Students Use Them?

Elena Trotskovsky1, Nissim Sabag2

1Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901709, Fax: 972-4-9580289, E-mail: [email protected]

2Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901942, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: Models in electronics, mistakes and misunderstandings, engineering thinking

One interesting issue concerning the development of engineering thinking among students

during their academic studies is the characterization of mistakes and misunderstandings. We

focus on the use and understanding of models by students in electronics studies. Our many years of experience in the academic field of Electrical and Electronic Engineering education allow us to claim that knowledge on this issue is insufficient.

The current paper describes a case study that examines the relationship between parameters such as the type of model, maturity of the student, learning discipline, lecturer, and future student specialization, and the way models are understood and used in problem solving in Electronic Engineering disciplines.

A special set of problems was developed for every type of model including three kinds of

problems: problems related to routine use of the model, problems concerned with

understanding of the model's purpose, and application problems which demand a deep

understanding of model usage. Nearly two hundred students from the Departments of

Electrical and Electronic Engineering and Mechanical Engineering participated in the study.

First results show that there is no significant difference between students' performance in the

routine problem solving, but significant differences could be observed in the understanding

of model purpose and in the application problems. The salient result is that the students'

grades in solving the last two kinds of problems, in courses taught by lecturers who hold second or third academic degrees in science and technology education, were significantly higher than the grades in courses taught by lecturers who hold a Ph.D. in science and engineering. This effect must be tested more broadly in future studies.

Industrial Engineering and Management

ORDANOVA: The Analysis of Ordinal Variation

Emil Bashkansky1, Tamar Gadrich2

1Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901827, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901923, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Ordinal data, ANOVA, dispersion measures, repeatability, reproducibility study

Consider an object that is measured using an ordinal scale with K ordered categories. Since

only comparisons of "greater than", "less than", "equal to" and "unequal to" can be made

among ordinal variable values, all statistical measures of such ordinal variables must be

based on these limitations.

The focus of the article is on ORDANOVA (Ordinal Data Analysis of Variation), i.e., the analysis of the variation of ordinal data, with the aim of utilizing such analysis in practical engineering applications.

In order to fulfill practical engineering applications such as quality/failure classification,

uncertainty evaluations, repeatability and reproducibility (R&R) analysis, distinguishing

feature identification and so on, the desirable properties of such a variation measure were

precisely defined. Based on a literature survey, we assembled a number of ordinal dispersion

measures. Our study showed that all the above mentioned properties are best satisfied by

Blair and Lacy's measure. We have developed a method for splitting this dispersion measure into "within" and "between" components and discussed how this decomposition looks

while considering the following three engineering applications: 1) making inference about

the multinomial proportions from small samples, 2) analyzing the accuracy of the ordinal

measuring/classification system and 3) searching for a distinguishing factor.
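The decomposition can be sketched numerically. The sketch below takes the total variation to be the normalized Blair-Lacy measure built on cumulative proportions, h = 4/(K-1)·ΣF_k(1-F_k); the exact normalization the authors use may differ, so treat this as an assumption:

```python
# Hedged sketch of an ORDANOVA-style within/between decomposition of
# ordinal variation; normalization assumed, not taken from the paper.
def cumulative(props):
    """Cumulative proportions F_1..F_{K-1} (F_K == 1 carries no information)."""
    F, s = [], 0.0
    for p in props[:-1]:
        s += p
        F.append(s)
    return F

def ordinal_variation(props):
    """Normalized Blair-Lacy total variation of one category distribution."""
    K = len(props)
    return 4.0 / (K - 1) * sum(F * (1 - F) for F in cumulative(props))

def decompose(groups, weights):
    """Split the pooled sample's variation into (within, between) parts."""
    K = len(groups[0])
    pooled = [sum(w * g[k] for g, w in zip(groups, weights)) for k in range(K)]
    Fp = cumulative(pooled)
    within = sum(w * ordinal_variation(g) for g, w in zip(groups, weights))
    between = 4.0 / (K - 1) * sum(
        w * (Fg - F) ** 2
        for g, w in zip(groups, weights)
        for Fg, F in zip(cumulative(g), Fp))
    return within, between
```

The identity total = within + between holds exactly for this measure, which is what makes the ANOVA-like analysis possible for ordinal data.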

The utilization of the ORDANOVA decomposition in the abovementioned engineering applications led to the following results: 1) If the null hypothesis that all samples come from the same population, characterized by the set {pk}, is true, then the total variation is split on average in full accordance with the "within" and "between" numbers of degrees of freedom (and this splitting does not depend on the number of categories!). Fluctuations in the splitting are, of course, possible; for each case they may be simulated and their likelihoods can be assessed. 2) After classification, the variation may increase, decrease or remain unchanged relative to the incoming variation. The result depends on the metrological properties of the classification matrix. 3) If the factor under study does not yield segregation between groups formed according to different levels of the factor, the "between" to "within" variation ratio is expected to be no more significant than the "between" to "within" ratio of degrees of freedom. We have proposed to use, as an indicator of the segregation power of the factor under study, the deviation of the empirical ratio between these two dispersion components from its expected H0 value.

Blair and Lacy's dispersion measure best fulfills the essential requirements for a variation measure. Moreover, this measure allowed us to decompose the total variation into between and within components.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.


Setting Release Gates for Activities in Projects with

Stochastic Activity Durations

Illana Bendavid1, Boaz Golany2

1Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology,

Technion City, Haifa 32000, Israel, Tel: 972-4-8121390, Fax: 972-4-8121390, E-mail:

[email protected]

2Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology,

Technion City, Haifa 32000, Israel, Tel: 972-4-8294512, Fax: 972-4-8295688, E-mail:

[email protected]

Keywords: Project management, gates, cross-entropy, scheduling

This work addresses the problem of controlling the scheduling of activities in projects with

stochastic activity durations. A first approach is to set a gate for each activity, i.e., a time before which the activity cannot begin. Since the resources required for each activity are scheduled to arrive according to its gate, we incur a "holding" cost when an activity is ready to be processed but the resources required for it were scheduled to arrive at a later time, or a "shortage" cost when the required resources have arrived on time but the activity cannot start because its predecessors are not yet finished.

Our objective is to set the gates so as to minimize the sum of the expected holding and

shortage costs.

A second approach is to set, for each activity, an interval of time, and in this way introduce more flexibility into the contracts with the subcontractors. The subcontractors would now be

expected to start their respective activities within a certain time interval rather than exactly at

a gate. Our objective is to set the intervals so as to minimize the sum of the expected holding,

shortage and interval costs.

In the first approach, all the gates are determined in a "static" way, at "time zero". In this way, all the risk induced by the uncertainty is assumed by the project manager (PM) alone. A dynamic approach would be to solve the problem at "time zero" to obtain a basic guideline for the contract and then, each time more information is obtained, the PM can solve the problem for the remaining activities and adjust his future decisions in a dynamic way, allowing him to reduce uncertainty and thus his costs.

We use a general heuristic method to solve the problem: the Cross-Entropy (CE) method. We applied it to small projects using a discrete generation distribution function. We extended the method to larger problems, using a continuous generation distribution function.

We checked the performance of the algorithms developed and compared them to other

heuristic methods.
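A toy sketch of the CE idea on a one-gate problem: sample candidate gates from a parametric distribution, keep the lowest-cost elite, and refit the distribution to them. The durations, costs, and CE parameters below are invented for the illustration and do not come from this work:

```python
import random

random.seed(1)                      # reproducible sketch
HOLD, SHORT = 1.0, 3.0              # holding / shortage cost rates (assumed)

def expected_cost(gate, n_sim=200):
    """Monte-Carlo estimate of expected holding + shortage cost when the
    resources for an activity are booked to arrive at time `gate` and its
    single predecessor has an Exp(1) random duration."""
    total = 0.0
    for _ in range(n_sim):
        ready = random.expovariate(1.0)          # predecessor finish time
        total += HOLD * max(gate - ready, 0.0)   # resources wait for work
        total += SHORT * max(ready - gate, 0.0)  # work waits for resources
    return total / n_sim

def cross_entropy(n_pop=100, n_elite=10, iters=30):
    """CE loop: sample gates, keep the elite, refit the sampling density."""
    mu, sigma = 1.0, 2.0
    for _ in range(iters):
        pop = [max(0.0, random.gauss(mu, sigma)) for _ in range(n_pop)]
        elite = sorted(pop, key=expected_cost)[:n_elite]
        mu = sum(elite) / n_elite
        sigma = max(1e-3, (sum((g - mu) ** 2 for g in elite) / n_elite) ** 0.5)
    return mu
```

For this newsvendor-like special case the optimal gate is the SHORT/(SHORT+HOLD) = 0.75 quantile of the duration, ln 4 ≈ 1.39, which the CE loop approaches despite the noisy cost estimates.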

In all examples, the CE-based algorithms developed outperformed other methods to which

they were compared. The advantage of the algorithms developed grows as the size of the

project grows.

The CE algorithms developed in this work proved useful for the basic problem and its

extensions. This work also provides important insights for contract negotiations with

subcontractors.


The Question of Truth in Knowledge Management

Doron Faran

Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901824, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Knowledge management, truth, justification, epistemology

Knowledge is traditionally defined as "justified true belief"; hence truth is an inherent part of knowledge, inseparable from it. However, the definition of truth has changed significantly over the centuries: from the "one and only", "out there" truth of ancient Greece to the multi-faceted notion of post-modernism. Similarly, the means of justification have developed, spanning from rationalism through pragmatism to empiricism. The disciplines involved in the discussion were mainly Philosophy and Science.

In the last two decades the issue of knowledge has been drawing much more attention in the

management circles than ever before. Once the concept of Knowledge Management (KM)

was introduced in the early 1990s, it generated a wave of practical and academic interest that

has kept growing ever since. The topics that the field addresses are diversified: there are

activities such as knowledge creation, transfer, sharing, utilization and retention, to name just

a few; there are issues of essence, as whether "knowledge" is an asset or a process, and there

are questions of the encompassed content.

The research raises two questions: the first concerns the references to truth in the KM literature, academic and practical alike; the second is more fundamental and asks what the attitude toward truth and justification actually is in organizations.

This is critical, literature-based research; data will be collected through a literature survey. Both qualities with which we opened, namely truth and justification, have been relatively marginalized, all the more so in the practical writing.

Since the research is at an early stage, the presentation will focus on the background, the questions and the method.


22 | Industrial Engineering and Management

Learning Patterns in Procedural Skills Acquisition with

Enriched Information

Nirit Gavish

Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901923, Fax: 972-4-9901842, E-mail: [email protected]

Keywords: Procedural skills, enriched information, learning

Several past studies have demonstrated that when a learner of a new procedural skill is provided with elaborated knowledge, called "how-it-works" knowledge or "context procedures," in addition to the "how-to-do-it" knowledge or "list procedures," performance becomes more accurate, faster, and more flexible. However, in some cases the enriched information did not facilitate performance. Although several assumptions have been made, there is no conclusive agreement about when and why enriched information during procedural task training improves performance.

The goal of this study was to investigate the differences in the training patterns that develop when trainees encounter enriched information, compared to learning a procedure without this information.

Thirty-nine undergraduate students from the Technion served as participants, using a training program for assembling a 76-step Lego helicopter model. The software presented step-by-step assembly instructions and also included a short task to be performed in each step (selecting the relevant brick and pointing to its position). Trainees were required to learn how to assemble the model in order to be able to build the real Lego model in the test phase. Training time was divided into two segments: in the first segment, trainees actively interacted with the program, and in the second they could only watch the steps. Trainees decided when to terminate the first segment and move on to the second, but the entire training was limited to 19 minutes. A Control group (10 males, 10 females) was trained with the program only; a Model group (9 males, 10 females) was also given the real final Lego model, which they could watch during training.

Performance measures did not differ significantly between the groups. However, the Control and Model groups differed in the correlations between training pattern and performance. While no significant correlations were found for the Control group in the first segment, in the Model group two training patterns were negatively correlated with performance: performing each training step very slowly, which was correlated with a longer time to assemble each brick (r=0.500, p=0.029), and performing each step very fast, which was correlated with a larger number of errors (r=0.545, p=0.016). An intermediate training time for each step was found to be the best strategy for this group, and was correlated with shorter performance time (r=-0.577, p=0.010), shorter time to assemble each brick (r=-0.552, p=0.014) and fewer errors (r=-0.451, p=0.053).
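The training-pattern findings above rest on Pearson correlations between a per-step time measure and a performance measure. A minimal sketch of that computation follows; the data are invented for illustration and are not taken from the study.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: seconds spent per training step vs. assembly errors;
# a strong positive r would mirror the "very slow steps, worse outcome"
# pattern reported for the Model group.
step_time = [5, 7, 9, 12, 15, 20]
errors = [1, 1, 2, 2, 3, 4]
r = pearson_r(step_time, errors)
```

The study also reports p-values; obtaining those requires a t-test on r with n-2 degrees of freedom, omitted here for brevity.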

Procedural training accompanied by additional information becomes more complex, and trainees may therefore choose suboptimal training strategies that impair their potential performance. Special attention should be paid to this risk to ensure that enriched information is successfully used to accelerate learning.

Acknowledgment: This research was supported in part by the European Commission

Integrated Project IP-SKILLS-35005.


Industrial Engineering and Management | 23

Generating and Evaluating Simulation Scenarios

to Improve Emergency Department Operations

Maya Kaner1, Tamar Gadrich1, Shuki Dror1, Yariv Marmor2

1Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901854, Fax: 972-4-9901852, E-mails: [email protected],

[email protected], [email protected]

2Division of Health Care Policy and Research, Mayo Clinic, 200 First Street SW Rochester, MN

55905, Minnesota, USA, E-mail: [email protected]

Keywords: Simulation, emergency department, design of experiments, scenarios

The literature describes different problems in Emergency Department (ED) operations. To

resolve some of these problems researchers analyze various operational scenarios through

discrete-event simulation. However, defining possible scenarios is usually not schematic and

depends on the designer's intuition.

This paper suggests a framework for schematic generation and evaluation of simulation

scenarios to improve ED processes in real-life environments.

First, we assemble a set of main generic components required for designing simulation

experiments: factors, performance measures and interactions. Second, we develop a

questionnaire for gathering information regarding the relevant factors and their possible

levels and interactions in a specific ED. Using the answers to the questionnaire, we build a

scenario tree whose branches represent possible scenarios. Then we use discrete-event

simulation to simulate the scenarios for the ED and analyze the results, using a Generalized

Linear Model (GLM) for nested (simulation) experiments and Scheffé's post hoc test to

group different scenarios.
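The scenario-generation step can be sketched as a Cartesian product over the factor levels gathered from the questionnaire, each combination being one branch of the scenario tree. The factor names and levels below are hypothetical placeholders, not the ones used in the study.

```python
from itertools import product

def generate_scenarios(factors):
    """Enumerate every combination of factor levels as one scenario.

    `factors` maps a factor name to its list of candidate levels,
    as would be collected from the ED questionnaire.
    """
    names = list(factors)
    return [dict(zip(names, levels)) for levels in product(*factors.values())]

# Illustrative factors and levels (invented for demonstration only)
factors = {
    "fast_track": ["closed", "open"],
    "nurses": [4, 5],
    "non_critical_share": [0.60, 0.70],
}
scenarios = generate_scenarios(factors)  # 2 * 2 * 2 = 8 scenario branches
```

Each resulting dictionary would then parameterize one discrete-event simulation run before the GLM analysis groups the outcomes.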

We illustrate the application of our methodology in a specific ED operating without a Fast

Track (FT). The simulation scenarios were schematically generated based on the answers of this ED's management. Operational alternatives (e.g., opening an FT) as well as uncontrollable changes (e.g., a possible increase in the percentage of non-critical patients) were tested. The average patient length of stay (ALOS) for the defined scenarios varied from 206 to 337 minutes. Among the results, we found that an FT should not be opened under the given conditions; that a possible increase in the percentage of non-critical patients decreases the ALOS by at least 40 minutes; and that adding a nurse slightly increases (by about 3%) the percentage of patients waiting for a first examination by a physician up to the given threshold.

Our methodology can support ED management in improving ED operations through enabling

them to analyze possible scenarios in a simulation environment. We contribute to the area of

ED computer simulation by suggesting a methodology that offers several advantages.

1) Simulation scenarios can be schematically formulated rather than based on trial-and-error

experiments.

2) Heterogeneous dependent and independent ED factors can be handled to avoid the risk of

missing important factors that should be taken into consideration and to analyze complex

interrelationships among these factors.

3) Scenario development can be integrated into the different stages of simulation model development, supporting designers and management in understanding ED problems, improvement goals, the data that should be collected, and the operational changes that should be applied.


24 | Industrial Engineering and Management

Managing Operations in a Smart Grid Environment

Yevgenia Mikhaylidi1, Liron Yedidsion1, Hussein Naseraldin2

1Faculty of Industrial Engineering and Management, Technion-Israel Institute of Technology, Technion City, Haifa, Israel, E-mails: [email protected], [email protected]

2Department of Industrial Engineering and Management, ORT Braude College, P.O.B. 78, Karmiel

21982, Israel, Tel: 972 -4- 9901977, Fax: 972-4- 9901852, E-mail: [email protected]

Keywords: Smart grid, operations management, approximation algorithms, electricity

storage

Technological development has led to a new form of electricity network, the Smart Grid. The basic notion behind the Smart Grid concept is to improve the overall efficiency of electricity production, delivery, and consumption, while increasing the reliability and security of the electrical grid. Variations in the electricity consumption rate have led to the development of time-varying electricity pricing schemes. It is plausible to assume that consumers (individuals and businesses alike) will adapt to the new pricing schemes and thus postpone the use of some electronic devices until off-peak periods, thereby reducing costs.

We consider a finite planning horizon for electric consuming operations that need to be

completed and are available for processing at predetermined periods throughout the planning

horizon. We assume a capacity constraint on the total power consumed in each period due to

infrastructure or provider limitations. Postponing an operation incurs a cumulative penalty for

each time period. Each operation is unique and has its own workload and concave electricity

consumption function. Preemption of operations is allowed; however, each preemption is treated as a new operation and may incur an additional setup cost.

The aim of this research is to determine when to process each operation within the time

horizon so as to minimize the total electricity consumption and operations postponement

penalty costs, given an exogenous electricity cost for each time period.

We term this as the electricity consumption plan (ECP) problem, in which one determines the

completion of multiple independent operations that share a single constrained capacity

resource throughout a finite time horizon. We show the resemblance between the ECP

problem and the capacitated lot sizing problem. The ECP problem could be regarded as lot

sizing on a single manufacturing facility. We expand the classic lot sizing problem variations

by adding the state- and time-dependent setup cost, which is incurred for a period that starts a

sequence of consecutive periods with non-zero electricity consumption. The setup cost is

different in each period (time dependent) but whether or not it is charged depends on the

previous state of electricity consumption (state dependent). That is, the setup cost for a given

operation is incurred only if that operation was not processed in the previous period.
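A minimal sketch of the cost structure just described: given per-period consumption, exogenous prices, and time-dependent setup costs, a setup is charged only in a period that starts a run of consecutive non-zero-consumption periods (the state-dependent part). All numbers below are invented for illustration.

```python
def plan_cost(consumption, price, setup):
    """Total cost of an electricity consumption plan.

    consumption[t]: power drawn in period t
    price[t]:       exogenous electricity cost in period t
    setup[t]:       setup cost, charged only if period t starts a run of
                    consecutive periods with non-zero consumption
                    (state-dependent: the previous period consumed nothing)
    """
    total = 0.0
    prev = 0.0
    for t, q in enumerate(consumption):
        total += q * price[t]
        if q > 0 and prev == 0:  # a new run of consumption starts here
            total += setup[t]
        prev = q
    return total

# Hypothetical 5-period plan: two runs of consumption, hence two setups
cost = plan_cost(consumption=[3, 2, 0, 0, 4],
                 price=[1.0, 2.0, 5.0, 5.0, 1.5],
                 setup=[10, 10, 10, 10, 8])
# energy 3*1 + 2*2 + 4*1.5 = 13, setups 10 + 8 = 18, total 31.0
```

Optimizing over such plans, rather than merely evaluating them, is the ECP problem itself and is where the lot-sizing machinery enters.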

We consider several special cases of the ECP problem. We show that the single-operation problem with uniform capacity is solvable in polynomial time, whereas the single-operation problem with general capacity and the multi-operation problem with uniform capacity are both NP-hard. Therefore, the more general multi-operation problem with general capacity is obviously NP-hard as well.


Industrial Engineering and Management | 25

Robust Facility Location under Service Constraints

Hussein Naseraldin1, Opher Baron2

1Department of Industrial Engineering and Management, ORT Braude College, P.O.B. 78, Karmiel

21982, Israel, Tel. 972-4-9901917, Fax: 972-4-9901852, E-mail: [email protected]

2Rotman School of Management, University of Toronto, 105 St. George Street, Toronto, Ontario, M5S

3E6, Tel. 1-416-978-4164

Keywords: Facility location, robust optimization, healthcare, service constraints, Lambert

function

Facility location problems are often strategic in nature and are thus very important. Location problems involve long-term decisions, such as the number of facilities to open and the location and capacity of each facility. Furthermore, in the healthcare sector, the design of the facilities must take service considerations into account. These decisions are made at the beginning of the horizon, and their values constrain the performance of the system in each of the subsequent periods. It is therefore desirable to approach facility location problems in a two-stage manner: design, followed by evaluation of the resulting performance and service. A

major complicating aspect of a facility location problem is data uncertainty. For example, a

healthcare clinic may be designed to serve a specific customer arrival rate; but it is plausible

that the arrival rate would change over time. If the system is not designed to cope with such

uncertainty, public health might be at risk. Hence, locating facilities in the healthcare sector

requires robust solutions under all realizations of the uncertain demand.

The aim of this study is to plan the capacity of a healthcare facility in a robust manner such

that the service level requirements are maintained at a minimal cost.

We adopt the Robust Optimization approach, in which one searches for a feasible solution that is at least as good as all other feasible solutions for most data realizations. We search for the optimal service rate that minimizes the total costs, which include the fixed cost of establishing a facility, the capacity cost and the service-level cost; the latter is defined as the possible cost of not meeting the required service level. The robust optimization methodology allows one either to search for a worst-case solution, one that is immune to all realizations of the uncertain parameters, or for a globalized robust solution, which takes into account realizations that rarely happen but have a huge impact when they do.
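Under the worst-case reading of robust optimization, the capacity decision can be sketched as minimizing, over candidate service rates, the maximum total cost across demand-rate realizations. The cost terms below (a fixed cost, a linear capacity cost, and an M/M/1-style congestion term standing in for the service-level cost) are illustrative assumptions, not the authors' model.

```python
def total_cost(mu, lam, fixed=100.0, cap_cost=2.0, service_penalty=50.0):
    """Illustrative cost: fixed + capacity + congestion-based service cost.

    Uses the M/M/1 mean wait lam / (mu * (mu - lam)) as a stand-in for
    the cost of missing the service-level target (hypothetical form).
    """
    if mu <= lam:
        return float("inf")  # unstable queue: service level cannot be met
    wait = lam / (mu * (mu - lam))
    return fixed + cap_cost * mu + service_penalty * wait

def robust_capacity(candidates, realizations):
    """Pick the service rate that minimizes the worst-case total cost."""
    return min(candidates,
               key=lambda mu: max(total_cost(mu, lam) for lam in realizations))

realizations = [8.0, 10.0, 12.0]             # uncertain arrival rates
candidates = [x / 2 for x in range(21, 61)]  # service rates 10.5 .. 30.0
mu_star = robust_capacity(candidates, realizations)
# the binding realization here is the largest arrival rate, lam = 12.0
```

A globalized robust variant would instead weight rare, high-impact realizations rather than taking a plain maximum; the analytical treatment in the study uses the Lambert function rather than a grid search.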

We propose analytical results regarding the relation between the optimal capacity when the uncertainty in the demand rate is ignored (the nominal case) and the optimal robust capacity in two sub-cases: the worst-case robust solution and the globalized robust solution.

It is imperative to incorporate uncertainty effects into the decision making process, especially

when it pertains to healthcare planning. Moreover, adopting a robust optimization approach

has positive impacts on the system performance.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.


26 | Industrial Engineering and Management

Flexible Work Arrangements, National Culture and Organizational

Absenteeism and Turnover: A Longitudinal Study across Twenty-One Countries

Hilla Peretz1, Yitzhak Fried2, Ariel Levi3

1Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901849, Fax: 972-4-9901852, E-mail: [email protected]

2Whitman School of Business, Syracuse University, 721 University Avenue, Syracuse, NY 13244-2450

U.S.A, Tel: 1-315- 443-3639, Fax: 1-315- 442-1449, E-mail: [email protected]

3School of Business Administration, Wayne State University, Detroit, Michigan 48202, U.S.A, Tel: 1-313-577-4581, Fax: 1-313-577-4525, E-mail: [email protected]

Keywords: Flexible work arrangements, national values, human resource

In an attempt to attract and retain talented employees and enhance competitive advantage,

employers world-wide have increasingly implemented flexible work arrangements (FWAs)

for both the benefits of the organization and its employees. Most commonly, these FWAs

have included job sharing (in which a job is divided between two or more employees), flextime (in which some working hours are determined by the employees), home-based work (in which employees' normal workplace is at home), teleworking (in which employees can link electronically to the workplace), and job compression (in which employees' standard number of hours is compressed into a reduced number of days). The literature provides

support for the success of FWA programs, by showing their positive effects on psychological

and behavioral outcomes such as burnout, retention, and organizational performance

indicators.

However, a major limitation of previous studies is that they were conducted in organizations

based in the US, neglecting to take into consideration the fact that FWAs have gained

popularity in other countries as part of the global competitive work environment. To date

there is little research on the prevalence and effects of FWAs in different countries in the

global environment.

Therefore, in the present study we will aim to close the gap on this issue by examining the

following: 1) the degree to which organizations located in countries with different national

cultures are likely to implement FWAs, and 2) how congruence or lack of congruence between national cultures and FWAs affects the organizational performance variables of

absenteeism and turnover. We analyze data from two different time periods: 2004 and 2009,

following the global economic crisis of 2007-2008. This enables us to test the stability of the

results with data collected prior to and shortly after the economic downturn.


Industrial Engineering and Management | 27

Student Loans - Stated versus Perceived Attitude

David Pundak1, Arie Maharshak2

1Web-Learning Unit, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901843,

Fax: 972-4-9901886, E-mail: [email protected]

2Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-72-2463666, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Student loans, perceived attitude, value curve, Norm theory

Higher education has turned out to be an essential tool for those who wish to cope successfully in the global market. Research findings show that the acquisition of higher education engenders income gaps, which have become far more meaningful in developing countries due to the difficulty of funding education costs. Increasingly high academic fees have become a prevalent phenomenon in many countries, and students are more likely to work during their studies and to apply more frequently for loans.

In Israel, engineering students endure a heavy study load during their four-year course, and many find it difficult to comply with these demands. Despite the heavy academic burden, many students choose to work during their studies, thus increasing their overall burden. This research focuses on Israeli engineering students and explores their attitudes regarding the dilemma of financing their education by loans or by work. Since many funds offer convenient loans to students during their studies, the following two questions become pertinent.

This study asks:

1) Is the decision to work a matter of social norm rather than necessity?

2) Is there a fear of taking loans and creating a worrying debt?

An attitude questionnaire comprising 35 statements was answered by a sample of 170 students. The participants were asked to indicate their attitudes by rating the statements on a 1-5 Likert scale.

The research results indicate several prevalent attitudes amongst the students:

1) Working during studies harms academic achievements.

2) Working during studies does not constitute an essential part of student life.

3) Taking a loan is a responsible act.

Simultaneously, this study reveals that, despite the benefits of loans, most engineering students in Israel look for work during their studies and only a small minority take loans.

According to the 'Value Curve' theory, there is a tendency to ascribe a larger risk to small loans than to the chance of earning a similar sum. Taking a loan is perceived by students as a large risk in comparison to their anticipated income upon graduation. 'Norm Theory' may offer additional insight into the apparent contradiction between the research findings concerning students' stated attitudes and their actual behavior. This theory argues that if choosing to act is considered the norm, then inaction is considered a fault that may reflect a passive image. A student will decide to work during the period of studies if this seems to be the accepted norm in his or her social circle.


28 | Industrial Engineering and Management

State-Dependent Priorities for Service Systems with

Exchangeable Items

Rachel Ravid1, David Perry2, Onno J. Boxma3,4

1Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901849, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Statistics, University of Haifa, Mount Carmel, Haifa, 31905, Israel, Tel: 972-4-

8249153, Fax: 972-4-253849, E-mail: [email protected]

3Department of Mathematics and Computer Science, Eindhoven University of Technology, P.O. Box

5600 MB Eindhoven, The Netherlands, Tel: +31 (40) 247 2858, E-mail: [email protected]

4EURANDOM, P.O. Box 513, 5600 MB Eindhoven, The Netherlands, Tel: +31 40 2478100, Fax: +31 40 2478190, E-mail: [email protected]

Keywords: Exchangeable items, longest queue, Markov renewal theory

Exchangeable-item repair systems have received considerable attention in the literature because repairable-type items are often essential and expensive. Many organizations make extensive use of multi-echelon repairable-item systems to support advanced computer systems and sophisticated medical equipment. The main focus of the studies to date has been the number of backorders (mainly its expectation), which is the customer queue size in the terminology used here. Another important performance measure is the customer's sojourn time distribution. We consider a repair facility that consists of one server and two arrival streams of failed items, from bases 1 and 2. The arrival processes are independent Poisson processes with different rates. The service times are independent, exponential random variables with equal rates. The items are exchangeable: a failed item from base 1 could just as well be returned to base 2, and vice versa. The items are admitted to one line, but the customers wait and are marked according to their sources. We assume that the system is in steady state.
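Because the two independent Poisson streams superpose into a single Poisson process, the total number of failed items in the facility behaves as an ordinary M/M/1 queue, which a short simulation can check against the geometric steady state. The rates below are illustrative; the marked, per-base joint distribution that the abstract derives analytically is not reproduced here.

```python
import random

def simulate_total_queue(lam1, lam2, mu, horizon=200_000, seed=1):
    """Time-average distribution of the total number of failed items.

    The two arrival streams merge into one Poisson stream of rate
    lam1 + lam2, so the total count is an M/M/1 birth-death process
    with service rate mu.
    """
    random.seed(seed)
    lam = lam1 + lam2
    t, n = 0.0, 0
    time_in_state = {}
    while t < horizon:
        rate = lam + (mu if n > 0 else 0.0)  # total event rate in state n
        dt = random.expovariate(rate)
        time_in_state[n] = time_in_state.get(n, 0.0) + dt
        t += dt
        # next event is an arrival w.p. lam/rate, else a service completion
        n += 1 if random.random() < lam / rate else -1
    return {k: v / t for k, v in time_in_state.items()}

dist = simulate_total_queue(lam1=0.3, lam2=0.4, mu=1.0)
# For rho = 0.7 the M/M/1 steady state is P(n) = (1 - rho) * rho**n,
# so dist[0] should be near 0.30 and dist[1] near 0.21.
```

Tracking which base each waiting customer came from, and hence the joint (marked) queue lengths and sojourn times, is exactly the part that requires the generating-function and Markov renewal analysis described below.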

We are interested in key performance measures of this repair facility, like the joint queue

length distribution of failed items of both types, and customer's sojourn time distribution.

First we derive the balance equations for the joint steady-state queue length distribution. We then study their generating function in order to derive some special probabilistic results. Differences of busy periods and Markov renewal theory are used to obtain recursive relations between the equality states.

The steady-state distribution of the equality states serves as the boundary condition for the balance equations. To derive the Laplace transform of the customer's sojourn time, we build recursive partial difference equations and solve them using the generating-function method.

We provide explicit results for the steady-state probabilities of the equality states, and we also give an iterative method for obtaining all queue length probabilities. Finally, the Laplace transform of the customer's sojourn time is determined.


Industrial Engineering and Management | 29

Multi-criteria Optimization-based Dynamic Scheduling

for Controlling FMS

Boris Shnits

Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901926, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: FMS control, dynamic scheduling, multi-criteria decision making

This study deals with controlling Flexible Manufacturing Systems (FMS) operating in volatile production environments. To cope with such environments, most studies recommend some sort of adaptive scheduling, which, on the whole, enables the system to better cope with randomness and variability. This type of scheduling is usually based on simple dispatching rules. However, dispatching rules have disadvantages: their use can result in myopic decisions based on limited information, and in non-delay scheduling, in which no machine is kept idle when it could begin processing some operation.

delay scheduling no machine is kept idle when it could begin processing some operation.

Such an approach does not guarantee the best scheduling decisions. In addition, the shop

floor control systems presented in most studies are only partially adaptable to changes in the

production environment because they do not consider dynamic changing of the operational

decision criteria in accordance with changes in the system state.

The aim of the present study is to develop a multi-criteria optimization-based dynamic scheduling methodology for controlling FMS that combines dynamic selection of the appropriate decision criterion with the solution, subject to this criterion, of an optimization-based local scheduling problem.

The suggested scheduling and control scheme comprises a two-tier decision-making

hierarchy. Tier 1, driven by a rule-based algorithm, is used to determine a dominant decision

criterion based on the production order requirements, actual shop floor status and

manufacturing system priorities. On the basis of the designated decision criterion, the

objective function for the local scheduling problem is chosen. Thereafter, Tier 2 is used to

determine the best schedule for the current system state by solving a mixed integer linear

programming (MILP) optimization model. This model takes into account jobs existing on the

shop floor at the time that the decision is made as well as the new jobs that are expected to

arrive at the shop during the nearest scheduling period. The objective function for this

scheduling problem is that determined in Tier 1. The solved model defines the next jobs to be

processed on the available machines.
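Tier 1 might be sketched as a small rule base mapping shop-floor status to the dominant decision criterion; the thresholds, status measures and criterion names below are invented for illustration, and the Tier-2 MILP is not shown.

```python
def select_criterion(tardy_share, utilization, wip):
    """Tier-1 rule base: pick the dominant scheduling criterion from the
    current shop-floor status (all thresholds are hypothetical).

    tardy_share: fraction of orders currently late
    utilization: average machine utilization
    wip:         number of jobs on the shop floor
    """
    if tardy_share > 0.2:       # many late orders: chase due dates
        return "minimize average tardiness"
    if utilization < 0.6:       # machines starving: push throughput
        return "maximize machine utilization"
    if wip > 50:                # congested floor: drain work in process
        return "minimize average flow time"
    return "minimize makespan"  # default criterion

criterion = select_criterion(tardy_share=0.3, utilization=0.9, wip=10)
```

The returned criterion would then become the objective function handed to the Tier-2 MILP for the next local scheduling period.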

The performance of the proposed methodology was evaluated by comparing it to some

known scheduling rules/policies using the average flow time and average tardiness

performance measures. The results obtained for the proposed control methodology form an

efficiency frontier, i.e., the proposed methodology outperforms the other tested scheduling

rules/policies.

A multi-criteria optimization-based dynamic scheduling methodology for controlling FMS

was suggested and evaluated. The results obtained demonstrate the superiority of the

proposed methodology.


30 | Industrial Engineering and Management

The Main Factors that Lead to Unidentified Risks in Software-Intensive Projects

Meir Tahan1, Tsvi Kuflik2, Efrat Yuval3

1Department of Industrial Engineering and Management, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-54-2873197, Fax: 972-4-9901852, E-mail: [email protected]

2Information Systems Department, University of Haifa, Mount Carmel, Haifa 31905, Israel, Tel: 972-

4-8288511, Fax: 972-4-8288283, E-mail: [email protected]

3Information Systems Department, University of Haifa, Mount Carmel, Haifa 31905, Israel, Tel: 972-

4-8288511, Fax: 972-4-8288283, E-mail: [email protected]

Keywords: Risk management, risk identification, software projects

In recent years, there has been a growing need for risk management in organizations. Related software, which registers, assesses, monitors and controls risks, has been purchased or developed. During the course of a project, problems pop up. Some of them were identified as risks and handled according to the Risk Management Plan defined at the beginning of the project. Others do not appear as risks, and the question is: what caused the risk to go unidentified? Was it a management problem? A design problem (the Known-Known or Known-Unknown category)? Or was it a risk that could not be predicted in any way, a "force majeure" (the Unknown-Unknown category)? Unidentified, and therefore unmanaged, risks are clearly unchecked threats to a project's objectives, which may lead to significant overruns. The identification process influences the effectiveness of risk management.

The aim of this research is to reveal the main factors that lead to unidentified risks and to

offer optional directions for preventing such failures in the future.

A semi-structured interview was built. Four project managers from four companies ("RAFAEL", "SAP", "Elop" and "Seraphim Optronics Ltd") were interviewed regarding the risk identification process conducted in their organizations. They were asked to describe the tools and techniques they used to identify risks. Each project manager shared an example of a problem that arose during a project and was not identified as a risk, describing the problem, how it was handled, and what caused the risk to go unidentified. All seven projects ranged between the years 1998 and 2010 (ongoing research).

Eleven factors that caused unidentified risks were found. They were divided into three groups: managerial, behavioral and external factors. Several projects contained more than one factor.


Mathematics | 31

Self-Learning of a Theorem and its Proof

Buma Abramovitz1, Miryam Berezina2, Abraham Berman3, Ludmila Shvartsman4

1Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-

9901806, Fax: 972-4-9901802, E-mail: [email protected]

2Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-

9901829, Fax: 972-4-9901802, E-mail: [email protected]

3Department of Mathematics, Technion-Israel Institute of Technology, Technion City, Haifa 32000,

Israel, Tel: 972-4- 8294101, Fax: 972-4- 8293388, E-mail: [email protected]

4Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901829, Fax: 972-4-9901802, E-mail: [email protected]

Keywords: E-learning, self-learning, understanding of a theorem

Nowadays e-learning has become an important part of the educational process, and a number of universities around the world offer online courses. An important benefit of e-learning is that it encourages students to learn independently. Self-learning is a significant part of the learning process at university, but it is not a simple task for first-year students, particularly future engineers studying Mathematics. Self-learning is an especially difficult mode of learning for students with a poor mathematics background. Many students come to higher education from schools that emphasize the computational side of Mathematics and neglect its theoretical side. Many students also take a break, long or short, before entering university, and this hiatus may make it that much harder to learn Mathematics at a high level.

We know that students need strong theoretical knowledge in Mathematics in order to

creatively solve practical nonstandard problems. Many students, however, have difficulties in

learning theory, and as a result, are frightened of it. Possible reasons may be the abstractness

of mathematical concepts, and the special language of a theorem or definition. Students

without a profound comprehension of the theory have difficulties in solving problems that

they have not yet seen. We wanted students to become active in their learning and, if

possible, to "discover" concepts and theorems, and to be given the feedback they need as they

work on their own.

In order to overcome students' problems in learning theory, particularly theorems, we

developed an approach for self-learning a theorem. Usually, a theorem and its proof are

presented to students during the lecture. Students are shown how to apply the theorem to

solving problems. At home, they are expected to study the theorem and its proof and solve

more problems. The question students often ask us is how are they supposed to study a

theorem? We provide students with a set of web based assignments (convenient for self-

learning) that are written in a way that does not appear unusual to students, yet is intended to

teach them how to understand theoretical problems. The given assignments focus on the

following questions: what are the assumptions of a theorem and what are the conclusions?

What happens when one or more of the theorem's assumptions are not fulfilled? Which

assumptions are necessary and which are sufficient? We use theorems from Calculus to show

students how to comprehend a theorem as a set of conditions that are needed to reach the

theorem's conclusions. Normally, we present the proof of a theorem written as a chain of

logical steps in the lecture, following which students receive different types of online

assignments to work on the proof. We have applied our approach on several occasions in the Calculus course, with encouraging results.
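A hypothetical assignment item in the spirit described, built around Rolle's theorem (the particular theorem is our illustration, not necessarily one used in the course):

```latex
\textbf{Theorem (Rolle).} If $f$ is continuous on $[a,b]$, differentiable on $(a,b)$,
and $f(a) = f(b)$, then there exists $c \in (a,b)$ with $f'(c) = 0$.

\textbf{Assignment.} (1) State the assumptions and the conclusion separately.
(2) Does the conclusion survive if $f(a) \neq f(b)$? Test $f(x) = x$ on $[0,1]$.
(3) Does it survive without differentiability? Test $f(x) = |x|$ on $[-1,1]$.
```

Each counterexample makes one assumption fail while the others hold, which is exactly the "what happens when an assumption is not fulfilled" question posed above.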


On Linear-Fractional Relations and Images of Angular Operators

Tomas Y. Azizov1, Victor A. Khatskevich2, Valerii A. Senderov3

1Department of Mathematics, Voronezh State University, Universitetskaya Pl. 1, Voronezh, 394000,

Russia, E-mail: [email protected]

2Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Fax: 972-4-

9901801, E-mail: [email protected]; [email protected]

3Pyatnitskoe highway, 23-2-156, Moscow 125430, Russia, E-mail: [email protected]

Keywords: Plus-operator, Banach indefinite space, linear fractional relation, operator ball,

chain rule

For plus-operators in a Banach indefinite space, we consider a linear fractional relation

defined on a subset of the closed unit operator ball. The classes of operators with the empty

domain of definition for such a relation are described. Sufficient (and, in a certain sense, necessary) conditions for the chain rule to be valid are given.

In particular, we consider the special case of linear-fractional transformations.

In the case of Hilbert spaces $H_1$ and $H_2$, each linear-fractional transformation of the closed unit ball $\mathcal{K}$ of the space $L(H_1, H_2)$ is of the form $F_T(K) = (T_{21} + T_{22}K)(T_{11} + T_{12}K)^{-1}$ and is generated by the plus-operator $T$.
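Writing the plus-operator $T$ as a block operator matrix (a standard convention in this setting; notation ours), the transformation takes the form:

```latex
T = \begin{pmatrix} T_{11} & T_{12} \\ T_{21} & T_{22} \end{pmatrix}
 : H_1 \oplus H_2 \to H_1 \oplus H_2 ,
\qquad
F_T(K) = (T_{21} + T_{22}K)(T_{11} + T_{12}K)^{-1},
```

defined for those $K \in \mathcal{K}$ for which $T_{11} + T_{12}K$ is boundedly invertible; the empty domain of definition mentioned above is the case where no such $K$ exists.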

We consider application of our results to the well-known Krein-Phillips problem of invariant

subspaces of special type for sets of plus-operators acting in Krein spaces.

Acknowledgement: The work of T. Azizov was supported by the Russian Foundation for

Basic Research.


Extension Operators via Semigroups

Mark Elin

Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-

9901974, Fax: 972-4-9901802, E-mail: [email protected]

Keywords: Starlike and spirallike mappings, semigroups

The Roper-Suffridge extension operator and its modifications are powerful tools to construct

biholomorphic mappings with special geometric properties.
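For orientation, the classical Roper-Suffridge operator on the Euclidean ball $B^n \subset \mathbb{C}^n$ extends a normalized univalent function $f$ on the unit disk by (standard form; the branch of the root is fixed by $\sqrt{f'(0)} = 1$):

```latex
[\Phi(f)](z) = \Bigl( f(z_1),\; \hat{z}\,\sqrt{f'(z_1)} \Bigr),
\qquad z = (z_1, \hat{z}) \in B^n .
```

Its modifications replace the square root by other factors acting on $\hat{z}$.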

The first purpose of this work is to analyze common properties of different extension

operators and to define an extension operator for biholomorphic mappings on the open unit

ball of an arbitrary complex Banach space. The second purpose is to study extension

operators for starlike, spirallike and convex in one direction mappings.

We study the problems above and develop a general approach to extension operators, which

enables us to obtain new properties even for known operators. In particular, we show that the extension of each spirallike mapping is again spirallike with respect to a variety of linear operators.

Our approach is based on a connection of special classes of biholomorphic mappings defined

on the open unit ball of a complex Banach space with semigroups acting on this ball.


Robust Tracking Problem:

Linear-Quadratic Differential Game Approach

Valery Glizer1, Vladimir Turetsky2

1Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-

9901828, Fax: 972-4-9901802, E-mail: [email protected]

2Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-72-

2463670, Fax: 972-4-9901802, E-mail: [email protected]

Keywords: Trajectory tracking, robust control, linear-quadratic differential game, cheap

control

The problem of tracking a given trajectory under uncertainties (trajectory planning, path

following etc.) is an important practical control problem in different applications. For

example, in guidance, this problem admits several formulations:

1) intercept the missile;

2) intercept it with zero relative velocity (rendezvous);

3) reach some prescribed points during the engagement;

4) track a prescribed relative separation profile, etc.

All these problems can be formulated in terms of some quadratic cost functional.

In this presentation, a general tracking problem is considered. In this problem, a tracking

criterion is chosen as a Lebesgue-Stieltjes integral G of squared discrepancy between the

system motion and a given vector function (tracked trajectory), calculated over the mixed

discrete-continuous measure. The problem is solved by using an auxiliary zero-sum linear-

quadratic differential game, where the state term of the cost functional is represented by G.

Both the minimizer's and the maximizer's controls in this game are cheap, meaning that

penalty coefficients for their control expenditure are small. A novel cheap-control solvability condition is established. It is shown that, subject to some additional conditions, the optimal

cheap control strategy also solves the original tracking problem.
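A possible form of such a criterion (our notation, not necessarily the authors'): with $x(t)$ the system motion and $x_d(t)$ the tracked trajectory,

```latex
G = \int_{[t_0, t_f]} \bigl\| x(t) - x_d(t) \bigr\|^2 \, d\mu(t),
\qquad
\mu = \mu_c + \sum_{i} \alpha_i \, \delta_{t_i},
```

where $\mu_c$ is a continuous measure and the point masses $\delta_{t_i}$ penalize the discrepancy at prescribed instants; the guidance formulations 1)-4) above correspond to different choices of $\mu$.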

The boundedness of the minimizer's control is also analyzed. Necessary conditions for the control boundedness in $C[t_0, t_f]$ and sufficient conditions for the control boundedness in $L_2^r[t_0, t_f]$ are established.


On the Euler-Poisson System

Lavi Karp

Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-

9901844, Fax: 972-4-9901802, E-mail: [email protected]

Keywords: Density, equation of state, hyperbolic systems

The motion of gaseous stars can be described by the Euler-Poisson system:

$$\partial_t \rho + \nabla\cdot(\rho u) = 0$$

$$\partial_t(\rho u) + \nabla\cdot(\rho u \otimes u) + \nabla P = -\rho\nabla\Phi$$

$$\Delta\Phi = 4\pi\rho$$

where $\rho$ is the density, $u$ is a velocity vector in $\mathbb{R}^3$, $P$ is the pressure and $\Phi$ is the Newtonian potential. The unknowns of the system, $(\rho, u, \Phi)$, are functions of $x \in \mathbb{R}^3$ and $t \ge 0$. The system is closed by an equation of state, and here we consider the polytropic equation of state $P = K\rho^{\gamma}$, where $1 < \gamma$ and $K$ is a positive constant. These types of equations of state serve as a model for a perfect gas, and the values of the adiabatic exponent $\gamma$ correspond to certain physical situations.

Our main interest is existence theorems for the initial value problem which include stationary

solutions. The common way to obtain them is by means of hyperbolic systems. However, in the astrophysical context we cannot assume that the density ρ is uniformly strictly positive, and

the vanishing of ρ causes serious difficulties in the transformation of the Euler-Poisson

equations into uniformly hyperbolic systems.
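One known device for this difficulty in the polytropic case, recalled here for orientation (the normalization shown is a common one), is Makino's change of variable:

```latex
w = \frac{2\sqrt{K\gamma}}{\gamma - 1}\,\rho^{(\gamma-1)/2},
```

which recasts the continuity and momentum equations as a symmetric hyperbolic system whose coefficients remain regular as $\rho \to 0$, at the price of restricting the class of admissible initial data.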

The talk will discuss several mathematical and physical aspects of this system. This is a work

in progress jointly with U. Brauer.


Controlling the Migration of POSS Nano-Particles in Polypropylene

by Computational Modeling

David Alperstein1, Menachem Lewin2

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901944, Fax: 972-4-9902088, E-mail: [email protected]

2Polymer Research Institute, Polytechnic Institute of NYU, 6 Metrotech Center, Brooklyn, NY 11201

Keywords: Nano-composites, Octaisobutyl POSS, Octamethyl POSS, computational

modeling, solubility parameter

The migration of Octaisobutyl POSS (Oib-POSS) and Octamethyl POSS (Om-POSS) is

compared experimentally, as well as by using computational modeling tools. The migration

of POSS particles in nanocomposites is governed by the following: 1) the Gibbs adsorption

isotherm; 2) entropy considerations; 3) the cohesive energy between the matrix chains and

the additives particles and 4) the kinetic energy imparted to the additives particles by the

relaxation of the chains. The results described in the present paper provide a plausible

explanation for the difference in the migration behavior between Oib-POSS which migrates

freely and Om-POSS which, although of much smaller molecular size, does not migrate at

all. Solubility parameter calculations of the pristine constituents and their blend enable judgment of whether the Gibbs excess concentration is positive or negative. The pair correlation

function is used to estimate the interaction intensity between the polymeric chains and the

nano-particles. The two systems which were examined in this study represent two extremes:

the non- migrating system – Om-POSS with relatively large carbon-carbon interactions – and

the migrating system Oib-POSS with weak carbon-carbon interactions.
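A pair correlation (radial distribution) estimate of the kind referred to here can be sketched as follows (a generic textbook estimator of our own, not the modeling package actually used; positions and box size are hypothetical inputs):

```python
import math

def pair_correlation(positions, box, dr, r_max):
    """Radial distribution g(r) for particles in a cubic periodic box of side `box`."""
    n = len(positions)
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            # minimum-image distance under periodic boundary conditions
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                delta = b - a
                delta -= box * round(delta / box)
                d2 += delta * delta
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2        # count each pair in both directions
    rho = n / box ** 3                        # mean number density
    g = []
    for k, h in enumerate(hist):
        r_lo = k * dr
        # ideal-gas count in the spherical shell [r_lo, r_lo + dr)
        shell = 4.0 / 3.0 * math.pi * ((r_lo + dr) ** 3 - r_lo ** 3)
        g.append(h / (n * rho * shell))
    return g
```

Values of g(r) above unity at some distance indicate enhanced pairing there, which is how the interaction intensity is read off the PC curves.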

The aim of this research is to explore the characteristics of the POSS migration mechanism in PP, using computational tools and experimental data. The migration of Oib-POSS and Om-POSS is compared using various computational analysis tools.

The interactions of Oib-POSS with the PP chains at all three temperatures are very weak and

probably do not disturb the diffusion in PP. The curve has a shallow wide peak at around 6Å.

The PC curve at 369 K is a little narrower and has a maximum value of more than unity, indicating that, at this temperature, the carbon-carbon interaction is maximal.

On the other hand, the PC results for Om-POSS are different. The curves have pronounced

peaks, they are narrower, and they are all above unity. At 298 K the PC value is more than 1.2. Moreover, the peak distance of 5 Å indicates a much stronger carbon-carbon interaction in Om-POSS.

In this work, the explanation of the reasons for migration or non-migration of the nano-

particles of the nano-composites is elucidated for the first time. Further experimental work is

necessary on nano-composites, based on many polymers and on a large number of nano-

particles. We believe that further work will enable the establishment of general rules by

which migration in nano-composites can be predicted and controlled.


Handling Changes of Performance-Requirements in

Multi Objective Problems

Gideon Avigad1, Erella Eisenstadt2, Oliver Schütze3

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901767, Fax: 972-4-9901868, E-mail: [email protected]

2Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901755, Fax: 972-4-9901868, E-mail: [email protected]

3Computer Science Department, CINVESTAV-IPN, Departamento de Ingeniería Eléctrica, Sección de

Computación, México, D.F. 07300, E-mail: [email protected]

Keywords: Multi objective optimization, evolutionary computation, robustness, family of

designs

A successful product may be attained by scaling the platform in one or more dimensions to

target specific market niches. Yet the more changes are needed in order to meet new market demands, the higher the costs involved, even when the platform is scaled. Minimizing the changes needed within a scaled platform in order to achieve the new reconfiguration has not been attempted before.

In this study, the need for rapid, low cost changes in a design, in response to changes in

performance requirements, within multi-objective problems, was considered. The purpose is

to design a set of solutions so that, once the performance requirements change, the changes needed to adapt the existing product (one member of the set) to the new requirements are minimal, while maintaining the aspiration for optimal performance.

In order to design a robust set, a way to search for it, by way of evolutionary multi objective

optimization is suggested. The fitness, which is the competency of the set, is measured with

respect to optimality of the set (in objective space), through the hyper volume measure and

with respect to the distance (in design space) between the members of the set.
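A minimal sketch of the two ingredients of such a set fitness for a bi-objective minimization problem (all names and the aggregation are our illustration, not the authors' algorithm): the hypervolume dominated by the set's image in objective space, and the spread of the set in design space.

```python
import math

def hypervolume_2d(points, ref):
    """Area dominated by a set of bi-objective minimization points, bounded by ref."""
    pts = sorted(set(points))                # ascending in the first objective
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:                     # point adds a new non-dominated strip
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

def set_spread(designs):
    """Largest pairwise Euclidean distance between set members in design space."""
    return max(math.dist(a, b) for a in designs for b in designs)

def set_fitness(designs, objectives, ref, weight=1.0):
    """Competency of a set: reward hypervolume, penalize design-space spread."""
    return hypervolume_2d(objectives, ref) - weight * set_spread(designs)
```

For instance, the points (1,3), (2,2), (3,1) with reference point (4,4) dominate an area of 6.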

It appears that small distances are gained at the expense of losing optimality within the original objective space (in the Pareto sense). So the decision maker faces another dilemma: whether or not to prefer optimality for current markets over meeting future markets' demands.

In the current algorithm, we resolved this dilemma by introducing the good enough notion to

the problem definition. This means that if a solution is good enough (not necessarily optimal)

it is a satisfying solution, and no further improvements are required.

The suggested approach allows representing a set of optimal solutions that involves a tradeoff between optimality and distance within sets. It has been highlighted that striving for small changes might mean that the performances of the adapted solution are not as good as they would be had the small-distance requirement not been imposed. Such a tradeoff between optimality and robustness to changes of performance requirements, accounted for by small changes in the design, has been suggested here.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee under Grant number 5000.838.1.4-11.


The Pareto Layer for Multi Objective Games and its Search by Set-

Based Evolutionary Multi Objective Optimization

Gideon Avigad1, Erella Eisenstadt2, Miri Weiss Cohen3

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901767, Fax: 972-4-9901868, E-mail: [email protected]

2Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901755, Fax: 972-4-9901868, E-mail: [email protected]

3Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901849, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Multi objective optimization, evolutionary computation, games

Although games on the one hand and Multi Objective Optimization (MOO) on the other have

been extensively studied, Multi Objective Games (MOGs) have scarcely been attended to. The existing studies mainly deal with mathematical formulations of the optimum. However, a definition and search for the representation of the optimal set in the multi objective space have not been attended to. Moreover, hard MOGs (which involve nonlinear, discrete and concave spaces) have not been solved before.

The aim of this study is to define the solution for MOGs and to suggest a generic set based

multi objective evolutionary algorithm to search for it.

Generally a MOG may be described as choosing a strategy for opponent 1 who aims at

minimizing a set of objectives, while encountering opponent 2 who chooses another strategy,

and aims at maximizing the same objectives. Here, a definition for the representation of the

optimal set in the objective space is suggested. It is related to the way players should be

supported while playing MOGs. In the MOO setting, each strategy taken by one opponent

might be encountered by a set of optimal strategies taken by the other opponent. If this set is

optimized, it is possible to represent an opponent strategy by a Pareto optimality set, which

might be chosen by the other opponent. These representations have now to be evolved in

order to find the best strategies for one opponent by considering all optimal strategies that

might be taken by the other opponent.
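For finite strategy sets this representation can be sketched as follows (illustrative code of our own; `payoff` is a hypothetical table of objective vectors that player 2 maximizes):

```python
def pareto_max(points):
    """Indices of the points that are Pareto optimal under maximization."""
    nondominated = []
    for i, p in enumerate(points):
        # p is dominated if some other point is at least as good in every objective
        dominated = any(q != p and all(q[k] >= p[k] for k in range(len(p)))
                        for q in points)
        if not dominated:
            nondominated.append(i)
    return nondominated

def counter_sets(payoff):
    """For each strategy i of player 1, the Pareto set of player 2's responses.

    payoff[i][j] is the objective vector obtained when player 1 plays i and
    player 2 plays j; player 2 tries to maximize every objective simultaneously.
    """
    return {i: pareto_max(row) for i, row in enumerate(payoff)}
```

Evolving over such representations, rather than over single responses, is what yields the layer described next.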

The resulting optimality related front, which is shown to be a layer rather than a clear cut

front, may support players in taking decisions on strategies while involved in MOGs. Two

examples are utilized for demonstrating the applicability of the algorithm. The results show

that artificial intelligence may open the way for solving complicated MOGs and moreover,

highlight a new and exciting research direction.

In this study we have made two main contributions to the state of the art of MOGs. The first

involves the definition of the Pareto Layer (PL), which is a rational representation of optimal strategies for

MOGs. It is rooted in the idea of Pareto in the sense that, for each strategy taken by one

opponent, there might be a set of optimal strategies to encounter it. Moreover, the currently

solved problems are confined to rather simple continuous problems. Thus, the second

contribution is an initial attempt to solve MOGs by using artificial intelligence tools.

This should open the way for solving hard MOGs.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee under Grant number 5000.838.1.4-11.


Novel Crowding Algorithm for Evolutionary-Based Function

Optimization

Gideon Avigad1, Alex Goldvard2, Shaul Salomon3

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901767, Fax: 972-4-9901886, E-mail: [email protected]

2Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901844, Fax: 972-4-9901802, E-mail: [email protected]

3Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Fax: 972-4-9901886, E-mail: [email protected]

Keywords: Evolutionary multi-objective, crowding

Optimization of solutions that are associated with transient responses (each solution is associated with a time-dependent function) is scarcely attended to. Such an optimization might

evade the need to set auxiliary objectives (e.g. minimization of the square error, overshoot,

etc.) and therefore, tradeoff solutions are not overlooked. In order to find tradeoff functions,

their dissimilarities should be highlighted. In evolutionary based search approaches, this

dissimilarity is related to crowding. To date, there are no adequate crowding methods to

distinguish between different functions that outperform each other, at some instance in time.

In order to preserve elite solutions (for the evolutionary search), which are diversified, we

aim at developing an algorithm that selects an elite group of individuals from a larger group,

in a way that the sub-group represents the original one, while preserving its diversity. The

main research aims are to define and quantify this diversity.

Each function is represented here by its samples in time, so that a function is represented by an ordered set (the set of samples). Therefore, in order to distinguish between functions, the dissimilarity of their samples serves for assessing their crowding. Initially, the algorithm splits the larger group into sub-groups with individuals as similar as possible, i.e., individuals whose samples are close by. The division into a specific

number of sub-groups is done by repeatedly splitting one group into two sub-groups, until the

desired number of sub-groups (the size of the elite population) is attained. The group that is

to be split is the one that has the largest span over all sub-groups and samples. Once there are

enough sub-groups, the algorithm chooses one member of each sub-group. Finally, it assigns

each individual with a crowding measure according to the number of members in its sub-

group.
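The splitting scheme can be sketched as follows (a minimal re-implementation from the description above, not the authors' code; functions are given as tuples of time samples):

```python
def widest_span(group):
    """Largest span of the group over all sample coordinates."""
    return max(max(f[j] for f in group) - min(f[j] for f in group)
               for j in range(len(group[0])))

def split_group(group):
    """Split a group at the sample coordinate with the largest span."""
    dims = len(group[0])
    spans = [max(f[j] for f in group) - min(f[j] for f in group)
             for j in range(dims)]
    j = spans.index(max(spans))
    median = sorted(f[j] for f in group)[len(group) // 2]
    low = [f for f in group if f[j] < median]
    high = [f for f in group if f[j] >= median]
    if not low or not high:                  # degenerate split: halve by order
        mid = len(group) // 2
        low, high = group[:mid], group[mid:]
    return low, high

def select_elite(functions, elite_size):
    """One representative per sub-group; crowding measure = sub-group size."""
    groups = [list(functions)]
    while len(groups) < elite_size:
        splittable = [g for g in groups if len(g) > 1]
        if not splittable:
            break
        widest = max(splittable, key=widest_span)   # split the widest group
        groups.remove(widest)
        groups.extend(split_group(widest))
    return [(g[0], len(g)) for g in groups]
```

`select_elite` returns one representative per sub-group together with its crowding measure (the sub-group size), which is the quantity used for selection.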

Preliminary results show that the new method chooses diversified functions from a large group of

functions. It also appears to be very efficient in computations, and is robust with respect to

the number of samples. Apart from the direct utilization of the suggested measure to the

evolution of functions, we envisage its utilization for problems with many objectives. This

would be performed by representing each sample as an objective.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee under Research Grant number 5000.838.1.4-11.


Acoustic Emission as a Tool for Identifying the Drill Position in a Fiber Reinforced Plastic and Aluminum Stack

Uri Ben-Hanan

Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901830, Fax: 972-4-9901886, E-mail: [email protected]

Keywords: Composite material, drilling, acoustic emission, signal processing

Drilling a stack of carbon fiber reinforced plastic (CFRP) material and aluminum or titanium

is a common practice in the aircraft industry and is also gaining an important role in the

automotive industry. Making holes in both materials with the same drill under the same process parameters forces a compromise in the quality of the holes and the drilling performance in either the CFRP layer or the metal part.

The aim of this research is to develop a method for finding the place of the drill in a stack

made of several different layers of materials including CFRP, aluminum and titanium.

There is a good correlation between the force and the AE signals. As it is easier to measure the AE signal than the force signal, and as the material changes appear more clearly in the AE signal, the AE signal was used for identifying the position of the drill in the material. The AE signal

was analyzed with a special algorithm developed to find the different shapes in the signals

corresponding to the time the drill enters the first layer, the time it enters the second layer and

the time it exits the stack.
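As an illustration of the kind of processing involved (a sketch of our own with hypothetical window and threshold values, not the authors' algorithm), layer transitions can be located where a moving RMS envelope of the AE signal crosses a material-dependent level:

```python
def moving_rms(signal, window):
    """RMS envelope of the signal over a sliding window."""
    out = []
    for i in range(len(signal) - window + 1):
        chunk = signal[i:i + window]
        out.append((sum(x * x for x in chunk) / window) ** 0.5)
    return out

def layer_transitions(signal, window, threshold):
    """Sample indices where the RMS envelope crosses the threshold (either way)."""
    env = moving_rms(signal, window)
    return [i for i in range(1, len(env))
            if (env[i - 1] < threshold) != (env[i] < threshold)]
```

Each detected crossing would mark the drill entering or leaving a layer whose AE amplitude differs from its neighbor's.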

Four drilling experiments in a stack made of CFRP and aluminum layers were performed: two with a regular drill and two with a step drill. In one experiment the drilling started in the CFRP layer, and in the second, in the aluminum layer. With the regular drill,

it was found that it is possible to recognize the time when the drill exited the CFRP layer and

entered the aluminum and it is also possible to get an indication when the drill is exiting the

CFRP layer when it was the second one to be drilled. With the step drill there were better

identification capabilities of the changes in the AE signal, because it took a longer time to

change from one layer to the other.

The proposed method seems to be robust, even though the training of the system was not optimal. Using a more sophisticated tool for identifying the changes in the AE signal, as well

as developing a tool for automatically training the system, are the goals of the future work. In

future work, the developed method will be implemented for changing the drilling parameters, and the resulting drill wear and hole properties will be analyzed and compared with the regularly used drilling method.

Acknowledgment: This work was done during the sabbatical stay of the author at Fraunhofer

IWU, Chemnitz, Germany.


Analyzing the Vibration Signature of Dental Bur

Orit Braun Benyamin1, Uri Ben Hanan2, Michael Regev3, Shmuel Miller4, Rinat Simchon4

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901786, Fax: 972-4-9901886, E-mail: [email protected]

2Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901830, Fax: 972-4-9901886, E-mail: [email protected]

3Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-990134, Fax: 972-4-9901886, E-mail: [email protected]

4Department of Electrical and Electronic Engineering, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901905, Fax: 972-4-9580289, E-mail: [email protected]

Keywords: Vibration analysis, dental bur wear

Tooth preparation is affected by a number of factors such as the bur type, handpiece type and

the cooling method. Vibration analysis of dental handpieces is used as a diagnostic tool for identifying worn-out burs. We have found that there is a difference between the airborne acoustic vibration signals generated by a new bur and those generated by a worn-out one.

An on-line application of such a method seems to have diagnostic capabilities, alerting the dentist in real time to developing wear in the bur.

The aim of this study is to develop procedures for detection and identification of bur wear

based on processing airborne acoustic vibration.

An experimental system, consisting of a pneumatic dental handpiece, was instrumented with an accelerometer and two microphones. The dental handpiece was subjected to a constant feed

rate, for a first order simulation of the actual working conditions of a dental bur. The wear of

the dental bur was monitored by measuring the applied grinding force and the recorded

acoustic signals during the tooth processing. An offline measurement of the bur wear-out was

conducted by means of optical microscopy and Scanning Electron Microscopy. The offline

measurements were used as reference measurements in order to estimate the accuracy of the

online methods and their prediction capabilities. The acceleration and acoustic sensors were

sampled and subjected to various signal processing procedures.

A preliminary frequency analysis showed various spectral peaks with no major differences

between the sensor readings. It was found that the first vibration peak in the frequency

spectrum of the handpiece corresponded to its angular velocity. A qualitative analysis was

conducted on the air-turbine handpiece vibrations to obtain and verify our strategy for

identification of bur wear. The experimental results were compared to the measured drilling

force. Previous studies, conducted with the same experimental apparatus, have shown that bur

wear is associated with a sharp increase of the drilling force. The experimental results

indicated that the angular velocity of an air-turbine handpiece shifted with the measured

applied drilling force.
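The link between the first spectral peak and the angular velocity can be sketched as follows (illustrative pure-Python DFT of our own; in practice an FFT over the accelerometer or microphone samples would be used):

```python
import math
import cmath

def dominant_frequency(signal, sample_rate):
    """Frequency (Hz) of the largest DFT magnitude peak, DC component excluded."""
    n = len(signal)
    mags = []
    for k in range(1, n // 2):               # skip DC, keep positive frequencies
        coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n))
        mags.append((abs(coef), k))
    _, k = max(mags)                         # bin with the largest magnitude
    return k * sample_rate / n
```

A shift of this peak under load is what would be compared against the measured drilling force.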

Further data acquisition and analysis are indicated. Repeatability of the results, statistical

robustness and control of results need to be ascertained. Advanced signal processing procedures should be investigated in order to study a possible signal generation model. If

successful, this would improve any final detection/classifying system.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.


Improving Functional Performance in Neurological and Orthopedic Trauma: Monitoring the Load Bearing Distribution between the Lower Extremities

Orit Braun Benyamin1, Yocheved Laufer2

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901786, Fax: 972-4-9901886, E-mail: [email protected]

2Department of Physical Therapy, University of Haifa, Mount Carmel, Haifa, Israel.

Keywords: Rehabilitation, biofeedback systems, VGRF

Mobility reestablishment is one of the main goals of a rehabilitation program for individuals

after a stroke. Rising from a chair and sitting down are common activities, yet they may turn

into a mechanically demanding functional task. Among the different strategies for training of

these individuals, is the use of biofeedback systems.

The aim of this study is to design a pressure sensor measuring both the ground reaction forces and the flexion-extension of the knee joint, including visual feedback. The objective is

analyze these forces as a function of varying foot positions and knee angles during sit to

stand and stand to sit tasks.

A pressure sensor measuring two vertical ground reaction forces (heel and foot cushion) will be designed. The flexion-extension angle of the knee joint will be measured by a 2-D

Goniometer (BioPac) during stand up and sit down activities. The end of the Sit to Stand

(SitTS) task and the onset of the Stand to Sit (StandTS) task, will be determined via the knee

angles.

Preliminary experiments were conducted to test the insole pressure sensor and verify its

ability to measure the vertical ground reaction force (VGRF) as the body weight is shifted to

the foot. The sit to stand experiment was conducted according to a study conducted

previously by Roy G. (Montreal University), where twelve subjects with chronic hemiparesis

were asked to stand up and sit down. The following events could be identified: Sit-to-Stand (SitTS), when the subject is just leaving the seat, and Stand-to-Sit (StandTS), when the subject is about to establish contact with the seat.

first perceptible change of the VGRF (heel and /or foot cushion), whereas the completion of

this task corresponded with the gaining of stable extension of the knee in the standing

position. The events for the StandTS task were determined by the first observable change in

the knee extension towards flexion and the last perceptible changes in the VGRF (heel and

foot cushion).
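The event definitions above can be sketched as follows (illustrative code of our own; the tolerance and hold parameters are hypothetical):

```python
def sitts_onset(vgrf, tol):
    """First sample where VGRF departs from its initial baseline by more than tol."""
    baseline = vgrf[0]
    for i, f in enumerate(vgrf):
        if abs(f - baseline) > tol:
            return i
    return None

def sitts_completion(knee_angle, extended=5.0, hold=10):
    """First sample after which the knee stays within `extended` degrees of full
    extension (0 degrees) for `hold` consecutive samples."""
    for i in range(len(knee_angle) - hold + 1):
        if all(abs(a) <= extended for a in knee_angle[i:i + hold]):
            return i
    return None
```

The StandTS events would be detected symmetrically, from the first knee flexion change and the last perceptible VGRF change.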

This biofeedback system can provide sufficient information concerning weight bearing

shifting during SitTS and StandTS tasks. Implementing this system during the rehabilitation

process may assist patients in regaining these important functions. Following initial testing, this system may be further developed to assist in gait training as well.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.


Use of Art Media in Engineering Education

Alec Groysman

1, 2

1Department of Mechanical Engineering and Prof. Ephraim Katzir Department of Biotechnology

Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, E-mail:

[email protected]

2Oil Refineries Ltd., P.O. Box 4, Haifa 31000, Israel, Tel: 972-4-8788623, Fax: 972-4-8788371,

E-mail: [email protected]

Keywords: Engineering education, art, science, technology

How many engineers like art? How many engineers are interested in Science and its achievements? Why is there no progress in art, while there is progress in Engineering and Science? What is the relationship between Art and Engineering? How can engineers use art for their inspiration and creativity? To address these questions, we will explain what Art media and Engineering education are. Despite the differences between the terms technology, engineering and industry, we will use one of them, Technology or Engineering. Engineering is the "third culture", in addition to the "two cultures" of Art and Science. There is a mutual influence among these "three cultures". What is their common meeting point? We speak about the interdisciplinary and humanistic thinking of engineers.

The aim of this study is to show how Art media can help in Engineering education. The philosophy of our work is to establish interrelationships between the "three cultures", to study new sources of inspiration and creativity in engineering, and to search for and determine common aspects and differences among the "three cultures", in order to show the young generation of engineers and educators how learning, education and our very existence may be interesting, fascinating, creative, productive, exciting, attractive, rich and, as a result, beautiful.

Examples of the use of different arts (music, painting, literature, poetry, sculpture) in the curricula of materials science, thermodynamics, and corrosion of metals are shown. Analogies, interrelations, metaphors, and common aspects and differences between Art and Engineering disciplines are used in engineering education for third- and fourth-year students. Their understanding is compared with that of students and engineers who did not receive such education.

Students and young engineers who received explanations of engineering disciplines alongside humanistic aspects showed more creativity and satisfaction in their jobs and lives. They take a different approach to apprehending engineering disciplines, and their very existence now seems beautiful to them.

Humanistic aspects should be included more and more in Engineering education; namely, we can talk about "beautifying" engineering. Learning and education of students and educators using art media make engineering disciplines more attractive, reveal their "beauty", and foster the inspiration and creativity of the young generation of engineers.


Creep Properties Study of Mg Alloys Friction Stir Welding

Michael Regev1, Stefano Spigarelli2, Marcello Cabibbo2

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-990134, Fax: 972-4-9901886, E-mail: [email protected]

2Dipartimento di Meccanica, Università Politecnica delle Marche, I 60131, Ancona, Italy

Keywords: Friction stir welding, Magnesium alloys, Aluminum alloys

During FSW, the frictional heat that is generated is effectively utilized to facilitate material

consolidation and eventual joining with the aid of an axial pressure. In this process, a non-

consumable rotating tool is inserted into the abutting edges of the material to be joined until

the shoulder of the tool is in direct contact with the two pieces of plates or sheets that are to

be butt-welded. The tool is then progressively traversed along the line of the weld using a

normal load. The highly localized and concentrated heating aids in softening the material around the tool. This, coupled with the combination of tool rotation and gradual translation of the tool through the material to be joined, results in movement of metallic material from the front of the rotating pin to its rear. The process is, therefore, a non-fusion welding process.

The aim of this research is to study the microstructure and creep properties of a friction stir

welded joint.

An FSW machine has recently been built at the Machining Research Lab of ORT Braude

College and 3.2 mm thick AZ31 plates were successfully welded. Creep specimens were

machined from the welded plates and tested under different loads and temperatures, in

addition to metallographic characterization.

As for non-welded AZ31 specimens, the grain size effect on the high-temperature mechanical properties of AZ31 Magnesium alloys has been investigated by comparing the creep data obtained in the current study with the results available in the literature. The results of this analysis confirm the existence of a weak but not negligible effect of the grain size, both in the high-stress regime, typical of climb-controlled creep, and in the low-stress region, where deformation is controlled by viscous glide of dislocations in an atmosphere of solute atoms. The creep response of the welded specimens is still under investigation; however, preliminary creep results for these specimens indicate a completely different behavior.
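The two creep regimes mentioned above are commonly summarized by a power law in stress with a grain-size term. The sketch below contrasts an illustrative glide-type law (stress exponent n of about 3, no grain-size effect) with a climb-type law (n of about 5, weak grain-size dependence); all constants here are invented for illustration and are not fitted AZ31 data:

```python
import math

def creep_rate(sigma, d, *, A, n, p, Q, T):
    """Steady-state creep rate via a generic power law:
    eps_dot = A * d**-p * sigma**n * exp(-Q/(R*T)).
    sigma: stress (MPa), d: grain size (um), T: temperature (K).
    All parameter values used below are illustrative only."""
    R = 8.314  # gas constant, J/(mol K)
    return A * d ** -p * sigma ** n * math.exp(-Q / (R * T))

# viscous glide (solute-controlled, grain-size independent) vs
# climb-controlled creep (weak grain-size dependence)
glide = dict(A=1e-2, n=3, p=0.0, Q=135e3)
climb = dict(A=1e-5, n=5, p=0.5, Q=135e3)

low  = {k: creep_rate(20.0, 10.0, T=573.0, **m) for k, m in (("glide", glide), ("climb", climb))}
high = {k: creep_rate(80.0, 10.0, T=573.0, **m) for k, m in (("glide", glide), ("climb", climb))}
```

With these constants the glide law dominates at low stress and the climb law at high stress, mirroring the two regimes described in the text.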

Even though the effect of grain size is in general weaker than that of the chemical composition (in particular the Al content of the alloy), it deserves the attention of investigators; its existence should be related to the occurrence of grain boundary sliding accommodated by dislocation creep.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee.


Applied Nonlinear Optimal-Quadratic Control and Estimation

Moti Shefer

Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-8110646, Fax: 972-4-8110647, E-mail: [email protected]

Keywords: Nonlinear, optimal, quadratic, stochastic, control, hover, maneuver

An optimal nonlinear quadratic stochastic (NLQS) control and estimation methodology for discrete-time analytic systems has been developed since the 1980s and by now has reached quite a mature, easy-to-apply-and-verify status. This methodology was initially intended for digital control, so with the emergence and further development of MATLAB, the NLQS presented here has become a must-have asset in the toolbox of every system engineer.
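The linear-quadratic core that such digital methodologies build on can be sketched in a few lines: the steady-state feedback gain of a discrete-time LQ regulator is obtained by iterating the Riccati equation backward. The double-integrator plant below is purely illustrative and is not the Jumper model discussed later:

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati equation backward and return the
    steady-state feedback gain K (control law u = -K x). This is only the
    linear-quadratic core; the nonlinear/stochastic extensions of NLQS
    build on top of it."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# double-integrator plant with sample time dt (illustrative, not the Jumper)
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# closed-loop poles must end up inside the unit circle
poles = np.linalg.eigvals(A - B @ K)
```

The point of a synthesis like this, in contrast to heuristic tuning, is exactly that the closed-loop poles and statistical moments can be checked before deployment.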

Unfortunately, the last three decades of control engineering have been characterized by a quite harsh and unjustified abandonment of optimal control in favor of all kinds of heuristically generated control and estimation processes, in which hyper-intensive online computations with no solid theoretical basis replaced rigorous synthesis and analysis.

The two main results of this fashion are:

1) Very expensive control SW solutions that take very long times to develop

2) No safe way to make a reliable prior performance evaluation and validation of the solutions, such as statistical moment analyses

The aim of this study is to demonstrate the utility and power of digital NLQS by applying it

to case-study projects that have not been attempted so far, in either the literature or the

industry.

The first case-study project selected at OBC is "Jumper", a small, unmanned flying machine that can take off and land at any angle between zero and ninety degrees, pass instantaneously to horizontal flight at over 100 km/h, or to a hovering state with about 2 g of vertical or circular maneuvering capability. The main power of the Jumper is provided by two electrically driven tiltable propellers, whose optimal control constitutes a highly nonlinear problem.

The typical mission profile of the Jumper is acquisition and transmission of video intelligence in heavily crowded urban areas and inside deep, narrow canyons. The Jumper is designed for a takeoff weight of 10 kg, including an advanced FLIR and guidance system. Its goal mission time is 2 hours, within a 10 km radius of operation and a 3 km service ceiling. Its goal production cost is $3,000.


The Kinematics of the Rolling Eccentric Mechanism

Evgeniy Sinenko1, Olga Konischeva2

1Prof. Department of Applied Mechanics, Polytechnic Institute of Siberian Federal University,

660074, Kirenskogo st. 26, Krasnoyarsk, Russia, Tel: (3912) 2497-263, E-mail: [email protected]

2Associate Prof. Department of Applied Mechanics, Polytechnic Institute of Siberian Federal

University, 660074, Kirenskogo st. 26, Krasnoyarsk, Russia, Tel: (3912) 2497-263, E-mail:

[email protected]

Keywords: Eccentric mechanism, eccentricity, ratio, efficiency

Usually, standard frictionless bearings, roller bearings and ball bushings are used as shaft bearings in the various mechanisms of machines and devices. However, small structural changes make it possible to use them as mechanisms in their own right.

The aim of this study is to evaluate the nature and effectiveness of a rolling eccentric mechanism obtained on the basis of a standard frictionless bearing.

Using the standard parameters of a frictionless bearing, dependences were obtained for the diameters of the rolling bodies as a function of their angular position, for a given axis shift between the cone and the outer race and one given diameter. In this design the rolling bodies were given different sizes according to a certain law. Thus, a prototype eccentric mechanism for converting the rotary motion of the driving ring into reciprocating motion of the driven ring was designed and built.

Considering the resulting device as a mechanism, its mathematical model is the dependence S(φ) = e(1 − cos φ), where e is the eccentricity (the axis shift between the cone and the outer race) and φ is the cone angle of rotation; u = R1/R2 is the geometrical contact ratio, R1 and R2 being the radii of the outer and inner races. The research yielded the mechanism's speed ratio uv = e sin φ + e² sin 2φ / (2√(R² − e² sin² φ)), the power transmission coefficient up = ctg(α + ψпр), where α is the eccentric lifting angle and ψпр is the reduced rolling friction angle, and the efficiency η = ctg(α + ψпр)·tg α.

Thus, the analytical dependences and the research performed allow one to evaluate all the kinematic and power parameters of the rolling eccentric mechanism: the maximum radial travel of the driven link equals 2e; the radial oscillation frequency of the driven link is uω1, where ω1 is the driving link angular velocity; the speed ratio can reach uv = 8 at certain values of e and R; the power transfer coefficient reaches up = 25-30, and the efficiency reaches 0.96-0.98. The results confirm the feasibility and effectiveness of using the eccentric mechanism as a device for mixing, grinding and polishing, in roller bearings of armored personnel carriers, and elsewhere.
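A quick numerical check of crank-type eccentric kinematics can be made by taking the displacement law as S(φ) = e(1 − cos φ); the values of e and R below are arbitrary illustrative numbers, not those of the actual bearing:

```python
import math

# Illustrative parameters (mm); not the values of the bearing used in the study.
e, R = 2.0, 20.0

# sample one full revolution of the driving link
phis = [2 * math.pi * i / 2000 for i in range(2001)]

# radial displacement of the driven link
S = [e * (1 - math.cos(p)) for p in phis]

# crank-type speed ratio, as quoted in the text
uv = [e * math.sin(p)
      + e * e * math.sin(2 * p) / (2 * math.sqrt(R * R - (e * math.sin(p)) ** 2))
      for p in phis]

max_travel = max(S)   # the text's maximum radial travel of the driven link, 2e
```

The peak displacement occurs at φ = π and equals 2e, consistent with the stated maximum radial travel of the driven link.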

Acknowledgement: The results of this design and research were used in the design and manufacture of the Н-403А radial piston pump.


Mobile Hybrid Kinematic Chain - Utilizing Two Small Robots for

Climbing Stairs

Avi Weiss1, Gideon Avigad2, Roee Mizrahi1

1Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901945, Fax: 972-4-9901886, E-mail: [email protected]

2Department of Mechanical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901767, Fax: 972-4-9901868, E-mail: [email protected]

Keywords: Kinematic chain, mobile robot, cooperating robots, climbing robots

One of the main problems in urban robotics is combining fast motion in open areas with the ability to overcome obstacles, climb stairs, and maneuver in limited indoor spaces. Currently, there is no appropriate solution to the problem. On the one hand, the use of chains allows for stair climbing but is very limiting when it comes to open spaces. On the other hand, the use of wheeled robots allows for fast motion in open areas, but their stair-climbing abilities are extremely limited. In addition, none of the above is well suited for maneuvering in limited indoor spaces.

The aim of this project is to utilize two separate robotic platforms, each allowing maximal maneuvering capability indoors and in open areas, that connect to each other to form a robot with stair-climbing abilities.

First, a complex kinematic chain (robot-manipulator-robot) will be modeled. Next, the manipulator kinematics will be optimized as a function of the robots' motion. Connection trajectory planning and control using the manipulator will follow, and the kinematics and dynamics of the system during stair climbing will then be analyzed. Finally, a system will be designed and built to affirm the theoretical results. Several kinematic concepts were considered for the individual robots, as well as for the connecting arm. It was concluded that

considered for the individual robots, as well as for the connecting arm. It was concluded that

a two-link manipulator arm would provide the desired amount of degrees-of-freedom for the

system, such that a one to one mapping between the control input and system output would

be feasible. In addition, equations of motion for the system were developed for the

benchmark case of climbing stairs with no turns. Conditions for completing climbing were

obtained using a design optimization process and a simulation, which yielded required design

parameters to enable climbing.
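The role of the two-link connecting arm can be illustrated with its planar forward kinematics; the link lengths below are hypothetical, not the designed values:

```python
import math

def two_link_fk(theta1, theta2, l1, l2):
    """Planar forward kinematics of a two-link arm: position of the far
    attachment point for joint angles theta1, theta2 (radians) and
    link lengths l1, l2 (meters)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# a fully stretched arm reaches l1 + l2 straight ahead
x, y = two_link_fk(0.0, 0.0, l1=0.3, l2=0.2)
```

The two joint angles are exactly the two extra degrees of freedom that make the one-to-one input-output mapping mentioned above feasible.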

Climbing stairs by combining two small robots is possible, provided appropriate design parameter values are used. Since some of the design parameters depend on the shape of the stairs, configuring them in real time is important. The parameters that affect the ability to climb the most were the vertical location of the manipulator's connection point and the distance between the wheels. The robots are now in the mechanical design stage, which allows these two parameters to be controlled during operation.

We plan to build the robots and the arm in order to validate the results. Next, climbing stairs

while turning will be analyzed and tested. Finally, control issues and optimization will be

addressed.

Acknowledgement: This study was supported by a grant from the ORT Braude College

Research Committee under Grant number 5000.838.1.2-12.

Physics

High Energy Neutrino Sources: Challenges and Prospects

Dafne Guetta

Department of Physics and Optical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982,

Israel, E-mail: [email protected]

Keywords: High energy astrophysics, neutrinos

The fundamental scientific motivation for high-energy neutrino astronomy is that neutrinos

are nearly massless particles that interact very weakly with matter and therefore can peer

into sources that would be opaque to high energy photons and protons. Unlike high energy

photons and protons, neutrinos can travel cosmological distances without being absorbed or

deflected. Therefore neutrinos can provide information on astrophysical sources like

Gamma Ray Bursts that cannot be obtained with high energy photons and charged particles.

However, the weak interaction of neutrinos with matter also implies that they are very

difficult to detect, requiring detectors with at least kilo-tons of detecting medium. Neutrino

astronomy has so far been limited to the detection of neutrinos from only two sources, the

sun and supernova SN1987a.

In order to extend neutrino astronomy to the extra-Galactic scale, large volume detectors of

>1 TeV neutrinos have been constructed (Halzen 2005, Phys. Scripta T121, 106).

Four decades ago, researchers first aspired to build a kilometer-scale neutrino detector.

On April 28, 2011, the IceCube Collaboration met in Madison, Wisconsin to celebrate the completion of the detector, an extension of the previous detector AMANDA, able to reach a 1 Gigaton effective detector mass. IceCube is able to detect >1 TeV neutrinos from astrophysical sources, allowing us to reveal and study astrophysical neutrino sources such as Gamma Ray Bursts (GRBs), Active Galactic Nuclei (AGN) and Microquasars (MQs). With the IceCube detector at work, it is very timely to carry out research in the neutrino field.

The main aims of this study are to:

1) Solve the long standing problem of the content of the jet of astrophysical sources like

GRBs, AGN and MQs

2) Find the sources of the ultra-high energy cosmic rays

3) Constrain the GRB progenitor and emission mechanism models with neutrinos

We have estimated the neutrino flux from individual GRBs observed by the BATSE experiment. The individual fluxes can be directly compared with coincident observations by the IceCube telescope at the South Pole. With these observations in mind, we specialize in neutrino emission coincident in time with the GRB. Because of the large statistics of the BATSE sample, our predictions are likely to be representative of future observations with the IceCube telescope. Individual neutrino events within the BATSE, Swift and GBM time and angular windows are a meaningful observation.

We estimate a few muons per year in a kilometer-square detector. Fluctuations in fluence and

other burst characteristics enhance this estimate significantly.


Relations between Chaos and Controllability in the Quantum World

Shimshon Kallush

Department of Physics and Optical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982,

Israel, Tel: 972-0722463655, Fax: 972-4-9883429, E-mail: [email protected]

Keywords: Quantum control, quantum chaos, controllability

Control in the quantum world is based on interference among several paths leading to a desired task. In the macroscopic world, the presence of chaos in a system tends to increase the complexity of its dynamics and decrease the ability to control it. We ask whether this is also true in the quantum world, or whether there might be cases in which disorder eases the way to controlling the system. Recently, we and others have outlined the difference between two kinds of control within the quantum regime:

1) Classical-like control, in which the system remains localized and similar to a classical distribution throughout the whole process. Such systems remain controllable even when the size of the Hilbert space of the system increases. Control of such processes can be performed with both local and non-local control operators.

2) Quantum control, which makes extensive use of interferences. The solution for this kind of process is unique for each system size. The process can be performed only with non-local operators.

The aim of this study is to demonstrate a quantum system in which an increase of disorder also increases the controllability.

The model system will be the Hénon-Heiles 2D potential. This potential exhibits chaotic behavior and a rapid divergence of initially localized distributions as the initial energy of the state increases. A control term with varied span over the phase space of the system will be taken to demonstrate the difference between quantum and classical-like control.
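For reference, the Hénon-Heiles potential used as the model system can be written down directly; for the standard parameter choice the saddle (escape) energy is 1/6:

```python
def henon_heiles(x, y, lam=1.0):
    """Henon-Heiles potential V = (x^2 + y^2)/2 + lam*(x^2*y - y^3/3).
    For lam = 1, trajectories above the saddle energy 1/6 can escape and
    the dynamics becomes predominantly chaotic."""
    return 0.5 * (x * x + y * y) + lam * (x * x * y - y ** 3 / 3.0)

# one of the three saddle points for lam = 1 sits at (0, 1)
saddle = henon_heiles(0.0, 1.0)
```

Scanning the initial-state energy across this saddle value is what tunes the degree of chaos referred to in the text.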

Controllability is shown to depend on the locality of the operator that generates the control. A local operator is characterized by a single control path and suffers tremendously from an increase in chaos. Non-local operators control the system in a quantum fashion and enjoy an increase in the number of available paths as chaos increases, thereby increasing controllability with more chaotic behavior.


Measuring the Self-Inductance of a Metal Ring

Eli Raz

Department of Physics and Optical Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982,

Israel, Tel: 972-4-9901940, Fax: 972-4-9883494, E-mail: [email protected]

Keywords: Electrodynamics, measurements, laboratory procedures, physics education

Self-inductance is one of the most difficult physical quantities to measure. If the inductance is larger than 10 mH, an ordinary AC voltmeter and ammeter can be used. However, in order to measure inductances of the order of 10⁻⁸-10⁻⁷ H, a method based on direct measurement of impedance is useless due to the limitations of the measuring devices.

A theoretical formula for the inductance of a ring with circular cross-section is well known; however, only a few works have measured it, and with an accuracy of about 10%. Those works were based on measuring the phase difference between the induced EMF and the induced current in a ring, when the ring is exposed to an alternating magnetic flux.

The aim of this project is to measure the self-inductance of a metal ring with high accuracy,

by using ordinary students' lab equipment.

The EMF induced on a ring by a coil carrying alternating current was measured as a function of the distance between the coil and the ring. This was done by using an identical ring from which a short segment was removed, making the ring open. In a second experiment, the force between the coil and a closed ring was measured at the same distances as with the open ring. The force was measured using a scale with a resolution of 0.01 g.

The metal ring is placed on a polystyrene block rather than directly on the scale, so the coil is kept at a distance above the scale, eliminating any direct effect on the scale's reading. The self-inductance can be deduced using the magnetic Gauss's law and Faraday's law. The resistance of the ring's material was measured using the four-terminal method.

A linear graph of the force versus the derivative of the EMF squared with respect to the distance between the coil and the ring is obtained with an accuracy of 2%. From an analysis of the graph, taking into account all the uncertainties of the experiment, a value of 1.13×10⁻⁷ H for the self-inductance of the ring is obtained, with an accuracy better than 4% and in accordance with the known theory.
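The linear graph exploited above can be reproduced on synthetic data: in the inductive limit (ωL much larger than the ring resistance), the time-averaged force on the closed ring is proportional to the derivative of the squared open-ring EMF, F = d(ε₀²)/dz / (4Lω²), so L follows from the slope. The sketch below assumes this relation and uses invented numbers, not the measured data of the abstract:

```python
import math

# Assumed inductive-limit force law: F = |d(eps0^2)/dz| / (4*L*w**2).
# All numbers below are synthetic; only the target L matches the abstract.
L_true = 1.13e-7                  # H
w = 2 * math.pi * 50.0            # angular frequency of the coil current, rad/s

zs = [0.01 + 0.001 * i for i in range(41)]        # coil-ring distances, m
phi0 = [1e-5 * math.exp(-z / 0.02) for z in zs]   # peak flux through the ring, Wb
eps0 = [w * p for p in phi0]                      # peak EMF measured on the open ring

# d(eps0^2)/dz by central differences, and the corresponding "measured" forces
X = [(eps0[i + 1] ** 2 - eps0[i - 1] ** 2) / (zs[i + 1] - zs[i - 1])
     for i in range(1, len(zs) - 1)]
F = [abs(x) / (4 * L_true * w ** 2) for x in X]

# least-squares slope of the through-origin linear graph F vs |d(eps0^2)/dz|
slope = sum(f * abs(x) for f, x in zip(F, X)) / sum(x * x for x in X)
L_fit = 1.0 / (4 * slope * w ** 2)
```

With real data the scatter of the points around the fitted line is what sets the quoted few-percent accuracy.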

Software Engineering

Combining Height Reduction and Scheduling for VLIW Machines

Enhanced with Three-Argument Arithmetic Operations

Fadi Abboud1, Yosi Ben-Asher2, Yousef Shajrawi3, Esti Stein4

1Department of Computer Science, Haifa University, Mount Carmel, Haifa 31905, Israel, Tel: 972-4-

8240111, E-mail: [email protected]

2Department of Computer Science, Haifa University, Mount Carmel, Haifa 31905, Israel, Tel: 972-4-

8240338, E-mail: [email protected]

3IBM R&D Labs in Israel, Haifa University Campus, Mount Carmel, Haifa, 31905, Israel, Tel: 972-4-

8296211, E-mail: [email protected]

4Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901740, E-mail: [email protected]

Keywords: Height reduction, VLIW scheduling, MRK algorithm, algebraic circuit, three-argument operation

Height Reduction (HR) refers to algebraic transformations used to reduce the height (critical path) of a given sequence of arithmetic operations. We assume that the computation is represented as an algebraic circuit (AC). As such, HR can parallelize the computation by transforming an AC into a "wider" and shorter circuit. VLIW scheduling is the problem of determining a minimal sequence of VLIW instructions that, when executed, evaluates a given AC. Obviously, the critical path of the AC bounds from below the number of VLIW instructions needed to evaluate it. However, the number of VLIW instructions for a given AC can be larger than the AC's critical path. Typically, compilers apply HR and only later VLIW scheduling; however, as we argue next, these two problems must be combined and solved together to obtain improved VLIW scheduling.

We consider the need to combine two known optimizations which are usually regarded as

non-related: Height Reduction and VLIW Scheduling.

We consider a technique to automatically extract three-argument instructions from sequential arithmetic code. The instructions include multiply-and-add (MAD), three-argument addition (ADD3) and three-argument multiplication (MUL3). The proposed solution combines a height reduction technique that generates three-argument instructions with a VLIW scheduling that can benefit from these instructions.
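The payoff of three-argument adds for height reduction can be seen in a toy calculation: a chain of n two-argument adds has height n − 1, while a balanced tree of ADD3 operations needs only about log base 3 of n levels. This is a sketch of the benefit of such instructions, not the modified MRK algorithm itself:

```python
from math import ceil

def chain_add_height(n):
    """Height (critical path) of a1 + a2 + ... + an evaluated as a chain
    of ordinary two-argument adds: n - 1 sequential levels."""
    return n - 1

def add3_height(n):
    """Height after height reduction into a balanced tree of three-argument
    adds (ADD3): each level shrinks the operand count by a factor of 3.
    A sketch of the benefit of TAIs, not the modified MRK algorithm."""
    h = 0
    while n > 1:
        n = ceil(n / 3)
        h += 1
    return h
```

For nine operands the chain needs eight levels while the ADD3 tree needs two, which is exactly the kind of slack a combined HR-and-scheduling pass can exploit.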

Our results show that for arithmetic benchmarks the proposed technique can improve the VLIW scheduling while emitting three-argument instructions. The contribution of this work includes the modified MRK algorithm as a new height reduction technique and a study of the potential usefulness of three-argument instructions. Though our results are for non-existing hardware, they show the usefulness of adding such instructions to VLIW CPUs.

We show that HR and VLIW scheduling should be combined rather than executed as two separate passes. The proposed technique also emits three-argument instructions (TAIs), thus optimizing the resulting VLIW code even further. The method was implemented in the LLVM compiler, and the usefulness of the modified MRK as a height reduction technique was demonstrated. The experiments showed a clear improvement of the VLIW scheduling and a significant use of ADD3 and MAD operations.

Acknowledgement: This research was supported by The Israel Science Foundation, grant No. 585/09, and by the Israel Ministry of Science and Technology (MOST), grant No. 3-6496.


Acoustic-Phonetic Analysis of Fricatives for Classification using a

SVM Based Algorithm

Alex Frid1,2, Yizhar Lavner1

1Department of Computer Science, Tel-Hai Academic College, Upper Galilee, Israel

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901902, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: SVM, phonemes classification, fricatives, audio processing

Classification of phonemes is the process of assigning a phonetic category to a short section

of speech signal. It is a key stage in various applications such as spoken term detection,

continuous speech recognition and music to lyrics synchronization, but it can also be useful

on its own, for example in the professional music industry, and in applications for the hearing

impaired.

In this study we present an effective algorithm for classification of one group of phonemes,

namely the fricatives, which are characterized by a relatively large amount of spectral energy

in the high frequency range. The classification between individual phonemes within this

group is fairly difficult due to the fact that their acoustic-phonetic characteristics are quite

similar.

A three-stage classification algorithm between fricatives, either voiced or unvoiced, is

utilized. In the first stage, the preprocessing stage, each phoneme segment is divided into

consecutive non-overlapping short windowed frames. Each phoneme is then represented by a

15-20 dimensional feature vector. In the second stage, a support vector machine (SVM) is

trained, using radial basis kernel function and an automatic grid search for optimizing the

SVM parameters. Lastly, a tree-based algorithm is used in the classification stage, where the

phonemes are first classified into two subgroups according to their articulation: sibilants (/s/ and /sh/, or /z/ and /zh/) and non-sibilants (/f/ and /th/, or /v/ and /dh/). Each subgroup is further classified using another SVM. This is compared with another procedure in which the frames are classified into their phoneme groups in a single stage.

For the evaluation of the performance of the algorithm, we used more than 11,000 phonemes extracted from the TIMIT speech database. Using a majority vote over the feature vectors of the same phoneme, an overall accuracy of 85% and 80% is obtained for the unvoiced and voiced fricatives, respectively. These results are comparable to, and somewhat better than, those achieved in other studies.
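The majority-vote fusion used in the evaluation can be sketched as follows (tie-breaking by first occurrence is our illustrative choice, not necessarily the one used in the study):

```python
from collections import Counter

def majority_vote(frame_labels):
    """Fuse the per-frame SVM decisions for one phoneme segment into a
    single label by majority vote; ties go to the label seen first."""
    return Counter(frame_labels).most_common(1)[0][0]

# five frames of one segment, three of which the SVM labeled /s/
label = majority_vote(["s", "sh", "s", "s", "z"])
```

Voting over all frames of a segment is what lifts the per-frame accuracy to the per-phoneme figures quoted above.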

The results shown here indicate that the procedure could be utilized in technologies for the

hearing impaired, for example by differential processing of the unvoiced fricatives, to

improve their discriminability.


Statistical Based Grading of Cipher Texts

Mati Golani1, Renata Avros2, Zeev Volkovich3

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-72-2463664, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901862, Fax: 972-4-9901852, E-mail: [email protected]

3Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901994, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Cipher text, encryption, re-sampling, clusters

Various text encryption algorithms use an iterative approach for encrypting text. The quality

of such an algorithm is measured in terms of time complexity and encryption efficiency, i.e.

how difficult it is to decrypt the encrypted text. A cipher-text produced by a good encryption

algorithm should have an almost statistically uniform distribution of characters.

The aim of this study is to measure the randomness and convergence of encryption

algorithms and evaluate their efficiency in terms of clustering metrics.

Two encrypted files are taken. In one scenario the files are different iteration outcomes of the same file, and in the second scenario they are outcomes of different algorithms applied to the same file. Each file is fragmented into 128-bit fragments. A sample set of size N is generated from each file by randomly picking a fragment N times. Each set is provided to a clustering algorithm, and then a merged set is provided to the same clustering algorithm. Next, we measure the distance between members of the same cluster. By employing this re-sampling procedure, we produce a collection of distances which includes the distance values calculated within and between the samples. Our approach suggests that the "between" distances are much longer than the "within" ones in the case of texts of different randomness.
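The fragmentation and within-sample distance computation can be sketched as follows; the uniform random byte stream stands in for a well-mixed cipher-text, and Hamming distance over 128-bit fragments is one illustrative choice of metric:

```python
import random

def fragments(data, frag_bytes=16):
    """Split a cipher file into 128-bit (16-byte) fragments."""
    usable = len(data) - len(data) % frag_bytes
    return [data[i:i + frag_bytes] for i in range(0, usable, frag_bytes)]

def mean_hamming(sample_a, sample_b):
    """Mean pairwise Hamming distance (in bits) between two fragment samples."""
    total = sum(bin(int.from_bytes(a, "big") ^ int.from_bytes(b, "big")).count("1")
                for a in sample_a for b in sample_b)
    return total / (len(sample_a) * len(sample_b))

rng = random.Random(42)
cipher = bytes(rng.randrange(256) for _ in range(4096))  # stand-in for a well-mixed cipher-text
frags = fragments(cipher)                                # 256 fragments of 128 bits
s1 = rng.sample(frags, 30)                               # two re-sampled fragment sets
s2 = rng.sample(frags, 30)
within = mean_hamming(s1, s2)   # ~64 bits for statistically uniform fragments
```

For a poorly mixed cipher-text, or for samples drawn from two differently encrypted files, the corresponding "between" mean would deviate from this uniform baseline.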

Better algorithms are expected to converge faster to a uniform distribution across the text's Euclidean space. Thus, our approach suggests that for lower-graded algorithms, cipher-text in the preliminary (first few) iteration phases is expected to yield a smaller distance between samples. We also expect each encrypted file to be characterized by a different "location", which means that, using the two-sample test technique, the distance between a sample taken from one file and a sample taken from the second file is much larger than the distance between two samples taken from the same encrypted file.


Predicting Fault Location Based on Current and Past Test Results

Katerina Korenblat1, Avi Soffer2, Zeev Volkovich3

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901845, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901720, Fax: 972-4-9901852, E-mail: [email protected]

3Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901994, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Software testing, categorization algorithms, software fault correction, debug

Software testing enables achieving a measure of software quality. Although the ultimate goal of testing is to find software faults, additional measures are taken in order to correct the faults and achieve high-quality software systems. Hence, the process known as "debug" - finding and correcting software faults - is one of the most important components of software development. Although finding bugs is hard by itself, fixing them is another challenge, which involves locating the defect that causes the failure. Due to software complexity, the debug process can be very difficult, especially when the tested software runs concurrent threads. A typical debugging process includes scope reduction (by scenario simplification) and run-time tracing to associate each test with related software modules. The ability to predict which software module is likely to include the bug may greatly reduce the effort involved in the debug process.

One of the approaches to fault root-cause analysis is applying and adjusting categorization algorithms in the field of testing. Such an algorithm makes it possible, for example, to analyze the structure of successful tests and to detect differences between them and a given failed test. That difference may indicate the reason for the test failure and point out the faulty module.

Any set of tests can be non-homogeneous, which makes describing it a complex task; this can be simplified by categorizing the test under consideration and then comparing it only to tests of the same type.

The aim of this research is to predict, for a given failed test case, the location of the software fault by capturing the software execution profile and behavior and analyzing them together with previously gathered test results.

Our research includes two main paths: 1) Developing a testing environment that enables

executing a set of test cases and gathering results for further analysis. In each run, associated

module execution is traced and captured, in order to establish a mapping between the test

steps and the corresponding software modules. 2) Based on known categorization

approaches, developing an algorithm that provides the desired prediction. According to this

algorithm, the run information for failed tests is analyzed together with previously gathered

test data, to identify the software module most likely to be defective.
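As a concrete illustration of scoring modules from such test traces, the following is a standard spectrum-based baseline (a Tarantula-style suspiciousness score), not necessarily the categorization algorithm under development; the data-structure shapes are illustrative:

```python
def suspiciousness(coverage, outcomes):
    """Tarantula-style score per module: modules executed mostly by failing
    tests score near 1. `coverage[test]` is the set of modules the test
    touched; `outcomes[test]` is True for a passing test."""
    failed = [t for t, ok in outcomes.items() if not ok]
    passed = [t for t, ok in outcomes.items() if ok]
    modules = set().union(*coverage.values())
    scores = {}
    for m in modules:
        ef = sum(1 for t in failed if m in coverage[t]) / max(len(failed), 1)
        ep = sum(1 for t in passed if m in coverage[t]) / max(len(passed), 1)
        scores[m] = ef / (ef + ep) if ef + ep else 0.0
    return scores
```

Ranking modules by this score gives a simple prediction of where the defect is most likely located.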

We intend to show that effective prediction of fault location is possible.

Our research contributes to enhancing the debug process, which is a major factor in software

quality. Possible future research includes improving the prediction accuracy by further

developing the analysis and prediction algorithm.


A Farthest-Point Approach for Graph Clustering in

Large PPI Networks

Nissan Levtov1, Zeev Volkovich2

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901902, Fax: 972-4-9901529, E-mail: [email protected]

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901902, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Clustering algorithms, graph clustering, cluster analysis

A protein-protein interaction (PPI) network is generally represented as a graph, with proteins as nodes; two nodes are connected by an edge if and only if the corresponding proteins interact physically. Protein-protein interactions are fundamental for virtually every process in a living cell. Information about these networks improves our understanding of diseases and can provide the basis for new therapeutic approaches. An investigation of PPI mechanisms begins with the characterization of the PPI graph structure. In particular, clustering is a fundamental task: clustering in PPI networks groups the proteins into sets (clusters) that demonstrate greater interaction among proteins in the same cluster than between clusters. Good clustering methods are necessary to reveal biologically relevant functions and cellular processes.

The aim of this study is to develop a graph clustering algorithm suitable for clustering graphs

which represent large protein-protein interaction networks. The algorithm should also be able

to efficiently decide on the right number of clusters.

Our new clustering algorithm is based on the farthest-point approach of Gonzalez, modified

to deal with large PPI networks. The Gonzalez algorithm provides a clustering of points in a

metric space, aiming to minimize the maximum distance between points in the same cluster.

With the number of desired clusters given, the method attains a 2-approximation ratio for

clustering points in metric spaces. Several issues prevent this approach from being effective for our data as is. Firstly, we consider PPI networks whose representing graph is not weighted, and therefore one has to define a distance measure between the points in the network. Another issue is that some proteins are involved in two or more different functions, causing several clusters to overlap. Also, we do not assume that the number of desired clusters is given, but rather would like the algorithm to set the best number of clusters. Our clustering method takes all these issues into consideration.
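For reference, the underlying Gonzalez farthest-first traversal that the proposed algorithm modifies can be sketched as follows; the modifications for unweighted graphs, overlapping clusters, and automatic selection of k are not shown:

```python
def gonzalez(points, k, dist):
    """Farthest-first traversal: repeatedly pick as the next center the point
    farthest from the centers chosen so far. For points in a metric space this
    is a 2-approximation for minimizing the maximum point-to-center distance."""
    centers = [points[0]]
    # d[i] = distance of points[i] to its nearest chosen center so far
    d = [dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        i = max(range(len(points)), key=d.__getitem__)
        centers.append(points[i])
        d = [min(d[j], dist(points[j], points[i])) for j in range(len(points))]
    # assign each point to its nearest center
    labels = [min(range(k), key=lambda c: dist(p, centers[c])) for p in points]
    return centers, labels
```

The first center is chosen arbitrarily; the approximation guarantee holds regardless of that choice.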

The proposed algorithm was tested on real PPI data containing interactions among around

17,000 proteins. During each iteration, the maximum distance of a node to its center was

measured, as well as the amount of change from the last iteration (the distance derivative). If

the measure and algorithm are effective, one would expect to clearly identify the best number

of clusters by a significant decrease in the derivative measure. The results we obtain support

this assumption and the clusters are clearly identified.

The proposed algorithm is effective in identifying (possibly overlapping) clusters in a large

PPI network.


Lower Bounds on the Minimum Average Distance of Codes

Beniamin Mounits

Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901754, Fax: 972-4-9901839, E-mail: [email protected]

Keywords: Code, minimum average distance, association scheme, linear programming

In 1977, Ahlswede and Katona proposed the isoperimetric problem, in the extremal combinatorics of Hamming space, of determining the minimum average distance of binary codes. In 1994, Ahlswede and Althöfer observed that this problem also occurs in the construction of good codes for write-efficient memories, introduced by Ahlswede and Zhang at the same time as a model for storing and updating information on a rewritable medium with cost constraints. This problem is also equivalent to a cocycle covering problem in graph theory. In 1998, Xia and Fu developed lower bounds on the minimum average distance of binary codes. The bounds were obtained using Delsarte's linear programming approach for bounding the size of cliques in an association scheme. A recent development in this area took place in 2007, when Xia, Fu and Jiang considered the problem of bounding the minimum average distance of constant weight codes.
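For concreteness, the quantity in question can be written out; the following is the standard definition under one common normalization (background material, not taken from the abstract). For a binary code C contained in {0,1}^n of size |C| = M, with d_H the Hamming distance:

```latex
\bar{d}(C) = \frac{1}{M^{2}} \sum_{x \in C} \sum_{y \in C} d_H(x, y),
\qquad
\beta(n, M) = \min_{\substack{C \subseteq \{0,1\}^{n} \\ |C| = M}} \bar{d}(C).
```

The Ahlswede-Katona problem asks for the value of beta(n, M), and the bounds discussed below are lower bounds on it.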

The aim of this study is to find a general framework for obtaining lower bounds on the minimum average distance of codes in an arbitrary P-polynomial association scheme.

The lower bounds we derive are of two types. The first is combinatorial: we relate the spectrum of the code of interest to the spectrum defined by the "holes" of the code. This technique was recently developed by Mounits, Etzion and Litsyn for metric association schemes. Bounds of the second type are linear programming bounds. Encouraged by the success of the method in the case of constant weight codes, we try to generalize the LP approach to a wide class of association schemes, namely P-polynomial schemes.

By exploiting the connection between the code and its holes, we succeeded in obtaining an explicit connection between the minimum average distance of a code and that of its complement, for an arbitrary P-polynomial scheme. As for the optimization bounds, we have shown that the bounds in the binary and non-binary Hamming schemes, as well as in the binary and non-binary Johnson schemes, are obtained as particular cases of our generalized approach. However, we have not yet succeeded in proving that our approach fits all P-polynomial association schemes; work in this direction is still in progress. The results obtained so far, especially the generalization of the known bounds and their placement in the same framework, make us believe that it is possible to reach our aim, or at least to characterize a class of association schemes suitable for our approach.


Parallel Computations on Decomposable Systems

Elena Ravve

Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901902, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: MSOL, translation schemes, sum-like systems, incremental computation

In many cases, a computation is performed on a system that is composed of several components. At the same time, computation systems with several computation units are widely used, even in personal computers. We try to exploit the modularity of the systems under consideration, as well as that of the computation tools, to make the computation more effective.

The aim of the present study is to give a precise mathematical definition of the modularity of the system, as well as to elaborate the computational complexity, in order to measure the effectiveness of the proposed approach.

We use Monadic Second Order Logic in order to formulate the problem to be solved. We

justify the choice from both an expressive power and a computational complexity point of

view. We use the technique of translation schemes in order to define sum-like systems as the

basic concept of the modularity. In complexity analysis, we consider several scenarios of

computations. For each scenario, we give a complete complexity analysis. We use the

Feferman-Vaught theorem and the technique of translation schemes for the precise definition

of sum-like systems and give several examples of such systems in different fields of

computer science. We also discuss the limitations of the proposed methodology, as well as

the possible directions of further investigations.
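The component-wise style of computation can be illustrated with a toy example: a query whose value on a disjoint sum is a Boolean combination of values computed on the components. The query and the combination function below (XOR of parities) are illustrative only; they stand in for the reduction sequences produced by the Feferman-Vaught theorem.

```python
from functools import reduce

def parity_of_matches(component, pred):
    """Local computation on one component: parity of the number of
    elements satisfying the predicate."""
    return sum(1 for x in component if pred(x)) % 2

def parity_on_sum(components, pred):
    """Evaluate the same query on the disjoint sum of the components by
    combining the per-component results (here: XOR), without ever
    materializing the combined structure."""
    return reduce(lambda a, b: a ^ b,
                  (parity_of_matches(c, pred) for c in components), 0)
```

The local computations are independent, so they can run in parallel, one per component, and only the one-bit results need to be communicated.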

Our main result shows that, under the stated assumptions, the proposed methodology makes it possible to reduce the complexity of computations on sum-like systems and to minimize the communication between their components. As a negative result, we exhibit synchronized systems to which the approach cannot be applied.

We show how the generalization of the Feferman-Vaught theorem may be used for parallel incremental computation on decomposable systems, which are defined using the notion of sum-like systems. We give different applications of the methodology in formal verification and in databases, and we present a complexity analysis together with the effectiveness and limitations of the methodology.


Discrete Modeling of Regulation Networks and its Application to

Elucidating Bi-Stability

Amir Rubinstein1, Yona Kassir2, Ofir Hazan3

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901902, Fax: 972-4-9901852, E-mail: [email protected]

2Faculty of Biology, Technion-Israel Institute of Technology, Technion City, Haifa 32000, Israel, Tel:

972-4-8294214, Fax:972-4-8225153, E-mail: [email protected]

3Department of Mathematics, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-50-

9319646, Fax: 972-4-9901802, E-mail: [email protected]

Keywords: Regulation networks, discrete model, meiosis in budding yeast

The modeling and analysis of genetic regulatory networks is essential both for better understanding their behavior and for elucidating and refining open issues. Many methods for simulating and inspecting the properties of such pathways have been devised, borrowing from a variety of techniques such as differential equations, algebraic calculi, and Flux Balance Analysis. Most of these methods are quantitative in nature and require data that is often not fully revealed; moreover, some of them are computationally intensive, requiring significant time and resources.

The aim of this research is two-fold:

1) To extend a discrete computational model that allows for qualitative analysis of gene

regulatory pathways, enabling the examination of biological characteristics such as bi-

stability, transient gene expression, robustness, and sensitivity to initial conditions, in an

effective manner. One of the main advantages of this model is its predictive capability to pinpoint missing regulatory elements in a network.

2) To elucidate the mechanism that promotes a bi-stable switch from the cell-cycle to

meiosis in budding yeast.

We apply our model to analyze the manner by which bi-stability is achieved in a

representative developmental pathway, meiosis in budding yeast. Simulation results are

validated with experimental data in the lab. Throughout this process, we are able to both make

predictions as to the mechanism that governs the biological system under research, and

enhance the computational model with abilities that are relevant to similar biological systems.

Our model is a state graph in which nodes represent proteins/mRNA and may assume discrete states from a finite range {0,…,N}; edges represent regulation effects; hyper-edges (edges from nodes to other edges) represent conditional regulation. A transition function determines the states of all nodes at the next time step in a synchronous fashion.
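A minimal sketch of such a synchronous update is given below; the node names, the clamping range, and the particular regulation rule are illustrative assumptions, not the model's actual transition function:

```python
def step(states, effects, N):
    """One synchronous update of a discrete state graph. Every node's next
    state is computed from the same current snapshot, then all nodes switch
    at once. `effects[v]` lists (regulator, weight) pairs for node v (every
    node needs an entry); positive weight activates, negative represses;
    states are clamped to the range 0..N."""
    nxt = {}
    for v, regs in effects.items():
        # net regulatory input: active regulators (state > 0) contribute their weight
        delta = sum(w * (states[u] > 0) for u, w in regs)
        move = 1 if delta > 0 else -1 if delta < 0 else 0
        nxt[v] = max(0, min(N, states[v] + move))
    return nxt
```

Because the snapshot is fixed for the whole step, mutual regulation (A represses B while B activates A) is handled without ordering artifacts.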

To this end, we have examined several hypothesized regulation networks with our

computational model. Simulations suggest a few biological insights, one of which is that the regulation of Ime2, a key player in meiosis in budding yeast, is exerted through its activity rather than its quantity. Throughout this process we identified a meaningful extension to the transition function of the computational model.


A Comparative Assessment of Outlier Detection Methods in Wireless

Sensor Networks (WSN)

Peter Soreanu1, Zeev Volkovich2

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901845, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901994, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Wireless sensor networks, WSN, outliers, outlier detection methods

Wireless sensor networks consist of small processing and communication units with sensing capabilities, deployed in a field to be monitored. The gathered data is transmitted using ad hoc wireless links, without a fixed infrastructure, ultimately reaching a remote computer outside the field. This data has to be interpreted, one of the main tasks being the detection of significant events. This detection is based on measurements reported by sensors, where deviations of successive measurements are tested against a threshold and the history of previously received data. However, these techniques cannot distinguish between a real event and a local anomaly, possibly due to a sensor failure or a malicious attack. Various methods have been proposed to deal with this problem, generally looking at sensed data in a spatio-temporal manner. Every proposed method works better under certain assumptions, and no method is suitable for every outlier scenario.

The aim of this study is to evaluate the efficiency of different outlier detection methods in

various real-world scenarios, and find out which ones are best suited for classes of possible

outliers detected by the WSN.

Six outlier detection methods were implemented, each one simulating essentially the same scenarios. The following methods were used: Gaussian distribution assumption, kernel-based algorithm, K-nearest neighbors, Support Vector Machine, Bayesian belief networks and Kalman filtering. Each method was implemented in a different simulator, so the final results were normalized for comparative analysis.
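Of the six methods, the Gaussian-assumption baseline is the simplest to sketch: a reading is flagged when it deviates from the mean of a recent window by more than a threshold number of standard deviations. The window size and threshold below are illustrative, not the parameters used in the study:

```python
import statistics

def gaussian_outliers(readings, window=20, threshold=3.0):
    """Flag indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings."""
    flagged = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu = statistics.fmean(hist)
        sigma = statistics.stdev(hist)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged
```

The method's weakness, reflected in the findings below, is that real sensor data is rarely Gaussian, so the threshold either over- or under-flags depending on the scenario.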

We found that the least efficient was the algorithm based on the assumption of a Gaussian distribution of the measurement values, that the Support Vector Machine-based algorithm was sensitive to the chosen parameters, and that the Kalman filter performed very efficiently in some scenarios and was practically ineffective in others.

The specific suitability of the various outlier detection methods in a sensed field suggests two kinds of use: if the outlier behavior can be predicted, the best detection method found in our research for that scenario may be recommended for implementation; if no outlier behavior can be predicted, a possible solution is the parallel implementation of more than one algorithm. The latter approach seems very promising in real-time situations and may be the object of our further research.


A K-NN Approach for Cluster Configuration Assessment

Dvora Toledano-Kitai1, Zeev Volkovich2

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901862, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901862, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Clustering, cluster stability, K-Nearest Neighbours

Cluster analysis is a tool for discovering hidden 'groups' or 'clusters' in collections of items commonly described by numerical, linguistic or structural features. The elements of each cluster are highly similar to each other in their behaviour (with respect to the given data), and the groups are expected to be well separated. A key and effectively unsolved, "ill-posed" cluster analysis problem is the estimation of the appropriate "true" number of clusters. The conclusion can be influenced by the chosen data scale. Although many methods have been proposed over the years to overcome this problem, none is agreed to be superior.

In the current talk we present a new viewpoint on the cluster validation problem, founded on the K-Nearest Neighbours (K-NN) approach and based on the stability of the partitions' geometrical configuration, which is estimated within a resampling procedure.

Triples of disjoint subsets of the data under investigation are randomly drawn and presented to a clustering algorithm as two unions of sample pairs. We estimate the partition's readiness using probabilistic features of the K-Nearest Neighbours (K-NN) approach, focusing, for each point in the mutual part of the pairs, on the portion of its K nearest neighbours that belong to its own sample. We presume that these K-NN fractions appear to be independent realizations of the same random variable in the case of the "true" number of clusters. The realizations are compared by means of a simple probabilistic metric, such that the empirical distributions of the distance magnitudes and their observed p-values are created. A suitably concentrated distribution indicates the true number of clusters.
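The per-point statistic can be sketched as follows; the function name and the brute-force neighbour search are illustrative, not the actual implementation:

```python
def knn_own_fraction(own, other, point, k, dist):
    """Fraction of `point`'s k nearest neighbours - searched over the union
    of its own sample and the other sample, excluding the point itself -
    that belong to its own sample. This is the statistic whose distribution
    is compared across candidate cluster counts in the stability test."""
    pool = [(dist(point, q), True) for q in own if q != point]
    pool += [(dist(point, q), False) for q in other]
    pool.sort(key=lambda t: t[0])
    return sum(1 for _, is_own in pool[:k] if is_own) / k
```

When the partition is stable, the two samples cover the clusters similarly and these fractions behave like draws from one fixed distribution.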

In order to validate the proposed model, several numerical experiments on synthetic and on real data were performed. In each experiment, the computations are performed on a substantial number of samples, to allow accurate conclusions. Empirical distributions of the distortion function are formed for several possible numbers of clusters, for the purpose of determining the "true" number of clusters by means of the most suitably concentrated distribution. In all of the experiments the suggested model succeeds in detecting the "true" number of clusters.

The proposed approach demonstrates a high capability in detecting the true number of clusters.

Acknowledgement: This study is supported by a grant from the ORT Braude College

Research Committee.


An Information Criterion for Selecting the

Optimal Number of Components

Zeev Volkovich1, Zeev Barzily2, Gerhard-Wilhelm Weber3

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901862, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel,

Tel: 972-4-9901862, Fax: 972-4-9901852, E-mail: [email protected]

3Institute of Applied Mathematics, Middle East Technical University, 06531 Ankara, Turkey; University of Siegen, Germany; University of Aveiro, Portugal; Universiti Teknologi Malaysia, Skudai; E-mail: [email protected]

Keywords: Clustering, cluster stability, Gaussian Mixture Model

Cluster analysis is a tool for discovering hidden 'groups' or 'clusters' in collections of items commonly described by numerical, linguistic or structural features. The estimation of the suggested number of clusters in a dataset is an "ill-posed" problem of essential relevance in cluster analysis. A group (cluster) is characterized by relatively high similarity among its elements, in addition to relatively low similarity to elements of other groups. We claim that sequences of samples can be interpreted as Gaussian distributed i.d. samples drawn from the same source. Consequently, mixture cluster models, especially Gaussian ones, are an essential statistical modeling instrument at the heart of many applications in computer vision, machine learning and text classification.

In this paper, we present a novel efficient algorithm for selecting the optimal number of components of a Gaussian Mixture Model. Efficiency is an important attribute here, since clustering serves as a preprocessor in many data mining applications and clustering algorithms tend to require long computing times.

We formulate the criterion as an optimization problem of minimizing the learned information distance between two partitions obtained for disjoint samples. The problem is resolved by constructing the empirical distributions of the distance, such that the one concentrated at the origin indicates the optimal number of components. Our procedure can roughly be described as the generation of an empirical distribution of the test statistic, followed by the application of a concentration test.

A new efficient methodology for estimating the number of components is offered and evaluated on synthetic and real-world data. The results obtained demonstrate the high capability of the proposed approach. The new method exhibits the potential of applying the distance-learning methodology to the cluster validation problem.


An Efficient Way for Determining the Number of Clusters

in a Graph

Orly Yahalom1, Zeev Volkovich2

1Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901845, Fax: 972-4-9901852, E-mail: [email protected]

2Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901994, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Graph clustering, cluster validation

Cluster analysis is a central task in machine learning, aimed at identifying meaningful, homogeneous data groups, named clusters. Cluster analysis includes two main tasks: clustering and validation. Clustering methods deal with finding a partition of the data elements into a given number of clusters; no additional external information about the clusters is available in this process. Validation methods deal with determining the optimal ("true") number of clusters in a dataset. Here, the clustering solutions are compared according to a predefined rule, and the optimal number of clusters is chosen as the one which yields the optimal quality.

Our research is devoted to cluster validation in graphs, which has various applications in the

study of natural and manmade networks. Graph clustering is normally done by applying

spectral algorithms, using the eigenvectors of matrices derived from the graph.

The aim of this study is to determine the optimal number of clusters in graphs in order to

fully utilize graph clustering algorithms.

Given a graph G and a number of clusters k, we sample two subgraphs G1 and G2 (using existing sampling methods based on the random walk technique) and compute partitions Π1 and Π2 of G1 and G2, respectively. If k is indeed the correct number of clusters, we expect to obtain "similar" partitions, in the sense that every cluster in the natural partition would have a corresponding sub-cluster in Π1 as well as in Π2.

For a vertex u and a cluster C we define d(u,C) as the average graph distance between u and the vertices in C. Now, for any vertex u of G1, let C1(u) and C2(u) be the "closest" and "second closest" clusters to u in Π2, with respect to the aforementioned distance function. Equivalently, for any vertex u of G2, let C1(u) and C2(u) be the "closest" and "second closest" clusters to u in Π1. If the partitions Π1 and Π2 are of high quality, then for every vertex u we expect C1(u) to be a subset of the cluster to which u belongs, and thus d(u, C1(u)) should be significantly smaller than d(u, C2(u)). Hence, we define the average ratio between d(u, C1(u)) and d(u, C2(u)) as an index of the clustering quality, where a minimum value of the index should indicate the correct value of k.
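A sketch of the proposed index, assuming the sampled partition and the average-distance function d(u, C) are already computed (names are illustrative):

```python
def cluster_quality_index(vertices, clusters, d):
    """Average over the vertices of d(u, C1(u)) / d(u, C2(u)), where C1(u)
    and C2(u) are the closest and second-closest clusters under the
    average-distance function d(u, C). Lower values indicate that each
    vertex sits clearly inside one cluster, i.e. a better-matching k."""
    ratios = []
    for u in vertices:
        dists = sorted(d(u, C) for C in clusters)
        ratios.append(dists[0] / dists[1])
    return sum(ratios) / len(ratios)
```

Evaluating this index for a range of candidate values of k and taking the minimizer implements the selection rule described above.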

Currently we are building an application that employs our method. Once it is done, we will run our method on various synthetic and real-world datasets, to test its efficiency and robustness.

Acknowledgement: This study is supported by a grant from the ORT Braude College

Research Committee.

Teaching and Learning | 63

Chemistry: From the Nano-Scale to Microelectronics

Learning Quantum Chemistry via Visual-Conceptual Approach

Vered Dangur1, Yehudit Judy Dori1,2, Uri Peskin3

1Department of Education in Technology and Science, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel; Teaching and Learning Center, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901724, Fax: 972-4-9901738, E-mail: [email protected]

2Division of Continuing Education and External Studies, Technion - Israel Institute of Technology,

Technion City, Haifa 32000, Israel. Tel: 972-4-8294524, Fax: 972-4-8236022, E-mail:

[email protected]

3Schulich Faculty of Chemistry, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel, Tel: 972-4-8292137, Fax: 972-4-829570, E-mail: [email protected]

Keywords: Quantum chemistry, thinking skills, visualization, chemistry understanding

levels

In the last decade, a new chemistry curriculum for Israeli High School students was

developed. The focus of most of the learning units was the development of higher order

thinking skills and conceptual understanding. Along these lines, one of the new learning units

which focused on Quantum Chemistry, was developed at the Technion. The title of this unit

is Chemistry – From “the hole” to “the whole”: From the nano-scale to Microelectronics.

Undergraduate Quantum Mechanics courses are taught mostly via a mathematics-oriented

approach. Our unit aims at 12th grade students, and is based on a more qualitative approach

which integrates real-life applications and visualization - the visual-conceptual approach. The

unit was also partially implemented in an undergraduate Quantum Chemistry course.

The research objectives were to investigate the effect of the unit on undergraduate and High

School Chemistry students' understanding of Quantum Mechanical concepts, as well as

visual and textual chemical understanding.

Research participants were 198 High School students who studied the unit, and 82

undergraduate students who learned Quantum Mechanics and volunteered to participate in

the study. About a third of the undergraduates participated in a short-term enrichment

workshop that included the topics of the learning unit.

The research tools included pre- and post-questionnaires designed to assess conceptual and visual understanding, as well as feedback questions. The High School students' responses to the post-questionnaire showed a better understanding of the quantum models in comparison to the pre-questionnaire; however, a few of them still demonstrated misconceptions, such as previously held naïve and hybrid models. Both High School and undergraduate students improved their scores in visual and textual chemical understanding. Comparison between the sub-groups which participated in the research revealed that in questions related to the visual and textual chemical understanding of the unit's topics, the 12th graders improved their scores more than the undergraduate students who participated in the enrichment course that included the topics of the learning unit. In addition, students who took the enrichment workshop that exposed them to the unit improved their scores significantly more than students who participated in a short mathematics-oriented enrichment workshop.

The research emphasizes the contribution of a visual-conceptual approach to the teaching and learning of Quantum Mechanics at both High School and undergraduate levels. It also contributes to the body of knowledge on the four chemistry understanding levels - macroscopic, microscopic, symbol and process - via the addition of the Quantum Mechanical level as a fifth level of chemistry understanding.


"Personal Coaching" Program -

Students' Academic Achievements and Lecturers' Experience

Ita Donner1*, Miri Shacham2, Rivka Weiser Biton3, Orit Herscovitz4

1Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901907, Fax: 972-4-9901886, E-mail: [email protected]
2Department of Teaching, Teaching and Learning Center, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901895, E-mail: [email protected]
3Prof. Ephraim Katzir Department of Biotechnology Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901875, Fax: 972-4-9901886, E-mail: [email protected]
4Department of Teaching, Teaching and Learning Center, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel and the Department of Education in Technology and Science, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel, Tel: 972-4-9901750, Fax: 972-4-9987839, E-mail: [email protected]

Keywords: "Personal Coaching", "under-achieving" academic status, lecturers' experience

During the last three academic years, project MALAT at ORT Braude College has been focusing on assisting students who are in "under-achieving" academic status by the end of their first year in the College. As part of the project, the students undergo an interview with a psychologist at the Teaching and Learning Center, and some of them are recommended to have "Personal Coaching" carried out by a trained lecturer from the College.

The goals of the "Personal Coaching" are twofold: on the personal level - to promote students' learning skills, self-efficacy and academic achievements, and on the College level - to decrease the number of course failures and dropouts in the first academic year. The coaching process includes a one-hour meeting between the student and his/her coach every week during the semester. The main issues discussed during the coaching process include setting goals, time management, coping with academic tasks, stress management and personal learning style. The coaching process is accompanied by longitudinal research.

This part of the research examines the students' academic achievements at the end of the coaching process, after a year and after two years. The research also focuses on the effect of the "Personal Coaching" on the lecturers - has the experience of being a coach for College students contributed to the lecturers' professional development?

During the 2009-2011 academic years, 30 students received personal coaching by lecturers from ORT Braude College. The lecturers participated in a 120-hour training course for a "Personal Coach" diploma with a special focus on underachieving students. Questionnaires and in-depth interviews were conducted with 15 lecturers who coached students.

In this presentation we will present our findings regarding the students and the lecturers.

*Deceased

Teaching and Learning | 65

Moving Towards Distance Learning - Literature Review and Two

Examples of Using Interactive Forums in Science Education Courses

Orit Herscovitz

Department of Teaching, Teaching and Learning Center, ORT Braude College, P.O. Box 78, Karmiel,

21982 Israel and the Department of Education in Technology and Science, Technion- Israel Institute

of Technology, Technion City, Haifa 32000, Israel, Tel: 972-4-9901750, Fax: 972-4-9987839, E-mail: [email protected]

Keywords: Distance learning, forums, Moodle environment

The nature of the 21st century classroom is changing rapidly. Online education is growing at

about 30 percent annually. Distance learning is defined as an interactive teaching-learning

process, of which at least part is carried out online via text, audio, and/or video. Distance

learning occurs when the student and the teacher are physically separated and technology is

used to connect them.

Distance learning is conducted synchronously or asynchronously. Synchronous learning can

take place when the learner has the possibility to interact with the instructor and/or other

learners in real time. Asynchronous learning takes place when the learner is not in real-time

communication with the instructor or with other learners, and learners control the rhythm and

rate of their responses and the pace at which they assimilate the learning materials.

Distance learning provides for lifelong learning. It enables the learner to be flexible with respect to time and distance, and allows individual adaptation of the domains of interest, the pace, and the academic level. For the institution administering the teaching, distance learning

enables catering to a large number of learners of diverse backgrounds, offering a variety of

learning subjects, and employing a large team of experts at relatively low costs. Research has

shown that most of the difficulties associated with distance learning are related to reading

text in digital formats, a sense of loneliness, social disconnect and lack of cognitive skills for

effective use of technology for learning. This might lead to ineffective use of technologies for

distance learning and to lack of pedagogical approaches to distance learning and teaching

processes.

Teaching online is different from teaching face-to-face and needs its own set of pedagogies.

In order to explore these pedagogies, interactive forums were integrated in two courses in

Science Education. The first course was "Action Research" conducted at the Technion,

including 12 undergraduate and graduate students. The second course was "Teaching

Methods" conducted at ORT Braude College, including 23 undergraduate students. In each of

the courses, the forums were managed in a Moodle environment and the activity was part of

the graded assignments in the course. In both cases, the students were divided into small working groups and received a set of instructions and a specific period of time for the activity, during which the face-to-face course meeting was converted to a virtual meeting. The

Technion forum was based on free discussion between the groups on preparing a research proposal, and the lecturer was actively involved in the discussions. In the ORT Braude College forum, each group posed questions (after reading articles) and managed the discussion of these questions with their colleagues; the lecturer was not actively involved in the discussions.

The research tools included an open-ended attitudes questionnaire given at the end of the activity and full documentation of the activity in the forums. A preliminary analysis of the questionnaire reveals mixed attitudes towards the importance and effectiveness of using forums. The full analysis of the attitudes questionnaire and the content analysis of the discussions in the forums will be presented at the conference.

66 | Teaching and Learning

Program for Supporting Underachieving Students - Continuous

Evaluation

Orna Muller1, Vered Dangur1,2, Daud Daud1, Merav Rosenfeld1

1Teaching and Learning Center, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-

4-9901724, Fax: 972-4-9901738, E-mail: [email protected]; [email protected];

[email protected]

2Department of Education in Technology and Science, Technion – Israel Institute of Technology,

Technion City, Haifa 32000, Israel; Teaching and Learning Center, ORT Braude College, P.O. Box 78,

Karmiel 21982, Israel, Tel: 972-4-9901724, Fax: 972-4-9901738, E-mail: [email protected].

Keywords: Underachieving students, supporting tools for learning, learning skills

A student in academic underachieving status is identified at ORT Braude College (OBC) as a student who has accumulated three course failures or achieved a low average grade (less than 65) over two consecutive semesters; according to College policy, the termination of such a student's studies is considered. OBC tries, on the one hand, to cut down the percentage of failing students in advanced stages of their studies in the College and, on the other hand, to lower the drop-out rate.

The program for supporting underachieving students (called MALAT) was designed by the Teaching and Learning Center (TLC) to provide support to students whom their departments evaluated as having a good chance of overcoming their difficulties, improving their academic status and succeeding in their studies.

The MALAT program includes two main stages: 1) Students' difficulties and possible reasons for their failures are revealed through collection of personal information and an extensive personal interview with a psychologist. 2) The student is offered several supporting tools that may help them overcome difficulties and improve their achievements. Among the tools offered are: tutoring by a peer student, coaching by College lecturers (trained and certified), emotional counseling, group meetings for time management, personal counseling to improve learning skills, courses on learning strategies and team-leaders' workshops.

The MALAT program is documented and evaluated for the purpose of both assessing the rate of success and evaluating the process of supporting the students. The main questions evaluated are: What are the sources of difficulties and the reasons for failure among students who participated in the MALAT program? What is the rate of students who improved their academic status after participating in the program? What is the correlation between a supporting tool, the degree of implementation of a recommended tool and students' achievement during the 3rd semester? Which supporting tools are most effective?

Research tools include continuous documentation of students' participation in supporting

activities, and interviews with students who have completed the program.

This is the third year of evaluating the program. Analysis of data collected on 125 students who participated in the MALAT project during the winter semesters of the 2009-2010 and 2010-2011 academic years shows that 50% of them moved to a satisfactory academic status.

Other results will be described in our presentation.

Teaching and Learning | 67

Enriching the Quality of Personal-Social Expression in

Digital Environment

David Pundak1, Miri Shacham2

1Web-Learning Unit, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901843,

Fax: 972-4-9901886, E-mail: [email protected]

2Department of Teaching, Teaching and Learning Center, ORT Braude College, P.O. Box 78,

Karmiel, 21982 Israel, Tel: 972-4-9901895, E-mail: [email protected]

Keywords: Personal-social discourse, WIKI, peer feedback, meaningful learning

Research has indicated that attempts to promote online collaboration between students via the web in academic courses encounter many difficulties. One of them is the difficulty of managing a meaningful discussion between the students during the preparation of a conceptual framework. The research tracks the computerized discourse in a WIKI environment over a period of three years in a "Technology in Education" course. During the period of the research, alterations were applied in order to deepen the students' personal expression on the net, as a preliminary stage for their social discussion. Findings indicate an improvement in students' involvement in the editing of their personal sites and a reduction in the percentage of meaningless feedback. It would be valuable to examine these research directions in larger and broader populations.

The research focused on the bonding of personal and group knowledge in the WIKI

environment, including the influence of a personal site on constructing a conceptual

framework.

During the three years in which the WIKI environment was examined in the studied course, efforts were made to enhance students' involvement in writing their personal sites, to improve their willingness to accept and provide feedback, and to foster their acceptance of responsibility for writing a concept related to their knowledge and constructing it according to accepted criteria. Difficulties in attaining these goals were described in detail in previous studies. In brief, these include: reticence regarding exposure on the net, fear of criticism, avoiding the presentation of criticism, resistance to intervention in personal products, providing meaningless feedback, a long period of assimilation to the new environment and a tendency to invest minimal effort. In order to cope with these difficulties, each year, alterations were introduced in the

instructions given to the students when they dealt with the tasks in the WIKI environment.

The findings and conclusions of the research will be presented at the conference.

68 | Teaching and Learning

A New Skills Course - Logical Thinking and Argumentation

Amir Rubinstein

Department of Software Engineering, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel:

972-4-9901902, Fax: 972-4-9901852, E-mail: [email protected]

Keywords: Logical thinking, argumentation skills

Computer Science and Software Engineering curricula require, among other skills, high

logical reasoning ability. Insufficient logical thinking skills result in poor and illogical

argumentation processes, disconnected, imprecise and ambiguous statements, and incorrectly

used terms. Logical thinking is essential for handling abstractions in a precise manner, which

is fundamental in Computer Science and Software Engineering. However, introductory

Mathematics and programming courses frequently emphasize mastering specific

Mathematics content and a programming language, while thinking processes are dealt with in

an unstructured and implicit manner.

This study proposes that logical thinking needs to be introduced separately from Mathematics

and Computer Science courses, but in parallel to them; consequently, connections and the

implementation of ideas can constantly be made. The Logical Thinking and Argumentation

course has two main goals: 1) to develop logical and precise thinking and 2) to enhance

students' skills in formal argumentation techniques. The primary theme of proof and disproof is always in the spotlight. Topics are taught in a less formal manner, subtle issues are discussed to enhance students' awareness of them, and raising difficulties and examples for discussion is encouraged.

Reflective written questionnaires were presented to the students at the end of the course. The questionnaires, which consist of open questions, allow learning about students' attitudes regarding the influence of the course on their argumentation skills and changes in their perception of argumentation in general. Students were also asked to submit a summary of the course rationale as they see it and to give personal feedback.

Since Logical Thinking and Argumentation is only being taught in our College for the second

time, we have relatively few impressions at the moment (but will have more at the end of the

current semester). In general, students report improved confidence in the rationality of logical arguments. Many of them feel that they became more aware of

vague statements made by themselves or others, and therefore can make an effort to avoid

them. Some also find proofs they encounter "more familiar" and less threatening. Students

mentioned that in this course there was time and legitimacy to expose, argue about and iron

out subtle logical issues, which caused difficulties in other first year courses they took (e.g.:

the fact that in some universal statements the "for all" quantifier is omitted, the right way to

disprove an 'if a then b' statement, the rationale of proof by contradiction, etc.).
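As an illustration not taken from the course materials (the predicates P and Q are generic placeholders), the counterexample technique mentioned above can be stated compactly:

```latex
% Illustrative sketch only: disproving a universal "if a then b" claim.
% A single counterexample suffices, because
\[
\neg\,\forall x\,\bigl(P(x) \rightarrow Q(x)\bigr)
\;\equiv\;
\exists x\,\bigl(P(x) \wedge \neg Q(x)\bigr).
\]
% Example: "every prime is odd" (P = prime, Q = odd) is refuted by
% the single witness x = 2, which is prime but not odd.
```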

This feedback implies that an efficient, separate, introductory unit that explicates the

principles of logical reasoning and argumentation is of importance for freshmen in Science

and Engineering.

Teaching and Learning | 69

Evaluation of the Preparatory Semester Program

Miri Shacham1, Orna Muller2

1Department of Teaching, Teaching and Learning Center, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901895, E-mail: [email protected]

2Department of Teaching, Teaching and Learning Center, Department of Software Engineering, ORT

Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901895, E-mail:

[email protected]

Keywords: Preparatory semester, first academic year, algorithmic thinking

Numerous students, though eligible for academic studies, take up engineering without being adequately prepared for academic studies, which often begin years after high school graduation. The sharp entry into the intensive academic world is highly demanding and challenging for students who lack learning and thinking skills, as well as a solid scientific grounding in Mathematics, Physics and Computer Science. Consequently, many students experience difficulties and failure, have to re-attend courses and even drop out. In 2010-2011, the Departments of Mechanical and Software Engineering offered a unique first-semester curriculum ("Preparatory Semester") seeking to enable a gradual adjustment to the studies and decrease failure rates in the first year. The program's principles are:

- Decreasing the number of required courses compared to the "recommended" program
- Providing introductory courses preparing the students for their studies in the Department
- Adding learning and thinking skills courses
- Increasing the number of hours in basic courses considered "difficult"
- Teaching in small groups, giving more attention to each student
- An option to complete the studies in four years by offering complementary summer courses
- Accompanying the program with research

The aims of this study are:
1) Examining the students' achievements after the Preparatory Semester versus those of the other students in the Department
2) Examining the students' satisfaction with the program

In the first and second semesters of 2010-2011, the research population consisted of 14

Mechanical Engineering and 34 Software Engineering students. Participants filled out a

questionnaire examining their perceptions of the program and their first semester

achievements were compared to those of the other students in the faculty.

Analysis of the data reveals that most students feel their expectations of the "Preparatory Semester" were met. The students explained their feelings by stating that they received a good grounding for their studies, as well as personal attention and learning and thinking tools. The "Preparatory Semester" enhanced their personal abilities and allowed for "a smooth landing" and the development of the learning and thinking skills required for academic studies.

The students have faith in their ability to cope with the academic demands and believe the

program is important mainly for students with no previous knowledge and those who have

enrolled in College long after their high school studies.

The participants' achievements after one and two semesters will be presented and compared to those of the entire student body. Their perceptions of the program's contribution will also be presented.

70 | Teaching and Learning

Intercultural Doctoral Learning

Yehudit Od-Cohen1, Miri Shacham2

1The English Department, Ohalo College of Katzrin, P. O. Box 222, Katzrin 12900, Israel, Tel: 972-4-

6825000, Fax: 972-4-6825011, E-mail: [email protected]

2Department of Teaching, Teaching and Learning Center, ORT Braude College, P.O. Box 78, Karmiel

21982, Israel, Tel: 972-4-9901895, Fax: 972-4-9901839, E-mail: [email protected]

Keywords: Intercultural learning, lifelong learning, adult education, academic discourse,

conceptual thinking

The research examined the learning experiences of social sciences doctoral students in the international program at ARU in the UK. This research focused on learning characteristics in a cultural and a community context, both during Ph.D. studies and after completion. It draws on theories of communities of practice, intercultural learning, adult learning and lifelong learning. We will present part of a longitudinal study that was conducted during the years 2005-2010.

The aim of the present study is to examine the learning experiences and the professional

development of the program graduates beyond their Ph.D.'s, and their transition from field

experts to researchers and theorists.

The research methods included interviews with open-ended questions, conducted with 30 graduates who completed their doctorates in the past five years. Interviews were conducted both face to face and by e-mail, with a response rate of 85%. The research population included adults aged 40-50, in mid-career, most of them not affiliated with academic institutions. Their fields of expertise included Education, Business Management, Psychology, Nursing, History, and Law.

The content analysis yielded four elements that characterize learning and development after completing the doctorate: cognitive, emotional, interpersonal and professional development. Additional findings show a transition from field experts into researchers, the continuation of research beyond the Ph.D., and the development of academic discourse.

Intercultural learning within cross-cultural supervision emerges from this study as an interaction between the supervisor's and the Ph.D. student's cultures. As thinking is expressed through language, the student's level of linguistic and socio-linguistic competence on the one hand, and the supervisor's recognition of this competence on the other, can enhance conceptual thinking. Beyond the Ph.D., conceptual thinking is used for more theory, more research, more development as a researcher, and more contribution to society.

Teaching and Learning | 71

Content and Language Integrated Learning – the Implications for

College Teachers and Students

Linda Weinberg1, Suzy Esquenazi Cohen2

1English Studies Unit, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901985,

Fax: 972-4-9901832, E-mail: [email protected]

2English Studies Unit, ORT Braude College, P.O. Box 78, Karmiel 21982, Israel, Tel: 972-4-9901985,

Fax: 972-4-9901832, E-mail: [email protected]

Keywords: English, content courses, policy, proficiency, CLIL, CEFR

Content courses taught in English at Israeli institutions of higher learning are usually found in programs for overseas students and are not generally intended for local students. More recently, local students have been able to consider taking a degree in English at Bar Ilan University or at the Interdisciplinary Center in Herzliya, although only at Bar Ilan University are the degree courses advertised as an opportunity for the students to improve their language skills and their ability to compete in the global community. In the past, the institutional

objection to teaching in English was largely based on fear of the English language usurping

the prestige of Hebrew as the revitalized and national language of Israel. More recently

however, an increasing number of institutions have begun to offer occasional courses in

English to Israeli students. The motivation for this is less clear than such initiatives in

Europe, where there is a clearly defined policy aimed at improving proficiency in English

with the aim of facilitating academic and professional mobility within the European Union

(E.U.) and the wider global market. Consequently, in Europe there is a Common European Framework of Reference for Languages (CEFR), which is applicable throughout the E.U. and which clarifies learners' objectives and achievements in language learning. At the

same time, there is a growing body of research into Content and Language Integrated

Learning (CLIL). While a variety of courses have been given in English at ORT Braude College (OBC) over the last 15 or so years, within the college's U.S. exchange and semester-abroad programs, student and lecturer needs for learning and teaching in English have not yet been systematically addressed, and a consistent policy regarding the necessary preparation and support within the curriculum has yet to be developed.

The aim of this study is to investigate the impact of teaching in English both on students and

on lecturers and to clarify the extent to which existing English language courses need to be

adapted in order to address new needs.

Within the framework of the TEMPUS-EFA project, in which OBC is a participant, we are observing preparations for, and implementation of, courses in English for our students. We are collating feedback from students and lecturers who have been involved in content courses taught in English, and we are examining current policies on English language learning at the tertiary level in Israel, comparing the outcomes of these courses with those of students at the tertiary level in the E.U.


Authors' Index

Abboud F. 51
Abramovitz B. 31
Alperstein D. 3,36
Assy N. 2
Avigad G. 37-39,47
Avros R. 35
Azhari R. 1
Azizov T.Y. 32
Baransi K. 5
Bar Guy N. 8
Baron O. 25
Barzily Z. 61
Bashkansky E. 19
Ben Asher Y. 51
Bendavid I. 20
Ben Hanan U. 40,41
Berezina M. 31
Berman A. 31
Bonshtein I. 1
Boxma O.J. 28
Braun-Benyamin O. 41,42
Bubis E. 10
Cabibbo M. 44
Dangur V. 63,66
Daud D. 66
Donner I.* 46
Dori Y.J. 63
Dror S. 23
Eisenstadt E. 37,38
Elin M. 33
Esquenazy-Cohen S. 71
Faiger H. 10
Faran D. 21
Fares F. 7
Florescu R. 11,12
Freitag C. 9
Frid A. 52
Fried Y. 26
Gadrich T. 19,23
Gavish N. 22
Gladshtein M. 13
Glizer V. 34
Golani M. 35
Golany B. 20
Goldvard A. 39
Gron V.A. 4
Groysman A. 43
Grozovsky M. 2,8
Guetta D. 48
Hazan O. 58
Herscovitz O. 65
Jiryes M. 5
Kallush S. 49
Kaner M. 23
Kaplichenko N.M. 4,6
Kaplichenko E.G. 4
Karp L. 35
Kassir Y. 58
Khatskevich V.A. 23
Knani D. 3
Konischeva O. 46
Korenblat K. 54
Korostovenko V.V. 4,6
Kosolapov S. 14
Kribus A. 17
Kroll E. 1
Kuflik T. 30
Laufer Y. 42
Lavner Y. 52
Levi A. 26
Levtov N. 55
Lewin M. 36
Maharshak A. 27
Maoz M. 2,9
Marmor Y. 23
Masalha N. 5
Mikhaylidi Y. 24
Miller S. 15,41
Mironi-Harpaz I. 3
Mizrahi R. 47
Morell J.J. 9
Mounits B. 56
Muller O. 66,69
Narkis M. 3
Naseraldin H. 24,25
Nujedat A. 5
Od-Cohen Y. 70
Peretz H. 26
Perry D. 28
Peskin U. 63
Pozner R. 17
Pundak D. 27,67
Ravid R. 28
Ravve E. 57
Raz E. 50
Regev M. 41,44
Rosenfeld M. 66
Rosenwaks Y. 17
Rubinstein A. 58,68
Sabag N. 11,18
Sabbah I. 5
Sabban A. 16
Salomon S. 39
Sarfaty R. 12,17
Schechner P. 10
Segev G. 17
Senderov V.A. 23
Shacham M. 64,67,69,70
Shajrawi Y. 51
Shakhrai S.G. 6
Shefer M. 45
Shnits B. 29
Shutze O. 37
Shvartsman L. 31
Simchon R. 41
Sinenko E. 46
Soffer A. 54
Soreanu P. 59
Spigarelli S. 44
Stein E. 51
Tahan M. 30
Toledano-Kitai D. 60
Toubi A. 7
Trotskovsky E. 18
Turetsky V. 34
Volkovich Z. 53-55,59-62
Wasser S.P. 7
Weber G.W. 61
Weinberg L. 71
Weiser-Biton R. 46
Weiss A. 47
Weiss-Cohen M. 38
Weitz I.S. 62
Yahalom O. 24
Yedidsion L. 30
Yuval E. 30

*Deceased