Search results for: STEP.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1014

804 Properties of Biodiesel Produced by Enzymatic Transesterification of Lipids Extracted from Microalgae in Supercritical Carbon Dioxide Medium

Authors: Hanifa Taher, Sulaiman Al-Zuhair, Ali H. Al-Marzouqi, Yousef Haik, Mohammed Farid

Abstract:

Biodiesel, as an alternative renewable fuel, has been receiving increasing attention due to the limited supply of fossil fuels and the increasing need for energy. Microalgae are a promising source of lipids, which can be converted to biodiesel. Producing biodiesel from microalgae lipids by a lipase-catalyzed reaction in a supercritical CO2 medium has several advantages over conventional production processes. However, identifying the optimum microalgae lipid extraction and transesterification conditions is still a challenge. In this study, the quality of biodiesel produced from lipids extracted from Scenedesmus sp., enzymatically transesterified in supercritical carbon dioxide, has been investigated. At the optimum conditions, the highest biodiesel production yield was found to be 82%. The fuel properties of the biodiesel produced at the optimum reaction conditions, without any separation step, were determined and compared to ASTM standards. The properties were found to comply with the limits and showed a low glycerol content.

Keywords: Biodiesel, fuel standards, lipase, microalgae, Supercritical CO2.

803 Distributed Estimation Using an Improved Incremental Distributed LMS Algorithm

Authors: Amir Rastegarnia, Mohammad Ali Tinati, Azam Khalili

Abstract:

In this paper, we consider the problem of distributed adaptive estimation in wireless sensor networks under two different observation noise conditions. In the first case, we assume that the network contains some sensors with high observation noise variance (noisy sensors). In the second case, a different observation noise variance is assumed for each sensor, which is closer to a real scenario. In both cases, an initial estimate of each sensor's observation noise is obtained. For the first case, we show that when such sensors are present in the network, the performance of conventional distributed adaptive estimation algorithms, such as the incremental distributed least mean square (IDLMS) algorithm, decreases drastically. In addition, detecting and ignoring these sensors leads to better estimation performance. We therefore propose a simple algorithm to detect these noisy sensors and modify the IDLMS algorithm to deal with them. For the second case, we propose a new algorithm in which the step-size parameter is adjusted for each sensor according to its observation noise variance. As the simulation results show, the proposed methods outperform the IDLMS algorithm under the same conditions.
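
As a rough illustration of the noise-dependent step-size idea, the following sketch (hypothetical parameter values and step-size rule, not the authors' code) runs incremental LMS cycles around a ring of sensors, scaling each node's step size inversely with its estimated noise variance:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, T = 10, 4, 2000               # sensors, filter taps, cycles
w_true = rng.standard_normal(M)     # unknown parameter to estimate
sigma2 = rng.uniform(0.01, 1.0, N)  # per-sensor observation noise variance

mu0 = 0.05
mu = mu0 / (1.0 + sigma2 / sigma2.min())  # smaller steps for noisier sensors

psi = np.zeros(M)                   # estimate passed around the ring
for t in range(T):
    for k in range(N):              # incremental cycle over the sensors
        u = rng.standard_normal(M)                    # regressor at node k
        d = u @ w_true + np.sqrt(sigma2[k]) * rng.standard_normal()
        psi = psi + mu[k] * u * (d - u @ psi)         # LMS update at node k

print("mean-square deviation:", np.mean((psi - w_true) ** 2))
```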

Keywords: Distributed estimation, sensor networks, adaptive filter, IDLMS.

802 Evaluation of Negative Air Ions in Bioaerosol Removal: Indoor Concentration of Airborne Bacterial and Fungal in Residential Building in Qom City, Iran

Authors: Z. Asadgol, A. Nadali, H. Arfaeinia, M. Khalifeh Gholi, R. Fateh, M. Fahiminia

Abstract:

The present investigation was conducted to detect the types and concentrations of bacterial and fungal bioaerosols in one room (a bedroom) of selected residential buildings located in different regions of Qom, from February 2015 (n=9) to July 2016 (n=11). Moreover, we evaluated the efficiency of negative air ions (NAIs) in reducing bioaerosols in the indoor air of residential buildings. In the first step, the mean bacterial and fungal concentrations in the nine sampling sites evaluated in winter were 744 and 579 colony-forming units (CFU)/m3, while these values were 1628.6 and 231 CFU/m3 in the 11 sampling sites evaluated in summer, respectively. The most predominant bacterial genera across all sampling sites were Micrococcus spp. and Staphylococcus spp., and the most predominant fungal genera were Aspergillus spp. and Penicillium spp. Bacterial and fungal concentrations exceeded the recommended levels in 95% and 45% of the sampling sites, respectively. In the removal step, using NAIs we achieved reductions ranging from 38% to 93% for bacterial genera and from 25% to 100% for fungal genera. The results suggest that NAI is a highly effective, simple and efficient technique for reducing bacterial and fungal concentrations in the indoor air of residential buildings.
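
The removal percentages quoted above presumably follow the usual definition of removal efficiency (our reading; the abstract does not state the formula explicitly):

```latex
\eta = \frac{C_{\text{before}} - C_{\text{after}}}{C_{\text{before}}} \times 100\%
```

where C is the bioaerosol concentration in CFU/m3 before and after the NAI treatment.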

Keywords: Bacterial, fungal, negative air ions, indoor air, Iran.

801 The Use of Different Methodological Approaches to Teaching Mathematics at Secondary Level

Authors: M. Rodionov, N. Sharapova, Z. Dedovets

Abstract:

The article describes methods of preparing future teachers that encompass the entire diversity of traditional and computer-oriented methodological approaches. The authors reveal how, in a specific educational environment, a teacher can choose the most effective combination of educational technologies based on the nature of the learning task. The key conditions that determine such a choice are that the methodological approach corresponds to the specificity of the problem being solved and that it is responsive to the individual characteristics of the students. The article refers to training students in the proper use of mathematical electronic tools for educational purposes. The preparation of future mathematics teachers should be a step-by-step process, building on specific examples. At the first stage, students solve problems making optimal use of electronic teaching tools. At the second stage, the main emphasis is on modeling lessons. At the third stage, students develop and implement strategies for the study of one of the topics within a school mathematics curriculum. The article also recommends the implementation of this strategy in the preparation of future teachers and states its possible benefits.

Keywords: Computer-oriented approach, traditional approach, future teachers, mathematics, lesson, students, education.

800 Obtaining Constants of Johnson-Cook Material Model Using a Combined Experimental, Numerical Simulation and Optimization Method

Authors: F. Rahimi Dehgolan, M. Behzadi, J. Fathi Sola

Abstract:

In this article, the Johnson-Cook material model's constants for structural steel ST.37 have been determined by a method which integrates experimental tests, numerical simulation, and optimization. In the first step, a quasi-static test was carried out on a plain specimen. Next, the constants were calculated by minimizing the difference between the results acquired from the experiment and the numerical simulation. Then, a quasi-static tension test was performed on three notched specimens with different notch radii. Finally, in order to verify the results, they were used in a numerical simulation of the notched specimens, and the experimental and simulation results were observed to be in good agreement. The change in the diameter of the plain specimen in the necking area was set as the objective function in the optimization step. For final validation of the proposed method, the diameter variation was considered as a parameter and its sensitivity to a change in any of the model constants was examined; the results fully corroborated the method.
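
For reference, the Johnson-Cook flow stress model whose constants (A, B, n, C, m) are being fitted has the standard form (standard formulation; the abstract does not restate it):

```latex
\sigma = \left(A + B\,\varepsilon^{n}\right)
         \left(1 + C \ln \dot{\varepsilon}^{*}\right)
         \left(1 - T^{*\,m}\right)
```

where ε is the equivalent plastic strain, ε̇* the dimensionless strain rate, and T* the homologous temperature.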

Keywords: Constants, Johnson-Cook material model, notched specimens, quasi-static test, sensitivity.

799 Cell Biomass and Lipid Productivities of Meyerella planktonica under Autotrophic and Heterotrophic Growth Conditions

Authors: Rory Anthony Hutagalung, Leonardus Widjaya

Abstract:

The microalga Meyerella planktonica is a potential biofuel source because it can be grown in bulk under either autotrophic or heterotrophic conditions. However, the quantitative growth of this alga is still low, as it tends to precipitate to the bottom. Besides, the lipid concentration is low when it is grown under autotrophic conditions, whereas heterotrophic conditions can enhance the lipid concentration. A combination of autotrophic conditions and agitation was used to increase the density of the culture, while heterotrophic conditions were set up to raise lipid production. A two-stage experiment was applied: increasing the density in the first step and increasing the lipid concentration in the next. The autotrophic condition resulted in higher density but lower lipid concentration compared to the heterotrophic one. Agitation produced higher density under both autotrophic and heterotrophic conditions. The two-stage experiment enhanced the density during the autotrophic stage and the lipid concentration during the heterotrophic stage. The highest yield was obtained using 0.4% v/v glycerol as a carbon source (2.9±0.016 x 10^6 cells w/w), attained 7 days after the heterotrophic stage began. The lipid concentration was stable starting from day 7.

Keywords: Agitation, Glycerol, Heterotrophic, Lipid Productivity, Meyerella planktonica.

798 A Model of Market Segmentation for the Customers of Mellat Bank in Iran

Authors: Nader Gharibnavaz, Hossein Yazdi

Abstract:

If an organization like Mellat Bank wants to identify its customer market completely in order to reach its specified goals, it can segment the market and offer the right product package to the right segment. Our objective is to offer a segmentation model for the Iranian banking market from Mellat Bank's point of view. The methodology of this project combines "segmentation on the basis of four part-quality variables" with "segmentation on the basis of difference in means". The required data are gathered from E-Systems and the researcher's personal observation. Finally, the research proposes that the organization first form a four-dimensional matrix with 756 segments using four variables, named value-based, behavioral, activity style, and activity level; at the second step, it should calculate the mean profit for every cell of the matrix at two distinct workload levels (α1: normal conditions and α2: high-pressure conditions) and compare the segments by checking two conditions, 1) homogeneity of every segment with its sub-segments and 2) heterogeneity with other segments, thereby carrying out the necessary segmentation process. The last recommendation (further explained by an operational example and a feedback algorithm) is to test and update the model, because of the dynamic environment, technology, and banking system.
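
A minimal sketch of the matrix-and-means idea (the column names and data below are invented for illustration; the paper's actual variables and profit calculation are richer):

```python
import pandas as pd

# Hypothetical customer records: one categorical code per segmentation variable.
customers = pd.DataFrame({
    "value_based":    ["V1", "V2", "V1", "V3", "V2"],
    "behavioral":     ["B1", "B1", "B2", "B2", "B1"],
    "activity_style": ["S1", "S2", "S1", "S1", "S2"],
    "activity_level": ["L1", "L1", "L2", "L1", "L2"],
    "profit":         [120.0, 80.0, 200.0, 50.0, 95.0],
})

# The 4-D segment matrix: mean profit per cell (one cell per variable combination).
segment_means = (customers
                 .groupby(["value_based", "behavioral",
                           "activity_style", "activity_level"])["profit"]
                 .mean())
print(segment_means)
```

Cells with similar means would then be candidates for merging (homogeneity), while cells with clearly different means stay separate (heterogeneity).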

Keywords: market segmentation model, banking system, Mellat bank

797 On Solving Single-Period Inventory Model under Hybrid Uncertainty

Authors: Madhukar Nagare, Pankaj Dutta

Abstract:

The inventory decision environment for short life-cycle products is full of uncertainties arising from the randomness and fuzziness of input parameters such as customer demand, requiring modeling under hybrid uncertainty. Prior inventory models incorporating fuzzy demand have unfortunately ignored the stochastic variation of demand. This paper determines an unambiguous optimal order quantity from a set of n fuzzy observations in a newsvendor inventory setting, in the presence of fuzzy random demand that captures both the fuzzy perception and the randomness of customer demand. The emphasis of this paper is on providing a solution procedure that attains optimality in two steps when demand information is available in linguistic phrases, leading to fuzziness along with stochastic variation. The first step of the solution procedure identifies and prefers the single best fuzzy opinion out of all expert opinions, and the second step determines the optimal order quantity from the selected event that maximizes profit. The model and solution procedure are illustrated with a numerical example.
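
For orientation, in the classical (crisp) newsvendor setting with unit underage cost c_u and unit overage cost c_o, the profit-maximizing order quantity is the critical-fractile solution

```latex
Q^{*} = F^{-1}\!\left(\frac{c_u}{c_u + c_o}\right)
```

where F is the demand distribution function; the paper's contribution is to recover an analogous unambiguous Q* when demand is only known through fuzzy random observations.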

Keywords: Fuzzy expected value, Fuzzy random demand, Hybrid uncertainty, Optimal order quantity, Single-period inventory

796 The Role of Product Involvement Level in Consumer Tendency toward Online Review

Authors: Khashayar Jafari Kaliji

Abstract:

The paper aims to clarify the relationship between product involvement level and consumer tendency toward online reviews. It places products into two classes and examines the level of user attention and the significant differences between attribute-based areas and experience-based areas in each category. It uses an eye-tracking experiment to simulate online shopping behavior. A scenario was designed in which 23 participants were asked, step by step, to purchase some products and add them to their shopping cart. Fixation durations are used to measure the amount of visual attention the user pays to each area of interest (AOI), determined for the two classes of high-involvement and low-involvement products, and a paired-sample t-test was used to examine the effect of product type on online review content. The study results show that users of high-involvement products weigh attribute-based points more highly than experience-based points.
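
A minimal sketch of the paired comparison on fixation durations (the numbers are invented; the study's AOI definitions and data are its own):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant total fixation durations (seconds) on the two AOI types.
attribute_aoi  = np.array([4.1, 3.8, 5.0, 4.6, 3.9, 4.4])
experience_aoi = np.array([2.9, 3.1, 3.6, 3.0, 2.8, 3.3])

# Paired-sample t-test: the same participants viewed both AOI types.
t_stat, p_value = stats.ttest_rel(attribute_aoi, experience_aoi)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```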

Keywords: High-involvement products, low-involvement products, attribute-based review, experience-based review, eye tracking, fixation duration.

795 A Partially Accelerated Life Test Planning with Competing Risks and Linear Degradation Path under Tampered Failure Rate Model

Authors: Fariba Azizi, Firoozeh Haghighi, Viliam Makis

Abstract:

In this paper, we propose a method to model the relationship between failure time and degradation for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value; no assumptions are made about the distribution of the failure times. A simple step-stress test is used to shorten the failure time of products, and a tampered failure rate (TFR) model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates (MLEs) of the model parameters are obtained through an expectation-maximization (EM) algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte Carlo simulation. Finally, a real example is analyzed to illustrate the application of the proposed methods.
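
In the usual tampered failure rate formulation (our reading of the standard TFR model; in this paper the intensities additionally depend on the degradation value), the hazard after the stress-change time τ is simply scaled by a tampering factor α > 0:

```latex
\lambda(t) =
\begin{cases}
\lambda_{0}(t), & 0 \le t < \tau,\\[4pt]
\alpha\,\lambda_{0}(t), & t \ge \tau,
\end{cases}
```

where λ0 is the failure intensity under the low stress level.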

Keywords: Expectation-maximization (EM) algorithm, cause of failure, intensity, linear degradation path, masked data, reliability function.

794 3D Shape Modelling of Left Ventricle: Towards Correlation of Myocardial Scintigraphy Data and Coronarography Result

Authors: A. Ben Abdallah, H. Essabbah, M. H. Bedoui

Abstract:

Myocardial scintigraphy is an imaging modality that provides functional information, whereas coronarography gives useful information about coronary artery anatomy. In the case of coronary artery disease (CAD), coronarography cannot determine precisely which moderate lesions (artery reduction between 50% and 70%), known as the "gray zone", are haemodynamically significant. In this paper, we aim to define the relationship between the location and degree of stenosis in the coronary arteries and the perfusion observed on myocardial scintigraphy. This allows us to model the impact of the evolution of these stenoses in order to justify a coronarography, or to avoid it, for patients suspected of being in the gray zone. Our approach is decomposed into two steps. The first step consists of modelling a coronary artery bed and stenoses of different locations and degrees. The second step consists of modelling the left ventricle at stress and at rest using the spherical harmonics model and myocardial scintigraphic data. We use the spherical harmonics descriptors to analyse the deformation of the left ventricle model between stress and rest, which allows us to conclude whether an ischemia exists and to quantify it.
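
As a rough sketch of spherical-harmonics shape descriptors (not the authors' pipeline: the surface, sampling, and degree cut-off below are invented), a star-shaped surface r(θ, φ) can be projected onto the SH basis by least squares, and the coefficient vector used as the shape descriptor:

```python
import numpy as np
from scipy.special import sph_harm

L_MAX = 4                                    # highest SH degree kept (assumption)
theta = np.linspace(0, 2 * np.pi, 40)        # azimuthal samples
phi = np.linspace(0.01, np.pi - 0.01, 20)    # polar samples
T, P = np.meshgrid(theta, phi)

# Hypothetical star-shaped "ventricle" surface: radius as a function of direction.
r = 1.0 + 0.1 * np.cos(P) + 0.05 * np.sin(2 * T) * np.sin(P)

# Design matrix: one column per (l, m) real-valued basis function.
cols = []
for l in range(L_MAX + 1):
    for m in range(-l, l + 1):
        Y = sph_harm(m, l, T, P)   # scipy convention: sph_harm(m, l, azimuth, polar)
        cols.append(np.real(Y).ravel() if m >= 0 else np.imag(Y).ravel())
A = np.column_stack(cols)

coeffs, *_ = np.linalg.lstsq(A, r.ravel(), rcond=None)
print("SH descriptor length:", coeffs.shape[0])   # (L_MAX + 1)^2 coefficients
```

Comparing the descriptor vectors of the stress and rest models (e.g., by Euclidean distance) then quantifies the deformation.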

Keywords: Spherical harmonics model, vascular bed, 3D reconstruction, left ventricle, myocardial scintigraphy.

793 New Features for Specific JPEG Steganalysis

Authors: Johann Barbier, Eric Filiol, Kichenakoumar Mayoura

Abstract:

We present in this paper a new approach to specific JPEG steganalysis, and propose studying the statistics of the compressed DCT coefficients. Traditionally, steganographic algorithms try to preserve the statistics of the DCT and of the spatial domain, but they cannot preserve both and also control the alteration of the compressed data. We have noticed a deviation of the entropy of the compressed data after a first embedding. This deviation is greater when the image is a cover medium than when the image is a stego image. To observe this deviation, we introduced new statistical features and combined them with the Multiple Embedding Method. This approach is motivated by the avalanche criterion of the JPEG lossless compression step. This criterion makes it possible to design detectors whose detection rates are independent of the payload. Finally, we designed a Fisher discriminant based classifier for the well-known steganographic algorithms Outguess, F5 and Hide and Seek. The experimental results we obtained show the efficiency of our classifier for these algorithms. Moreover, it is also designed to work at low embedding rates (< 10^-5) and, according to the avalanche criterion of the RLE and Huffman compression steps, its efficiency is independent of the quantity of hidden information.
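
A minimal sketch of the final classification stage (the feature values below are synthetic; in the paper the features are the compressed-domain statistics described above):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Hypothetical 3-D feature vectors (e.g., entropy-deviation statistics) for
# cover and stego images; in the paper these come from the JPEG stream.
cover = rng.normal(loc=0.0, scale=1.0, size=(200, 3))
stego = rng.normal(loc=0.7, scale=1.0, size=(200, 3))

X = np.vstack([cover, stego])
y = np.array([0] * 200 + [1] * 200)   # 0 = cover, 1 = stego

# Fisher's linear discriminant: project onto the direction that best
# separates the two classes, then threshold.
clf = LinearDiscriminantAnalysis()
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```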

Keywords: Compressed frequency domain, Fisher discriminant, specific JPEG steganalysis.

792 Traditional Dyeing of Silk with Natural Dyes by Eco-Friendly Method

Authors: Samera Salimpour Abkenar

Abstract:

In the traditional dyeing of natural fibers with natural dyes, metal salts are commonly used to increase color stability. This method always carries the risk of environmental pollution (contamination of arable soils and fresh groundwater) due to the release of dyeing effluents containing large amounts of metal. Therefore, researchers are always looking for new methods to obtain a green dyeing system. In this research, an enzymatic dyeing method is proposed to prevent environmental pollution by metals and to reduce production costs. After degumming and bleaching, raw silk fabrics were dyed with natural dyes (madder and sumac) by three methods (pre-mordanting with a metal salt, one-step enzymatic dyeing, and two-step enzymatic dyeing). The results show that silk dyed with natural dyes by the enzymatic method has higher color strength and colorfastness than silk pretreated with a metal salt. Also, the amount of dye remaining in the dyeing wastewater is significantly reduced by the enzymatic method. The enzymatic dyeing method is found to improve dye absorption, color strength, and soft hand, with no change in color shade, low production costs (due to the low dyeing temperature), and a significant reduction in environmental pollution.

Keywords: Eco-friendly, natural dyes, silk, traditional dyeing.

791 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances

Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels

Abstract:

Prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performance were constructed using a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models that accurately identify the most important predictors and yield valid range estimations of the Step 1 and Step 2 scores. The application of the simulation modeling approach was deemed an effective way to predict student performance on licensure examinations. Also, sensitivity analysis (also known as what-if analysis) in the simulation models was used to predict the magnitudes of the Step 1 and 2 scores affected by changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified applicants for interviews and document the effectiveness of their basic science education programs based on the simulation results.
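
A toy version of the regression-plus-simulation idea (the coefficients and noise levels are invented; the study fits these to institutional data):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical fitted regression: Step1 = b0 + b1*MCAT_VR + b2*NBME + error.
b0, b1, b2, resid_sd = 120.0, 2.5, 0.8, 8.0

# Monte Carlo: sample plausible predictor values and residuals, propagate
# through the regression to get a range estimate for the Step 1 score.
mcat_vr = rng.normal(9.5, 1.5, 100_000)
nbme = rng.normal(75.0, 10.0, 100_000)
step1 = b0 + b1 * mcat_vr + b2 * nbme + rng.normal(0, resid_sd, 100_000)

lo, hi = np.percentile(step1, [5, 95])
print(f"90% range estimate for Step 1: [{lo:.0f}, {hi:.0f}]")

# What-if (sensitivity) analysis: shift NBME scores up by 5 points.
step1_whatif = b0 + b1 * mcat_vr + b2 * (nbme + 5) + rng.normal(0, resid_sd, 100_000)
print(f"mean shift: {step1_whatif.mean() - step1.mean():.1f} points")
```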

Keywords: Prediction Model, Sensitivity Analysis, Simulation Method, USMLE.

790 The Alliance for Grassland Renewal: A Model for Teaching Endophyte Technology

Authors: C. A. Roberts, J. G. Andrae, S. R. Smith, M. H. Poore, C. A. Young, D. W. Hancock, G. J. Pent

Abstract:

To the authors' best knowledge, there are no published reports of effective methods for teaching fescue toxicosis and grass endophyte technology in the USA. To address this need, a group of university scientists, industry representatives, government agents, and livestock producers formed an organization called the Alliance for Grassland Renewal. One goal of the Alliance was to develop a teaching method that could be employed across all regions of the USA and all sectors of the agricultural community. The first step in developing this method was the identification of experts familiar with the science and management of fescue toxicosis. The second step was curriculum development: the experts wrote a curriculum addressing all aspects of toxicosis and its management, including toxicology, animal nutrition, pasture management, economics, and mycology, created for delivery in lectures, laboratories, and the field. The curriculum was unique in that it could be delivered across state lines, regardless of particular in-state recommendations; it was also unique in that it was unanimously supported by private companies otherwise in competition with each other. The final step in developing the teaching method was formulating a delivery plan: all the experts, from universities, industry, government, and production, volunteered to travel from any state in the USA, converge in one location, teach a one-day workshop, then travel to the next location. The results of this teaching method indicate widespread success. Since 2012, experts from across the USA have converged to teach Alliance workshops in Kansas, Oklahoma, Missouri, Kentucky, Georgia, South Carolina, North Carolina, and Virginia, with workshops ongoing in Arkansas and Tennessee. Data from post-workshop surveys indicate that the instruction has been effective, as at least 50% of participants stated their intention to adopt the endophyte technology presented in these workshops. The teaching method developed by the Alliance for Grassland Renewal has proved effective, and the Alliance continues to expand across the USA.

Keywords: Endophyte, Epichloë coenophiala, ergot alkaloids, fescue toxicosis, tall fescue.

789 Malicious Route Defending Reliable-Data Transmission Scheme for Multi Path Routing in Wireless Network

Authors: S. Raja Ratna, R. Ravi

Abstract:

Securing confidential data transferred via wireless networks remains a challenging problem. It is paramount to ensure that data are accessible only by legitimate users rather than by attackers. One of the most serious threats to an organization is jamming, which disrupts the communication between any pair of nodes. Therefore, designing an attack-defending scheme without any packet loss in data transmission is an important challenge. In this paper, a Dependence-based Malicious Route Defending (DMRD) scheme is proposed for a multi-path routing environment to prevent jamming attacks. The key idea is to defend against malicious routes to ensure perspicuous transmission. The scheme develops a two-layered architecture and operates in two steps. In the first step, possible routes are captured and their agent dependence values are marked using triple agents. In the second step, the dependence values are compared by comparator filtering to detect malicious routes and to identify a reliable route for secured data transmission. Simulation studies show that the proposed scheme identifies malicious routes effectively, attaining lower delay and route discovery times while achieving higher throughput.

Keywords: Attacker, Dependence, Jamming, Malicious.

788 Development of Fuzzy Logic Control Ontology for E-Learning

Authors: Muhammad Sollehhuddin A. Jalil, Mohd Ibrahim Shapiai, Rubiyah Yusof

Abstract:

Nowadays, ontology is common in many areas such as artificial intelligence, bioinformatics, e-commerce, and education, and it is one of the focus areas in the field of information retrieval. The purpose of an ontology is to describe a conceptual representation of concepts and their relationships within a particular domain; in other words, an ontology provides a common vocabulary for anyone who needs to share information in the domain. Ontologies exist for various fields of engineering and non-engineering knowledge; however, only a few cover engineering knowledge, and fuzzy logic as engineering knowledge is still not available as an ontology domain. In general, fuzzy logic requires step-by-step guidelines and instructions for lab experiments. In this study, we present a domain ontology for Fuzzy Logic Control (FLC) knowledge. We construct a Table of Contents (ToC) using the middle-out strategy of the Uschold and King method to develop the FLC ontology. The proposed framework is developed using Protégé as the ontology tool, and Protégé's ontology reasoner, known as the Pellet reasoner, is then used to validate the presented framework. The presented framework offers better performance based on the consistency and classification parameter indices. In general, this ontology can provide a platform for anyone who needs to understand FLC knowledge.
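
A small sketch of defining and checking such a class hierarchy programmatically (using the owlready2 library as a stand-in; the paper itself works in Protégé, and the IRI and class names below are invented):

```python
from owlready2 import get_ontology, Thing, sync_reasoner

onto = get_ontology("http://example.org/flc.owl")  # hypothetical IRI

with onto:
    class FuzzyLogicConcept(Thing): pass
    class Fuzzifier(FuzzyLogicConcept): pass          # crisp input -> fuzzy sets
    class RuleBase(FuzzyLogicConcept): pass           # IF-THEN control rules
    class InferenceEngine(FuzzyLogicConcept): pass    # applies the rules
    class Defuzzifier(FuzzyLogicConcept): pass        # fuzzy output -> crisp value

# Run a DL reasoner to check consistency and compute the class hierarchy
# (owlready2 ships HermiT by default; a Pellet variant also exists).
with onto:
    sync_reasoner()

print(list(onto.classes()))
```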

Keywords: Engineering knowledge, fuzzy logic control ontology, ontology development, table of contents.

787 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

The production of a global sedimentological seabed map was initiated in 1995 to provide a necessary tool for searches for aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments off the French coasts, and then sediment maps of the continental shelves of Europe and North America. The current ocean sediment map was initiated from UNESCO's general map of the deep ocean floor and adapted using a unique sediment classification to represent all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are examined and, if they present an interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys, which allow very high-quality mapping of areas that had until then been represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are compiled from all the bathymetric and sedimentary data of a region; this step makes it possible to produce a regional synthesis map, with generalizations applied where data are over-precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing, and a digital version is issued every two years with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of quality variability. The map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in seabed characterization made during recent decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and compiling these new maps with those previously published allows a gradual enrichment of the world sedimentary map. There is still much work to do, however, to improve some regions, which are still based on data acquired more than half a century ago.

Keywords: Marine sedimentology, seabed map, sediment classification, World Ocean.

786 A Computer Aided Detection (CAD) System for Microcalcifications in Mammograms - MammoScan μCaD

Authors: Kjersti Engan, Thor Ole Gulsrud, Karl Fredrik Fretheim, Barbro Furebotten Iversen, Liv Eriksen

Abstract:

Clusters of microcalcifications in mammograms are an important sign of breast cancer. This paper presents a complete Computer Aided Detection (CAD) scheme for the automatic detection of clustered microcalcifications in digital mammograms. The proposed system, MammoScan μCaD, consists of three main steps. First, all potential microcalcifications are detected using a feature extraction method, VarMet, and adaptive thresholding; this also produces a number of false detections. The goal of the second step, Classifier level 1, is to remove everything but microcalcifications. The last step, Classifier level 2, uses learned dictionaries and sparse representations as a texture classification technique to distinguish single, benign microcalcifications from clustered microcalcifications, and to remove some remaining false detections. The system is trained and tested on true digital data from Stavanger University Hospital, and the results are evaluated by radiologists. The overall results are promising, with a sensitivity > 90% and a low false detection rate (approximately 1 unwanted detection per image, or 0.3 false detections per image).
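
A minimal illustration of the adaptive-thresholding stage (OpenCV used as a stand-in; the VarMet filtering step, the file path, and the parameter values are not from the paper):

```python
import cv2

# Load a mammogram region as an 8-bit grayscale image (path is hypothetical).
img = cv2.imread("mammogram_roi.png", cv2.IMREAD_GRAYSCALE)

# Local adaptive threshold: a pixel is kept when it is brighter than its
# neighborhood mean by an offset, which suits small bright spots such as
# candidate microcalcifications. Block size and offset are assumptions.
candidates = cv2.adaptiveThreshold(
    img, 255,
    cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    cv2.THRESH_BINARY,
    blockSize=31, C=-10)

# Count candidate blobs; the classifier stages then prune false detections.
n_blobs, _ = cv2.connectedComponents(candidates)
print("candidate components:", n_blobs - 1)
```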

Keywords: mammogram, microcalcifications, detection, CAD, MammoScan μCaD, VarMet, dictionary learning, texture, FTCM, classification, adaptive thresholding

785 The Cost of Innovation in Software Development Projects

Authors: Mihai Liviu Despa

Abstract:

The paper tackles the topic of determining the cost of innovation in software development projects. Innovation can be achieved in either a planned or an unplanned manner; the paper addresses scenarios where innovation is planned for. As a starting point, an innovative software development project is analyzed and depicted step by step as it was implemented, from inception to delivery. Costs specific to innovation in software development are isolated based on the author's personal experience in managing the above-mentioned project. The innovation cost components identified by the author are then validated through open discussions with software development professionals and project managers in LinkedIn groups; to receive relevant feedback, only groups that focus on software development and innovation management were targeted. Additional innovation cost components suggested by these professionals and project managers are also considered. Based on the identified cost components, an indicator is built, meant to formalize the process of determining the cost of innovation in a software development project. The indicator aggregates all the innovation cost components identified in the research process, and the process of calculating each cost component is described. Conclusions are formulated and new related research topics are submitted for debate.
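
The aggregation step amounts to a sum over the identified components; a trivial sketch (the component names and values below are invented, not the paper's actual list):

```python
# Hypothetical innovation cost components (monetary units); the indicator
# simply aggregates them. The paper's component list and calculations differ.
components = {
    "research_spikes": 12_000.0,
    "prototyping": 8_500.0,
    "specialist_training": 4_000.0,
    "failed_experiments": 6_200.0,
}

innovation_cost = sum(components.values())
print(f"innovation cost indicator: {innovation_cost:,.0f}")
```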

Keywords: Innovation cost, IT project management, software development.

784 Image Clustering Framework for BAVM Segmentation in 3DRA Images: Performance Analysis

Authors: FH. Sarieddeen, R. El Berbari, S. Imad, J. Abdel Baki, M. Hamad, R. Blanc, A. Nakib, Y.Chenoune

Abstract:

Brain ArterioVenous Malformation (BAVM) is an abnormal tangle of brain blood vessels where arteries shunt directly into veins with no intervening capillary bed, which causes high pressure and hemorrhage risk. The success of treatment by embolization in interventional neuroradiology is highly dependent on the accuracy of vessel visualization. In this paper, the performance of clustering techniques for vessel segmentation from 3-D rotational angiography (3DRA) images is investigated and a new segmentation technique is proposed. The method consists of a preprocessing step of image enhancement, after which K-Means (KM), Fuzzy C-Means (FCM) and Expectation Maximization (EM) clustering are used to separate vessel pixels from the background, and artery pixels from vein pixels when possible. A post-processing step of removing false-alarm components is applied before constructing a three-dimensional volume of the vessels. The proposed method was tested on six datasets along with a medical assessment by an expert, and the obtained results showed encouraging segmentations.
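
A minimal sketch of the intensity-clustering idea on a 2-D slice (scikit-learn's K-Means as a stand-in; the synthetic data and cluster count are assumptions, and the paper additionally uses FCM and EM):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic 64x64 "angiography slice": dark background plus a bright vessel band.
slice_img = rng.normal(50, 10, (64, 64))
slice_img[28:36, :] += 120          # bright, vessel-like structure

# Cluster pixel intensities into background / vessel classes.
X = slice_img.reshape(-1, 1)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# The cluster with the higher mean intensity is taken as "vessel".
vessel_label = int(X[labels == 1].mean() > X[labels == 0].mean())
vessel_mask = (labels == vessel_label).reshape(slice_img.shape)
print("vessel pixels:", int(vessel_mask.sum()))
```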

Keywords: Brain arteriovenous malformation (BAVM), 3-D rotational angiography (3DRA), K-Means (KM) clustering, Fuzzy C-Means (FCM) clustering, Expectation Maximization (EM) clustering, volume rendering.

783 Diagnosis of the Abdominal Aorta Aneurysm in Magnetic Resonance Imaging Images

Authors: W. Kultangwattana, K. Somkantha, P. Phuangsuwan

Abstract:

This paper presents a technique for diagnosing abdominal aortic aneurysms in magnetic resonance imaging (MRI) images. First, the technique segments the aorta in the MRI images; this is a required step in determining the volume of the aorta, which in turn is the key step in diagnosing an abdominal aortic aneurysm. The proposed technique detects the aorta in MRI images using a new external energy for the snakes model, calculated from Laws' texture measures. The new external energy increases the capture range of the snakes model more efficiently than the older external energies of snakes models. Second, the technique diagnoses the abdominal aortic aneurysm with a Bayesian classifier, a classification model based on statistical theory. The features for classifying abdominal aortic aneurysms were derived from the aorta contour resulting from our snakes model segmentation: area, perimeter and compactness. We also compare the proposed technique with the traditional snakes model. In our experiments, 30 images were used for training and 20 images for testing, with comparison against expert opinion. The experimental results show that our technique provides an accuracy of more than 95%.
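
A small sketch of the three contour features (OpenCV used for illustration; the compactness definition below is the common perimeter-area ratio and is our assumption, since the abstract does not define it):

```python
import cv2
import numpy as np

# Hypothetical binary mask of a segmented aorta cross-section.
mask = np.zeros((128, 128), np.uint8)
cv2.circle(mask, (64, 64), 30, 255, -1)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
c = max(contours, key=cv2.contourArea)

area = cv2.contourArea(c)
perimeter = cv2.arcLength(c, closed=True)
compactness = perimeter ** 2 / (4 * np.pi * area)   # 1.0 for a perfect circle

print(f"area={area:.0f}, perimeter={perimeter:.1f}, compactness={compactness:.2f}")
```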

Keywords: Abdominal Aorta Aneurysm, Bayesian Classifier, Snakes Model, Texture Feature.

782 Optimization Based Tuning of Autopilot Gains for a Fixed Wing UAV

Authors: Mansoor Ahsan, Khalid Rafique, Farrukh Mazhar

Abstract:

Unmanned Aerial Vehicles (UAVs) have gained tremendous importance, in both military and civil domains, during the first decade of this century. In a UAV, an onboard computer (autopilot) autonomously controls the flight and navigation of the aircraft. Based on the aircraft's role and flight envelope, controllers ranging from basic to complex and sophisticated are used to stabilize the flight parameters; these controllers constitute the autopilot system of the UAV. Autopilot systems most commonly provide lateral and longitudinal control through Proportional-Integral-Derivative (PID) controllers or phase-lead or lag compensators, and various techniques are used to tune the gains of these controllers, such as in-flight step-by-step tuning and software-in-the-loop or hardware-in-the-loop methods. Numerous in-flight tests are subsequently required to actually fine-tune these gains. However, an optimization-based tuning of these PID controllers or compensators, as presented in this paper, can greatly reduce the need for in-flight tuning and substantially reduce the risks and cost involved in flight testing.
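
A toy version of optimization-based gain tuning (the second-order plant, cost function, and optimizer choice below are all our assumptions, not the paper's aircraft model):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.signal import TransferFunction, step

PLANT_NUM, PLANT_DEN = [1.0], [1.0, 2.0, 1.0]   # hypothetical pitch dynamics

def ise(gains):
    """Integral of squared error of the closed-loop unit-step response."""
    kp, ki, kd = gains
    # Open loop C(s)G(s) with PID controller C(s) = (kd s^2 + kp s + ki) / s.
    num = np.polymul([kd, kp, ki], PLANT_NUM)
    den = np.polymul([1.0, 0.0], PLANT_DEN)
    # Closed loop T = CG / (1 + CG).
    cl = TransferFunction(num, np.polyadd(den, num))
    t, y = step(cl, T=np.linspace(0, 10, 1000))
    if not np.all(np.isfinite(y)):              # unstable candidate: penalize
        return 1e6
    return float(np.sum((1.0 - y) ** 2) * (t[1] - t[0]))

res = minimize(ise, x0=[1.0, 0.5, 0.1], method="Nelder-Mead")
print("tuned (Kp, Ki, Kd):", np.round(res.x, 3))
```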

Keywords: Unmanned aerial vehicle (UAV), autopilot, autonomous controls, PID controller gains tuning, optimization.

781 Climate Adaptive Building Shells for Plus-Energy-Buildings, Designed on Bionic Principles

Authors: Andreas Hammer

Abstract:

Six distinctive architectural designs from Frankfurt University will be discussed within this paper, showing the future potential of their adaptable, solar thin-film-sheeted facades, which act and react to the climate and solar changes of their specific sites. The different aspects, as well as the limitations arising from technical and functional restrictions, will be named. The design processes for a "multi-purpose building", a "high-rise building refurbishment" and a "biker's lodge" in the river Rhine valley have been critically outlined and developed step by step, from an international studentship towards an overall energy strategy that, first, had to push the design to a plus-energy building and, second, had to incorporate bionic aspects into the design of the building skins. Both main parameters needed to be reviewed and refined during the whole design process. Various basic bionic approaches were given (e.g. Solar Ivy™, Flectofin™ or HygroSkin™) to experiment with, regarding the use of bendable photovoltaic thin-film elements as parts of a hybrid, kinetic façade system.

Keywords: Energy strategy, photovoltaics in building skins, bionic and bioclimatic design, plus-energy buildings, solar gain, the harvesting façade, sustainable building concept, high-efficiency building skin, Climate Adaptive Building Shells (CABS).

780 Teager-Huang Analysis Applied to Sonar Target Recognition

Authors: J.-C. Cexus, A.O. Boudraa

Abstract:

In this paper, a new approach for target recognition based on the Empirical Mode Decomposition (EMD) algorithm of Huang et al. [11] and the energy tracking operator of Teager [13]-[14] is introduced. The conjunction of these two methods is called Teager-Huang analysis, an approach well suited to the analysis of nonstationary signals. The impulse response (IR) of the target is first band-pass filtered into subsignals (components) called Intrinsic Mode Functions (IMFs), with well-defined instantaneous frequency (IF) and instantaneous amplitude (IA); each IMF is a zero-mean AM-FM component. In the second step, the energy of each IMF is tracked using the Teager energy operator (TEO). The IF and IA, useful for describing the time-varying characteristics of the signal, are estimated using the Energy Separation Algorithm (ESA) of Maragos et al. [16]-[17]. In the third step, a set of features such as skewness and kurtosis is extracted from the IF, IA and IMF energy functions. The Teager-Huang analysis is tested on a set of synthetic IRs of sonar targets with different physical characteristics (density, velocity, shape, etc.). PCA is first applied to the features to discriminate between manufactured and natural targets; the manufactured patterns are then classified into spheres and cylinders. One hundred percent correct recognition is achieved with twenty-three echoes, where sixteen IRs, used for training, are noise-free and seven IRs, used for the testing phase, are corrupted with white Gaussian noise.
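
A compact sketch of the discrete Teager energy operator and a DESA-2-style amplitude/frequency estimate (the test signal is invented, and this follows the standard discrete formulas rather than the paper's exact implementation):

```python
import numpy as np

def teo(x):
    """Discrete Teager energy: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

# Synthetic AM-FM component (one "IMF"): slow AM on a 50 Hz carrier.
fs = 1000.0
n = np.arange(2048)
a = 1.0 + 0.3 * np.cos(2 * np.pi * 2.0 * n / fs)
x = a * np.cos(2 * np.pi * 50.0 * n / fs)

# DESA-2 energy separation: estimate IF and IA from Teager energies.
psi_x = teo(x)
y = x[2:] - x[:-2]                       # y[n] = x[n+1] - x[n-1]
psi_y = teo(y)
L = min(len(psi_x), len(psi_y))
omega = 0.5 * np.arccos(1 - psi_y[:L] / (2 * psi_x[:L]))   # rad/sample
amp = 2 * psi_x[:L] / np.sqrt(psi_y[:L])

print("mean IF estimate: %.1f Hz" % (omega.mean() * fs / (2 * np.pi)))
print("mean IA estimate: %.2f" % amp.mean())
```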

Keywords: Target recognition, Empirical mode decomposition, Teager-Kaiser energy operator, Features extraction.

779 A Survey of Various Algorithms for VLSI Physical Design

Authors: Rajine Swetha R, B. Shekar Babu, Sumithra Devi K.A

Abstract:

Electronic systems are at the core of everyday life. They form an integral part of financial networks, mass transit, telephone systems, power plants and personal computers. Electronic systems are increasingly based on complex VLSI (Very Large Scale Integration) integrated circuits, and electronic design automation is concerned with the design and production of VLSI systems. An important step in creating a VLSI circuit is physical design. The input to physical design is a logical representation of the system under design; the output of this step is the layout of a physical package that optimally, or near-optimally, realizes the logical representation. Physical design problems are combinatorial in nature and of large problem size. Darwin observed that, as variations are introduced into a population with each new generation, the less-fit individuals tend to become extinct in the competition for basic necessities; this survival-of-the-fittest principle leads to the evolution of species. The objective of Genetic Algorithms (GAs) is to find an optimal solution to a problem. Since GAs are heuristic procedures that can function as optimizers, they are not guaranteed to find the optimum, but are able to find acceptable solutions for a wide range of problems. This survey paper presents a study of efficient genetic algorithms for VLSI physical design and observes the common traits of the superior contributions.
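
A minimal GA sketch for a toy placement problem (ordering cells on a row to minimize the total wirelength of given two-pin nets; the operators and parameters are generic textbook choices, not drawn from any surveyed paper):

```python
import random

random.seed(0)
N_CELLS = 8
NETS = [(0, 5), (1, 2), (3, 7), (2, 6), (4, 5)]   # hypothetical 2-pin nets

def wirelength(order):
    pos = {cell: slot for slot, cell in enumerate(order)}
    return sum(abs(pos[a] - pos[b]) for a, b in NETS)

def crossover(p1, p2):
    """Order crossover: keep a slice of p1, fill the rest in p2's order."""
    i, j = sorted(random.sample(range(N_CELLS), 2))
    child = [None] * N_CELLS
    child[i:j] = p1[i:j]
    rest = [c for c in p2 if c not in child]
    for k in range(N_CELLS):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

pop = [random.sample(range(N_CELLS), N_CELLS) for _ in range(30)]
for _ in range(200):                       # generations
    pop.sort(key=wirelength)               # fitter = shorter wirelength
    survivors = pop[:10]                   # truncation selection
    children = [crossover(*random.sample(survivors, 2)) for _ in range(20)]
    for c in children:                     # swap mutation
        if random.random() < 0.2:
            a, b = random.sample(range(N_CELLS), 2)
            c[a], c[b] = c[b], c[a]
    pop = survivors + children

best = min(pop, key=wirelength)
print("best order:", best, "wirelength:", wirelength(best))
```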

Keywords: Genetic Algorithms, Physical Design, VLSI.

778 Recommender Systems Using Ensemble Techniques

Authors: Yeonjeong Lee, Kyoung-jae Kim, Youngtae Kim

Abstract:

This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting the user's precise preferences. The proposed model consists of two steps. In the first step, the study uses logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group, then combines the results of each predictor using multi-model ensemble techniques such as bagging and bumping. In the second step, it uses market basket analysis to extract association rules for co-purchased products. Finally, through the two steps above, the system selects customers with a high likelihood of purchasing products in each product group and recommends appropriate products from the same or different product groups to them. We test the usability of the proposed system using a prototype and real-world transaction and profile data. In addition, we survey user satisfaction with the product lists recommended by the proposed system and with randomly selected product lists. The results show that the proposed system may be useful in real-world online shopping stores.
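
A minimal sketch of the first-step ensemble (scikit-learn used as a stand-in; the features, labels, and combination rule below are illustrative, not the paper's setup):

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Hypothetical customer profiles (3 features) and whether they bought from a
# given product group; real data would come from transactions and profiles.
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

# Bagging over decision trees: each tree sees a bootstrap sample, and the
# ensemble averages their votes to score purchase likelihood.
bagged = BaggingClassifier(DecisionTreeClassifier(max_depth=4),
                           n_estimators=25, random_state=0).fit(X, y)

# A second base learner, as in the paper's multi-model setup.
logit = LogisticRegression().fit(X, y)

# Simple combination: average the predicted purchase probabilities.
proba = 0.5 * bagged.predict_proba(X)[:, 1] + 0.5 * logit.predict_proba(X)[:, 1]
print("top-5 customers to target:", np.argsort(proba)[-5:])
```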

Keywords: Product recommender system, Ensemble technique, Association rules, Decision tree, Artificial neural networks.

777 Supply Chain Resilience Triangle: The Study and Development of a Framework

Authors: M. Bevilacqua, F. E. Ciarapica, G. Marcucci

Abstract:

Supply chain resilience has been broadly studied during the last decade, with research focusing on many aspects of supply chain performance. Consequently, different definitions of supply chain resilience have been developed by the research community, drawing inspiration from other fields of study such as ecology, sociology, psychology and economics. The definitions so far developed in the extant literature are therefore very heterogeneous, and many authors have pointed out a lack of consensus in this field of analysis. The aim of this research is to find the common points between these definitions through the development of a framework of study: the Resilience Triangle. The Resilience Triangle is a tool developed in the field of civil engineering to model the loss of resilience of a given structure during and after the occurrence of a disruption such as an earthquake. It is a simple yet powerful tool that, in our opinion, can summarize all the features that authors have captured in supply chain resilience definitions over the years; this research intends to recapitulate all these heterogeneities within the framework. After collecting the numerous supply chain resilience definitions present in the extant literature, the methodology provides a taxonomy step for collecting and analyzing all the data gathered. The next step compares the data obtained against the plot of a disruption profile, in order to contextualize the Resilience Triangle in the supply chain context. The tool and the results developed in this research lay the foundation for future supply chain resilience modeling and measurement work.
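
For reference, in the civil-engineering formulation (Bruneau et al.; not restated in the abstract), the resilience loss is the area of the triangle traced by the quality-of-infrastructure curve Q(t) between the disruption at t0 and full recovery at t1:

```latex
R_{L} = \int_{t_0}^{t_1} \left[\, 100 - Q(t) \,\right] dt
```

with Q(t) expressed as a percentage of normal performance.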

Keywords: Supply chain resilience, resilience definition, supply chain resilience triangle.

776 Robust Iterative PID Controller Based on Linear Matrix Inequality for a Sample Power System

Authors: Ahmed Bensenouci

Abstract:

This paper provides the design steps of a robust Linear Matrix Inequality (LMI) based iterative multivariable PID controller whose duty is to drive a sample power system comprising a synchronous generator connected to a large network via a step-up transformer and a transmission line. The generator is equipped with two control loops, namely the speed/power (governor) loop and the voltage (exciter) loop. Both loops are lumped into one, where the errors in the terminal voltage and output active power represent the controller inputs, and the generator-exciter voltage and governor-valve position represent its outputs. A multivariable PID is considered here because of its wide use in industry, simple structure and easy implementation; it is also preferred in plants of higher order that cannot be reduced to lower ones. To improve its robustness to variations in the controlled variables, the H∞-norm of the system transfer function is used. To show the effectiveness of the controller, diverse tests are applied, namely step/tracking tests on the controlled variables and variations in the plant parameters. A comparative study between the proposed controller and a robust H∞ LMI-based output feedback controller is given with respect to disturbance rejection. From the simulation results, the iterative multivariable PID shows superiority.

Keywords: Linear matrix inequality, power system, robust iterative PID, robust output feedback control

775 Effective Traffic Lights Recognition Method for Real Time Driving Assistance System in the Daytime

Authors: Hyun-Koo Kim, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light recognition method for the daytime. First, the Potential Traffic Lights Detector (PTLD) uses the full color information of the YCbCr channel image and produces binary images of green and red traffic lights. After the PTLD step, a Shape Filter (SF) is used to remove noise sources such as traffic signs, street trees, vehicles, and buildings; the noise-removal criteria are based on properties of the blobs in the binary image: length, area, bounding-box area, etc. Finally, after an intermediate association step whose goal is to define relevant candidate regions from the previously detected traffic lights, an Adaptive Multi-class Classifier (AMC) is executed; the classification method uses Haar-like features and the Adaboost algorithm. For the simulation, the system was implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested on urban and rural roads. In the tests, we compared our method with standard object-recognition learning processes and showed that it reaches a detection rate of up to 94%, better than the results achieved with cascade classifiers. The computation time of the proposed method is 15 ms.
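
A minimal sketch of the color-segmentation stage (OpenCV as a stand-in; the file path and threshold ranges are invented placeholders that would need calibration, and note that OpenCV orders the channels as Y, Cr, Cb):

```python
import cv2

frame = cv2.imread("road_scene.png")            # hypothetical input frame
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

# Candidate red lights: bright pixels with high Cr (red chroma). The exact
# bounds are placeholders, not the paper's calibrated values.
red_mask = cv2.inRange(ycrcb, (120, 160, 0), (255, 255, 130))
# Candidate green lights: bright pixels with low Cr and moderate Cb.
green_mask = cv2.inRange(ycrcb, (120, 0, 110), (255, 110, 160))

# Blob properties (area, bounding box) feed the subsequent shape filter.
n, _, stats, _ = cv2.connectedComponentsWithStats(red_mask)
for i in range(1, n):
    x, y, w, h, area = stats[i]
    print(f"red candidate blob: bbox=({x},{y},{w},{h}), area={area}")
```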

Keywords: Traffic Light Detection, Multi-class Classification, Driving Assistance System, Haar-like Feature, Color Segmentation Method, Shape Filter.
