Search results for: computational models
Paper Count: 3498

2268 Modeling Stress-Induced Regulatory Cascades with Artificial Neural Networks

Authors: Maria E. Manioudaki, Panayiota Poirazi

Abstract:

Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data, such as gene expression, gene deletion and protein-protein interactions, for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the upstream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows that (a) the expression of at least 33 genes over time and across different stress conditions is well predicted by the expression of the top-layer transcription factors, including cases in which the effect of upstream regulators is shifted in time, and (b) at least 6 novel regulatory interactions are identified that were not previously associated with stress-induced changes in gene expression. These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
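
The three-layer mapping described above can be sketched with a small feed-forward network. The snippet below is a minimal illustration, not the authors' code: it uses scikit-learn's MLPRegressor with synthetic placeholder data, where the inputs are TF-regulator expression profiles and the outputs are the expression of genes in one module; time-shifted regulator effects could be handled by lagging the input matrix.

```python
# Minimal sketch (not the authors' code): map upstream TF-regulator expression
# to the expression of genes in one co-regulated module with a small ANN.
# Data shapes and values are hypothetical placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_tf_regulators, n_module_genes = 60, 5, 8

# X: expression of upstream TF-regulators over time/stress conditions
# Y: expression of genes in one GRAM-derived module (synthetic here)
X = rng.normal(size=(n_timepoints, n_tf_regulators))
Y = X @ rng.normal(size=(n_tf_regulators, n_module_genes)) \
    + 0.1 * rng.normal(size=(n_timepoints, n_module_genes))

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(X_train, Y_train)          # input layer = TF-regulators, output layer = module genes
print("held-out R^2:", ann.score(X_test, Y_test))
```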

Keywords: gene modules, artificial neural networks, yeast, stress

2267 A Real-Time Simulation Environment for Avionics Software Development and Qualification

Authors: U. Tancredi, D. Accardo, M. Grassi, G. Fasano, A. E. Tirri, A. Vitale, N. Genito, F. Montemari, L. Garbarino

Abstract:

The development of guidance, navigation and control algorithms and avionic procedures requires the availability of suitable analysis and verification tools, such as simulation environments, which support the design process and allow detecting potential problems prior to the flight test, in order to make new technologies available at reduced cost, time and risk. This paper presents a simulation environment for avionic software development and qualification, especially aimed at equipment for general aviation aircraft and unmanned aerial systems. The simulation environment includes models for short and medium-range radio-navigation aids, flight assistance systems, and ground control stations. All the software modules are able to simulate the modeled systems in both fast-time and real-time tests, and were implemented following component-oriented modeling techniques and a requirement-based approach. The paper describes the specific model features, the architectures of the implemented software systems and their validation process. The validation tests performed highlighted the capability of the simulation environment to guarantee in real time the required functionalities and performance of the simulated avionic systems, as well as to reproduce the interaction between these systems, thus permitting a realistic and reliable simulation of a complete mission scenario.

Keywords: ADS-B, avionics, NAVAIDs, real time simulation, TCAS, UAS ground control station.

2266 Interaction Effect of Feed Rate and Cutting Speed in CNC-Turning on Chip Micro-Hardness of 304-Austenitic Stainless Steel

Authors: G. H. Senussi

Abstract:

The present work is concerned with the effect of turning process parameters (cutting speed, feed rate, and depth of cut) and the distance from the center of the work piece as input variables on chip micro-hardness as the response, or output. Three experiments were conducted to investigate the chip micro-hardness behavior at work piece diameters of 30 mm, 40 mm, and 50 mm. Response surface methodology (RSM) is used to determine and present the cause-and-effect relationship between the true mean response and the input control variables influencing the response, as a two- or three-dimensional hypersurface. RSM has been used to design a three-factor, five-level central composite rotatable design in order to construct statistical models capable of accurate prediction of responses. The results obtained showed that the application of RSM can predict the effect of machining parameters on chip micro-hardness. The five-level factorial design can be employed easily for developing statistical models that predict chip micro-hardness from controllable machining parameters. The results also showed that the combined effect of cutting speed at its lower level, feed rate and depth of cut at their higher values, and a larger work piece diameter results in increased chip micro-hardness.
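
As an illustration of the experimental design referred to above, the following sketch generates a three-factor, five-level central composite rotatable design in coded units and fits a second-order response surface by least squares; the hardness values are synthetic placeholders, not the reported data.

```python
# Sketch of a three-factor, five-level central composite rotatable design (CCRD)
# and a second-order response-surface fit, as used conceptually in the study.
# The hardness values below are synthetic placeholders, not the experimental data.
import itertools
import numpy as np

alpha = 2 ** (3 / 4)                       # rotatability: alpha = (2^k)^(1/4) for k = 3 factors
factorial = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
axial = np.vstack([v * alpha * np.eye(3)[i] for i in range(3) for v in (-1, 1)])
center = np.zeros((6, 3))
D = np.vstack([factorial, axial, center])  # coded levels: -alpha, -1, 0, +1, +alpha

def model_matrix(X):
    x1, x2, x3 = X.T                       # cutting speed, feed rate, depth of cut (coded)
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

rng = np.random.default_rng(1)
hardness = 300 - 8*D[:, 0] + 12*D[:, 1] + 10*D[:, 2] + rng.normal(0, 2, len(D))  # synthetic response

beta, *_ = np.linalg.lstsq(model_matrix(D), hardness, rcond=None)
print("fitted second-order coefficients:", np.round(beta, 2))
```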

Keywords: Machining Parameters, Chip Micro-Hardness, CNC Machining, 304-Austenitic Stainless Steel.

2265 3D Modelling and Numerical Analysis of Human Inner Ear by Means of the Finite Element Method

Authors: C. Castro-Egler, A. Durán-Escalante, A. García-González

Abstract:

This paper presents a method to generate a finite element model of the human auditory inner ear system. The geometric model has been created using 2D images from a virtual model of temporal bones. A point cloud has been obtained manually from those images to construct a whole mesh with hexahedral elements. The main difference from predecessor models is the spiral shape of the cochlea with its three scalae completely defined: scala tympani, scala media and scala vestibuli, which are separated by the basilar membrane and Reissner's membrane. To validate this model, numerical simulations have been performed with two models: an isolated inner ear and a whole model of the human auditory system. Ideal displacement conditions are applied over the oval window in the isolated inner ear model. The whole model is made up of the outer auditory canal, the tympanic membrane, the ossicular chain, and the inner ear. The boundary condition for the whole model is 1 Pa over the auditory canal entrance. The numerical simulations by FEM have been carried out using a harmonic analysis over a frequency range of 100-10,000 Hz with an interval of 100 Hz. The following results have been obtained: the basilar membrane displacement; the scala media pressure along the cochlea length; and the transfer function of the middle ear normalized by the pressure at the tympanic membrane. The basilar membrane displacements and the pressure in the scala media make it possible to validate the frequency response of the basilar membrane.

Keywords: Finite element method, human auditory system model, numerical analysis, 3D cochlea modelling.

2264 Mobile Ad-Hoc Service Grid – MASGRID

Authors: Imran Ihsan, Muhammad Abdul Qadir, Nadeem Iftikhar

Abstract:

Mobile devices, which are increasingly embedded in our everyday life, have created a new paradigm in which they interconnect, interact and collaborate with each other. This network can be used for flexible and secure coordinated sharing. Grid computing, on the other hand, provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities. In this paper, efforts are made to map the concepts of the Grid onto Ad-Hoc networks, because both exhibit similar characteristics such as scalability, dynamism and heterogeneity. In this context we propose the "Mobile Ad-Hoc Services Grid - MASGRID".

Keywords: Mobile Ad-Hoc Networks, Grid Computing, Resource Discovery, Routing

2264 Determination of Optimal Stress Locations in a 2D 9-Noded Element in the Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from the displacements computed at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, particularly for the computational time in shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, where good accuracy in stress computation can be achieved. In general, the major optimal stress points are located in the domain of the element, but some points have also been located on the boundary of the element where the stresses are fairly accurate compared with the nodal values. It follows that there exist unique points within the element where the stresses have higher accuracy than at other points. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify it against results from numerical experimentation. Results for quadratic 9-noded serendipity elements are presented, and the locations of distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate that the optimal stress locations, in local coordinates, are at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation. For this purpose, five engineering problems were identified, and the numerical results of the 9-noded element were compared with those obtained using 25-noded quadratic Lagrangian elements of the same order, which are considered the standard. Root mean square errors were then plotted for various locations within the element as well as on the boundaries, and conclusions were drawn. After numerical verification, it was noted that in a 9-noded element the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. It was also noted that stresses calculated along the boundary line segment enclosed by the 0.577 midpoints are very good, with very small error; when the sampling points move away from these points, the error in this line zone increases rapidly. Thus, it is established that there are unique points on the boundary of the element where the stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
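
The reported value of 0.577 coincides with 1/sqrt(3), the 2x2 Gauss (Barlow) point coordinate. The short sketch below is a generic illustration rather than the paper's code: it shows the underlying superconvergence effect on a 1-D quadratic element, where the finite-element strain matches the exact strain only at xi = +/-0.577.

```python
# Illustration (not the paper's code): the reported optimal stress locations at
# +/-0.577 coincide with the 2x2 Gauss points, xi = +/-1/sqrt(3) ~= 0.577.
# A 1-D quadratic element interpolating a cubic field shows the idea: the
# finite-element strain is exact only at those points.
import numpy as np

def fe_strain(xi, u_nodes):
    """Strain from a 3-node quadratic element (nodes at xi = -1, 0, +1)."""
    dN = np.array([xi - 0.5, -2.0 * xi, xi + 0.5])   # shape-function derivatives
    return dN @ u_nodes

u_exact = lambda xi: xi**3            # displacement field one order above the element
strain_exact = lambda xi: 3 * xi**2
u_nodes = u_exact(np.array([-1.0, 0.0, 1.0]))

for xi in [-1.0, -1/np.sqrt(3), 0.0, 1/np.sqrt(3), 1.0]:
    err = fe_strain(xi, u_nodes) - strain_exact(xi)
    print(f"xi = {xi:+.3f}   strain error = {err:+.4f}")
# the error vanishes at xi = +/-0.577, the superconvergent (Barlow/Gauss) points
```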

Keywords: Finite element, Lagrangian, optimal stress location, serendipity.

2262 Distributed Manufacturing (DM) - Smart Units and Collaborative Processes

Authors: Hermann Kuehnle

Abstract:

Applications of the Hausdorff space and its mappings into tangent spaces are outlined, including their fractal dimensions and self-similarities. The paper details this theoretical setup and further describes the virtualization and atomization of manufacturing processes. It demonstrates novel concurrency principles that will guide manufacturing processes and resource configurations. Moreover, varying levels of detail may be produced by folding up and breaking down the newly introduced generic models. This choice of layered generic models for unit and system aspects allows research to proceed in parallel with other disciplines with the same focus, on all levels of detail. More credit and easier access are granted to outside disciplines for enriching manufacturing grounds. The specific mappings and the layers give hints about chances for interdisciplinary outcomes and may highlight more details for interoperability standards, as already being worked on at the international level. The new rules are described, which require additional properties of all involved entities for defining distributed decision cycles, again on the basis of self-similarity. All properties are further detailed and assigned to a maturity scale, eventually displaying the smartness maturity of a complete shopfloor or factory. The paper contributes to the intensive ongoing discussion in the field of intelligent distributed manufacturing and promotes solid concepts for implementing Cyber-Physical Systems and the Internet of Things in the manufacturing industry, such as Industry 4.0 as discussed in German-speaking countries.

Keywords: Autonomous unit, Networkability, Smart manufacturing unit, Virtualization.

2261 Determination of Surface Deformations with Global Navigation Satellite System Time Series

Authors: I. Tiryakioglu, M. A. Ugur, C. Ozkaymak

Abstract:

The development of Global Navigation Satellite System (GNSS) technology has led to increasingly widespread and successful applications of GNSS surveys for monitoring crustal movements. Instead of multi-period GNSS solutions, this study utilizes GNSS time series, which are required to determine the vertical deformations in the study area more precisely. In recent years, surface deformations parallel and semi-parallel to the Bolvadin fault have occurred in Western Anatolia. These surface deformations have continued to occur in the Bolvadin settlement area, which is located mostly on alluvial ground. Due to these surface deformations, a number of cracks in buildings located in the residential areas and breaks in underground water and sewage systems have been observed. In order to determine the amount of vertical surface deformation, two continuous GNSS stations have been established in the region. The stations have been operating since 2015 and 2017, respectively. In this study, GNSS observations from these two stations were processed with the GAMIT/GLOBK (GNSS Analysis Massachusetts Institute of Technology/GLOBal Kalman) program package to create coordinate time series. With the time series analyses, the GNSS stations' behaviour models (linear, periodic, etc.), the causes of these behaviours, and mathematical models were determined. The results of the time series analysis of these two GNSS stations show approximately 50-90 mm/yr of vertical movement.
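
A typical post-processing step for such coordinate time series is a least-squares fit of a linear trend plus annual and semi-annual terms to the vertical component. The sketch below uses synthetic data for illustration; the actual processing in the paper is done with GAMIT/GLOBK.

```python
# Sketch (assumed workflow, not GAMIT/GLOBK itself): fit a linear trend plus
# annual and semi-annual terms to a GNSS vertical-coordinate time series.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0, 4, 1 / 365.25)                     # decimal years of daily solutions
up = -70.0 * t + 3.0*np.sin(2*np.pi*t) + rng.normal(0, 3, t.size)   # synthetic "Up" in mm

# design matrix: offset, rate, annual and semi-annual sine/cosine terms
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2*np.pi*t), np.cos(2*np.pi*t),
                     np.sin(4*np.pi*t), np.cos(4*np.pi*t)])
x, *_ = np.linalg.lstsq(A, up, rcond=None)
print(f"estimated vertical rate: {x[1]:.1f} mm/yr")  # compare with the 50-90 mm/yr reported
```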

Keywords: Bolvadin fault, GAMIT, GNSS time series, surface deformations.

2260 Dataset Analysis Using Membership-Deviation Graph

Authors: Itgel Bayarsaikhan, Jimin Lee, Sejong Oh

Abstract:

Classification is one of the primary themes in computational biology. The accuracy of classification strongly depends on the quality of a dataset, and we need some method to evaluate this quality. In this paper, we propose a new graphical analysis method using the 'Membership-Deviation Graph (MDG)' for analyzing the quality of a dataset. The MDG represents the degree of membership and the deviations for instances of a class in the dataset. The result of the MDG analysis is used for understanding specific features and for selecting the best features for classification.

Keywords: feature, classification, machine learning algorithm.

2259 A New Heuristic for Improving the Performance of Genetic Algorithm

Authors: Warattapop Chainate, Peeraya Thapatsuwan, Pupong Pongcharoen

Abstract:

Hybridising a genetic algorithm with heuristics has been shown to be an effective way to improve its performance. In this work, a genetic algorithm hybridised with four heuristics, including a new heuristic called neighbourhood improvement, was investigated on the classical travelling salesman problem. The experimental results showed that the proposed heuristic outperformed the other heuristics in terms of both the quality of the results obtained and the computational time.
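
A hedged sketch of the hybridisation idea follows: a GA for the TSP in which each offspring is passed through a local improvement step before entering the population. The swap-based local search used here is a stand-in, since the paper's own "neighbourhood improvement" heuristic is not specified in the abstract.

```python
# Hedged sketch of a hybrid GA for the TSP. The paper's "neighbourhood
# improvement" heuristic is not specified here; a simple swap-based local
# search stands in for it to show where hybridisation enters the loop.
import random

random.seed(0)
CITIES = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    return sum(((CITIES[a][0]-CITIES[b][0])**2 + (CITIES[a][1]-CITIES[b][1])**2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in child]
    for k in range(len(p1)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def neighbourhood_improvement(tour, tries=30):
    best = tour[:]
    for _ in range(tries):                      # local search on random city swaps
        i, j = random.sample(range(len(tour)), 2)
        cand = best[:]
        cand[i], cand[j] = cand[j], cand[i]
        if tour_length(cand) < tour_length(best):
            best = cand
    return best

population = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(40)]
for gen in range(100):
    population.sort(key=tour_length)
    parents = population[:20]
    offspring = [neighbourhood_improvement(order_crossover(*random.sample(parents, 2)))
                 for _ in range(20)]            # hybridisation: improve each child locally
    population = parents + offspring
print("best tour length:", round(tour_length(min(population, key=tour_length)), 3))
```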

Keywords: Genetic Algorithm, Hybridisation, Metaheuristics, Travelling Salesman Problem.

2258 Heat Generation Rate and Computational Simulation for Li-Ion Battery Module

Authors: Ravichandra R., Srithar Rajoo, Tan Lit Wen

Abstract:

In recent years, Li-ion batteries have been receiving more attention for Electric Vehicle (EV) and Hybrid Electric Vehicle (HEV) energy storage. Li-ion batteries offer higher power density and lower weight compared to other batteries readily available in the market. One of the major drawbacks of Li-ion batteries is their sensitivity to temperature. If the working temperature goes beyond the limit, it can seriously affect the durability and performance of the Li-ion battery. Battery Thermal Management (BTM) is therefore essential in adapting Li-ion batteries to EVs and HEVs.
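
Although the abstract does not state the heat-generation model used, a commonly applied lumped (Bernardi-type) estimate is sketched below purely for illustration; the sign convention assumes positive current on discharge, and all parameter values are invented.

```python
# Hedged sketch: a commonly used lumped heat-generation estimate for a Li-ion
# cell (Bernardi-type), combining irreversible (ohmic/polarisation) and
# reversible (entropic) terms. All parameter values below are illustrative.
def heat_generation_rate(current_a, ocv_v, terminal_v, temp_k, docv_dT_v_per_k):
    irreversible = current_a * (ocv_v - terminal_v)        # I * (U_oc - V)
    reversible = -current_a * temp_k * docv_dT_v_per_k     # -I * T * dU_oc/dT
    return irreversible + reversible                       # W per cell

# Example: 2C discharge of a 20 Ah cell (hypothetical numbers)
q_cell = heat_generation_rate(current_a=40.0, ocv_v=3.70, terminal_v=3.55,
                              temp_k=308.0, docv_dT_v_per_k=-0.2e-3)
print(f"heat generation per cell: {q_cell:.2f} W")
# For a module, multiply by cell count and feed into the thermal (CFD/lumped) model.
```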

Keywords: Li-Ion battery, HEV/EV, battery thermal management, heat generation rate.

2257 Orbit Determination Modeling with Graphical Demonstration

Authors: Assem M. F. Sallam, Ah. El-S. Makled

Abstract:

This paper presents the implementation, verification, and graphical demonstration of a software application that can be used to work swiftly across different preliminary orbit determination methods. A passive orbit determination method is used in this study to determine the location of a satellite or a flying body. It is called passive orbit determination because it depends on observation without the use of any aids (radio or laser) installed on the satellite. In order to understand how these methods work and how accurate their output is when compared with available verification data, the built models help in identifying the different inputs used with each method. Output from the different orbit determination methods (Gibbs, Lambert, and Gauss) is compared across methods and verified against data obtained from the Satellite Tool Kit (STK) application. A modified model including all of the orbit determination methods, using the same input, is introduced to investigate the different model outputs (orbital parameters) for the same input (azimuth, elevation, and time). The simulation software is implemented in MATLAB. A Graphical User Interface (GUI) application named OrDet is produced using the GUI facilities of MATLAB. It accepts all the available inputs and outputs the current Classical Orbital Elements (COE) of the satellite under observation. The produced COE are then propagated for a complete revolution and plotted in a 3-D view. The modified model, which uses an adapter to allow the same input parameters, passes these parameters to the preliminary orbit determination methods under study. The results from all orbit determination methods yield exactly the same COE output, which shows the equality of concept in determining the satellite's location, albeit with different numerical methods.
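
All three preliminary methods ultimately deliver a position/velocity state vector that is converted to Classical Orbital Elements. A generic sketch of that conversion step, using standard textbook formulas rather than code from OrDet, is shown below; the example state vector is arbitrary.

```python
# Sketch of the final step shared by the Gibbs/Lambert/Gauss pipelines: convert
# a position/velocity state vector into Classical Orbital Elements (COE).
# Formulas follow the standard textbook derivation; the state vector printed
# below is just an example (units: km, km/s), not data from the paper.
import numpy as np

MU_EARTH = 398600.4418  # km^3/s^2

def rv_to_coe(r, v, mu=MU_EARTH):
    r, v = np.asarray(r, float), np.asarray(v, float)
    rn, vn = np.linalg.norm(r), np.linalg.norm(v)
    h = np.cross(r, v)                          # specific angular momentum
    n = np.cross([0.0, 0.0, 1.0], h)            # node vector
    e_vec = ((vn**2 - mu/rn)*r - np.dot(r, v)*v) / mu
    e = np.linalg.norm(e_vec)
    a = -mu / (2 * (vn**2/2 - mu/rn))           # semi-major axis from the vis-viva energy
    i = np.degrees(np.arccos(h[2] / np.linalg.norm(h)))
    raan = np.degrees(np.arccos(n[0] / np.linalg.norm(n)))
    if n[1] < 0:
        raan = 360.0 - raan
    argp = np.degrees(np.arccos(np.dot(n, e_vec) / (np.linalg.norm(n) * e)))
    if e_vec[2] < 0:
        argp = 360.0 - argp
    nu = np.degrees(np.arccos(np.clip(np.dot(e_vec, r) / (e * rn), -1, 1)))
    if np.dot(r, v) < 0:
        nu = 360.0 - nu
    return a, e, i, raan, argp, nu              # semi-major axis, eccentricity, angles in deg

print(rv_to_coe([6524.834, 6862.875, 6448.296], [4.901327, 5.533756, -1.976341]))
```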

Keywords: Orbit determination, STK, MATLAB-GUI, satellite tracking.

2256 Further Development in Predicting Post-Earthquake Fire Ignition Hazard

Authors: Pegah Farshadmanesh, Jamshid Mohammadi, Mehdi Modares

Abstract:

In nearly all earthquakes of the past century that resulted in moderate to significant damage, the occurrence of post-earthquake fire ignition (PEFI) has imposed a serious hazard and caused severe damage, especially in urban areas. In order to reduce the loss of life and property caused by post-earthquake fires, there is a crucial need for predictive models to estimate the PEFI risk. The parameters affecting PEFI risk can be categorized as: 1) factors influencing fire ignition under normal (non-earthquake) conditions, including floor area, building category, ignitability, type of appliance, and prevention devices, and 2) earthquake-related factors contributing to the PEFI risk, including building vulnerability and earthquake characteristics such as intensity, peak ground acceleration, and peak ground velocity. State-of-the-art statistical PEFI risk models are based solely on the limited available earthquake data, and therefore they cannot predict the PEFI risk for areas with insufficient earthquake records, since such records are needed to estimate the PEFI model parameters. In this paper, the correlation between normal-condition ignition risk, peak ground acceleration, and PEFI risk is examined in an effort to offer a means for predicting post-earthquake ignition events. An illustrative example is presented to demonstrate how such a correlation can be employed in a seismic area to predict the PEFI hazard.

Keywords: Fire risk, post-earthquake fire ignition (PEFI), risk management, seismicity.

2255 Seismic Retrofitting of RC Buildings with Soft Storey and Floating Columns

Authors: Vinay Agrawal, Suyash Garg, Ravindra Nagar, Vinay Chandwani

Abstract:

An open ground storey with floating columns is a typical feature of modern multi-storey construction in urban India. Such features are highly undesirable in buildings built in seismically active areas. The present study proposes a feasible solution to mitigate the effects caused by the non-uniformity of stiffness and the discontinuity in the load path, while simultaneously retaining the functional use of the open storey, particularly under the floating column, through a combination of various lateral strengthening systems. An investigation is performed on an example building with nine different analytical models to bring out the importance of recognising the presence of the open ground storey and floating columns. Two separate analyses of the various models of the building, namely the equivalent static analysis and the response spectrum analysis as per IS: 1893-2002, were performed. Various measures, such as the incorporation of Chevron bracings and shear walls, strengthening the columns in the open ground storey, and their different combinations, were examined. The analysis shows that, in comparison to two short walls separated by interconnecting beams, structural walls are most effective when placed at the periphery of the building and used as one long structural wall. Further, it is shown that the force transfer from floating columns becomes less horizontal when the Chevron bracings are placed just below them, thereby reducing the shear forces in the beams on which the floating column rests.

Keywords: Equivalent static analysis, floating column, open ground storey, response spectrum analysis, shear wall, stiffness irregularity.

2254 Prediction of Compressive Strength of Concrete from Early Age Test Result Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response Surface Methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, and accounts for unknown sources of variation. Dual response plus propagation of error (POE) provides a more useful model of overall response variation. In our case, we implemented this technique to predict the 28-day compressive strength of concrete, since waiting 28 days is quite time consuming while it is important to ensure the quality control process. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. The data used for this study were obtained from experimental schemes carried out at the University of Benghazi, Civil Engineering Department. A total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents, namely cement, coarse aggregate, fine aggregate and water, were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required compressive strength of concrete from early ages.
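
A minimal sketch of the kind of second-order (RSM-style) regression described, using synthetic numbers rather than the Benghazi data set, is given below; it relates the water-cement ratio and an early-age strength result to the 28-day compressive strength.

```python
# Minimal sketch (synthetic data, not the Benghazi dataset): a second-order,
# response-surface style regression relating mix variables and an early-age
# result to 28-day compressive strength.
import numpy as np

rng = np.random.default_rng(3)
n = 114
wc = rng.uniform(0.40, 0.65, n)                 # water/cement ratio
f7 = rng.uniform(15, 35, n)                     # 7-day strength, MPa (early-age test)
f28 = 1.35 * f7 - 20 * (wc - 0.5) + rng.normal(0, 1.5, n)   # synthetic "true" response

X = np.column_stack([np.ones(n), wc, f7, wc*f7, wc**2, f7**2])  # quadratic RSM terms
beta, *_ = np.linalg.lstsq(X, f28, rcond=None)

new = np.array([1.0, 0.5, 25.0, 0.5 * 25.0, 0.5**2, 25.0**2])
print(f"predicted 28-day strength: {new @ beta:.1f} MPa")
```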

Keywords: Mix proportioning, response surface methodology, compressive strength, optimal design.

2253 Parallel Algorithm for Numerical Solution of Three-Dimensional Poisson Equation

Authors: Alibek Issakhov

Abstract:

In this paper, an entirely new algorithm for solving the three-dimensional Poisson equation is developed and implemented. This equation is used in research on turbulent mixing, computational fluid dynamics, atmospheric fronts, ocean flows and so on. Moreover, in view of the need to increase the productivity of such difficult calculations, the most up-to-date and effective parallel programming technologies, MPI in combination with OpenMP directives, were applied, which makes it possible to handle problems with very large data volumes. The resulting software can be used in solving important applied and fundamental problems in mathematics and physics.
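
The serial numerical kernel at the heart of such a solver can be illustrated as follows; this is a plain Jacobi iteration for the 3-D Poisson equation on one grid block, with the MPI/OpenMP domain decomposition described in the abstract omitted.

```python
# Serial sketch of the numerical kernel only: Jacobi iteration for the 3-D
# Poisson equation on a uniform grid with Dirichlet boundaries. The paper's
# MPI/OpenMP decomposition would split this grid across processes and threads;
# that part is omitted here.
import numpy as np

n, h = 32, 1.0 / 33
u = np.zeros((n + 2, n + 2, n + 2))              # includes boundary layer (u = 0)
f = np.ones((n + 2, n + 2, n + 2))               # right-hand side of  -laplacian(u) = f

for it in range(500):
    u_new = u.copy()
    u_new[1:-1, 1:-1, 1:-1] = (u[:-2, 1:-1, 1:-1] + u[2:, 1:-1, 1:-1] +
                               u[1:-1, :-2, 1:-1] + u[1:-1, 2:, 1:-1] +
                               u[1:-1, 1:-1, :-2] + u[1:-1, 1:-1, 2:] +
                               h*h*f[1:-1, 1:-1, 1:-1]) / 6.0
    if np.max(np.abs(u_new - u)) < 1e-8:
        break
    u = u_new
print("iterations:", it + 1, " max(u):", float(u.max()))
```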

Keywords: MPI, OpenMP, three dimensional Poisson equation

2252 An Ising-based Model for the Spread of Infection

Authors: Christian P. Crisostomo, Chrysline Margus N. Piñol

Abstract:

A zero-field ferromagnetic Ising model is utilized to simulate the propagation of infection in a population arranged on a square lattice. The rate of infection increases with temperature. The disease spreads faster among individuals with low J values. This effect, however, diminishes at higher temperatures.
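
A hedged sketch of such a zero-field Ising simulation on a square lattice is given below; the mapping of spin states to healthy/infected individuals and the single-seed initialisation are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of a zero-field Ising-type infection model on a square lattice:
# spin +1 = healthy, -1 = infected, Metropolis dynamics at temperature T.
# The spin-to-health-state mapping and the seeding are assumptions for
# illustration, not taken verbatim from the paper.
import numpy as np

rng = np.random.default_rng(4)
L, J, T, steps = 32, 1.0, 2.5, 200_000

spins = np.ones((L, L), dtype=int)
spins[L // 2, L // 2] = -1                       # single infected seed

for _ in range(steps):
    i, j = rng.integers(L, size=2)
    nbr = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
    dE = 2 * J * spins[i, j] * nbr               # energy change for flipping site (i, j)
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

infected_fraction = np.mean(spins == -1)
print(f"infected fraction after {steps} single-spin updates: {infected_fraction:.2f}")
```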

Keywords: Epidemiology, Ising model, lattice models

2251 Mixed Mode Fracture Analyses Using Finite Element Method of Edge Cracked Heavy Spinning Annulus Pulley

Authors: Bijit Kalita, K. V. N. Surendra

Abstract:

The rotating disk is one of the most indispensable parts of a rotating machine and has found many applications in diverse fields of science and technology. In this paper, we consider the problem of a heavy spinning disk mounted on a rotor system and acted upon by boundary traction. Finite element modelling is used at various loading conditions to determine the mixed-mode stress intensity factors. The effect of combined shear and normal traction on the boundary is incorporated in the analysis under the action of gravity. The variation near the crack tip is characterized in terms of the stress intensity factor (SIF), with the aim of finding the SIF for a wide range of parameters. The results of finite element analyses carried out on the compressed disk of a belt pulley arrangement using fracture mechanics concepts are shown. A total of one hundred cases of the problem are solved for each of the variations in the loading arc parameter and crack orientation, using finite element models of the disk under compression. Models were prepared and analyzed for the uncracked disk, for a disk with a single crack at different orientations emanating from the shaft hole, and for a disk with a pair of cracks emerging from the same central hole. Curves are plotted for various loading conditions. Finally, crack propagation paths are determined using kink angle concepts.
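
As an aside on the "kink angle concepts" mentioned, the maximum tangential stress (MTS) criterion gives the kink angle directly from the mixed-mode SIFs; the snippet below is a generic illustration with placeholder SIF values, not the paper's procedure.

```python
# Side calculation consistent with the "kink angle concepts" mentioned: the
# maximum tangential stress (MTS) criterion gives the crack-kink angle from the
# mixed-mode stress intensity factors. SIF values below are placeholders.
import numpy as np

def mts_kink_angle(k1, k2):
    """Kink angle (degrees) from K_I sin(t) + K_II (3 cos(t) - 1) = 0."""
    if k2 == 0:
        return 0.0
    t = 2 * np.arctan((k1 - np.sqrt(k1**2 + 8 * k2**2)) / (4 * k2))
    return np.degrees(t)

print(mts_kink_angle(k1=1.0, k2=0.0))    #  0.0   (pure mode I: straight growth)
print(mts_kink_angle(k1=1.0, k2=0.5))    # about -40 degrees
print(mts_kink_angle(k1=0.0, k2=1.0))    # about -70.5 degrees (pure mode II)
```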

Keywords: Crack-tip deformations, static loading, stress concentration, stress intensity factor.

2250 Examining the Perceived Usefulness of ICTs for Learning about Indigenous Foods

Authors: K. M. Ngcobo, S. D. Eyono Obono

Abstract:

Science and technology have a major impact on many societal domains such as communication, medicine, food, transportation, etc. However, this dominance of modern technology can have a negative, unintended impact on indigenous systems, and in particular on indigenous foods. This problem serves as the motivation for this study, whose aim is to examine the perceptions of learners on the usefulness of Information and Communication Technologies (ICTs) for learning about indigenous foods. This aim is subdivided into two types of research objectives. The design and identification of theories and models is achieved using literature content analysis. The empirical testing of such theories and models is achieved through a survey of Hospitality Studies learners from different schools in the iLembe and Umgungundlovu Districts of the South African KwaZulu-Natal province. SPSS is used to quantitatively analyze the data collected by the survey questionnaire, using descriptive statistics and Pearson correlations, after assessing the validity and reliability of the data. The main hypothesis behind this study is that there is a connection between the demographics of learners, their perceptions on the usefulness of ICTs for learning about indigenous foods, and the following personality and e-learning related theory constructs: computer self-efficacy, trust in ICT systems, and conscientiousness, as suggested by existing studies on learning theories. This hypothesis was confirmed by the survey conducted in this study, except for the demographic factors, where gender and age were not found to be determinant factors of learners' perceptions on the usefulness of ICTs for learning about indigenous foods.

Keywords: E-learning, Indigenous Foods, Information and Communication Technologies, Learning Theories, Personality.

2249 Highlighting Document's Structure

Authors: Sylvie Ratté, Wilfried Njomgue, Pierre-André Ménard

Abstract:

In this paper, we present symbolic recognition models to extract knowledge characterized by document structures. Focussing on the extraction and the meticulous exploitation of the semantic structure of documents, we obtain a meaningful contextual tagging corresponding to different unit types (title, chapter, section, enumeration, etc.).

Keywords: Information retrieval, document structures, symbolic grammars.

2248 Reliability Levels of Reinforced Concrete Bridges Obtained by Mixing Approaches

Authors: Adrián D. García-Soto, Alejandro Hernández-Martínez, Jesús G. Valdés-Vázquez, Reyna A. Vizguerra-Alvarez

Abstract:

Reinforced concrete bridges designed by code are intended to achieve target reliability levels adequate for the geographical environment where the code is applicable. Several methods can be used to estimate such reliability levels. Many of them require the establishment of an explicit limit state function (LSF). When such an LSF is not available as a closed-form expression, simulation techniques are often employed. The simulation methods are computationally intensive and time consuming. Note that if the reliability of real bridges designed by code is of interest, numerical schemes, the finite element method (FEM) or computational mechanics may be required. In these cases, it can be quite difficult (or impossible) to establish a closed form of the LSF, and simulation techniques may be necessary to compute reliability levels. To overcome the need for a large number of simulations when no explicit LSF is available, the point estimate method (PEM) can be considered as an alternative. It has the advantage that only the probabilistic moments of the random variables are required. However, in the PEM, the resulting moments of the LSF must be fitted to a probability density function (PDF). In the present study, a very simple alternative is employed which allows the assessment of reliability levels when no explicit LSF is available and without the need for extensive simulations. The alternative includes the use of the PEM, and its applicability is shown by assessing reliability levels of reinforced concrete bridges in Mexico when a numerical scheme is required. Comparisons with results obtained using the Monte Carlo simulation (MCS) technique are included. To overcome the problem of approximating the probabilistic moments from the PEM to a PDF, a well-known distribution is employed. The approach mixes the PEM with another classic reliability method (the first order reliability method, FORM). The results in the present study are in good agreement with those computed with the MCS. Therefore, the alternative of mixing the reliability methods is a very valuable option for determining reliability levels when no closed form of the LSF is available, or when numerical schemes, the FEM or computational mechanics are employed.
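
The mixing of methods can be illustrated with a toy limit state function standing in for the bridge model: Rosenblueth's point estimates give the first two moments of the LSF, a normal distribution is assumed for it, and a FORM-style reliability index follows. This is a simplified sketch of the general idea, not the paper's model.

```python
# Simplified sketch of the mixed approach: Rosenblueth's two-point estimate
# method (PEM) gives the first two moments of the limit state function when
# only a black-box (e.g. FEM) response is available; a normal distribution is
# then assumed to obtain a reliability index, FORM-style. The limit state
# below is a stand-in for the bridge model, not the paper's.
import itertools
import numpy as np
from scipy.stats import norm

def g(resistance, load):                  # placeholder limit state: g > 0 is safe
    return resistance - 1.4 * load

means = np.array([500.0, 250.0])          # [resistance, load], illustrative values
stds = np.array([50.0, 40.0])

# evaluate g at the 2^n points mu_i +/- sigma_i, each with weight 1/2^n
points = [g(*(means + np.array(signs) * stds))
          for signs in itertools.product([-1.0, 1.0], repeat=2)]
mu_g = np.mean(points)
sigma_g = np.sqrt(np.mean(np.square(points)) - mu_g**2)

beta = mu_g / sigma_g                     # reliability index under the normal assumption
print(f"beta = {beta:.2f},  Pf = {norm.cdf(-beta):.2e}")
```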

Keywords: Structural reliability, reinforced concrete bridges, mixing approaches, point estimate method, Monte Carlo simulation.

2247 Implementation of Conceptual Real-Time Embedded Functional Design via Drive-by-Wire ECU Development

Authors: A. Ukaew, C. Chauypen

Abstract:

Design concepts for real-time embedded systems can be realized initially by introducing novel design approaches. In this work, a model-based design approach and in-the-loop testing were employed early in the conceptual and preliminary phase to formulate design requirements and perform quick real-time verification. The design and analysis methodology includes simulation analysis, model-based testing, and in-the-loop testing. The design of a conceptual drive-by-wire (DBW) algorithm for an electronic control unit (ECU) is presented to demonstrate the conceptual design process, analysis, and functionality evaluation. The concepts of the DBW ECU function can be implemented in the vehicle system to improve electric vehicle (EV) conversion drivability. However, within a new development process, conceptual ECU functions and parameters need to be evaluated. As a result, a testing system was employed to support the evaluation of the conceptual DBW ECU functions. For the current setup, the system components consisted of actual DBW ECU hardware, electric vehicle models, and the controller area network (CAN) protocol. The vehicle models and the CAN bus interface were both implemented as real-time applications, where the ECU and CAN protocol functionality were verified according to the design requirements. The proposed system could potentially benefit rapid real-time analysis of design parameters for conceptual system or software algorithm development.

Keywords: Drive-by-wire ECU, in-the-loop testing, model-based design, real-time embedded system.

2246 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy, because complex reactions occur simultaneously, a large number of kinetic parameters are involved, and the chemical and physical phenomena for mixtures involving polymers are sometimes poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge about its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, the number-average molecular weight and the weight-average (gravimetric) molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor, k-nearest neighbor combined with random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
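
A conceptual sketch of the adaptive sampling idea and of the regressor comparison is shown below, on a synthetic one-dimensional "process" with a sharp transition loosely mimicking the gel effect; it is not the authors' implementation, and only two of the mentioned regressors are compared.

```python
# Conceptual sketch only: adaptive sampling that places extra samples where the
# response varies strongly (e.g. around the gel effect), then a comparison of
# two of the regressors mentioned. The 1-D "process" below is synthetic.
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import r2_score

process = lambda t: 1 / (1 + np.exp(-30 * (t - 0.7)))      # conversion-like sharp transition

# start from a coarse uniform grid, then refine the interval with the largest jump
t = np.linspace(0, 1, 15)
for _ in range(40):
    jumps = np.abs(np.diff(process(t)))
    k = np.argmax(jumps)                                    # interval with highest variation
    t = np.sort(np.append(t, 0.5 * (t[k] + t[k + 1])))      # bisect it
X, y = t.reshape(-1, 1), process(t)

X_test = np.linspace(0, 1, 500).reshape(-1, 1)
y_test = process(X_test.ravel())
for name, model in [("SVR", SVR(C=100.0)), ("k-NN", KNeighborsRegressor(n_neighbors=3))]:
    model.fit(X, y)
    print(name, "R^2 on dense grid:", round(r2_score(y_test, model.predict(X_test)), 4))
```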

Keywords: Adaptive sampling, batch bulk methyl methacrylate polymerization, large margin nearest neighbor regression, machine learning.

2245 Modeling of Temperature Fields of Gas Turbine Blades by Considering Heat Flow and Specified Temperature

Authors: C. Ardil

Abstract:

A new mathematical model for calculating the temperature field of the profile part of the cooled blades of gas turbines is developed. The theoretical substantiation of the method is based on the application of the method of potential theory (the method of boundary integral equations). The effectiveness of the implementation of the developed mathematical model is confirmed on the basis of a computational experiment.

Keywords: Modeling of temperature fields, gas turbine blades, integral methods, cooled blades, gas turbines.

2244 Dynamic Bayesian Networks Modeling for Inferring Genetic Regulatory Networks by Search Strategy: Comparison between Greedy Hill Climbing and MCMC Methods

Authors: Huihai Wu, Xiaohui Liu

Abstract:

Using Dynamic Bayesian Networks (DBNs) to model genetic regulatory networks from gene expression data is one of the major paradigms for inferring the interactions among genes. Averaging over a collection of models for predicting the network is preferable to relying on a single high-scoring model. In this paper, two kinds of model searching approaches are compared: Greedy hill-climbing Search with Restarts (GSR) and Markov Chain Monte Carlo (MCMC) methods. GSR is preferred in many papers, but there has been no comparative study of which one is better for DBN models. Different types of experiments have been carried out to provide a benchmark test of these approaches. Our experimental results demonstrate that, on average, the MCMC methods outperform GSR in the accuracy of the predicted network, while having comparable time efficiency. By proposing different variations of MCMC and employing a simulated annealing strategy, the MCMC methods become more efficient and stable. Apart from the comparison between these approaches, another objective of this study is to investigate the feasibility of using DBN modeling approaches for inferring gene networks from a few snapshots of high-dimensional gene profiles. Through synthetic data experiments as well as systematic data experiments, the results reveal how the performance of these approaches is influenced as the target gene network varies in network size, data size, and system complexity.

Keywords: Genetic regulatory network, Dynamic Bayesian network, GSR, MCMC.

2243 Micromechanics Modeling of 3D Network Smart Orthotropic Structures

Authors: E. M. Hassan, A. L. Kalamkarov

Abstract:

Two micromechanical models for a 3D smart composite with an embedded periodic or nearly periodic network of generally orthotropic reinforcements and actuators are developed and applied to cubic structures with unidirectional orientation of constituents. Analytical formulas for the effective piezothermoelastic coefficients are derived using the Asymptotic Homogenization Method (AHM). A Finite Element Analysis (FEA) is subsequently developed and used to examine the aforementioned periodic 3D network reinforced smart structures. The deformation responses from the FE simulations are used to extract the effective coefficients, and the results from both techniques are compared. This work considers piezoelectric materials that respond linearly to changes in electric field, electric displacement, mechanical stress and strain, and thermal effects. This combination of electric fields and thermo-mechanical response in smart composite structures is characterized by piezoelectric and thermal expansion coefficients. The problem is represented by a unit cell, and the models are developed using the AHM and the FEA to determine the effective piezoelectric and thermal expansion coefficients. Each unit cell contains a number of orthotropic inclusions in the form of structural reinforcements and actuators. Using a matrix representation of the coupled response of the unit cell, the effective piezoelectric and thermal expansion coefficients are calculated and compared with the results of the asymptotic homogenization method. A very good agreement is shown between these two approaches.

Keywords: Asymptotic Homogenization Method, Effective Piezothermoelastic Coefficients, Finite Element Analysis, 3D Smart Network Composite Structures.

2242 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces the applicability of underwater photogrammetric survey under challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was being attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. It therefore tends to be applied in bright environments and when underwater visibility is > 1 m, reducing its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improvements in optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems, have allowed underwater photogrammetry to be used by this research as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea, maritime archaeology, underwater photogrammetry, Bronze Age, low visibility.

2241 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship for scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal and related to a variety of factors which reflect the individual, firm, organizational, industry or environmental determinants of growth. However, the factors that affect growth are not easily captured, the instruments used to measure those factors are often arbitrary, and the causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among the various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the main purpose of this paper is twofold. Firstly, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment and assets growth. Secondly, to explore the prospects of financial indicators, set as exact, visible, standardized and accessible variables, to serve as determinants of enterprise growth. Finally, to contribute to the understanding of the implications for research results and recommendations for growth caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia. The design and testing stage of the modeling used logistic regression procedures. The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between a particular predictor and a growth measure is inconsistent; the same predictor that is positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power for the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but, unlike them, are accessible, available, exact and free of perceptual nuances when building up the model. The selection of the growth measure seems to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and the implications of the study are relevant for advancing academic debates on growth-related methodology, and can contribute to evidence-based decisions of policy makers.
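
The modelling setup can be sketched as follows with synthetic data (the Croatian SME register is not reproduced here): the same financial indicators are fed into separate logistic regressions, one per growth measure, so that the fitted predictors and their signs can be compared across measures.

```python
# Sketch of the modelling setup described (synthetic data, not the Croatian SME
# register): the same financial-indicator predictors feed three logistic models,
# one per growth measure, so their selected predictors and signs can differ.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 1000
X = pd.DataFrame({
    "liquidity_ratio": rng.normal(1.5, 0.5, n),
    "debt_ratio": rng.uniform(0.1, 0.9, n),
    "roa": rng.normal(0.05, 0.08, n),
})

# binary growth outcomes per measure (placeholders for revenue/employment/assets growth)
targets = {
    "revenue_growth": (X["roa"] + rng.normal(0, 0.05, n)) > 0.05,
    "employment_growth": (X["liquidity_ratio"] - X["debt_ratio"] + rng.normal(0, 0.3, n)) > 0.7,
    "assets_growth": (X["roa"] - 0.1 * X["debt_ratio"] + rng.normal(0, 0.05, n)) > 0.0,
}

for name, y in targets.items():
    model = LogisticRegression(max_iter=1000).fit(X, y)
    coefs = dict(zip(X.columns, np.round(model.coef_[0], 2)))
    print(f"{name}: {coefs}")   # the sign/size of each predictor can differ by measure
```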

Keywords: Growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises.

2240 Influence of Single and Multiple Skin-Core Debonding on Free Vibration Characteristics of Innovative GFRP Sandwich Panels

Authors: Indunil Jayatilake, Warna Karunasena, Weena Lokuge

Abstract:

An Australian manufacturer has fabricated an innovative GFRP sandwich panel made from E-glass fiber skins and a modified phenolic core for structural applications. Debonding, which refers to the separation of the skin from the core material in composite sandwiches, is one of the most common types of damage in composites. The presence of debonding is of great concern because it not only severely affects the stiffness but also modifies the dynamic behaviour of the structure. Generally, the majority of research carried out has been concerned with the delamination of laminated structures, whereas skin-core debonding has received relatively minor attention. Furthermore, research on composite slabs having multiple skin-core debonding is very limited. To address this gap, a comprehensive study investigating the dynamic behaviour of composite panels with single and multiple debonding is presented. The study uses finite element modelling and analyses to investigate the influence of debonding on the free vibration behaviour of single- and multi-layer composite sandwich panels. A broad parametric investigation has been carried out by varying the debonding locations, debonding sizes and support conditions of the panels, in view of both single and multiple debonding. Numerical models were developed with the Strand7 finite element package by innovatively selecting suitable elements to diligently represent their actual behaviour. Three-dimensional finite element models were employed to simulate the physically real situation as closely as possible, with the use of an experimentally and numerically validated finite element model. Comparative results and conclusions based on the analyses are presented. For similar extents and locations of debonding, the effect of debonding on natural frequencies appears greatly dependent on the end conditions of the panel, giving a greater decrease in natural frequency when the panels are more restrained. Some modes are more sensitive to debonding, and this sensitivity seems to be related to their vibration mode shapes. The fundamental mode seems generally the least sensitive mode to debonding with respect to the variation in free vibration characteristics. The results indicate the effectiveness of the developed three-dimensional finite element models in assessing debonding damage in composite sandwich panels.

Keywords: Debonding, free vibration behaviour, GFRP sandwich panels, three dimensional finite element modelling.

2239 Local Image Descriptor using VQ-SIFT for Image Retrieval

Authors: Qiu Chen, Feifei Lee, Koji Kotani, Tadahiro Ohmi

Abstract:

In this paper, we present a local image descriptor using VQ-SIFT for more effective and efficient image retrieval. Instead of SIFT's weighted orientation histograms, we apply a vector quantization (VQ) histogram as an alternative representation for SIFT features. Experimental results show that SIFT features using VQ-based local descriptors can achieve better image retrieval accuracy than the conventional algorithm, while the computational cost is significantly reduced.
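
A hedged, bag-of-words-style sketch of the general VQ idea is given below, assuming OpenCV's SIFT implementation and a k-means codebook; it is a generic illustration, not the authors' exact descriptor.

```python
# Hedged sketch of the general idea: replace per-keypoint orientation histograms
# with a vector-quantised (VQ) representation by assigning SIFT descriptors to a
# learned codebook and histogramming the codeword indices. Generic bag-of-words
# style code, not the authors' exact descriptor. Requires a recent opencv-python
# build in which SIFT is available.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def vq_histogram(image_paths, query_path, n_words=64):
    sift = cv2.SIFT_create()
    all_desc = []
    for p in image_paths:                                   # build codebook from a corpus
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            all_desc.append(desc)
    codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(np.vstack(all_desc))

    img = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(img, None)
    words = codebook.predict(desc)                          # vector quantisation step
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()                                # normalised VQ histogram

# Usage (hypothetical file names): retrieval would rank database images by the
# similarity (e.g. cosine) of their VQ histograms to the query's.
# h = vq_histogram(["db1.jpg", "db2.jpg"], "query.jpg")
```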

Keywords: SIFT feature, Vector quantization histogram, Local descriptor, Image retrieval.
