Search results for: Reduced order model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11992


10102 A Novel Methodology for Synthesis of Fault Trees from MATLAB-Simulink Model

Authors: F. Tajarrod, G. Latif-Shabgahi

Abstract:

Fault tree analysis is a well-known method for the reliability and safety assessment of engineering systems. Over the last three decades, a number of methods have been introduced in the literature for the automatic construction of fault trees. The main difference between these methods is the starting model from which the tree is constructed. This paper presents a new methodology for the construction of static and dynamic fault trees from a system Simulink model. The method is introduced and explained in detail, and its correctness and completeness are experimentally validated using an example taken from the literature. Advantages of the method are also discussed.

Keywords: Fault tree, Simulink, Standby Sparing and Redundancy

10101 Automatic Sleep Stage Scoring with Wavelet Packets Based on Single EEG Recording

Authors: Luay A. Fraiwan, Natheer Y. Khaswaneh, Khaldon Y. Lweesy

Abstract:

Sleep stage scoring is the process of classifying the stage of sleep a subject is in. Sleep is classified into two states based on a constellation of physiological parameters: non-rapid eye movement (NREM) and rapid eye movement (REM). NREM sleep is further divided into four stages (1-4). These states, together with wakefulness, are distinguished from each other based on brain activity. In this work, a classification method for automated sleep stage scoring based on a single EEG recording, using wavelet packet decomposition, was implemented. Thirty-two polysomnographic recordings from the MIT-BIH database were used for training and validation of the proposed method. A single EEG recording was extracted and smoothed using a Savitzky-Golay filter. Wavelet packet decomposition up to the fourth level, based on a 20th-order Daubechies filter, was used to extract features from the EEG signal. A feature vector of 54 features was formed, reduced to 25 features using the gain-ratio method, and fed into a regression-tree classifier. The regression trees were trained on 67% of the available records, selected by cross-validation, and the remaining records were used for testing. The overall correct rate of the proposed method was found to be around 75%, which is acceptable compared to techniques in the literature.
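As a rough illustration of the feature-extraction step described above, the sketch below computes sub-band log-energies from a smoothed EEG epoch with PyWavelets. It is not the authors' code: the paper's 54-feature vector is not reproduced, and the epoch length, Savitzky-Golay window and the 'db20' wavelet name are assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): wavelet-packet features from one
# smoothed EEG epoch, assuming PyWavelets ('db20' ~ 20th-order Daubechies) and SciPy.
import numpy as np
import pywt
from scipy.signal import savgol_filter

def wp_features(eeg_epoch, wavelet="db20", level=4):
    """Return log-energies of all terminal wavelet-packet nodes at `level`."""
    smoothed = savgol_filter(eeg_epoch, window_length=31, polyorder=3)
    wp = pywt.WaveletPacket(data=smoothed, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    nodes = wp.get_level(level, order="natural")       # 2**level sub-bands
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return np.log(energies + 1e-12)                    # one feature per sub-band

# Example: a 30 s epoch sampled at 250 Hz (synthetic data, for illustration only)
epoch = np.random.randn(30 * 250)
print(wp_features(epoch).shape)                        # (16,) features
```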

Keywords: Feature selection, regression trees, sleep stage scoring, wavelet packets.

10100 The Effect of Cyclone Shape and Dust Collector on Gas-Solid Flow and Performance

Authors: Kyoungwoo Park, Chol-Ho Hong, Ji-Won Han, Byeong-Sam Kim, Cha-Sik Park, Oh Kyung Kwon

Abstract:

A numerical analysis of the flow characteristics and separation efficiency in a high-efficiency cyclone has been performed. Several models based on experimental observation have been proposed for design purposes; however, such models estimate a cyclone's performance only under limited conditions, and it is difficult to obtain a general model valid for all types of cyclones. The purpose of this study is to determine the flow characteristics and separation efficiency numerically. The Reynolds stress model (RSM) was employed instead of the standard k-ε or k-ω models, which are suited to isotropic turbulence; the RSM predicted the pressure drop and the Rankine vortex very well. For small particles, three components (the entrance of the vortex finder, the cone, and the dust collector) were significant for particle separation. In the present work, re-entrainment of particles from the dust collector into the cyclone body was observed after a considerable time. This re-entrainment degraded the separation efficiency and was one of the significant factors affecting the separation efficiency of the cyclone.

Keywords: CFD, High-efficiency cyclone, Pressure drop, Rankine vortex, Reynolds stress model (RSM), Separation efficiency.

10099 Dynamic Load Balancing in PVM Using Intelligent Application

Authors: Kashif Bilal, Tassawar Iqbal, Asad Ali Safi, Nadeem Daudpota

Abstract:

This paper deals with dynamic load balancing using PVM. In a distributed environment, load balancing and heterogeneity are critical issues that must be addressed in order to achieve optimal results and efficiency. Various techniques are used to distribute the load dynamically among different nodes and to deal with heterogeneity. These techniques take different approaches, with process migration as the basic concept in various optimized flavors. However, process migration is not an easy task: it imposes a considerable burden and processing effort to track each process across nodes. We propose a dynamic load balancing technique in which the application itself intelligently balances the load among different nodes, resulting in efficient use of the system with none of the overheads of process migration. It also provides a simple solution to the problem of load balancing in a heterogeneous environment.
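To make the core idea concrete, here is a minimal, hypothetical sketch (no PVM API calls are shown, and the node names and speeds are invented) of an application partitioning its own tasks in proportion to each node's measured speed, so that balancing happens at the application level rather than through process migration.

```python
# Minimal sketch of the idea (no PVM calls): the application itself partitions
# work in proportion to each node's measured speed, so no process migration is
# needed. Node names and relative speeds below are hypothetical.
def partition_tasks(num_tasks, node_speeds):
    """Assign a contiguous share of `num_tasks` to each node, proportional to speed."""
    total = sum(node_speeds.values())
    shares, assigned = {}, 0
    nodes = list(node_speeds)
    for i, node in enumerate(nodes):
        if i == len(nodes) - 1:                  # last node takes the remainder
            count = num_tasks - assigned
        else:
            count = round(num_tasks * node_speeds[node] / total)
        shares[node] = (assigned, assigned + count)
        assigned += count
    return shares                                # {node: (first_task, end_task_exclusive)}

speeds = {"node-a": 2.0, "node-b": 1.0, "node-c": 0.5}   # relative benchmark scores
print(partition_tasks(700, speeds))
```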

Keywords: PVM, load balancing, task allocation, intelligent application.

10098 Expert Witness Testimony in the Battered Woman Syndrome

Authors: Ana Pauna

Abstract:

Expert witness testimony (EWT) is information given by an expert specialized in the field (here, battered woman syndrome, BWS) to the jury in order to help the court better understand the case. EWT does not always work in favor of battered women. Two main decision-making models are discussed in the paper: the mathematical model and the explanation model. In the first model, the jurors calculate "the importance and strength of each piece of evidence", whereas in the second they try to integrate the EWT with the evidence and create a coherent story that would describe the crime. The jury often misunderstands and misjudges battered women for their actions (or, in this case, inaction), assuming that these women are masochists who accept being mistreated, since if a man abused a woman constantly she could and should divorce him or simply leave at any time. Research in the domain has found that expert witness testimony indeed has a powerful influence on jurors' decisions, and thus its quality needs to be explored further. One important factor that needs further study is a bias called the dispositionist worldview (the belief that what happens to people is of their own doing). This kind of attributional bias is a tendency to think that a person's behavior is due to his or her disposition, even when the behavior is clearly attributable to the situation. Hypothesis: if a juror has a dispositionist worldview, then he or she will blame the rape victim for triggering the assault; the juror would therefore commit the fundamental attribution error and believe that the victim's disposition, and not the situation she was in, caused the rape. Methods: the subjects were 500 randomly sampled undergraduate students from McGill, Concordia, Université de Montréal and UQAM. Dispositionist worldview was scored on the Dispositionist Worldview Questionnaire. After reading the rape scenarios, each student was asked to play the role of a juror and answer a questionnaire consisting of 7 questions about the responsibility, causality and fault of the victim. Results: the results confirm the hypothesis; jurors with a dispositionist worldview blamed the rape victim for triggering the assault and thereby committed the fundamental attribution error, believing that the victim's disposition, and not the constraints or opportunities of the situation, caused the assault.

Keywords: Bias, expert witness testimony, attribution error, jury, rape myth

10097 Real-time Tracking in Image Sequences based-on Parameters Updating with Temporal and Spatial Neighborhoods Mixture Gaussian Model

Authors: Hu Haibo, Zhao Hong

Abstract:

The Gaussian mixture background model is widely used for moving-target detection in image sequences. However, the traditional Gaussian mixture background model considers only the temporal continuity of pixels and builds the background from the statistical distribution of each pixel, without taking the pixels' spatial similarity into account, which causes noise, imperfections and other problems. This paper proposes a new Gaussian mixture modeling approach that combines color and gradient as spatial information and integrates the spatial information of the pixel sequences to establish the Gaussian mixture background. The experimental results show that moving targets can be extracted from the background accurately and efficiently, and that the algorithm is more robust and can work in real time in tracking applications.
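For orientation, the snippet below runs the standard temporal-only Gaussian-mixture background subtractor that ships with OpenCV (MOG2); the spatio-temporal, color-plus-gradient variant proposed in the abstract is not available there, and the video filename is a placeholder.

```python
# Baseline temporal Gaussian-mixture background subtraction with OpenCV's MOG2
# (for comparison only; the paper's spatio-temporal, color+gradient variant is
# not part of OpenCV). "sequence.avi" is a hypothetical input file.
import cv2

cap = cv2.VideoCapture("sequence.avi")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)        # per-pixel GMM update + foreground mask
    fg_mask = cv2.medianBlur(fg_mask, 5)     # simple spatial clean-up of the mask
cap.release()
```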

Keywords: Gaussian mixture model, real-time tracking, sequence image, gradient.

10096 Large Eddy Simulation of Hydrogen Deflagration in Open Space and Vented Enclosure

Authors: T. Nozu, K. Hibi, T. Nishiie

Abstract:

This paper discusses the applicability of a numerical model to a damage-prediction method for accidental hydrogen explosions occurring in a hydrogen facility. The numerical model is based on the unstructured finite volume method (FVM) code "NuFD/FrontFlowRed". For simulating the unsteady turbulent combustion of leaked hydrogen gas, a combination of Large Eddy Simulation (LES) and a combustion model was used. The combustion model is based on a two-scalar flamelet approach, in which a G-equation model and a conserved scalar model express the propagation of the premixed flame surface and the diffusion combustion process, respectively. For validation of this numerical model, we simulated two previous hydrogen explosion tests. One is an open-space explosion test, in which the source was a prismatic 5.27 m³ volume containing a 30% hydrogen-air mixture; a reinforced concrete wall was set 4 m away from the front surface of the source, and the source was ignited at the bottom center by a spark. The other is a vented-enclosure explosion test, in which the chamber was 4.6 m × 4.6 m × 3.0 m with a vent opening of 5.4 m² on one side, and ignition was at the center of the wall opposite the vent. Hydrogen-air mixtures with hydrogen concentrations close to 18 vol.% were used in the tests. The results of the numerical simulations are compared with the previous experimental data to assess the accuracy of the numerical model, and we have verified that the simulated overpressures and flame time-of-arrival data are in good agreement with the results of the two explosion tests.

Keywords: Deflagration, Large Eddy Simulation, Turbulent combustion, Vented enclosure.

10095 Kinetic Studies on Microbial Production of Tannase Using Redgram Husk

Authors: S. K. Mohan, T. Viruthagiri, C. Arunkumar

Abstract:

Tannase (tannin acyl hydrolase, E.C. 3.1.1.20) is an important hydrolytic enzyme with innumerable applications and industrial potential. In the present study, a kinetic model has been developed for the batch fermentation used for the production of tannase by A. flavus MTCC 3783. A maximum tannase activity of 143.30 U/ml was obtained at 96 hours under optimum operating conditions of 35 °C, an initial pH of 5.5 and an inducer (tannic acid) concentration of 3% (w/v), for a fermentation period of 120 hours. The biomass concentration reached a maximum of 6.62 g/l at 96 hours, after which there was no further increase until the end of the fermentation. Various unstructured kinetic models were analyzed to simulate the experimental values of microbial growth, tannase activity and substrate concentration. The logistic model for microbial growth, the Luedeking-Piret model for tannase production and a substrate-utilization kinetic model were capable of predicting the fermentation profile, with high coefficients of determination (R²) of 0.980, 0.942 and 0.983, respectively. The results indicate that the unstructured models are able to describe the fermentation kinetics effectively.
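The sketch below shows how the three unstructured models named above fit together as a coupled ODE system (logistic growth, Luedeking-Piret production, substrate utilization), integrated with SciPy. It is an illustration only and not the fitted model of the paper: apart from the maximum biomass value quoted in the abstract, every parameter value is a placeholder.

```python
# Sketch of the unstructured kinetic structure named in the abstract, under
# invented parameter values (only X_max is taken from the abstract).
import numpy as np
from scipy.integrate import solve_ivp

mu_max, X_max = 0.08, 6.62          # 1/h, g/L
alpha, beta = 20.0, 0.05            # growth- and non-growth-associated production
Y_xs, m_s = 0.5, 0.01               # biomass yield, maintenance coefficient

def rates(t, y):
    X, P, S = y
    dX = mu_max * X * (1.0 - X / X_max)        # logistic growth
    dP = alpha * dX + beta * X                 # Luedeking-Piret production
    dS = -dX / Y_xs - m_s * X                  # substrate utilization
    return [dX, dP, dS]

sol = solve_ivp(rates, (0.0, 120.0), [0.1, 0.0, 30.0], dense_output=True)
X, P, S = sol.sol(96.0)                        # state at 96 h
print(f"X={X:.2f} g/L, P={P:.1f}, S={S:.1f} g/L")
```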

Keywords: Aspergillus flavus, Batch fermentation, Kinetic model, Tannase, Unstructured models.

10094 Laplace Transformation on Ordered Linear Space of Generalized Functions

Authors: K. V. Geetha, N. R. Mangalambal

Abstract:

Aim: We have introduced the notion of order to multinormed spaces and countable union spaces and their duals, with the topology of bounded convergence assigned to the dual spaces. The aim of this paper is to develop the theory of the ordered topological linear spaces L′a,b and L′(w, z), the duals of the ordered multinormed space La,b and the ordered countable union space L(w, z), with the topology of bounded convergence assigned to the dual spaces. We apply the Laplace transformation to the ordered linear space of Laplace transformable generalized functions. We ultimately aim at finding solutions to nonhomogeneous nth-order linear differential equations with constant coefficients in terms of generalized functions and at comparing the different solutions that evolve out of different initial conditions. Method: The above aim is achieved by
• defining the spaces La,b and L(w, z);
• assigning an order relation on these spaces by identifying a positive cone on them and studying the properties of the cone;
• defining an order relation on the dual spaces L′a,b and L′(w, z) of La,b and L(w, z), and assigning a topology to these dual spaces which makes the order dual and the topological dual the same;
• defining the adjoint of a continuous map on these spaces and studying its behaviour when the topology of bounded convergence is assigned to the dual spaces;
• applying the two-sided Laplace transformation to the ordered linear space of generalized functions W and studying properties of the transformation that are used in solving differential equations.
Result: The above techniques are applied to solve nonhomogeneous nth-order linear differential equations with constant coefficients in terms of generalized functions and to compare different solutions of the differential equation.
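To make the computational payoff concrete, the block below recalls, in standard textbook form rather than the paper's ordered-space notation, how the two-sided Laplace transform turns a constant-coefficient ODE into an algebraic equation; the derivative rule without boundary terms is the property exploited for generalized functions.

```latex
% Standard illustration (not the paper's ordered-space construction): the
% two-sided Laplace transform and its use on a constant-coefficient ODE.
\[
  (\mathcal{L}f)(s) \;=\; \int_{-\infty}^{\infty} e^{-st} f(t)\,dt ,
  \qquad
  \mathcal{L}\!\left[f^{(k)}\right](s) \;=\; s^{k}\,(\mathcal{L}f)(s)
  \quad\text{(no boundary terms for suitable generalized functions).}
\]
Applying $\mathcal{L}$ to $\sum_{k=0}^{n} a_k\, y^{(k)}(t) = g(t)$ gives the
algebraic relation
\[
  \Bigl(\sum_{k=0}^{n} a_k s^{k}\Bigr) Y(s) \;=\; G(s)
  \quad\Longrightarrow\quad
  Y(s) \;=\; \frac{G(s)}{\sum_{k=0}^{n} a_k s^{k}} ,
\]
and a generalized-function solution $y$ is recovered by inverting $\mathcal{L}$.
```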

Keywords: Laplace transformable generalized function, positive cone, topology of bounded convergence

10093 Simulation of Population Dynamics of Aedes aegypti using Climate Dependent Model

Authors: Nuraini Yusoff, Harun Budin, Salemah Ismail

Abstract:

A climate-dependent model is proposed to simulate the population of the Aedes aegypti mosquito. In developing the model, the average temperature of Shah Alam, Malaysia, was used to determine the development rate of each stage of the mosquito's life cycle. A rainfall-dependent function was proposed to simulate the hatching rate of the eggs under several assumptions. The proposed transition matrix was obtained and used to simulate the populations of eggs, larvae, pupae and adult mosquitoes. It was found that the peak of mosquito abundance occurs during a relatively dry period following heavy rainfall. In addition, the lag time between the peaks of mosquito abundance and of dengue fever cases in Shah Alam was estimated.
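The sketch below shows the general shape of such a stage-structured (Lefkovitch-type) projection for the four stages named above. All matrix entries are invented placeholders; in the paper they are derived from temperature and rainfall.

```python
# Illustrative stage-structured (Lefkovitch-type) projection for the stages
# egg -> larva -> pupa -> adult. Survival/transition/fecundity values are
# placeholders; in the paper they are temperature- and rainfall-dependent.
import numpy as np

P_E, G_E = 0.30, 0.50   # probability of remaining an egg vs. hatching (rainfall-driven)
P_L, G_L = 0.40, 0.30   # remaining a larva vs. pupating
P_P, G_P = 0.20, 0.60   # remaining a pupa vs. emerging
S_A, F   = 0.85, 10.0   # adult survival and per-capita egg production

A = np.array([
    [P_E, 0.0, 0.0, F  ],   # eggs: retained eggs + new eggs laid by adults
    [G_E, P_L, 0.0, 0.0],   # larvae: hatched eggs + retained larvae
    [0.0, G_L, P_P, 0.0],   # pupae
    [0.0, 0.0, G_P, S_A],   # adults
])

n = np.array([1000.0, 0.0, 0.0, 10.0])      # initial [eggs, larvae, pupae, adults]
for step in range(12):
    n = A @ n                               # one projection step
    print(step + 1, np.round(n, 1))
```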

Keywords: simulation, Aedes aegypti, Lefkovitch matrix, rainfall dependent model, Shah Alam

10092 A Retrospective Analysis of a Professional Learning Community: How Teachers' Capacities Shaped It

Authors: S. Pancucci

Abstract:

The purpose of this paper is to describe the process of setting up a learning community within an elementary school in Ontario, Canada. The description is provided through reflection on and examination of field notes taken during the year-long training and implementation process. Specifically, the impact of teachers' capacity on the creation of a learning community was of interest. This paper is intended to inform and add to the debate around the tensions that exist in implementing a bottom-up professional development model, such as the learning community, within a top-down organizational structure. My reflections on the process illustrate that implementation of the learning community professional development model may be difficult and yet transformative in the professional lives of the teachers, students and administrators involved in the change process. I conclude by suggesting the need for a new model of professional development that requires a transformative shift in power dynamics and in the view of what constitutes effective professional learning.

Keywords: Learning community model, professional development, teacher capacity, teacher leadership.

10091 A Fuzzy Logic Based Model to Predict Surface Roughness of A Machined Surface in Glass Milling Operation Using CBN Grinding Tool

Authors: Ahmed A. D. Sarhan, M. Sayuti, M. Hamdi

Abstract:

Nowadays, the demand for high product quality focuses extensive attention on the quality of machined surfaces. CNC milling machines provide a wide variety of parameter set-ups, which makes milling well suited to manufacturing complicated special glass products compared with other machining processes. However, applying a grinding process on the CNC milling machine could be an ideal solution for improving product quality, provided the right machining parameters are adopted. In glass milling operations, several machining parameters are considered significant in affecting surface roughness, including the lubrication pressure, spindle speed, feed rate and depth of cut. In this research work, a fuzzy logic model is proposed to predict the surface roughness of a machined surface in glass milling with a CBN grinding tool. Four membership functions are allocated to each input of the model. The predictions of the fuzzy logic model are compared with experimental results, and the comparison demonstrates agreement between the fuzzy model and the experiments with an accuracy of 93.103%.
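For readers unfamiliar with the mechanics, the toy sketch below walks through the fuzzification, rule-firing and defuzzification steps only. The two inputs, membership breakpoints, rule consequents and the zero-order Sugeno style are invented for illustration and do not reproduce the paper's four-input, four-membership-function model.

```python
# Toy sketch of fuzzy inference (fuzzify -> fire rules -> defuzzify). All
# membership points and rule outputs below are invented, not the paper's.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership value of x for the triangle (a, b, c)."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def predict_roughness(spindle_rpm, feed_mm_min):
    low_s  = tri(spindle_rpm, 0, 10000, 20000)
    high_s = tri(spindle_rpm, 10000, 20000, 30000)
    low_f  = tri(feed_mm_min, 0, 10, 20)
    high_f = tri(feed_mm_min, 10, 20, 30)
    # Zero-order Sugeno rules: IF spindle is ... AND feed is ... THEN Ra = constant
    rules = [
        (min(high_s, low_f),  0.4),   # high speed, low feed -> smooth surface
        (min(high_s, high_f), 0.8),
        (min(low_s,  low_f),  0.9),
        (min(low_s,  high_f), 1.5),   # low speed, high feed -> rough surface
    ]
    w = np.array([r[0] for r in rules])
    z = np.array([r[1] for r in rules])
    return float(np.dot(w, z) / w.sum())      # weighted-average defuzzification

print(predict_roughness(spindle_rpm=18000, feed_mm_min=12))   # predicted Ra
```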

Keywords: CNC-machine, Glass milling, Grinding, Surface roughness, Cutting force, Fuzzy logic model.

10090 Speed Control of a Permanent Magnet Synchronous Machine (PMSM) Fed by an Inverter Voltage Fuzzy Control Approach

Authors: Jamel Khedri, Mohamed Chaabane, Mansour Souissi, Driss Mehdi

Abstract:

This paper deals with the synthesis of a fuzzy controller applied to a permanent magnet synchronous machine (PMSM) with guaranteed H∞ performance. To design this fuzzy controller, the nonlinear model of the PMSM is approximated by a Takagi-Sugeno fuzzy model (T-S fuzzy model), and the so-called parallel distributed compensation (PDC) is employed. Next, the H∞ norm property is derived and cast in terms of linear matrix inequalities (LMIs), while minimizing the H∞ norm of the transfer function Tev between the disturbance and the error. Experimental and simulation results obtained on a permanent magnet synchronous machine illustrate the effects of the fuzzy modelling and of the controller designed via the PDC.
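As background (a textbook statement, not the paper's specific PMSM matrices or LMI conditions), the T-S model with PDC control law and the H∞ criterion referred to above take the following generic form:

```latex
% Generic T-S fuzzy model with PDC control law and H-infinity criterion
% (textbook form; the paper's PMSM-specific matrices are not reproduced here).
\[
  \dot{x}(t) = \sum_{i=1}^{r} h_i\bigl(z(t)\bigr)\,\bigl(A_i x(t) + B_i u(t) + E_i w(t)\bigr),
  \qquad
  u(t) = -\sum_{j=1}^{r} h_j\bigl(z(t)\bigr)\,F_j\,x(t),
\]
with normalized membership functions $h_i \ge 0$, $\sum_i h_i = 1$. The PDC gains
$F_j$ are obtained from LMI conditions that guarantee a disturbance-attenuation
level $\gamma$ for the closed loop,
\[
  \lVert T_{ev} \rVert_\infty \;=\;
  \sup_{\lVert w \rVert_2 \neq 0} \frac{\lVert e \rVert_2}{\lVert w \rVert_2} \;<\; \gamma .
\]
```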

Keywords: Feedback controller, Takagi-Sugeno fuzzy model, Linear Matrix Inequality (LMI), PMSM, H∞ performance.

10089 Interplay of Power Management at Core and Server Level

Authors: Jörg Lenhardt, Wolfram Schiffmann, Jörg Keller

Abstract:

While the feature sizes of recent Complementary Metal Oxide Semiconductor (CMOS) devices decrease, static power increasingly dominates their energy consumption. Thus, the power savings obtainable from Dynamic Voltage and Frequency Scaling (DVFS) are diminishing, and the temporary shutdown of cores or other microchip components becomes more worthwhile. A consequence of powering off unused parts of a chip is that the relative difference between idle and fully loaded power consumption increases. This means that future chips and whole server systems gain more power-saving potential through power-aware load balancing, whereas formerly this approach had only a limited effect and was therefore not widely adopted. While powering off complete servers has been used to save energy, it will become superfluous in many cases once cores can be powered down. An important advantage that comes with this is a largely reduced time to respond to increased computational demand. We include the above developments in a server power model and quantify the advantage. Our conclusion is that the strategies data centers use to decide when to power off server systems might in the future be applied at core level, while load-balancing mechanisms previously used at core level might be used at server level.
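A minimal sketch of the kind of server power model the abstract refers to is given below, assuming a linear utilization-to-power relation per core and hypothetical wattages; it only illustrates why powering down unused cores widens the gap between idle and loaded consumption.

```python
# Tiny sketch of a linear server power model with optional core power-gating.
# Wattages and core count are hypothetical, not taken from the paper.
def server_power(util_per_core, p_idle_core=3.0, p_peak_core=10.0,
                 p_static=60.0, core_gating=True):
    """Power draw of one server for a list of per-core utilizations in [0, 1]."""
    power = p_static                              # chassis/uncore share
    for u in util_per_core:
        if core_gating and u == 0.0:
            continue                              # idle core is powered off
        power += p_idle_core + (p_peak_core - p_idle_core) * u
    return power

load = [1.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]   # work consolidated onto 3 of 8 cores
print(server_power(load, core_gating=False))      # all cores kept powered
print(server_power(load, core_gating=True))       # unused cores gated off
```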

Keywords: Power efficiency, static power consumption, dynamic power consumption, CMOS.

10088 Star-Hexagon Transformer Supported UPQC

Authors: Yash Pal, A. Swarup, Bhim Singh

Abstract:

A new topology of unified power quality conditioner (UPQC) is proposed for different power quality (PQ) improvements in a three-phase four-wire (3P-4W) distribution system. For neutral current mitigation, a star-hexagon transformer is connected in shunt near the load, along with a three-leg voltage source inverter (VSI) based UPQC. For the mitigation of the source neutral current, the use of passive elements is advantageous over active compensation due to its ruggedness and less complex control. In addition, by connecting a star-hexagon transformer for neutral current mitigation, the overall rating of the UPQC is reduced. The performance of the proposed topology of the 3P-4W UPQC is evaluated for power-factor correction, load balancing, neutral current mitigation and the mitigation of voltage and current harmonics. A simple control algorithm based on the Unit Vector Template (UVT) technique is used as the control strategy of the UPQC for mitigating the different PQ problems. In this control scheme, the current/voltage control is applied to the fundamental supply currents/voltages instead of the fast-changing APF currents/voltages, thereby reducing the computational delay. Moreover, no extra control is required for neutral source current compensation; hence the number of current sensors is reduced. The performance of the proposed UPQC topology is analyzed through simulation results using MATLAB with its Simulink and Power System Blockset toolboxes.

Keywords: Power-factor correction, Load balancing, UPQC, Voltage and Current harmonics, Neutral current mitigation, Star-hexagon transformer.

10087 Seismic Performance of Reinforced Concrete Frames Infilled by Masonry Walls with Different Heights

Authors: Ji-Wook Mauk, Yu-Suk Kim, Hyung-Joon Kim

Abstract:

This study carried out a comparative assessment of the seismic performance of reinforced concrete frames infilled with masonry walls of different heights. Partially and fully infilled reinforced concrete frames were modeled for the research objectives, and an analysis model of a bare reinforced concrete frame was also established for comparison. Non-linear static analyses of the studied frames were performed to investigate their structural behavior under extreme seismic loads and to find their collapse mechanisms. It was observed from the analysis results that the strengths of the partially infilled reinforced concrete frames increase and their ductilities reduce as the infilled masonry walls become higher. In particular, reinforced concrete frames with higher partial-height masonry infills would experience shear failures. Non-linear dynamic analyses using 10 earthquake records show that the bare and fully infilled reinforced concrete frames present a stable collapse mechanism, while the reinforced concrete frames with partial-height masonry infills collapse in a more brittle manner due to short-column effects.

Keywords: Fully infilled RC frame, partially infilled RC frame, masonry wall, short-column effects.

10086 Analysis and Evaluation of the Public Responses to Traffic Congestion Pricing Schemes in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Traffic congestion pricing in urban streets is one of the most suitable options for solving traffic problems and environmental pollution in the cities of the country. Despite its acceptable outcomes, there are problems concerning the requirement that the general public pay. Since the public response is decisive for the success of this strategy, studying public responses and behavior in order to obtain feedback and improve the schemes is of great importance. In this study, a questionnaire was used to examine public reactions to the traffic congestion pricing schemes in the center of the Tehran metropolis, and the factors involved in people's decisions to accept or reject congestion pricing schemes were assessed based on the questionnaire data as well as international experience. Then, by analyzing and comparing the schemes, guidelines to reduce public objections to them are discussed. The results of reviewing and evaluating the public reactions show that all the pros and cons must be considered to guarantee the success of these projects. Consequently, with targeted public education and awareness-raising advertisements prior to initiating a scheme, and by ensuring the implementation mechanism after the start of the project, the initial opposition is reduced and, with the gradual emergence of the real and tangible benefits of implementation, users' satisfaction will increase.

Keywords: Demand management, international experiences, traffic congestion pricing, public acceptance, public objection.

10085 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF

Authors: Karunakar A K, Manohara Pai M M

Abstract:

In the 3D-wavelet video coding framework, temporal filtering is performed along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF). Hence, a computationally efficient motion estimation technique is needed for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation algorithm only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frames, and even to subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence, the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms on different standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all frame pairs in a GOP except the first will have values within ±1 of the motion vectors of the previous pair of frames, the number of bits required for motion vectors is also reduced by 50%.
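As a minimal sketch of the prediction step described above (generic block matching with a SAD cost, not the MC-EZBC implementation), the function below refines an inherited motion vector with a ±1 fine search:

```python
# Sketch of the predicted-motion-vector idea: only a +/-1 fine search (SAD cost)
# runs around the motion vector inherited from the previous frame pair.
# Block size and search logic are generic, not the MC-EZBC implementation.
import numpy as np

def sad(cur_block, ref, x, y, bs):
    h, w = ref.shape
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf                                  # candidate falls outside frame
    return np.abs(cur_block - ref[y:y + bs, x:x + bs]).sum()

def refine_mv(cur, ref, bx, by, predicted_mv, bs=16):
    """Fine search in a +/-1 window around `predicted_mv` for block (bx, by)."""
    block = cur[by:by + bs, bx:bx + bs]
    best_cost, best_mv = np.inf, predicted_mv
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            mv = (predicted_mv[0] + dx, predicted_mv[1] + dy)
            cost = sad(block, ref, bx + mv[0], by + mv[1], bs)
            if cost < best_cost:
                best_cost, best_mv = cost, mv
    return best_mv, best_cost   # the differential MV w.r.t. the prediction is in {-1, 0, 1}

cur = np.random.randint(0, 256, (64, 64)).astype(np.float64)
ref = np.roll(cur, (2, 1), axis=(0, 1))                # synthetic shifted frame
print(refine_mv(cur, ref, bx=16, by=16, predicted_mv=(1, 2)))
```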

Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector

10084 Diagnosis of the Abdominal Aorta Aneurysm in Magnetic Resonance Imaging Images

Authors: W. Kultangwattana, K. Somkantha, P. Phuangsuwan

Abstract:

This paper presents a technique for the diagnosis of abdominal aortic aneurysms in magnetic resonance imaging (MRI) images. First, the technique segments the aorta in the MRI images; this step is required to determine the volume of the aorta, which is the key quantity for diagnosing an abdominal aortic aneurysm. The proposed technique detects the volume of the aorta in MRI images using a new external energy for the snake model, calculated from Laws' texture measures. The new external energy increases the capture range of the snake model more effectively than the older external energies of snake models. Second, the technique diagnoses the abdominal aortic aneurysm with a Bayesian classifier, a classification model based on statistical theory. The features used for classification were derived from the contour of the aorta resulting from the snake segmentation, i.e., area, perimeter and compactness. We also compare the proposed technique with the traditional snake model. In our experiments, 30 images were used for training and 20 images for testing, with the results compared against expert opinion. The experimental results show that the technique achieves an accuracy of more than 95%.

Keywords: Abdominal Aorta Aneurysm, Bayesian Classifier, Snakes Model, Texture Feature.

10083 A Consideration on the Offset Frontal Impact Modeling Using Spring-Mass Model

Authors: Jaemoon Lim

Abstract:

To construct a lumped spring-mass model that accounts for the occupants in an offset frontal crash, the SISAME software and NHTSA test data were used. Data from the 56 km/h, 40% offset frontal vehicle-to-deformable-barrier crash test of a MY2007 Mazda 6 four-door sedan were obtained from the NHTSA test database. The overall behaviors of the B-pillar and engine in the simulation models agreed very well with the test data. The trends of the accelerations at the driver and passenger head were similar, but there were large differences in the peak values, which caused large errors in the HIC36 and 3 ms chest g's. To predict the behavior of the dummies well, the spring-mass model for the offset frontal crash needs to be improved.

Keywords: Chest g’s, HIC36, lumped spring-mass model, offset frontal impact, SISAME.

10082 Numerical Studies of Galerkin-type Time-discretizations Applied to Transient Convection-diffusion-reaction Equations

Authors: Naveed Ahmed, Gunar Matthies

Abstract:

We deal with the numerical solution of time-dependent convection-diffusion-reaction equations. We combine the local projection stabilization method for the space discretization with two different time discretization schemes: the continuous Galerkin-Petrov (cGP) method and the discontinuous Galerkin (dG) method with polynomials of degree k. We establish optimal error estimates and present numerical results which show that the cGP(k) and dG(k) methods are accurate of order k+1 in the whole time interval. Moreover, the cGP(k) method is superconvergent of order 2k and the dG(k) method of order 2k+1 at the discrete time points. Furthermore, the dependence of the results on the choice of the stabilization parameter is discussed and compared.

Keywords: Convection-diffusion-reaction equations, stabilized finite elements, discontinuous Galerkin, continuous Galerkin-Petrov.

10081 Exponentiated Transmuted Weibull Distribution: A Generalization of the Weibull Distribution

Authors: Abd El Hady N. Ebraheim

Abstract:

This paper introduces a new generalization of the two-parameter Weibull distribution. To this end, the quadratic rank transmutation map has been used. The new distribution is named the exponentiated transmuted Weibull (ETW) distribution. The ETW distribution has the advantage of being capable of modeling various shapes of aging and failure criteria. Furthermore, eleven lifetime distributions, such as the Weibull, exponentiated Weibull, Rayleigh and exponential distributions, among others, follow as special cases. The properties of the new model are discussed, and maximum likelihood estimation is used to estimate the parameters. Explicit expressions are derived for the quantiles, the moments of the distribution are derived, and the order statistics are examined.
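For concreteness, one common way to write this construction is sketched below (the paper's own parameterization and symbols may differ): apply the quadratic rank transmutation map to the Weibull CDF and then exponentiate the result.

```latex
% One common parameterization (hedged; the paper's own symbols may differ).
% Start from the Weibull CDF $G(x) = 1 - e^{-(x/\eta)^{\beta}}$, apply the
% quadratic rank transmutation map, then exponentiate.
\[
  F_{\mathrm{TW}}(x) \;=\; (1+\lambda)\,G(x) \;-\; \lambda\,G(x)^{2},
  \qquad |\lambda| \le 1,
\]
\[
  F_{\mathrm{ETW}}(x) \;=\; \Bigl[(1+\lambda)\,G(x) - \lambda\,G(x)^{2}\Bigr]^{\delta},
  \qquad x > 0,\ \ \beta,\eta,\delta > 0 .
\]
% Setting $\delta = 1$ recovers the transmuted Weibull, $\lambda = 0$ the
% exponentiated Weibull, and $\lambda = 0,\ \delta = 1$ the Weibull itself.
```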

Keywords: Exponentiated, Inversion Method, Maximum Likelihood Estimation, Transmutation Map.

10080 Value Co-Creation in Used-Car Auctions: A Service Scientific Perspective

Authors: Safdar Muhammad Usman, Youji Kohda, Katsuhiro Umemoto

Abstract:

Electronic marketplaces play an important intermediary role in connecting dealers and retail customers. The main aim of this paper is to design a value co-creation model for used-car auctions. More specifically, the study describes the process of value co-creation in used-car auctions, explores the values co-created in used-car auctions, and concludes by indicating future research directions. Our analysis shows that non-economic values as well as economic values are co-created in used-car auctions. In addition, this paper contributes to the academic community by broadening the view of value co-creation in service science.

Keywords: Value co-creation, Used-car auctions, Non-economic values, Service science.

10079 Gaussian Process Model Identification Using Artificial Bee Colony Algorithm and Its Application to Modeling of Power Systems

Authors: Tomohiro Hachino, Hitoshi Takata, Shigeru Nakayama, Ichiro Iimura, Seiji Fukushima, Yasutaka Igarashi

Abstract:

This paper presents a nonparametric identification method for continuous-time nonlinear systems using a Gaussian process (GP) model. The GP prior model is trained by an artificial bee colony algorithm. The nonlinear function of the system of interest is estimated as the predictive mean function of the GP, and a confidence measure for the estimated nonlinear function is given by the predictive covariance of the GP. The proposed identification method is applied to the modeling of a simplified electric power system. Simulation results demonstrate the effectiveness of the proposed method.
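The snippet below illustrates only the GP part (predictive mean as the estimate, predictive standard deviation as the confidence measure) with scikit-learn on toy data; in the paper the kernel hyperparameters are selected by an artificial bee colony search rather than the library's built-in optimizer, and the data here are synthetic.

```python
# GP regression sketch: predictive mean = estimated nonlinear function,
# predictive std = confidence measure. Hyperparameters are tuned here by
# scikit-learn's default optimizer, not by an artificial bee colony search.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Toy data standing in for sampled states of the nonlinear system
X = np.random.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * np.random.randn(60)

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-3, normalize_y=True)
gp.fit(X, y)

X_test = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)   # estimate + confidence measure
print(np.c_[mean, std])
```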

Keywords: Artificial bee colony algorithm, Gaussian process model, identification, nonlinear system, electric power system.

10078 A Study on the Waiting Time for the First Employment of Arts Graduates in Sri Lanka

Authors: Imali T. Jayamanne, K. P. Asoka Ramanayake

Abstract:

The transition from tertiary education to employment is one of the challenges that many fresh university graduates face after graduation. The transition period, or the waiting time to obtain the first employment, varies with socio-economic factors and the general characteristics of a graduate. Compared to graduates of other fields of study, Arts graduates in Sri Lanka have to wait a long time to find their first employment. The objective of this study is to identify the determinants of the transition from higher education to employment of these graduates using survival models. The study is based on a survey conducted in 2016 on a stratified random sample of Arts graduates from Sri Lankan universities who had graduated in 2012. Among the 469 responses, 36 (8%) waiting times were interval-censored and 13 (3%) were right-censored. The waiting time for the first employment varied between zero and 51 months. Initially, the log-rank and Gehan-Wilcoxon tests were performed to identify the significant factors. Gender, ethnicity, GCE Advanced Level English grade, civil status, university, class received, degree type, sector of first employment, type of first employment and the educational qualifications required for the first employment were significant at the 10% level. A Cox proportional hazards model was fitted to model the waiting time for the first employment with these significant factors. All factors except ethnicity and type of employment were significant at the 5% level. However, since the proportional hazards assumption was violated, a lognormal accelerated failure time (AFT) model was fitted instead. The same factors were significant in the AFT model as in the Cox proportional hazards model.
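A minimal sketch of the two survival fits named above is given below, using the lifelines package on a hypothetical data frame (invented column names and toy rows, right-censoring only; the study's interval-censored cases would need dedicated handling):

```python
# Sketch only: Cox proportional hazards fit, then a lognormal AFT fit, on a
# hypothetical data frame. `wait` = waiting time in months, `employed` = 1 if
# first employment was observed (right-censoring only in this toy example).
import pandas as pd
from lifelines import CoxPHFitter, LogNormalAFTFitter

df = pd.DataFrame({
    "wait":     [3, 12, 7, 24, 51, 5, 18, 30],
    "employed": [1, 1, 1, 1, 0, 1, 1, 1],
    "female":   [1, 0, 1, 1, 0, 1, 0, 1],
    "eng_pass": [1, 1, 0, 0, 0, 1, 1, 0],
})

cox = CoxPHFitter().fit(df, duration_col="wait", event_col="employed")
cox.print_summary()                       # then check the PH assumption

aft = LogNormalAFTFitter().fit(df, duration_col="wait", event_col="employed")
aft.print_summary()                       # lognormal AFT used when PH is violated
```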

Keywords: AFT model, first employment, proportional hazard, survey design, waiting time.

10077 The Fundamental Reliance of Iterative Learning Control on Stability Robustness

Authors: Richard W. Longman

Abstract:

Iterative learning control (ILC) aims to achieve zero tracking error of a specific command. This is accomplished by iteratively adjusting the command given to a feedback control system, based on the tracking error observed in the previous iteration. One would like the iterations to converge to zero tracking error in spite of any error present in the model used to design the learning law. First, this need for stability robustness is discussed, followed by the need for robustness of the property that the transients are well behaved. Methods of producing the needed robustness to parameter variations and to singular perturbations are presented. Then a method involving reverse-time runs is given that lets the real-world behavior produce the ILC gains in such a way as to eliminate the need for a mathematical model. Since the real world produces the gains, there is no issue of model error. Provided the world behaves linearly, the approach gives an ILC law with both stability robustness and good transient robustness, without the need to generate a model.
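A minimal P-type ILC sketch of the basic idea (iteratively adjusting the command from the previous trial's error) is given below; the plant, learning gain and reference trajectory are invented, and none of the robustness machinery discussed in the abstract is included.

```python
# Minimal P-type ILC sketch: repeat the same finite-time task and, after each
# run, update the command from the observed tracking error. The plant below is
# an arbitrary stable discrete-time system, not one from the paper.
import numpy as np

def run_plant(u):
    """One trial of a simple first-order plant y[k+1] = 0.9*y[k] + 0.5*u[k]."""
    y = np.zeros(len(u))
    for k in range(len(u) - 1):
        y[k + 1] = 0.9 * y[k] + 0.5 * u[k]
    return y

N = 50
desired = np.sin(np.linspace(0, np.pi, N))       # reference trajectory
u = np.zeros(N)
gamma = 0.8                                      # learning gain

for trial in range(30):
    y = run_plant(u)
    e = desired - y
    u[:-1] += gamma * e[1:]                      # u_{j+1}[k] = u_j[k] + gamma*e_j[k+1]
    print(trial, np.max(np.abs(e)))              # error shrinks over iterations
```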

Keywords: Iterative learning control, stability robustness, monotonic convergence.

10076 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets Using an Open-Source Energy System Optimization Model

Authors: A. Balbo, G. Colucci, M. Nicoli, L. Savoldi

Abstract:

Hydrogen is expected to become an undisputed player in the ecological transition over the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called "hard-to-abate" sectors, including the industrial production of iron and steel, glass, refineries and heavy-duty transport. In this regard, Italy, within the framework of the decarbonization plans for the whole European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options concerning the pathway to be followed in the development of the future Italian energy system in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to provide a techno-economic analysis of the asset alternatives required in that perspective. To accomplish this objective, the energy system optimization model TEMOA-Italy is used, based on the open-source platform TEMOA and developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios, compared with a business-as-usual one that considers the application of current policies over a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040 and 2050, without any other specific target. The second one, inspired by the national objectives for the development of the sector, promotes the deployment of the hydrogen value chain. These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry and synfuel production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for achieving the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.

Keywords: Decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA.

10075 Numerical Simulation of a Single Air Bubble Rising in Water with Various Models of Surface Tension Force

Authors: Afshin Ahmadi Nadooshan, Ebrahim Shirani

Abstract:

Different numerical methods have been employed and developed for simulating interfacial flows. A large range of applications belongs to this group, e.g. two-phase flows of air bubbles in water or water drops in air. In such problems, surface tension effects often play a dominant role. In this paper, various models of the surface tension force for interfacial flows (the CSF, CSS, PCIL and SGIP models) are applied to simulate the motion of small air bubbles in water, and the results are compared and reviewed. It is pointed out that by using the SGIP or PCIL models we are able to simulate bubble rise and obtain results in close agreement with the experimental data.
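For orientation, the classical CSF (continuum surface force) closure, which the other models listed above reformulate or discretize in different ways, reads:

```latex
% The classical CSF (continuum surface force) closure of Brackbill et al., shown
% for orientation; the CSS, PCIL and SGIP models are alternative formulations of
% the same interfacial force.
\[
  \mathbf{F}_{sv} \;=\; \sigma\,\kappa\,\nabla c ,
  \qquad
  \kappa \;=\; -\,\nabla\!\cdot\!\left(\frac{\nabla c}{\lvert\nabla c\rvert}\right),
\]
where $c$ is the volume-of-fluid (color) function, $\sigma$ the surface tension
coefficient and $\kappa$ the local interface curvature; $\mathbf{F}_{sv}$ enters
the momentum equation as a volumetric source concentrated at the interface.
```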

Keywords: Volume-of-Fluid, Bubble Rising, SGIP model, CSS model, CSF model, PCIL model, interface, surface tension force.

10074 Variational Evolutionary Splines for Solving a Model of Temporomandibular Disorders

Authors: Alberto Hananel

Abstract:

The aim of this work is to model the occlusion of a person with temporomandibular disorders as an evolutionary equation and to approximate its solution by constructing and characterizing discrete variational splines. To formulate the problem, certain boundary conditions have been considered. After showing the existence and uniqueness of the solution of such a problem, a convergence result for a discrete variational evolutionary spline is shown. A stress analysis of the occlusion of a human jaw with temporomandibular disorders is carried out by finite elements in FreeFem++ in order to demonstrate the validity of the presented method.

Keywords: Approximation, evolutionary PDE, finite element method, temporomandibular disorders, variational spline.

10073 AGENTMAP: A Conceptual Meta-Model of Interacting Simulations

Authors: Thomas M. Prinz, Wilhelm R. Rossak, Kai Gebhardt

Abstract:

A straightforward and intuitive combination of single simulations into an aggregated master simulation is not trivial. There are many problems which cause specific difficulties during the modeling and execution of such a simulation. In this paper we identify these problems and aim to solve them by mapping the task to the field of multi-agent systems. The solution is a new meta-model named AGENTMAP, which is able to mitigate most of the problems and to support intuitive modeling at the same time. This meta-model is introduced and explained on the basis of an example from the e-commerce domain.

Keywords: Multi Agent System, Agent-based Simulation, Distributed Systems, Meta-models.
