Search results for: step method
Paper Count: 20915

20735 Overhead Lines Induced Transient Overvoltage Analysis Using Finite Difference Time Domain Method

Authors: Abdi Ammar, Ouazir Youcef, Laissaoui Abdelmalek

Abstract:

In this work, an approach based on transmission-line theory is presented. It is exploited for the calculation of overvoltages created by direct impacts of lightning waves on a guard cable of an overhead high-voltage line. First, we show the theoretical developments leading to the propagation equation, its discretization by the finite-difference time-domain (FDTD) method, and the resulting linear algebraic equations, followed by the calculation of the linear parameters of the line. The second step consists of solving the transmission-line system of equations by the FDTD method. This enabled us to determine the spatio-temporal evolution of the induced overvoltage.
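
As a rough illustration of the discretization described above, the sketch below applies a leapfrog FDTD update to the telegrapher's equations for a lossless line; the per-unit-length parameters, surge waveform, and boundary conditions are assumed example values, not those computed in the paper.

```python
import numpy as np

# Leapfrog FDTD update of the lossless telegrapher's equations;
# L, C, the surge source, and the grounded far end are assumptions.
nx, nt = 200, 1000                    # spatial cells, time steps
L, C = 0.5e-6, 100e-12                # H/m, F/m (example line parameters)
dx = 1.0                              # m
dt = 0.9 * dx * np.sqrt(L * C)        # satisfies the Courant condition

V = np.zeros(nx + 1)                  # node voltages
I = np.zeros(nx)                      # currents on the staggered half-grid
v_peak = 0.0

for n in range(nt):
    t = n * dt
    # double-exponential lightning surge injected at the near end
    V[0] = 30e3 * (np.exp(-t / 50e-6) - np.exp(-t / 1e-6))
    I -= dt / (L * dx) * (V[1:] - V[:-1])        # dI/dt = -(1/L) dV/dx
    V[1:-1] -= dt / (C * dx) * (I[1:] - I[:-1])  # dV/dt = -(1/C) dI/dx
    V[-1] = 0.0                       # grounded far end
    v_peak = max(v_peak, V.max())

print(f"peak voltage observed along the line: {v_peak / 1e3:.1f} kV")
```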

Keywords: lightning surge, transient overvoltage, eddy current, FDTD, electromagnetic compatibility, ground wire

Procedia PDF Downloads 83
20734 A Three-Step Iterative Process for Common Fixed Points of Three Contractive-Like Operators

Authors: Safeer Hussain Khan, H. Fukhar-ud-Din

Abstract:

The concept of quasi-contractive type operators was given by Berinde and extended by Imoru and Olatinwo, who named this new type contractive-like operators. On the other hand, Xu and Noor introduced a three-step, one-mapping iterative process which can be seen as a generalization of the Mann and Ishikawa iterative processes. Approximating common fixed points has its own importance as it has a direct link with minimization problems. Motivated by this, in this paper, we first extend the iterative process of Xu and Noor to the case of three steps and three mappings and then prove a strong convergence result using contractive-like operators for this iterative process. In general, this generalizes corresponding results using the Mann, Ishikawa and Xu-Noor iterative processes with quasi-contractive type operators. It is to be pointed out that our results can also be proved with iterative processes involving error terms.
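
For intuition, here is a minimal numerical sketch of a three-step, three-mapping iteration of the Xu-Noor type on the real line; the maps and step parameters are invented toy examples sharing a common fixed point, not the operators studied in the paper.

```python
# Three-step iteration with three maps T1, T2, T3 sharing the common
# fixed point x* = 2; all values here are illustrative assumptions.
T1 = lambda x: 0.50 * x + 1.0
T2 = lambda x: 0.25 * x + 1.5
T3 = lambda x: 0.10 * x + 1.8
a = b = c = 0.5                      # step-size parameters in (0, 1)

x = 10.0                             # arbitrary starting point
for _ in range(60):
    z = (1 - c) * x + c * T3(x)      # first step
    y = (1 - b) * x + b * T2(z)      # second step
    x = (1 - a) * x + a * T1(y)      # third step

print(x)                             # -> 2.0, the common fixed point
```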

Keywords: contractive-like operator, iterative process, common fixed point, strong convergence

Procedia PDF Downloads 594
20733 Software Verification of Systematic Resampling for Optimization of Particle Filters

Authors: Osiris Terry, Kenneth Hopkinson, Laura Humphrey

Abstract:

Systematic resampling is the most widely used resampling method in particle filters. This paper seeks to further the understanding of systematic resampling by defining a formula made up of variables from the sampling equation and the particle weights. The formula is then verified via SPARK, a software verification language. The verified systematic resampling formula states that the minimum/maximum number of possible samples taken of a particle is equal to the floor/ceiling value of the particle weight divided by the sampling interval, respectively. This allows for the creation of a randomness spectrum that each resampling method falls within. Methods on the lower end, e.g., systematic resampling, have less randomness and, thus, are quicker to reach an estimate. Lower randomness allows for error through a larger bias toward the size of the weight, and this bias creates vulnerabilities to noise in the environment, e.g., jamming. In conclusion, this is a first step in characterizing each resampling method, which will allow target-tracking engineers to pick the best resampling method for their environment instead of simply choosing the most popular one.
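
A minimal Python sketch of systematic resampling (not the SPARK formulation verified in the paper) illustrates the floor/ceiling bound: with N particles and sampling interval 1/N, a particle of normalized weight w is selected between floor(w·N) and ceil(w·N) times.

```python
import numpy as np

def systematic_resample(weights, rng=np.random.default_rng()):
    """Draw N indices with one random offset and a fixed 1/N comb."""
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n   # evenly spaced points
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0                                 # guard against round-off
    return np.searchsorted(cumsum, positions)

w = np.array([0.05, 0.35, 0.10, 0.50])               # normalized weights
print(systematic_resample(w))
# each index i appears floor(w[i]*4) to ceil(w[i]*4) times, e.g. [1 1 3 3]
```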

Keywords: SPARK, software verification, resampling, systematic resampling, particle filter, tracking

Procedia PDF Downloads 84
20732 Two Step Biodiesel Production from High Free Fatty Acid Spent Bleaching Earth

Authors: Rajiv Arora

Abstract:

Biodiesel may be economical if produced from inexpensive feedstock, which commonly contains high levels of free fatty acids (FFA) that inhibit the production of methyl ester. In this study, a two-step process for biodiesel production from high-FFA spent bleaching earth oil in a batch reactor is developed. An oil sample extracted from spent bleaching earth (SBE) was utilized for the biodiesel process. In the first step, the FFA content of the SBE oil was reduced to 1.91% through sulfuric acid catalyzed esterification. In the second step, the product of the first esterification step was subjected to transesterification with an alkaline catalyst. The influence of four variables on conversion efficiency to methyl ester, i.e., methanol/SBE oil molar ratio, catalyst amount, reaction temperature and reaction time, was studied in the second stage. The optimum process variables in the transesterification were a methanol/oil molar ratio of 6:1, a heterogeneous catalyst concentration of 5 wt%, a reaction temperature of 65 °C and a reaction time of 60 minutes. Major fuel properties of the SBE biodiesel were measured and checked for compliance with the ASTM and EN standards. Thus, an optimized process for the production of biodiesel from a low-cost, high-FFA source was accomplished.

Keywords: biodiesel, esterification, free fatty acids, residual oil, spent bleaching earth, transesterification

Procedia PDF Downloads 176
20731 Restored CO₂ from Flue Gas and Utilization by Converting to Methanol by 3 Step Processes: Steam Reforming, Reverse Water Gas Shift and Hydrogenation

Authors: Rujira Jitrwung, Kuntima Krekkeitsakul, Weerawat Patthaveekongka, Chiraphat Kumpidet, Jarukit Tepkeaw, Krissana Jaikengdee, Anantachai Wannajampa

Abstract:

Flue gas discharged from a coal-fired or gas-combustion power plant contains around 12% carbon dioxide (CO₂), 6% oxygen (O₂), and 82% nitrogen (N₂). CO₂ is a greenhouse gas that contributes to global warming. Carbon Capture, Utilization, and Storage (CCUS) is a tool for dealing with this CO₂. Flue gas is drawn from the chimney and filtered, then compressed to build up the pressure to 8 bar. This compressed flue gas is sent to a three-stage Pressure Swing Adsorption (PSA) unit filled with activated carbon. Experiments showed an optimum adsorption pressure of 7 bar, at which CO₂ is adsorbed stage by stage in the 1st, 2nd, and 3rd stages, reaching CO₂ concentrations of 29.8, 66.4, and 96.7%, respectively. The mixed gas from the last stage is composed of 96.7% CO₂, 2.7% N₂, and 0.6% O₂. This CO₂-rich product gas obtained from the three-stage PSA is ready to use for methanol synthesis. The mixed CO₂ was processed in a 5 L/day methanol synthesis reactor skid by three steps: steam reforming, reverse water-gas shift, and then hydrogenation. The results showed that mixed CO₂/CH₄ proportions of 70/30, 50/50, 30/70, and 10/90% (v/v) yielded 2.4, 4.3, 5.6, and 6.0 L/day of methanol and saved 40, 30, 20, and 5% CO₂, respectively. The optimum condition, balancing methanol yield and CO₂ consumption, used a CO₂/CH₄ ratio of 43/57% (v/v), which yielded 4.8 L/day of methanol and saved 27% CO₂ compared with traditional methanol production from methane steam reforming (5 L/day), which consumes no CO₂.

Keywords: carbon capture utilization and storage, pressure swing adsorption, reforming, reverse water gas shift, methanol

Procedia PDF Downloads 187
20730 The Effect of Cross-Curriculum of L1 and L2 on Elementary School Students’ Linguistic Proficiency: To Sympathize with Others

Authors: Reiko Yamamoto

Abstract:

This paper reports on a project to integrate Japanese (as a first language) and English (as a second language) education. This study focuses on the mutual effects of the two languages on the linguistic proficiency of elementary school students. The research team consisted of elementary school teachers and researchers at a university. The participants of the experiment were students between 3rd and 6th grades at an elementary school. The research process consisted of seven steps: 1) specifying linguistic proficiency; 2) developing the cross-curriculum of L1 and L2; 3) forming can-do statements; 4) creating a self-evaluation questionnaire; 5) executing the self-evaluation questionnaire at the beginning of the school year; 6) instructing L1 and L2 based on the curriculum; and 7) executing the self-evaluation questionnaire at the beginning of the next school year. In Step 1, the members of the research team brainstormed ways to specify elementary school students' linguistic proficiency as it can be observed in various scenes. It was revealed that the teachers evaluate their students' linguistic proficiency not only on the basis of the students' utterances but also on their non-verbal communication abilities. This led to the idea that the competency for understanding others' minds through the use of physical movement or bodily senses in communication in L1 – to sympathize with others – can be transferred to that same competency in communication in L2. Based on the specification of the linguistic proficiency that L1 and L2 have in common, a cross-curriculum of L1 and L2 was developed in Step 2. In Step 3, can-do statements based on the curriculum were formed, building on the action-oriented approach of the Common European Framework of Reference for Languages (CEFR) used in Europe. A self-evaluation questionnaire consisting of the main can-do statements was given to the students between 3rd grade and 6th grade at the beginning of the school year (Steps 4 and 5), and all teachers gave L1 and L2 instruction based on the curriculum to the students for one year (Step 6). The same questionnaire was given to the students at the beginning of the next school year (Step 7). The results of statistical analysis confirmed the enhancement of the students' linguistic proficiency. This verified the validity of developing the cross-curriculum of L1 and L2 and adapting it in elementary school. It was concluded that elementary school students do not distinguish between L1 and L2, and that they just try to understand others' minds through physical movement or senses in any language.

Keywords: cross curriculum of L1 and L2, elementary school education, language proficiency, sympathy with others

Procedia PDF Downloads 438
20729 Protein and Lipid Extraction from Microalgae with Ultrasound Assisted Osmotic Shock Method

Authors: Nais Pinta Adetya, H. Hadiyanto

Abstract:

Microalgae have potential to be utilized as food and natural colorants. Microalgae consist of three main components: lipid, protein, and carbohydrate. The crucial step in producing lipid and protein from microalgae is extraction. Microalgae have a high water content (70-90%), which makes drying the biomass energy-intensive and can also degrade the lipid and protein in the microalgae. Extraction of lipid from wet biomass can take place efficiently when the microalgae cells are disrupted by the osmotic shock method. In this study, the osmotic shock method was integrated with ultrasound in order to maximize the extraction yield of lipid and protein from wet Spirulina sp. biomass. The study consisted of two steps: an osmotic shock treatment of the wet biomass, followed by ultrasound-assisted extraction. NaCl solution was used as the osmotic agent, at concentrations of 10%, 20%, and 30%. Extraction was conducted at 40°C for 20 minutes with an ultrasound frequency of 40 kHz. The optimal yields of protein (2.7%) and lipid (38%) were achieved at an osmotic agent concentration of 20%.

Keywords: extraction, lipid, osmotic shock, protein, ultrasound

Procedia PDF Downloads 359
20728 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, i.e. the residual cross-section of uncharred timber reduced additionally by a so-called zero-strength layer. For standard fire exposure, Eurocode 5 gives a fixed value of the zero-strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations for the zero-strength layer are given. Thus designers often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero-strength layer can be on the unsafe side for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero-strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero-strength layer and charring rates are determined based on numerical simulations performed with a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner's kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved levels of charring rates and a new thickness of the zero-strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero-strength-layer thickness and key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and experimental research in the future.
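
For readers unfamiliar with the reduced cross-section method, the sketch below evaluates the Eurocode 5 relations for standard fire exposure, where the effective reduction depth is d_ef = d_char,n + k0·d0 with d0 = 7 mm (the zero-strength layer the study re-examines for parametric fires); the beam dimensions and notional charring rate are example values.

```python
# Eurocode 5 reduced cross-section method, standard fire exposure;
# beam size, exposure time and charring rate below are assumptions.
def effective_section(b, h, t, beta_n=0.7, d0=7.0):
    """b, h in mm; t in minutes; beta_n = notional charring rate, mm/min.
    Assumes an unprotected beam charring on three sides."""
    d_char_n = beta_n * t              # notional charring depth
    k0 = min(t / 20.0, 1.0)            # zero-strength layer builds up over 20 min
    d_ef = d_char_n + k0 * d0          # effective reduction depth
    return b - 2.0 * d_ef, h - d_ef    # effective width and height

print(effective_section(b=140.0, h=400.0, t=30.0))   # -> (84.0, 372.0)
```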

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 168
20727 Sonochemically Prepared Non-Noble Metal Oxide Catalysts for Methane Catalytic Combustion

Authors: Przemyslaw J. Jodlowski, Roman J. Jedrzejczyk, Damian K. Chlebda, Anna Dziedzicka, Lukasz Kuterasinski, Anna Gancarczyk, Maciej Sitarz

Abstract:

The aim of this study was to obtain highly active catalysts based on non-noble metal oxides supported on zirconia, prepared via a sonochemical method. In this study, the influence of the addition of stabilizers during the preparation step was examined. The final catalysts were characterized by methods such as X-ray diffraction (XRD), nitrogen adsorption, X-ray fluorescence (XRF), scanning electron microscopy (SEM) equipped with an energy-dispersive X-ray spectrometer (EDS), transmission electron microscopy (TEM) and µRaman spectroscopy. The proposed preparation method yielded uniformly dispersed metal-oxide nanoparticles on the support's surface. The catalytic activity of the prepared catalyst samples was measured in a methane combustion reaction. The activity of the catalysts prepared by the sonochemical method was considerably higher than that of their counterparts prepared by the incipient wetness method.

Keywords: methane catalytic combustion, nanoparticles, non-noble metals, sonochemistry

Procedia PDF Downloads 217
20726 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform

Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy

Abstract:

A bio-sensing method, based on the plasmonic properties of gold nano-islands, has been developed for the detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-visible spectrum depends on the size and shape of the gold nanoparticles as well as on the surrounding environment. By adsorbing or binding various chemical entities, the gold plasmon band will shift toward longer wavelengths, and the shift is proportional to the concentration. Exosomes transport cargoes of molecules and genetic materials to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is ultracentrifugation, which is not a practical method in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at the clinical level. The new sensing protocol, instead of antibodies, makes use of a specially synthesized polypeptide (Vn96) to capture and quantify the exosomes from different media by binding the heat shock proteins of exosomes. The protocol has been established and optimized by using a glass substrate in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of the gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on two major steps: the molar ratio of streptavidin to biotin-PEG-Vn96 and, in the final step, the capture of exosomes by the biotin-PEG-Vn96 complex. The microfluidic device designed for the sensing of exosomes consists of a glass substrate sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands, and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes, and, thereby, exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes of different origins.

Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing

Procedia PDF Downloads 172
20725 Semantic Indexing Improvement for Textual Documents: Contribution of Classification by Fuzzy Association Rules

Authors: Mohsen Maraoui

Abstract:

With the aim of improving natural language processing applications such as information retrieval, machine translation, and lexical disambiguation, we focus on a statistical approach to semantic indexing for multilingual text documents based on conceptual network formalism. We propose to use this formalism as an indexing language to represent the descriptive concepts and their weighting. These concepts represent the content of the document. Our contribution is based on two steps. In the first step, we propose the extraction of index terms using the multilingual lexical resource EuroWordNet (EWN). In the second step, we pass from the representation of index terms to the representation of index concepts through the conceptual network formalism. This network is generated using the EWN resource and passes through a classification step based on an association rules model (in an attempt to discover the non-taxonomic, or contextual, relations between the concepts of a document). These relations are latent relations buried in the text and carried by the semantic context of the co-occurrence of concepts in the document. Our proposed indexing approach can be applied to text documents in various languages because it is based on a linguistic method adapted to the language through a multilingual thesaurus. Next, we apply the same statistical process regardless of the language in order to extract the significant concepts and their associated weights. We show that the proposed indexing approach provides encouraging results.

Keywords: concept extraction, conceptual network formalism, fuzzy association rules, multilingual thesaurus, semantic indexing

Procedia PDF Downloads 141
20724 Graph Planning Based Composition for Adaptable Semantic Web Services

Authors: Rihab Ben Lamine, Raoudha Ben Jemaa, Ikram Amous Ben Amor

Abstract:

This paper proposes a graph planning technique for adaptable semantic Web Service composition. First, we use an ontology-based context model to extend Web Service descriptions with information about the most suitable context for their use. Then, we transform the composition problem into a semantic, context-aware graph planning problem in order to build the optimal service composition based on the user's context. The construction of the planning graph is based on semantic, context-aware Web Service discovery, which at each step allows the most suitable Web Services to be added in terms of the semantic compatibility between the service parameters and their context similarity with the user's context. In the backward search step, semantic and contextual similarity scores are used to find the best composed Web Service list. Finally, in the ranking step, a score is calculated for each best solution and a set of ranked solutions is returned to the user.
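
As a toy illustration of the forward-expansion idea (without the semantic and contextual scoring that the paper adds), the sketch below grows a planning graph layer by layer: a service becomes applicable once all of its inputs are available, and its outputs feed the next proposition layer. The service names and signatures are invented.

```python
# Toy forward expansion of a planning graph for service composition;
# the paper's semantic/context similarity scoring is omitted.
services = {
    "geocode": ({"address"}, {"coords"}),
    "weather": ({"coords"}, {"forecast"}),
    "advise":  ({"forecast"}, {"advice"}),
}

def plan_layers(available, goal):
    layers = []
    while not goal <= available:
        layer = [name for name, (ins, outs) in services.items()
                 if ins <= available and not outs <= available]
        if not layer:
            return None                 # goal unreachable from the inputs
        layers.append(layer)
        for name in layer:
            available |= services[name][1]
    return layers

print(plan_layers({"address"}, {"advice"}))
# -> [['geocode'], ['weather'], ['advise']]
```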

Keywords: semantic web service, web service composition, adaptation, context, graph planning

Procedia PDF Downloads 521
20723 Sensor Registration in Multi-Static Sonar Fusion Detection

Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin

Abstract:

In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors, including the distance error and angle error of each sonar in detection, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target. The target position detected by each sonar is expressed in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate the sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. For an underwater acoustic environment, a MATLAB simulation is carried out, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but an increase in random noise slows down its convergence rate. The LS method is an improvement on the RTQC method and is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.
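
A minimal sketch of the offline idea, under the simplifying assumption that reference target positions are available during calibration and that each sonar's bias is constant: the least-squares estimate of a constant bias from noisy residuals reduces to their mean, which is also what RTQC-style averaging computes. All numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
true_r  = rng.uniform(500.0, 2000.0, size=50)      # reference ranges, m
true_th = rng.uniform(-np.pi, np.pi, size=50)      # reference bearings, rad
dr, dth = 25.0, 0.02                               # unknown biases to recover

meas_r  = true_r  + dr  + rng.normal(0.0, 5.0,   50)   # biased, noisy ranges
meas_th = true_th + dth + rng.normal(0.0, 0.005, 50)   # biased, noisy bearings

# For a constant-bias model the least-squares estimate of the bias is
# the mean residual, which RTQC-style averaging also converges to.
est_dr, est_dth = np.mean(meas_r - true_r), np.mean(meas_th - true_th)
print(f"range bias ~ {est_dr:.1f} m, angle bias ~ {est_dth:.4f} rad")
```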

Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem

Procedia PDF Downloads 169
20722 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing

Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor

Abstract:

This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on an SBC (Single Board Computer), such as the Raspberry Pi 2, in such a way that it can be implemented in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs the detection of moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), as well as a shadow removal algorithm using physically based features, followed by morphological operations. In the third step, vehicle detection is performed by using edge detection algorithms, and vehicle following is done through Kalman filters. The last step of the proposed algorithm registers the vehicle passing and performs their classification according to their areas. A self-sustaining system is proposed, powered by batteries and photovoltaic solar panels, with data transmission done through GPRS (General Packet Radio Service), eliminating the need for external cabling, which will facilitate its deployment and relocation to any place where it could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
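
A compressed OpenCV sketch of the detection stage (steps 1 and 2) is given below; the region-of-interest coordinates, the area threshold, and the file name "traffic.mp4" are placeholders, and the Kalman tracking and registration steps are omitted.

```python
import cv2

cap = cv2.VideoCapture("traffic.mp4")              # placeholder video file
bgs = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[200:480, :]                        # step 1: processing zone
    mask = bgs.apply(roi)                          # step 2: GMM-based BGS
    mask[mask == 127] = 0                          # MOG2 flags shadows as 127
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # classification by blob area (a stand-in for the paper's step 4)
    vehicles = [c for c in contours if cv2.contourArea(c) > 500]
cap.release()
```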

Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing

Procedia PDF Downloads 322
20721 Bitplanes Image Encryption/Decryption Using Edge Map (SSPCE Method) and Arnold Transform

Authors: Ali A. Ukasha

Abstract:

Data security is needed in data transmission, storage, and communication. The single-step parallel contour extraction (SSPCE) method is used to create an edge map, serving as a key image, from a gray-level or binary image. The XOR operation is performed between the key image and each bit plane of the original image in order to change the image pixel values. The Arnold transform is used to change the locations of the image pixels as an image scrambling process. Experiments have demonstrated that the proposed algorithm can fully encrypt a 2D gray-level image and completely reconstruct it without any distortion. They also show that the analyzed algorithm has extremely strong security against attacks such as salt-and-pepper noise and JPEG compression. This proves that a gray-level image can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
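
A small sketch of the two core operations, bit-plane XOR with a key image and Arnold scrambling, on an N×N image; the random key stands in for the SSPCE edge map, which this sketch does not reproduce.

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map on a square image: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    out = img
    for _ in range(iterations):
        x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        nxt = np.empty_like(out)
        nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

rng = np.random.default_rng(1)
image = rng.integers(0, 256, (64, 64), dtype=np.uint8)
key = rng.integers(0, 2, (64, 64), dtype=np.uint8)      # stand-in key image

# XOR the key with every bit plane, then reassemble and scramble
planes = [((image >> b) & 1) ^ key for b in range(8)]
cipher = arnold(sum(p << b for b, p in enumerate(planes)), iterations=5)
# decryption: iterate the inverse Arnold map, then XOR the planes again
```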

Keywords: SSPCE method, image compression, salt-and-pepper attacks, bit-plane decomposition, Arnold transform, lossless image encryption

Procedia PDF Downloads 497
20720 Developing Methodology of Constructing the Unified Action Plan for External and Internal Risks in University

Authors: Keiko Tamura, Munenari Inoguchi, Michiyo Tsuji

Abstract:

When disasters occur, delegation of authority is commonly carried out in order to raise the speed of decision-making and response. This tendency is particularly evident when a department or branch of the organization is separated by physical distance from the main body; however, there are some issues to think about. If the department or branch is too dependent on the head office under normal conditions, it might feel lost in the disaster response operation when it faces the situation. To avoid this problem, an organization should decide before the disaster how to delegate authority and who accepts responsibility for what. This paper discusses a method that presents an approach for executing the delegation-of-authority process, implementing authorities, management by objectives, and preparedness plans and agreements. The paper introduces examples of efforts by three research centers of Niigata University, Japan, to arrange organizations capable of taking the necessary actions for disaster response. Each center has a quality all its own. One is a center for research on conserving the crested ibis (Toki in Japanese), an endangered species. Another is a marine biological laboratory. The third one is unique because of the old-growth forests maintained as its experimental field. These research centers are on Sado Island, which is located off the coast of Niigata Prefecture, is Japan's second-largest island after Okinawa, and is known for possessing a rich history and culture. It takes a 65-minute jetfoil (high-speed ferry) ride to get to Sado Island from the mainland, so the three centers can easily become isolated at the time of a disaster. This sense of urgency encourages the three centers in the process of organizational restructuring to enhance resilience. The research team from the risk management headquarters offers the following procedure. Step 1: Offer the hazard scenario based on scientific evidence. Step 2: Design a risk management organization for the disaster response function. Step 3: Conduct a participatory approach to build consensus on the overarching objectives. Step 4: Construct the unified operational action plan for the three centers. Step 5: Simulate how to respond in each phase, based on an understanding of the various phases of the timeline of a disaster. Step 6: Document results to measure performance and facilitate corrective action. This paper shows the results of verifying the outputs and effects.

Keywords: delegation of authority, disaster response, risk management, unified command

Procedia PDF Downloads 125
20719 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from various homicides differ; some are complete and some incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are camouflaged by dumping into a river, some are buried, some are burned to destroy the evidence, and so on. If a corpse is incomplete, personal identification becomes difficult because some tissues and bones are destroyed. The most precise method for determining the gender of a corpse from skeletal remains is DNA identification. However, this method is costly and takes longer, so other identification techniques are used instead. The first widely used technique is examining the features of the bones. In general, evidence from the corpse, such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists require observation skills in order to classify the differences between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine gender fairly accurately (apparently with an accuracy rate of 90% or more), its crucial disadvantage is that only some positions on the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. Therefore, the skeletal remains to be used have to be complete. The other technique widely used for gender determination in forensic science and archaeology is skeletal measurement. The advantage of this method is that it can use several positions on one piece of bone and can be applied even if the bones are not complete. In this study, classification and cluster analysis are applied to this technique, including the Kth Nearest Neighbor classification, Classification Tree, Ward Linkage Cluster, K-means Cluster, and Two-Step Cluster. The data contain 507 individuals and 9 skeletal measurements (diameter measurements), and the performance of the five methods is investigated by considering the apparent error rate (APER). The results from this study indicate that the Two-Step Cluster and Kth Nearest Neighbor methods seem suitable for determining gender from human skeletal remains because they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the Classification Tree, Ward Linkage Cluster, and K-means Cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using the holdout procedure or misclassification costs, and different methods may lead to different conclusions.
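
The APER used in the study is simply the training-set error rate; a hedged sketch with scikit-learn shows its computation for the k-nearest-neighbour classifier, with random placeholder data standing in for the 507 individuals and 9 diameter measurements.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(507, 9))          # placeholder skeletal measurements
y = rng.integers(0, 2, size=507)       # 0 = female, 1 = male (synthetic)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
aper = np.mean(knn.predict(X) != y)    # apparent error rate: error on the
print(f"APER = {aper:.2%}")            # same data used to fit the model
# As the abstract notes, APER is optimistic; a holdout estimate is sounder.
```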

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 252
20718 Parameter Estimation of Gumbel Distribution with Maximum-Likelihood Based on Broyden Fletcher Goldfarb Shanno Quasi-Newton

Authors: Dewi Retno Sari Saputro, Purnami Widyaningsih, Hendrika Handayani

Abstract:

Extreme data in a series of observations can occur due to unusual circumstances. Such data can provide important information that cannot be provided by other data, so their existence needs to be investigated further. One method for obtaining extreme data is the block maxima method. The distribution of extreme data sets taken with the block maxima method is called the extreme value distribution, which here is the Gumbel distribution with two parameters. The exact maximum likelihood (ML) parameter estimates of the Gumbel distribution are difficult to determine, so a numerical approach is necessary. The purpose of this study was to determine the parameter estimates of the Gumbel distribution with the quasi-Newton BFGS method. The quasi-Newton BFGS method is a numerical method for unconstrained nonlinear optimization, so it can be used for parameter estimation of the Gumbel distribution, whose distribution function has a double exponential form. The quasi-Newton BFGS method is a development of Newton's method. Newton's method uses the second derivative to calculate the parameter value changes in each iteration. Newton's method is then modified with the addition of a step length to provide a guarantee of convergence when the second derivative requires complex calculations. In the quasi-Newton BFGS method, Newton's method is further modified by updating an approximation of the second derivative in each iteration. The parameter estimation of the Gumbel distribution by a numerical approach using the quasi-Newton BFGS method is done by calculating the parameter values that maximize the likelihood function; this requires the gradient vector and the Hessian matrix. This research combines theory and application, drawing on several journals and textbooks. The results of this study are the quasi-Newton BFGS algorithm and the estimates of the Gumbel distribution parameters. The estimation method is then applied to daily rainfall data in Purworejo District to estimate the distribution parameters. The estimates indicate that the high rainfall that occurred in Purworejo District decreased in intensity and that the range of rainfall also decreased.
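
A compact sketch of the same estimation task using SciPy's BFGS implementation: minimize the Gumbel negative log-likelihood n·log β + Σ(zᵢ + e^(−zᵢ)) with z = (x−μ)/β. The simulated data below stand in for the Purworejo rainfall series.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.gumbel(loc=50.0, scale=12.0, size=365)   # synthetic block maxima

def neg_log_lik(params, x):
    mu, log_beta = params
    beta = np.exp(log_beta)            # parametrize to keep beta > 0
    z = (x - mu) / beta
    return x.size * log_beta + np.sum(z + np.exp(-z))

res = minimize(neg_log_lik,
               x0=[data.mean(), np.log(data.std())],
               args=(data,), method="BFGS")
mu_hat, beta_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.2f}, beta = {beta_hat:.2f}")   # close to 50 and 12
```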

Keywords: parameter estimation, Gumbel distribution, maximum likelihood, Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton

Procedia PDF Downloads 324
20717 An Impregnated Active Layer Mode of Solution Combustion Synthesis as a Tool for the Solution Combustion Mechanism Investigation

Authors: Zhanna Yermekova, Sergey Roslyakov

Abstract:

Solution combustion synthesis (SCS) is a unique method that has proved itself many times to be an effective and efficient approach for the versatile synthesis of a variety of materials. It has significant advantages such as a relatively simple handling process, high rates of product synthesis, mixing of the precursors on a molecular level, and, as a result, the fabrication of nanoproducts. Nowadays, the overwhelming majority of solution combustion investigations are performed through volume combustion synthesis (VCS), where the entire liquid precursor is heated until the combustion self-initiates throughout the volume. A smaller number of experiments is devoted to the steady-state self-propagating mode of SCS. Under this regime, the precursor solution is dried to a gel-like medium, and the gel substance is then locally ignited. In such a case, a combustion wave propagates in a self-sustaining mode, as in conventional solid combustion synthesis. Even less attention is given to the impregnated-active-layer (IAL) mode of solution combustion. The IAL approach implies that the solution combustion of the precursors is initiated on the surface of, or inside, a third substance. This work aims to emphasize the underestimated role of the impregnated-active-layer mode of solution combustion synthesis for fundamental studies of combustion mechanisms. It also serves the purpose of popularizing the technical terms and clarifying the differences between them. To do so, the solution combustion synthesis of the γ-FeNi (PDF#47-1417) alloy was accomplished in a short (seconds) one-step reaction of the metal precursors with hexamethylenetetramine (HMTA) fuel. The idea of a special role for Ni in the alloy formation process was suggested and confirmed with a specially organized set of experiments. The first set of experiments was conducted in the conventional steady-state self-propagating mode of SCS. An alloy was synthesized as a single monophasic product. In two other experiments, the synthesis was divided into two independent processes, which is possible under the IAL mode of solution combustion. The sequence of the process was changed according to the equations describing Experiments A and B below. Experiment A: Step 1. Fe(NO₃)₃*9H₂O + HMTA = FeO + gas products; Step 2. FeO + Ni(NO₃)₂*6H₂O + HMTA = Ni + FeO + gas products. Experiment B: Step 1. Ni(NO₃)₂*6H₂O + HMTA = Ni + gas products; Step 2. Ni + Fe(NO₃)₃*9H₂O + HMTA = Fe₃Ni₂ + traces (Ni + FeO). Based on the IAL experiment results, one can see that the combustion of Fe(NO₃)₃*9H₂O on the surface of Ni leads to alloy formation, while the presence of already-formed FeO does not affect the Ni(NO₃)₂*6H₂O + HMTA reaction in any way, and Ni is the main product of the synthesis.

Keywords: alloy, hexamethylenetetramine, impregnated active layer mode, mechanism, solution combustion synthesis

Procedia PDF Downloads 135
20716 Tool for Analysing the Sensitivity and Tolerance of Mechatronic Systems in Matlab GUI

Authors: Bohuslava Juhasova, Martin Juhas, Renata Masarova, Zuzana Sutova

Abstract:

The article deals with a tool in Matlab GUI form that is designed to analyse the sensitivity and tolerance of a mechatronic system. In the analysed mechatronic system, torque is transferred from the drive to the load through a coupling containing flexible elements. Different methods of control system design are used. The classic form of feedback control is designed using the Naslin method, the modulus optimum criterion and the inverse dynamics method. The cascade form of control is designed based on a combination of the modulus optimum criterion and the symmetric optimum criterion. The sensitivity is analysed on the basis of the absolute and relative sensitivity of the system function to a change in a chosen parameter value of the mechatronic system, as well as of the control subsystem. The tolerance is analysed by determining the range of allowed relative changes of selected system parameters within the region of system stability. The tool allows analysis of the influence of the torsion stiffness, the torsion damping, the inertia moments of the motor and the load, and the controller parameters. The sensitivity and tolerance are monitored in terms of the impact of a parameter change on the response, in the form of the system step response and the system frequency-response logarithmic characteristics. The Symbolic Math Toolbox was used to express the final shape of the analysed system functions. The sensitivity and tolerance are graphically represented as a 2D graph of the sensitivity or tolerance of the system function and as 3D/2D static/interactive graphs of the step/frequency response.
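
For illustration, the relative sensitivity the tool computes is S = (∂T/∂p)·(p/T); the sketch below derives it symbolically (sympy standing in for Matlab's Symbolic Math Toolbox) for an assumed second-order system function, not the actual drive-coupling-load model.

```python
import sympy as sp

s, k, c, J = sp.symbols("s k c J", positive=True)   # Laplace variable, stiffness,
T = k / (J * s**2 + c * s + k)                      # damping, inertia (toy model)

S_rel = sp.simplify(sp.diff(T, k) * k / T)          # relative sensitivity to k
print(S_rel)   # an expression equivalent to s*(J*s + c)/(J*s**2 + c*s + k)
```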

Keywords: mechatronic systems, Matlab GUI, sensitivity, tolerance

Procedia PDF Downloads 433
20715 Investigating the Energy Gap and Wavelength of (AlₓGa₁₋ₓAs)ₘ/(GaAs)ₙ Superlattices in Terms of Material Thickness and Al Mole Fraction Using Empirical Tight-Binding Method

Authors: Matineh Sadat Hosseini Gheidari, Vahid Reza Yazdanpanah

Abstract:

In this paper, we used the empirical tight-binding method (ETBM) with the sp3s* approximation, considering first-nearest neighbors and spin-orbit interactions, to model the superlattice structure (SLS) of (AlₓGa₁₋ₓAs)ₘ/(GaAs)ₙ grown on a GaAs (100) substrate at 300 K. In the next step, we investigated the behavior of the energy gap and wavelength of this superlattice for different thicknesses of the core materials and different Al mole fractions. As a result of this survey, we found that as the Al composition increases, the energy gap of this superlattice trends upward, ranging from 1.42 to 1.63 eV. Also, based on the wavelength range obtained from this superlattice at different Al mole fractions and various thicknesses, a suitable semiconductor can be selected for a specific light-emitting diode (LED) application.

Keywords: energy gap, empirical tight-binding method, light-emitting diode, superlattice, wavelength

Procedia PDF Downloads 206
20714 Numerical Simulation of Two-Dimensional Flow over a Stationary Circular Cylinder Using Feedback Forcing Scheme Based Immersed Boundary Finite Volume Method

Authors: Ranjith Maniyeri, Ahamed C. Saleel

Abstract:

Two-dimensional fluid flow over a stationary circular cylinder is one of the benchmark problems in the field of fluid-structure interaction in computational fluid dynamics (CFD). Motivated by this, in the present work a two-dimensional computational model is developed using an improved version of the immersed boundary method, which combines the feedback forcing scheme of the virtual boundary method with Peskin's regularized delta function approach. Lagrangian coordinates are used to represent the cylinder, and Eulerian coordinates are used to describe the fluid flow. A two-dimensional Dirac delta function is used to transfer quantities between the solid and fluid domains. Further, the continuity and momentum equations governing the fluid flow are solved using a fractional-step-based finite volume method on a staggered Cartesian grid system. The developed code is validated by comparing the values of the drag coefficient obtained for different Reynolds numbers with other researchers' results. Also, through numerical simulations for different Reynolds numbers, the flow behavior is well captured. The stability of the improved version of the immersed boundary method is tested for different values of the feedback forcing coefficients.

Keywords: feedback forcing scheme, finite volume method, immersed boundary method, Navier-Stokes equations

Procedia PDF Downloads 305
20713 Comparing the Embodied Carbon Impacts of a Passive House with the BC Energy Step Code Using Life Cycle Assessment

Authors: Lorena Polovina, Maddy Kennedy-Parrott, Mohammad Fakoor

Abstract:

The construction industry accounts for approximately 40% of total GHG emissions worldwide. In order to limit global warming to 1.5 degrees Celsius, ambitious reductions in the carbon intensity of our buildings are crucial. Passive House presents an opportunity to reduce operational carbon by as much as 90% compared to a traditional building through improved thermal insulation, limited thermal bridging, increased airtightness, and heat recovery. Until recently, Passive House design was mainly concerned with meeting energy demands without considering embodied carbon. As buildings become more energy-efficient, embodied carbon becomes more significant. The main objective of this research is to calculate the embodied carbon impact of a Passive House and compare it with the BC Energy Step Code (ESC). British Columbia is committed to increasing the energy efficiency of buildings through the ESC, which targets net-zero energy-ready buildings by 2032. However, there is a knowledge gap in the embodied carbon impacts of more energy-efficient buildings, in particular Part 3 construction. In this case study, life cycle assessments (LCA) are performed on a Part 3 multi-unit residential building in Victoria, BC. The actual building is not constructed to the Passive House standard; however, for comparison, the building envelope and mechanical systems are designed to comply with the Passive House criteria as well as with Steps 1 and 4 of the BC Energy Step Code (ESC). OneClick LCA is used to perform the LCA of the case studies. Several strategies are also proposed to minimize the total carbon emissions of the building. The assumption is that there will not be significant differences in embodied carbon between a Passive House and a Step 4 building, owing to the building envelope.

Keywords: embodied carbon, energy modeling, energy step code, life cycle assessment

Procedia PDF Downloads 148
20712 Determination of LS-DYNA MAT162 Material Input Parameters for Low Velocity Impact Analysis of Layered Composites

Authors: Mustafa Albayrak, Mete Onur Kaman, Ilyas Bozkurt

Abstract:

In this study, the material parameters necessary to conduct progressive damage analysis of layered composites under low-velocity impact were determined using the MAT162 material module in the LS-DYNA program. The MAT162 material module, based on the Hashin failure criterion, requires 34 parameters in total. Some of these parameters were obtained directly from dynamic and quasi-static mechanical tests, and the remainder were calibrated and determined by comparing numerical and experimental results. Woven glass/epoxy was used as the composite material, and it was produced by the vacuum infusion method. In the numerical model, the composites are modeled as three-dimensional and layered. The acquisition of the MAT162 material module parameters, which enable progressive damage analysis, is presented in detail and step by step, and the methods for selecting the parameters are explained. Numerical data consistent with the experimental results are presented in graphs.

Keywords: composite impact, finite element simulation, progressive damage analysis, LS-DYNA, MAT162

Procedia PDF Downloads 106
20711 TRACE/FRAPTRAN Analysis of Kuosheng Nuclear Power Plant Dry-Storage System

Authors: J. R. Wang, Y. Chiang, W. Y. Li, H. T. Lin, H. C. Chen, C. Shih, S. W. Chen

Abstract:

The dry-storage systems of nuclear power plants (NPPs) in Taiwan have become one of the major safety concerns. Two steps are considered in this study. The first step is the verification of TRACE using the VSC-17 experimental data. The results of TRACE were similar to the VSC-17 data, indicating that TRACE has respectable accuracy in the simulation and analysis of dry-storage systems. The next step is the application of TRACE to the dry-storage system of the Kuosheng NPP (BWR/6). Kuosheng NPP is the second BWR NPP of the Taiwan Power Company. In order to address the storage of spent fuel, the Taiwan Power Company developed a new dry-storage system for Kuosheng NPP. In this step, the dry-storage system model of Kuosheng NPP was established with TRACE. Then, a steady-state simulation of this model was performed, and the TRACE results were compared with the Kuosheng NPP data. Finally, this model was used to perform the safety analysis of the Kuosheng NPP dry-storage system. In addition, FRAPTRAN was used to calculate the transient performance of the fuel rods.

Keywords: BWR, TRACE, FRAPTRAN, dry-storage

Procedia PDF Downloads 519
20710 Study of Evapotranspiration for Pune District

Authors: Ranjeet Sable, Mahotsavi Patil, Aadesh Nimbalkar, Prajakta Palaskar, Ritu Sagar

Abstract:

Knowing the exact amount of water used by various crops in different climatic conditions is a necessary step in the design, planning, and management of irrigation schemes, water resources, and the scheduling of irrigation systems. Evaporation and transpiration are together called evapotranspiration: water loss from plants during photosynthesis is called transpiration, and the conversion of water into its gaseous state is called evaporation. To calculate evapotranspiration correctly, the chosen method should be suitable, require minimal climatic data, and be applicable over a wide range of climatic conditions. In hydrology, multiple correlation and regression are generally used to develop relationships between three or more hydrological variables when the dependence between them is known. This research work studies various methods for the calculation of evapotranspiration and selects a reasonable and suitable one for the Pune region (Maharashtra state), since field methods are very costly, time-consuming, and do not give appropriate results if suitable conditions are not maintained. Observations recorded at Pune meteorological stations are used to calculate evapotranspiration with the Radiation Method (RAD), Modified Penman Method (MPM), Thornthwaite Method (THW), Blaney-Criddle (BCL), Christiansen Equation (CNM), and Hargreaves Method (HGM), of which the Hargreaves and Thornthwaite methods are temperature-based. The performance of all these methods is compared with the Modified Penman Method, and the method showing the least variation from the standard Modified Penman Method (MPM) is selected as the suitable one. Evapotranspiration values are estimated on a monthly basis. The comparative analysis in this research also supports the selection of methods that depend only on the raw data available in case of missing data.
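
For concreteness, the two temperature-based estimators compared in the study can be written in a few lines; the sketch below uses the common FAO-56 forms of the Hargreaves and Blaney-Criddle equations, with example inputs that are assumptions rather than Pune station records (Ra and p must come from standard tables for the site's latitude).

```python
import math

def hargreaves(t_mean, t_max, t_min, ra):
    """FAO-56 Hargreaves: reference ET in mm/day, Ra in mm/day."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(t_max - t_min)

def blaney_criddle(t_mean, p):
    """Blaney-Criddle: p = mean daily % of annual daytime hours."""
    return p * (0.46 * t_mean + 8.13)

# example inputs for a warm month (assumed values, not station data)
print(f"Hargreaves ET0:     {hargreaves(28.0, 35.0, 21.0, ra=15.0):.2f} mm/day")
print(f"Blaney-Criddle ET0: {blaney_criddle(28.0, p=0.27):.2f} mm/day")
```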

Keywords: Blaney-Criddle, Christiansen equation, evapotranspiration, Hargreaves method, precipitation, Penman method, water use efficiency

Procedia PDF Downloads 271
20709 Acoustic Echo Cancellation Using Different Adaptive Algorithms

Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil

Abstract:

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today's telecommunication systems. The signal interference caused by acoustic echo is distracting to both users and reduces the quality of the communication. In this paper, we review different adaptive filtering techniques for reducing this unwanted echo and examine the behavior of the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), New Varying Step Size LMS (NVSSLMS) and Recursive Least Square (RLS) algorithms in increasing communication quality.
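
As a point of reference for the algorithms compared, here is a minimal NLMS echo canceller on synthetic data; the echo-path impulse response, filter length, and step size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 5000, 64                        # samples, adaptive filter taps
x = rng.normal(size=N)                 # far-end (loudspeaker) signal
h_true = rng.normal(size=L) * np.exp(-0.1 * np.arange(L))  # toy echo path
d = np.convolve(x, h_true)[:N] + 0.01 * rng.normal(size=N) # microphone signal

w = np.zeros(L)                        # adaptive filter weights
mu, eps = 0.5, 1e-6                    # step size and regularization
for n in range(L - 1, N):
    u = x[n - L + 1:n + 1][::-1]       # latest L input samples, newest first
    e = d[n] - w @ u                   # error = mic signal - echo estimate
    w += mu * e * u / (u @ u + eps)    # NLMS: step normalized by input power

mis = np.linalg.norm(w - h_true) / np.linalg.norm(h_true)
print(f"relative misalignment after adaptation: {mis:.3f}")
```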

Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)

Procedia PDF Downloads 80
20708 Hypergraph for System of Systems Modeling

Authors: Haffaf Hafid

Abstract:

Hypergraphs, after being used to model the structural organization of Systems of Systems (SoS) at the macroscopic level, have recently been generalized as a powerful representation at different stages of complex system modelling. In this paper, we first describe different applications of hypergraph theory and, step by step, introduce multilevel modeling of SoS by integrating Constraint Programming Languages (CSP), dealing with an engineering system reconfiguration strategy. As an application, we present an A.C.T Terminal controlled by a set of Intelligent Automated Vehicles.

Keywords: hypergraph model, structural analysis, bipartite graph, monitoring, system of systems, reconfiguration analysis, hypernetwork

Procedia PDF Downloads 488
20707 In situ Investigation of PbI₂ Precursor Film Formation and Its Subsequent Conversion to Mixed Cation Perovskite

Authors: Dounya Barrit, Ming-Chun Tang, Hoang Dang, Kai Wang, Detlef-M. Smilgies, Aram Amassian

Abstract:

Several deposition methods have been developed for perovskite film preparation. The one-step spin-coating process has emerged as the more popular option thanks to its ability to produce films of different compositions, including mixed-cation and mixed-halide perovskites, which can stabilize the perovskite phase and produce phases with the desired band gap. The two-step method, however, is not understood in great detail. There is a significant need and opportunity to adapt the two-step process to mixed-cation and mixed-halide perovskites, but this requires a deeper understanding of the two-step conversion process, for instance when using different cations and mixtures thereof, to produce high-quality perovskite films with uniform composition. In this work, we demonstrate using in situ investigations that the conversion of PbI₂ to perovskite is largely dictated by the solvation state of the PbI₂ precursor film. Using time-resolved grazing-incidence wide-angle X-ray scattering (GIWAXS) measurements during spin coating of PbI₂ from a DMF (dimethylformamide) solution, we show the film formation to be a sol-gel process involving three PbI₂-DMF solvate complexes: a disordered precursor (P₀) and ordered precursors (P₁, P₂), prior to PbI₂ formation at room temperature after 5 minutes. The ordered solvates are highly metastable and eventually disappear, but we show that performing the conversion from P₀, P₁, P₂ or PbI₂ can lead to very different conversion behaviors and outcomes. We compare conversion behaviors using MAI (methylammonium iodide), FAI (formamidinium iodide) and mixtures of these cations, and show that conversion can occur spontaneously and quite rapidly at room temperature without requiring further thermal annealing. We confirm this by demonstrating improvements in the morphology and microstructure of the resulting perovskite films, using techniques such as in situ quartz crystal microbalance with dissipation monitoring, SEM and XRD.

Keywords: in situ GIWAXS, lead iodide, mixed cation, perovskite solar cell, sol-gel process, solvate phase

Procedia PDF Downloads 148
20706 Reliability Analysis of Variable Stiffness Composite Laminate Structures

Authors: A. Sohouli, A. Suleman

Abstract:

This study focuses on the reliability analysis of variable stiffness composite laminate structures to investigate their potential structural improvement compared to conventional (straight-fiber) composite laminate structures. A computational framework was developed that consists of a deterministic design step and a reliability analysis. The optimization part is Discrete Material Optimization (DMO), and the reliability of the structure is computed by Monte Carlo Simulation (MCS) after using the Stochastic Response Surface Method (SRSM). The design driver in the deterministic optimization is maximum stiffness, while the optimization method incorporates certain manufacturing constraints to attain industrial relevance. These manufacturing constraints are that the change of orientation between adjacent patches cannot be too large and that the number of successive plies of a particular fiber orientation should not be too high. Variable stiffness composites may be manufactured by Automated Fiber Placement (AFP) machines, which provide consistent quality with good production rates. However, laps and gaps are the most important challenges in steering fibers, and they affect the performance of the structures. In this study, the optimal curved fiber paths in each layer of the composites are designed in the first step by DMO, and then the reliability analysis is applied to investigate the sensitivity of the structure, with different standard deviations, compared to the straight-fiber-angle composites. The random variables are the material properties and the loads on the structures. The results show that the variable stiffness composite laminate structures are much more reliable, even for high standard deviations of the material properties, than the conventional composite laminate structures. The reason is that variable stiffness composite laminates allow tailoring of the stiffness and provide the possibility of adjusting the stress and strain distributions favorably in the structures.
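
To make the reliability step concrete, here is a toy Monte Carlo estimate of a failure probability from a limit-state function g = R − S; the lognormal strength and normal load parameters are illustrative stand-ins for the randomized laminate properties and loads, and the surrogate (SRSM) stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                                   # Monte Carlo samples
R = rng.lognormal(mean=np.log(900.0), sigma=0.08, size=n)   # strength, MPa
S = rng.normal(loc=600.0, scale=60.0, size=n)               # load effect, MPa

pf = np.mean(R - S < 0.0)                        # failure when g = R - S < 0
print(f"P_f = {pf:.2e}, reliability = {1.0 - pf:.5f}")
```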

Keywords: material optimization, Monte Carlo simulation, reliability analysis, response surface method, variable stiffness composite structures

Procedia PDF Downloads 520