Search results for: Hybrid evolutionary optimization algorithm
57 Fuzzy Power Controller Design for Purdue University Research Reactor-1
Authors: Oktavian Muhammad Rizki, Appiah Rita, Lastres Oscar, Miller True, Chapman Alec, Tsoukalas Lefteri H.
Abstract:
The Purdue University Research Reactor-1 (PUR-1) is a 10 kWth pool-type research reactor located at Purdue University's West Lafayette campus. The reactor was recently upgraded to use entirely digital instrumentation and control systems; however, there is currently no automated control system to regulate the power in the reactor. We propose a fuzzy logic controller, as a form of digital twin complementing the existing digital instrumentation system, to monitor and stabilize power control using existing experimental data. This work assesses the feasibility of a power controller based on a Fuzzy Rule-Based System (FRBS) through modelling and simulation in MATLAB. The controller uses power error and reactor period as inputs and generates reactivity insertion as output. The reactivity insertion is then converted to control rod height using a logistic function based on information from the recorded experimental reactor control rod data. To test the capability of the proposed fuzzy controller, a point-kinetic reactor model is utilized, based on actual PUR-1 operating conditions and a Monte Carlo N-Particle simulation of the core to numerically compute the neutronics parameters of reactor behavior. The Point Kinetic Equation (PKE) was employed to model the dynamic characteristics of the research reactor, since it efficiently captures the interactions between the spatially and temporally varying input and output variables. The controller is demonstrated computationally for various cases: startup, power maneuver, and shutdown. The test results show that the implemented fuzzy controller can satisfactorily regulate the reactor power to follow the demand power without compromising nuclear safety measures.
Keywords: Fuzzy logic controller, power controller, reactivity, research reactor.
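As a rough illustration of the simulation core described above, the sketch below integrates the one-delayed-group point kinetics equations with a simple error-driven reactivity adjustment standing in for the fuzzy rule base; all parameter values (delayed fraction, decay constant, generation time, controller gain) are illustrative assumptions, not PUR-1 data.

```python
# Minimal point-kinetics sketch with an error-driven reactivity controller.
# Assumptions: one delayed-neutron group and illustrative parameter values;
# the paper's fuzzy rule base is replaced by a simple proportional stand-in.
import numpy as np

beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)
dt, t_end = 0.001, 60.0
n, C = 1.0, beta / (lam * Lam)        # equilibrium precursor concentration for n = 1
rho, demand = 0.0, 2.0                # start critical, demand 2x power
k_p = 1e-5                            # hypothetical controller gain

for step in range(int(t_end / dt)):
    err = demand - n                                   # power error drives reactivity
    rho = np.clip(rho + k_p * err, -0.003, 0.003)      # bounded (well below beta) as a safety stand-in
    dn = ((rho - beta) / Lam * n + lam * C) * dt       # point-kinetics equations
    dC = (beta / Lam * n - lam * C) * dt
    n, C = n + dn, C + dC

print(f"final relative power: {n:.3f} (demand {demand})")
```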
56 Named Entity Recognition using Support Vector Machine: A Language Independent Approach
Authors: Asif Ekbal, Sivaji Bandyopadhyay
Abstract:
Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, question answering systems and others. This paper reports on the development of an NER system for Bengali and Hindi using a Support Vector Machine (SVM). Though this state-of-the-art machine learning technique has been widely applied to NER in several well-studied languages, its use for Indian languages (ILs) is very new. The system makes use of the different contextual information of the words, along with a variety of features that are helpful in predicting the four different named entity (NE) classes: Person name, Location name, Organization name and Miscellaneous name. We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve different NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). In addition, we have manually annotated 150K wordforms of the Bengali news corpus, developed from the web archive of a leading Bengali newspaper. We have also developed an unsupervised algorithm in order to generate lexical context patterns from a part of the unlabeled Bengali news corpus. Lexical patterns have been used as features of the SVM in order to improve the system performance. The NER system has been tested with gold standard test sets of 35K and 60K tokens for Bengali and Hindi, respectively. Evaluation results have demonstrated recall, precision, and f-score values of 88.61%, 80.12%, and 84.15%, respectively, for Bengali and 80.23%, 74.34%, and 77.17%, respectively, for Hindi. Results show an improvement in the f-score of 5.13% with the use of context patterns. A statistical analysis (ANOVA) is also performed to compare the performance of the proposed NER system with that of the existing HMM-based system for both languages.
Keywords: Named Entity (NE), Named Entity Recognition (NER), Support Vector Machine (SVM), Bengali, Hindi.
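A minimal sketch of the token-classification setup described above, assuming scikit-learn; the toy sentence, feature names and context window are illustrative stand-ins for the paper's Bengali/Hindi corpora and feature set.

```python
# SVM-based NER with contextual window features (illustrative toy data).
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

sentences = [[("Rabindranath", "B-PER"), ("Tagore", "I-PER"), ("visited", "O"),
              ("Kolkata", "B-LOC")]]

def token_features(sent, i):
    word = sent[i][0]
    return {
        "word": word.lower(),
        "is_title": word.istitle(),          # capitalization cue
        "suffix3": word[-3:].lower(),        # morphological cue
        "prev": sent[i - 1][0].lower() if i > 0 else "<s>",               # left context
        "next": sent[i + 1][0].lower() if i < len(sent) - 1 else "</s>",  # right context
    }

X = [token_features(s, i) for s in sentences for i in range(len(s))]
y = [tag for s in sentences for _, tag in s]

model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit(X, y)
print(model.predict(X))
```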
55 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering
Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause
Abstract:
In recent years, object detection has gained much attention as an encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. However, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet. Furthermore, this problem is also one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Since capturing conditions vary, the algorithms need to consider a variety of possible environmental factors, as the colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variations in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm, with pre-processing methods, that reduces the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show promising results for the proposed approach in comparison with existing methods.
Keywords: Image processing, Illumination equalization, Shadow filtering, Object detection, Colour models, Image segmentation.
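The sketch below illustrates the kind of pre-processing chain the abstract describes (YCrCb conversion followed by thresholding, erosion/dilation and contour extraction), assuming OpenCV; the Otsu threshold and kernel size are stand-ins, since the paper's method avoids fixed parameters.

```python
# YCrCb conversion plus morphological cleanup (illustrative parameters).
import cv2
import numpy as np

img = cv2.imread("wood_stack.jpg")            # hypothetical input image
if img is None:                               # fall back to a synthetic image so the sketch runs
    img = np.zeros((120, 160, 3), np.uint8)
    cv2.rectangle(img, (40, 30), (120, 90), (40, 60, 170), -1)   # reddish "object"

ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
y, cr, cb = cv2.split(ycrcb)

# Threshold the chrominance channel; Otsu picks the level from the data
_, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

kernel = np.ones((5, 5), np.uint8)
mask = cv2.erode(mask, kernel)                # remove small speckle noise
mask = cv2.dilate(mask, kernel)               # restore object extent

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} candidate object regions")
```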
54 Computer-Assisted Management of Building Climate and Microgrid with Model Predictive Control
Authors: Vinko Lešić, Mario Vašak, Anita Martinčević, Marko Gulin, Antonio Starčić, Hrvoje Novak
Abstract:
Accounting for 40% of total world energy consumption, building systems are developing into technically complex large energy consumers suitable for the application of sophisticated power management approaches that can largely increase energy efficiency and even make buildings active energy market participants. A centralized control system for building heating and cooling managed by economically optimal model predictive control shows promising results, with an estimated 30% increase in energy efficiency. The research is focused on the implementation of such a method in a case study performed on two floors of our faculty building, with corresponding wireless sensor data acquisition, remote heating/cooling units and a central climate controller. Building walls are mathematically modeled with the corresponding material types, surface shapes and sizes. The models are then exploited to predict thermal characteristics and changes in different building zones. Exterior influences such as environmental conditions and weather forecasts, occupant behavior and comfort demands are all taken into account in deriving price-optimal climate control. Finally, a DC microgrid with photovoltaics, a wind turbine, a supercapacitor, batteries and fuel cell stacks is added to make the building a unit capable of active participation in a price-varying energy market. The computational burden of applying model predictive control to such a complex system is relaxed through a hierarchical decomposition of the microgrid and climate control: the former is designed as the higher hierarchical level with pre-calculated price-optimal power flow control, and the latter as the lower-level control responsible for ensuring thermal comfort and exploiting the optimal supply conditions enabled by microgrid energy flow management. Such an approach is expected to enable the inclusion of more complex building subsystems in order to further increase energy efficiency.
Keywords: Energy-efficient buildings, Hierarchical model predictive control, Microgrid power flow optimization, Price-optimal building climate control.
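A minimal receding-horizon sketch of price-optimal climate control for a single zone, assuming the cvxpy package and a first-order thermal model; all coefficients, limits and prices are invented for illustration, not identified from the faculty building.

```python
# One-zone, one-shot horizon of a price-optimal MPC problem (illustrative).
import cvxpy as cp
import numpy as np

H = 24                       # horizon (hours)
a, b = 0.9, 0.2              # zone thermal inertia and heater gain (assumed)
T_out = 5.0                  # constant outdoor temperature for the sketch
price = np.random.default_rng(0).uniform(0.1, 0.4, H)  # hypothetical tariff

u = cp.Variable(H, nonneg=True)   # heating power per hour
T = cp.Variable(H + 1)            # zone temperature trajectory

constraints = [T[0] == 18.0]
for k in range(H):
    constraints += [T[k + 1] == a * T[k] + (1 - a) * T_out + b * u[k],
                    T[k + 1] >= 20.0, T[k + 1] <= 24.0,   # comfort band
                    u[k] <= 30.0]                          # actuator limit

problem = cp.Problem(cp.Minimize(price @ u), constraints)  # energy cost
problem.solve()
print("optimal cost:", round(problem.value, 2))
```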
53 Cluster Based Energy Efficient and Fault Tolerant n-Coverage in Wireless Sensor Network
Authors: D. Satish Kumar, N. Nagarajan
Abstract:
Coverage preservation and extending the network lifetime are the primary issues in wireless sensor networks. Due to the large variety of applications, coverage is subject to a wide range of interpretations. Some applications necessitate that each point in the area be observed by only one sensor, while other applications may require that each point be covered by at least n sensors (n>1) to achieve fault tolerance. Sensor scheduling in existing Transparent and non-Transparent relay mode (T-NT) Mobile Multi-Hop relay networks fails to guarantee area coverage with minimal energy consumption and fault tolerance. To overcome these issues, a Cluster-based Energy Competent n-coverage scheme (CEC n-coverage scheme) is proposed to ensure full coverage of a monitored area while saving energy. The CEC n-coverage scheme uses a novel sensor scheduling scheme based on the n-density and the remaining energy of each sensor to determine the state of all the deployed sensors, either active or asleep, as well as the state durations. Hence, it is attractive to trigger the minimum number of sensors able to ensure area coverage and to turn off redundant sensors to save energy and therefore extend the network lifetime. In addition, a minimum number of active sensors is determined based on the required degree of coverage and its level. A variety of numerical parameters are computed using the ns2 simulator for the existing (T-NT) Mobile Multi-Hop relay networks and the CEC n-coverage scheme. Simulation results show that the CEC n-coverage scheme provides better performance in terms of energy efficiency, a 6.61% reduction in fault-recovery time (in seconds), and the percentage of active sensors required to guarantee area coverage, compared to the existing algorithm.
Keywords: Wireless Sensor network, Mobile Multi-Hop relay networks, n-coverage, Cluster based Energy Competent, Transparent and non-Transparent relay modes, Fault Tolerant, sensor scheduling.
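The toy sketch below captures the scheduling idea in its simplest form: activate sensors in decreasing remaining-energy order until every grid point is n-covered. The ranking rule is a simplification of the paper's n-density/energy criterion, and the geometry and energy values are made up.

```python
# Greedy energy-aware n-coverage scheduling (toy illustration).
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 10, (30, 2))     # sensor positions in a 10x10 field
energy = rng.uniform(0, 1, 30)            # remaining energy per sensor
R, n = 3.0, 2                             # sensing radius and coverage degree

xs, ys = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))
points = np.column_stack([xs.ravel(), ys.ravel()])
cover = np.zeros(len(points), dtype=int)
active = []

for s in np.argsort(-energy):             # highest remaining energy first
    if (cover >= n).all():
        break
    covered = np.linalg.norm(points - sensors[s], axis=1) <= R
    if (cover[covered] < n).any():        # activate only if it adds coverage
        cover[covered] += 1
        active.append(s)

print(f"{len(active)} of {len(sensors)} sensors active; "
      f"n-covered points: {(cover >= n).mean():.0%}")
```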
52 C-LNRD: A Cross-Layered Neighbor Route Discovery for Effective Packet Communication in Wireless Sensor Network
Authors: K. Kalaikumar, E. Baburaj
Abstract:
One of the problems to be addressed in wireless sensor networks is the issue of cross-layer communication. A cross-layer architecture shares information across the layers, ensuring Quality of Service (QoS). With this shared information, the MAC protocol adapts its functionality, such as route selection, to the changeable sensor network environment. However, time slot assignment and the neighbour route selection time duration for the cross layer have not been addressed. The time-varying physical layer communication over the cross layer causes a high traffic load in the sensor network. Although the traffic load can be reduced using a cross-layer optimization procedure, the computational cost is high. To improve communication efficacy in the sensor network, a self-determined time slot based Cross-Layered Neighbour Route Discovery (C-LNRD) method is presented in this paper. In the presented work, the initial process is to discover the route in the sensor network using Dynamic Source Routing based Medium Access Control (MAC) sub-layers. This process considers MAC layer operation with dynamic route neighbour table discovery. Then, the discovered route path for packet communication employs a Broad Route Distributed Time Slot Assignment method on the Cross-Layered Sensor Network system. Broad Route refers to time slotting over route paths of varying length. During packet communication in this sensor network, the transmission of packets is adjusted over different times with varying ranges to control the traffic rate. Finally, a Rayleigh fading model is developed in C-LNRD to identify the performance of the sensor network communication structure. The main task of Rayleigh fading is to measure the power level of each communication under the MAC sub-layer. The minimized power level helps to reduce the computational cost of packet communication in the sensor network. Experiments are conducted on factors such as power level in packet communication, neighbour route discovery time, and information (i.e., packet) propagation speed.
Keywords: Medium access control, neighbour route discovery, wireless sensor network, Rayleigh fading, distributed time slot assignment
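A small sketch of the Rayleigh fading ingredient: with a Rayleigh-distributed envelope, received power is exponentially distributed, so an outage probability can be checked against theory. The link-budget numbers are illustrative, not taken from the ns2 setup.

```python
# Rayleigh fading: received power per packet and the resulting outage rate.
import numpy as np

rng = np.random.default_rng(7)
n_packets = 10_000
mean_power = 1.0                           # average received power (normalized)

# Rayleigh envelope from two Gaussian components; power = envelope^2
h = (rng.normal(size=n_packets) + 1j * rng.normal(size=n_packets)) / np.sqrt(2)
rx_power = mean_power * np.abs(h) ** 2

threshold = 0.1                            # hypothetical decoding threshold
outage = (rx_power < threshold).mean()
print(f"outage probability: {outage:.3f} (theory {1 - np.exp(-threshold):.3f})")
```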
51 Toward Indoor and Outdoor Surveillance Using an Improved Fast Background Subtraction Algorithm
Authors: A. El Harraj, N. Raissouni
Abstract:
The detection of moving objects from video image sequences is very important for object tracking, activity recognition, and behavior understanding in video surveillance. The most used approach for moving object detection/tracking is background subtraction. Many approaches have been suggested for background subtraction, but these are sensitive to illumination changes, and the solutions proposed to bypass this problem are time-consuming. In this paper, we propose a robust yet computationally efficient background subtraction approach and mainly focus on the ability to detect moving objects in dynamic scenes, for possible applications in monitoring complex and restricted-access areas, where moving and motionless persons must be reliably detected. It consists of three main phases: establishing invariance to illumination changes, background/foreground modeling, and morphological analysis for noise removal. We handle illumination changes using Contrast Limited Adaptive Histogram Equalization (CLAHE), which limits the intensity of each pixel to a user-determined maximum. Thus, it mitigates the degradation due to scene illumination changes and improves the visibility of the video signal. Initially, the background and foreground images are extracted from the video sequence. Then, the background and foreground images are separately enhanced by applying CLAHE. In order to form multi-modal backgrounds, we model each channel of a pixel as a mixture of K Gaussians (K=5) using a Gaussian Mixture Model (GMM). Finally, we post-process the resulting binary foreground mask using morphological erosion and dilation transformations to remove possible noise. For the experimental test, we used a standard dataset to challenge the efficiency and accuracy of the proposed method on a diverse set of dynamic scenes.
Keywords: Video surveillance, background subtraction, Contrast Limited Histogram Equalization, illumination invariance, object tracking, object detection, behavior understanding, dynamic scenes.
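A sketch of the pipeline using standard OpenCV building blocks: CLAHE for illumination invariance, a Gaussian-mixture background model, then morphological cleanup. OpenCV's MOG2 stands in for the per-channel K=5 GMM, and the file name and parameters are illustrative.

```python
# CLAHE + GMM background subtraction + morphological cleanup (sketch).
import cv2
import numpy as np

cap = cv2.VideoCapture("surveillance.mp4")        # hypothetical input video
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
backsub = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = clahe.apply(gray)                      # limit local contrast stretch
    mask = backsub.apply(gray)                    # GMM foreground mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # erosion then dilation
    # mask now holds the cleaned moving-object regions for this frame
cap.release()
```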
50 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing
Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima
Abstract:
Reverse osmosis (RO) membranes have been widely used for desalination to purify water for drinking and other purposes. Although at present most RO membranes have no resistance to chlorine, chlorine-resistant membranes are being developed. Therefore, direct chlorine treatment or chlorine washing will be an option in preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation control is possible by using chlorine washing, expensive pretreatment for particle removal can be eliminated or simplified. The objective of this study was to determine the effective hypochlorite washing conditions required for controlling biofilm formation and inorganic particle accumulation on an RO membrane in a continuous flow channel with RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution, and fluorescence intensity was used to quantify biofilm on the membrane surface. After 48 h of soaking the membranes in high fouling potential waters, the fluorescence intensity decreased from 470 to 0 using the following washing conditions: 10 mg/L chlorine concentration, 2 times/d washing interval, and 30 min washing time. The chlorine concentration required to control biofilm formation decreased as the washing interval (1–4 times/d) or the washing time (1–30 min) increased, within the tested concentration range (0.5–10 mg/L). For the sample solutions used in the study, a 10 mg/L chlorine concentration with a 2 times/d interval and 5 min washing time were required for biofilm control. The optimum chlorine washing conditions obtained from the soaking experiments proved to be applicable also in controlling biofilm formation in continuous flow experiments. Moreover, chlorine washing employed in controlling biofilm with suspended particles resulted in lower amounts of organic (0.03 mg/cm2) and inorganic (0.14 mg/cm2) deposits on the membrane than for sample water without chlorine washing (0.14 mg/cm2 and 0.33 mg/cm2, respectively). The amount of biofilm formed was reduced by 79% by continuous washing with 10 mg/L free chlorine concentration, and the inorganic accumulation decreased by 58%, to levels similar to that of pure water with kaolin (0.17 mg/cm2) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation, and that the inhibition of biofilm growth can almost completely prevent further particle accumulation. In addition, an effective hypochlorite washing condition that can control both biofilm formation and particle accumulation could be achieved.
Keywords: Biofouling control, hypochlorite, reverse osmosis, washing condition optimization.
49 Performance Analysis of HSDPA Systems using Low-Density Parity-Check (LDPC) Coding as Compared to Turbo Coding
Authors: K. Anitha Sheela, J. Tarun Kumar
Abstract:
HSDPA is a new feature introduced in the Release-5 specifications of the 3GPP WCDMA/UTRA standard to realize higher data rates together with lower round-trip times. Moreover, the HSDPA concept offers an outstanding improvement in packet throughput and also significantly reduces the packet call transfer delay as compared to the Release-99 DSCH. Until now, the HSDPA system has used turbo coding, which is among the best coding techniques for approaching the Shannon limit. However, the main drawbacks of turbo coding are high decoding complexity and high latency, which make it unsuitable for some applications like satellite communications, since the transmission distance itself introduces latency due to the limited speed of light. Hence, in this paper it is proposed to use LDPC coding in place of turbo coding for the HSDPA system, which decreases the latency and decoding complexity, although LDPC coding increases the encoding complexity. Though the complexity of the transmitter increases at the NodeB, the end user gains in terms of receiver complexity and bit error rate. In this paper, the LDPC encoder is implemented using a sparse parity check matrix H to generate a codeword, and the belief propagation algorithm is used for LDPC decoding. Simulation results show that in LDPC coding the BER drops sharply as the number of iterations increases with a small increase in Eb/No, which is not possible in turbo coding. Also, the same BER was achieved using fewer iterations, and hence the latency and receiver complexity have decreased for LDPC coding. HSDPA increases the downlink data rate within a cell to a theoretical maximum of 14 Mbps, with 2 Mbps on the uplink. The changes that HSDPA enables include better quality and more reliable, more robust data services. In other words, while realistic data rates are only a few Mbps, the actual quality and number of users achieved will improve significantly.
Keywords: AMC, HSDPA, LDPC, WCDMA, 3GPP.
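As a tiny illustration of LDPC decoding, the sketch below uses a hard-decision bit-flipping decoder, a simpler stand-in for the belief propagation algorithm named above; the 4x6 parity-check matrix is a toy code, not the HSDPA configuration.

```python
# Toy LDPC decoding via bit flipping: flip the bit in the most failed checks.
import numpy as np

H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])    # every column weight 2, pairwise overlap <= 1

def bit_flip_decode(r, H, max_iter=20):
    r = r.copy()
    for _ in range(max_iter):
        syndrome = (H @ r) % 2
        if not syndrome.any():        # all parity checks satisfied
            break
        fails = H.T @ syndrome        # failed-check count per bit
        r[np.argmax(fails)] ^= 1      # flip the most suspicious bit
    return r

codeword = np.zeros(6, dtype=int)     # the all-zero word is always a codeword
received = codeword.copy()
received[3] ^= 1                      # inject a single bit error
print("decoded:", bit_flip_decode(received, H))
```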
48 Optimal Image Compression Based on Sign and Magnitude Coding of Wavelet Coefficients
Authors: Mbainaibeye Jérôme, Noureddine Ellouze
Abstract:
Wavelet transforms are very powerful tools for image compression. One of their advantages is the provision of both spatial and frequency localization of image energy. However, wavelet transform coefficients are defined by both a magnitude and a sign. While algorithms exist for efficiently coding the magnitude of the transform coefficients, they are not efficient for coding the sign. It is generally assumed that there is no compression gain to be obtained from coding the sign. Only recently have some authors begun to investigate the sign of wavelet coefficients in image coding. Some authors have assumed that the sign information bit of wavelet coefficients may be encoded with an estimated probability of 0.5; the same assumption concerns the refinement information bit. In this paper, we propose a new method for Separate Sign Coding (SSC) of wavelet image coefficients. The sign and the magnitude of wavelet image coefficients are examined to obtain their online probabilities. We use scalar quantization, in which the information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also examined. We show that the sign information and the refinement information may be encoded with a probability of approximately 0.5 only after about five bit planes. Two maps are separately entropy encoded: the sign map and the magnitude map. The refinement information of whether the wavelet coefficient belongs to the lower or the upper sub-interval of the uncertainty interval is also entropy encoded. An algorithm was developed and simulations were performed on three standard grey-scale images: Lena, Barbara and Cameraman. Five scales are computed using the biorthogonal 9/7 wavelet filter bank. The obtained results are compared to the JPEG2000 standard in terms of peak signal-to-noise ratio (PSNR) for the three images and in terms of subjective (visual) quality. It is shown that the proposed method outperforms JPEG2000. The proposed method is also compared to other codecs in the literature and demonstrates its performance in terms of PSNR.
Keywords: Image compression, wavelet transform, sign coding, magnitude coding.
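The empirical question behind separate sign coding can be probed directly: among the coefficients that first become significant in each bit plane, how balanced are the signs? The sketch below does this with PyWavelets on a synthetic image standing in for Lena/Barbara/Cameraman.

```python
# Per-bit-plane sign statistics of wavelet coefficients (illustrative).
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)  # smooth-ish field

coeffs = pywt.wavedec2(image, "bior4.4", level=5)       # 9/7-style biorthogonal basis
flat = np.concatenate([a.ravel() for a in [coeffs[0]] +
                       [d for lvl in coeffs[1:] for d in lvl]])

T = 2 ** np.floor(np.log2(np.abs(flat).max()))          # top bit-plane threshold
for plane in range(6):
    newly = (np.abs(flat) >= T) & (np.abs(flat) < 2 * T)  # newly significant here
    if newly.any():
        p_pos = (flat[newly] > 0).mean()
        print(f"plane {plane}: P(sign=+) = {p_pos:.2f} over {newly.sum()} coeffs")
    T /= 2
```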
47 A Computational Study of Very High Turbulent Flow and Heat Transfer Characteristics in Circular Duct with Hemispherical Inline Baffles
Authors: Dipak Sen, Rajdeep Ghosh
Abstract:
This paper presents a computational study of steady-state, three-dimensional, very high turbulent flow and heat transfer characteristics in a constant-temperature-surfaced circular duct fitted with 90° hemispherical inline baffles. The computations are based on the realizable k-ε model with standard wall functions using the finite volume method, and the SIMPLE algorithm has been implemented. The computational study is carried out for Reynolds numbers, Re, ranging from 80000 to 120000, a Prandtl number, Pr, of 0.73, and pitch ratios, PR, of 1, 2, 3, 4 and 5 based on the hydraulic diameter of the channel, the hydrodynamic entry length, the thermal entry length and the test section. Ansys Fluent 15.0 software has been used to solve the flow field. The study reveals that a circular pipe with baffles has a higher Nusselt number and friction factor compared to the smooth circular pipe without baffles. The maximum Nusselt number and friction factor are obtained for PR=5 and PR=1, respectively. The Nusselt number increases as the pitch ratio increases in the range of study; however, the friction factor decreases up to PR=3, after which it becomes almost constant up to PR=5. The thermal enhancement factor increases with increasing pitch ratio but decreases slightly with Reynolds number in the range of study and becomes almost constant at higher Reynolds numbers. The computational results reveal that the optimum thermal enhancement factor of the 90° inline hemispherical baffle is about 1.23 for pitch ratio 5 at Reynolds number 120000. They also show that the optimum pitch ratio at which the baffles should be installed in such very high turbulent flows is 5. The results show that pitch ratio and Reynolds number play an important role in both fluid flow and heat transfer characteristics.
Keywords: Friction factor, heat transfer, turbulent flow, circular duct, baffle, pitch ratio.
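For reference, a common way to evaluate the thermal enhancement factor at equal pumping power is TEF = (Nu/Nu0)/(f/f0)^(1/3); the sketch below assumes Dittus-Boelter and Blasius smooth-duct baselines, and the sample Nu/f values are invented, not the paper's data.

```python
# Thermal enhancement factor against smooth-duct correlations (illustrative).
def thermal_enhancement_factor(nu, f, re, pr=0.73):
    nu0 = 0.023 * re**0.8 * pr**0.4       # Dittus-Boelter smooth-duct Nusselt number
    f0 = 0.316 * re**-0.25                # Blasius smooth-duct friction factor
    return (nu / nu0) / (f / f0) ** (1.0 / 3.0)

# Hypothetical baffled-duct results at Re = 120000:
print(round(thermal_enhancement_factor(nu=480.0, f=0.08, re=120000), 2))
```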
46 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death
Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar
Abstract:
In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender or socioeconomic specific reasons, detecting cardiac irregularities at an early stage followed by quick and correct treatment is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time domain, frequency domain and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters like the standard deviation of node-to-node intervals (SDNN), the square root of the mean of the squared differences between adjacent RR intervals (RMSSD) and the mean of R-to-R intervals (mean RR) in the time domain; very low-frequency (VLF), low-frequency (LF) and high-frequency (HF) power and the low-to-high frequency ratio (LF/HF) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values of all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is thereby verified, with an accuracy of 95%, that the proposed algorithm can identify the mortality risk of a patient one hour before death. The identification of a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
Keywords: Early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death.
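A minimal sketch of the time-domain HRV features and the k-NN step, assuming RR-interval arrays in milliseconds; the two synthetic record groups stand in for the normal-sinus-rhythm and SCD recordings.

```python
# Time-domain HRV features (mean RR, SDNN, RMSSD) feeding a k-NN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hrv_time_features(rr):
    diffs = np.diff(rr)
    return [rr.mean(),                       # mean RR
            rr.std(ddof=1),                  # SDNN
            np.sqrt(np.mean(diffs ** 2))]    # RMSSD

rng = np.random.default_rng(3)
healthy = [rng.normal(800, 50, 300) for _ in range(10)]   # variable rhythm
scd = [rng.normal(700, 12, 300) for _ in range(10)]       # reduced variability

X = [hrv_time_features(rr) for rr in healthy + scd]
y = [0] * 10 + [1] * 10                                   # 0 = healthy, 1 = SCD

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("predicted class:", clf.predict([hrv_time_features(rng.normal(705, 13, 300))]))
```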
45 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow requires an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data for those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's law. Implementing morphometric data to reconstruct the branching pattern while simultaneously applying Murray's law to every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize the construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers, according to the diameter-defined Strahler system, are assigned. During the simulation, we used the averaged flow rate for each order to predict the pressure drop, and once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for 3 cardiac cycles are presented and compared to the clinical data.
Keywords: Blood flow, Morphometric data, Vascular tree, Strahler ordering system.
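Murray's law, the key rearrangement rule mentioned above, relates parent and daughter radii at a bifurcation by r_p^3 = r_d1^3 + r_d2^3. The sketch below grows a symmetric toy tree with it; the root radius and tree depth are arbitrary illustration values.

```python
# Symmetric vessel tree grown with Murray's law (toy illustration).
def murray_daughter_radius(r_parent, n_daughters=2):
    # symmetric split: n * r_d^3 = r_p^3
    return r_parent / n_daughters ** (1.0 / 3.0)

r = 1.0                       # root vessel radius, arbitrary units
for order in range(5):
    print(f"order {order}: radius {r:.3f}")
    r = murray_daughter_radius(r)
```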
44 Multistage Condition Monitoring System of Aircraft Gas Turbine Engine
Authors: A. M. Pashayev, D. D. Askerov, C. Ardil, R. A. Sadiqov, P. S. Abdullayev
Abstract:
Research shows that the application of probability-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation Gas Turbine Engine (GTE), when the flight information is fuzzy, limited and uncertain, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. For this purpose, fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To make a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research on the changes in skewness and kurtosis coefficient values shows that the distributions of GTE work parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE work parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Research on the changes in correlation coefficient values also shows their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is offered for model choice. When the information is sufficient, a recurrent algorithm for aviation GTE technical condition identification (using Hard Computing technology) is offered, based on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the technical condition of a new operating aviation engine was estimated.
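A small sketch of the skewness/kurtosis monitoring idea: track both coefficients over sliding windows of an engine work parameter; the synthetic exhaust-gas-temperature series and window size are illustrative assumptions, not flight data.

```python
# Sliding-window skewness and kurtosis of an engine parameter (illustrative).
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(5)
egt = 600 + rng.normal(0, 5, 2000)           # baseline exhaust-gas temperature
egt[1200:] += np.linspace(0, 15, 800)        # slow drift imitating degradation

window = 200
for start in range(0, len(egt) - window + 1, 400):
    w = egt[start:start + window]
    print(f"t={start:4d}: skew={skew(w):+.2f}  kurtosis={kurtosis(w):+.2f}")
```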
43 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms
Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano
Abstract:
In this article, we deal with a variant of the classical course timetabling problem that has practical application in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. In particular, a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model; thus, solving it with a general-purpose solver is viable for small instances only, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches, we performed extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases achieves an improvement on the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
Keywords: Heuristic, MIP model, Remedial course, School, Timetabling.
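A toy sketch of the activation side of the problem (which teaching units to switch on under a budget, with one prerequisite link), assuming the PuLP package; the units, costs and quality scores are invented, and the paper's full MIP with timetabling constraints is far richer.

```python
# Tiny training-offer selection MIP (illustrative data; requires PuLP).
import pulp

units = ["algebra1", "algebra2", "grammar", "essay"]
cost = {"algebra1": 4, "algebra2": 5, "grammar": 3, "essay": 4}
quality = {"algebra1": 6, "algebra2": 8, "grammar": 4, "essay": 5}
prereq = [("algebra2", "algebra1")]        # algebra2 requires algebra1
budget = 10

prob = pulp.LpProblem("training_offer", pulp.LpMaximize)
x = pulp.LpVariable.dicts("activate", units, cat="Binary")

prob += pulp.lpSum(quality[u] * x[u] for u in units)          # teaching quality
prob += pulp.lpSum(cost[u] * x[u] for u in units) <= budget   # funds constraint
for later, earlier in prereq:
    prob += x[later] <= x[earlier]                            # prerequisite link

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([u for u in units if x[u].value() == 1])
```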
42 Power Performance Improvement of 500W Vertical Axis Wind Turbine with Salient Design Parameters
Authors: Young-Tae Lee, Hee-Chang Lim
Abstract:
This paper presents the performance characteristics of a Darrieus-type vertical axis wind turbine (VAWT) with NACA airfoil blades. The performance of a Darrieus-type VAWT can be characterized by torque and power. There are various parameters affecting the performance, such as chord length, helical angle, pitch angle and rotor diameter. To estimate the optimum shape of a Darrieus-type wind turbine in accordance with various design parameters, we examined the aerodynamic characteristics and the separated flow occurring in the vicinity of the blade, the interaction between the flow and the blade, and the torque and power characteristics derived from them. For flow analysis, flow variations were investigated based on the unsteady RANS (Reynolds-averaged Navier-Stokes) equations. A sliding mesh algorithm was employed in order to consider the rotational effect of the blade. To obtain more realistic results, we conducted experiments and numerical analysis at the same time for the three-dimensional shape. In addition, several parameters (chord length, rotor diameter, pitch angle, and helical angle) were considered to find the optimum shape design and the characteristics of the interaction with the ambient flow. Since the NACA airfoil used in this study shows significant changes in the magnitude of lift and drag depending on the angle of attack, a rotor with low drag, long chord length and short diameter shows a high power coefficient in the low tip speed ratio (TSR) range. On the contrary, in the high TSR range drag becomes high; hence, the short-chord, long-diameter rotor produces a high power coefficient. When the pitch angle at which the airfoil points inward equals -2° and the helical angle equals 0°, the Darrieus-type VAWT generates maximum power.
Keywords: Darrieus wind turbine, VAWT, NACA airfoil, performance.
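The two headline quantities for such a rotor are the tip speed ratio TSR = ωR/V and the power coefficient Cp = P/(0.5ρAV³); the sketch below evaluates them for invented operating numbers, not the paper's measurements.

```python
# Tip speed ratio and power coefficient of a straight-bladed VAWT (illustrative).
rho = 1.225          # air density, kg/m^3
V = 8.0              # wind speed, m/s
R, height = 0.5, 1.0 # rotor radius and blade height, m
A = 2 * R * height   # swept area of a straight-bladed VAWT, m^2
omega = 40.0         # rotor speed, rad/s
torque = 1.8         # shaft torque, N*m (hypothetical)

tsr = omega * R / V
cp = torque * omega / (0.5 * rho * A * V ** 3)
print(f"TSR = {tsr:.2f}, Cp = {cp:.3f}")
```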
41 Modeling Stress-Induced Regulatory Cascades with Artificial Neural Networks
Authors: Maria E. Manioudaki, Panayiota Poirazi
Abstract:
Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data, such as gene expression, gene deletion and protein-protein interactions, for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the upstream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows that (a) the expression of at least 33 genes over time and under different stress conditions is well predicted by the expression of the top-layer transcription factors, including cases in which the effect of upstream regulators is shifted in time, and (b) at least 6 novel regulatory interactions are identified that were not previously associated with stress-induced changes in gene expression. These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
Keywords: gene modules, artificial neural networks, yeast, stress
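A minimal sketch of the cascade model: predict module gene expression from upstream TF-regulator expression with a feed-forward network, assuming scikit-learn; the expression matrices are random stand-ins for the yeast data.

```python
# Feed-forward ANN mapping TF-regulator expression to module expression.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n_timepoints, n_tf_regulators, n_module_genes = 40, 5, 8

tf_expr = rng.normal(size=(n_timepoints, n_tf_regulators))
W = rng.normal(size=(n_tf_regulators, n_module_genes))
module_expr = np.tanh(tf_expr @ W) + 0.1 * rng.normal(size=(n_timepoints, n_module_genes))

ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(tf_expr[:30], module_expr[:30])           # train on early time points
print("held-out R^2:", round(ann.score(tf_expr[30:], module_expr[30:]), 2))
```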
40 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models
Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu
Abstract:
A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse field of tasks, such as urban planning, military applications, glacier mapping and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Because this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been in widespread use for the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used for the construction of DTMs. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications: a 3D point cloud is created with LiDAR technology by obtaining numerous point data. Recently, however, with developments in image mapping methods, the use of unmanned aerial vehicles (UAVs) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of a DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface and the interpolation method. In this study, the random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
Keywords: DTM, unmanned aerial vehicle, UAV, random, Kriging.
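The random reduction step itself is simple to state in code: keep a fixed fraction of the XYZ points before interpolation. The synthetic cloud below stands in for the UAV photogrammetry data.

```python
# Random reduction of an XYZ point cloud to the paper's density levels.
import numpy as np

rng = np.random.default_rng(4)
cloud = rng.uniform(0, 100, (200_000, 3))       # x, y, z in metres (synthetic)

def reduce_random(points, keep_fraction, rng):
    n_keep = int(len(points) * keep_fraction)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

for frac in (0.75, 0.50, 0.25, 0.05):           # the paper's density levels
    subset = reduce_random(cloud, frac, rng)
    print(f"{frac:.0%}: {len(subset):,} points kept")
```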
39 Investigation of VMAT Algorithms and Dosimetry
Authors: A. Taqaddas
Abstract:
Purpose: The planning and dosimetry of different VMAT algorithms (SmartArc, Ergo++, Autobeam) are compared with IMRT for head and neck cancer (HNC) patients. Modelling was performed to rule out the causes of discrepancies between planned and delivered dose. Methods: Five HNC patients previously treated with IMRT were re-planned with SmartArc (SA), Ergo++ and Autobeam. Plans were compared with each other and against IMRT and evaluated using DVHs for PTVs and OARs, delivery time, monitor units (MU) and dosimetric accuracy. Modelling of control point (CP) spacing, leaf-end separation and MLC/aperture shape was performed to rule out causes of discrepancies between planned and delivered doses. Additionally, estimated arc delivery times, overall plan generation times and the effect of CP spacing and number of arcs on plan generation times were recorded. Results: Single-arc SmartArc plans (SA4d) were generally better than IMRT and double-arc plans (SA2Arcs) in terms of homogeneity and target coverage. Double-arc plans seemed to have a positive role in achieving an improved Conformity Index (CI) and better sparing of some Organs at Risk (OARs) compared to step-and-shoot IMRT (ss-IMRT) and SA4d. Overall, Ergo++ plans achieved the best CI for both PTVs. Dosimetric validation of all VMAT plans without modelling was found to be lower than for ss-IMRT. Total MUs required for delivery were on average 19%, 30%, 10.6% and 6.5% lower than ss-IMRT for SA4d, SA2d (single arc with 2° gantry spacing), SA2Arcs and Autobeam plans, respectively. Autobeam was most efficient in terms of actual treatment delivery times, whereas Ergo++ plans took longest to deliver. Conclusion: Overall, SA single-arc plans on average achieved the best target coverage and homogeneity for both PTVs. SA2Arc plans showed improved CI and sparing of some OARs. Very good dosimetric results were achieved with modelling. Ergo++ plans achieved the best CI. Autobeam resulted in the fastest treatment delivery times.
Keywords: Dosimetry, Intensity Modulated Radiotherapy, Optimization Algorithms, Volumetric Modulated Arc Therapy.
38 An Efficient Motion Recognition System Based on LMA Technique and a Discrete Hidden Markov Model
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Human motion recognition has received extensive attention in recent years due to its importance in a wide range of applications, such as human-computer interaction, intelligent surveillance, augmented reality, content-based video compression and retrieval, etc. However, it is still regarded as a challenging task, especially in realistic scenarios. It can be seen as a general machine learning problem which requires an effective human motion representation and an efficient learning method. In this work, we introduce a descriptor based on the Laban Movement Analysis (LMA) technique, a formal and universal language for human movement, to capture both quantitative and qualitative aspects of movement. We use a Discrete Hidden Markov Model (DHMM) for training and classifying motions. We improve the classification algorithm by proposing two DHMMs for each motion class to process the motion sequence in two different directions, forward and backward. This modification helps avoid the misclassification that can happen when recognizing similar motions. Two experiments were conducted. In the first, we evaluate our method on a public dataset, the Microsoft Research Cambridge-12 Kinect gesture dataset (MSRC-12), which is widely used for evaluating action/gesture recognition methods. In the second experiment, we built a dataset composed of 10 gestures (introduce yourself, waving, dance, move, turn left, turn right, stop, sit down, increase velocity, decrease velocity) performed by 20 persons. The evaluation of the system includes testing the efficiency of our LMA-based descriptor vector with the basic DHMM method and comparing the recognition results of the modified DHMM with the original one. Experimental results demonstrate that our method outperforms most existing methods that use the MSRC-12 dataset, and achieves a near-perfect classification rate on our dataset.
Keywords: Human Motion Recognition, Motion representation, Laban Movement Analysis, Discrete Hidden Markov Model.
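A minimal sketch of the two-direction scoring idea: evaluate an observation sequence with the forward algorithm both as-is and reversed, then combine the scores. For brevity the same toy model is reused for both directions, whereas the paper trains a separate DHMM per direction; all parameters are invented.

```python
# Forward algorithm for a discrete HMM, scored in both sequence directions.
import numpy as np

def log_likelihood(obs, start, trans, emit):
    """Forward algorithm for a discrete HMM, in the probability domain."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return np.log(alpha.sum())

# Toy 2-state model over a 3-symbol alphabet (quantized LMA descriptors)
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.2, 0.8]])
emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])

seq = [0, 1, 1, 2, 2]
score = log_likelihood(seq, start, trans, emit) \
      + log_likelihood(seq[::-1], start, trans, emit)   # reversed-direction score
print("combined log-likelihood:", round(score, 3))
```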
37 Aircraft Gas Turbine Engines Technical Condition Identification System
Authors: A. M. Pashayev, C. Ardil, D. D. Askerov, R. A. Sadiqov, P. S. Abdullayev
Abstract:
In this paper it is shown that the application of probability-statistical methods, especially at the early stage of diagnosing the technical condition of an aviation gas turbine engine (GTE), when the flight information is fuzzy, limited and uncertain, is unfounded. Hence, the efficiency of applying the new Soft Computing technology at these diagnosing stages, using Fuzzy Logic and Neural Network methods, is considered. Fuzzy multiple linear and non-linear models (fuzzy regression equations), obtained on the basis of statistical fuzzy data, are trained with high accuracy. To make a more adequate model of the GTE technical condition, the dynamics of changes in the skewness and kurtosis coefficients are analysed. Research on the changes in skewness and kurtosis coefficient values shows that the distributions of GTE work parameters have a fuzzy character; hence, consideration of fuzzy skewness and kurtosis coefficients is expedient. Investigation of the dynamics of changes in the basic characteristics of GTE work parameters leads to the conclusion that Fuzzy Statistical Analysis is necessary for preliminary identification of the engines' technical condition. Research on the changes in correlation coefficient values also shows their fuzzy character; therefore, the application of Fuzzy Correlation Analysis results is offered for model choice. For checking model adequacy, the Fuzzy Multiple Correlation Coefficient of Fuzzy Multiple Regression is considered. When the information is sufficient, a recurrent algorithm for aviation GTE technical condition identification (using Hard Computing technology) is offered, based on measurements of the input and output parameters of the multiple linear and non-linear generalised models in the presence of measurement noise (a new recursive Least Squares Method (LSM)). The developed GTE condition monitoring system provides stage-by-stage estimation of engine technical conditions. As an application of the given technique, the temperature condition of a new operating aviation engine was estimated.
Keywords: Gas turbine engines, neural networks, fuzzy logic, fuzzy statistics.
36 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning (ML) archetype that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new cases encountered on a daily basis, total deaths registered, and patient deaths per day due to Coronavirus is collected from the World Health Organization (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data are split in an 8:2 ratio for training and testing purposes to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and Linear Regression (LR) algorithms are chosen to study the model performance in the prediction of new COVID-19 cases. From evaluation metrics such as the r-squared value and mean squared error, the statistical performance of the model in predicting new COVID-19 cases is evaluated. RF outperformed the other two ML algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than for the other predictive models used in this study. From the experimental analysis, the RF algorithm can perform more effectively and efficiently in predicting new COVID-19 cases, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.
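A sketch of the model-comparison step on a synthetic daily-cases series, assuming scikit-learn; the WHO data themselves are not reproduced here, and n_estimators=30 mirrors the abstract's n = 30.

```python
# LR vs SVM vs RF on a synthetic daily-cases series with an 8:2 split.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
days = np.arange(425)                                  # 31 Jan 2020 - 31 Mar 2021
cases = 5000 * np.exp(-((days - 300) / 80.0) ** 2) + rng.normal(0, 100, days.size)
X, y = days.reshape(-1, 1), cases

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("LR", LinearRegression()), ("SVM", SVR()),
                    ("RF", RandomForestRegressor(n_estimators=30, random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: R2={r2_score(y_te, pred):.3f}  MSE={mean_squared_error(y_te, pred):.1f}")
```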
35 Tagged Grid Matching Based Object Detection in Wavelet Neural Network
Authors: R. Arulmurugan, P. Sengottuvelan
Abstract:
Object detection using a Wavelet Neural Network (WNN) plays a major role in the analysis of image processing. The existing cluster-based algorithm for co-saliency object detection works on multiple images, but the co-saliency detection results are not suitable for handling multi-scale image objects in a WNN. The existing Super Resolution (SR) scheme for landmark images identifies the corresponding regions in the images and reduces the mismatching rate, but its structure-aware matching criterion does not attend to detecting multiple regions in SR images and fails to improve the object detection rate. To detect objects in high-resolution remote sensing images, a Tagged Grid Matching (TGM) technique is proposed in this paper. The TGM technique consists of three main components: object determination, object searching and object verification in the WNN. First, object determination in the TGM technique specifies the position and size of objects in the current image; specifying the position and size using a hierarchical grid easily determines multiple objects. Second, object searching in the TGM technique is carried out using cross-point searching; the cross-point of the objects is selected to speed up the searching process and reduce the detection time. The final component performs the object verification process in the TGM technique for identifying (i.e., detecting) the dissimilarity of objects in the current frame. The verification process matches the search-result grid points with the stored grid points to easily detect the objects using the Gabor wavelet transform. The implementation of the TGM technique offers a significant improvement in multi-object detection rate, processing time, precision factor and detection accuracy level.
Keywords: Object Detection, Cross-point Searching, Wavelet Neural Network, Object Determination, Gabor Wavelet Transform, Tagged Grid Matching.
34 Optimization of Mechanical Properties of Alginate Hydrogel for 3D Bio-Printing Self-Standing Scaffold Architecture for Tissue Engineering Applications
Authors: Ibtisam A. Abbas Al-Darkazly
Abstract:
In this study, the mechanical properties of alginate hydrogel material for a self-standing 3D scaffold architecture with proper shape fidelity are investigated. An in-lab-built 3D bio-printer using extrusion-based technology is utilized to fabricate 3D alginate scaffold constructs. The pressure, needle speed and stage speed are varied using a computer-controlled system. The experimental results indicate that the concentration of the alginate solution, the calcium chloride (CaCl2) cross-linking concentration and the cross-linking ratios lead to the formation of alginate hydrogel with various gelation states. Besides, the gelling conditions, such as cross-linking reaction time and temperature, also have a significant effect on the mechanical properties of the alginate hydrogel. Various experimental tests, such as material gelation, material spreading and printability tests for filament collapse, as well as a swelling test, were conducted to evaluate the fabricated 3D scaffold constructs. The results indicate that the 3D scaffold fabricated from a composition of 3.5% wt alginate solution prepared in DI water and 1% wt CaCl2 solution with a cross-linking ratio of 7:3 shows good printability and sustains good shape fidelity for more than 20 days, compared to alginate hydrogel prepared in phosphate-buffered saline (PBS). The fabricated self-standing 3D scaffold constructs measured 30 mm × 30 mm, consisted of 4 layers (n = 4) and showed good pore geometry and a clear grid structure after printing. In addition, the percentage change of the swelling degree exhibits high swelling capability with respect to time. The swelling test shows that the geometry of the 3D alginate-scaffold construct and of the macro-pores rarely changed, which indicates the capability of holding the shape fidelity during the incubation period. This study demonstrated that the mechanical and physical properties of alginate hydrogel can be tuned for a 3D bio-printing extrusion-based system to fabricate self-standing 3D scaffold soft structures. This 3D bioengineered scaffold provides the natural microenvironment present in the extracellular matrix of the tissue, which could be seeded with biological cells to generate the desired 3D live tissue model for in vitro and in vivo tissue engineering applications.
Keywords: Biomaterial, calcium chloride, 3D bio-printing, extrusion, scaffold, sodium alginate, tissue engineering.
33 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction
Authors: Talal Alsulaiman, Khaldoun Khashanah
Abstract:
In this paper, we provide a literature survey on the artificial stock market problem (ASM). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the ASM's complexity. The financial market system is a complex system in which the relationship between the micro and macro levels cannot be captured analytically; computational approaches, such as simulation, are expected to comprehend this connection. Agent-based simulation is a simulation technique commonly used to build ASMs. The paper proceeds by discussing the components of the ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-averse assumption in the construction of agents' attributes. Also, the influence of social networks on the development of agent interactions is addressed; network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities are summarized, incorporating approaches such as Genetic Algorithms, Genetic Programming, Artificial Neural Networks and Reinforcement Learning. Moreover, the most common statistical properties (the stylized facts) of stocks that are used for calibration and validation of ASMs are discussed. Besides, we review the major related previous studies and categorize the approaches utilized in them. Finally, research directions and potential research questions are argued: the research directions of ASM may focus on the macro level, by analyzing the market dynamics, or on the micro level, by investigating the wealth distributions of the agents.
Keywords: Artificial stock markets, agent-based simulation, bounded rationality, behavioral finance, artificial neural network, interaction, scale-free networks.
An Identification Method of Geological Boundary Using Elastic Waves
Authors: Masamitsu Chikaraishi, Mutsuto Kawahara
Abstract:
This paper focuses on a technique for identifying the geological boundary of the ground strata ahead of a tunnel excavation site using the first-order adjoint method based on optimal control theory. The geological boundary is defined as the interface between layers of different elastic moduli. In tunnel excavation, it is important to assess the ground conditions ahead of the cutting face in advance, since excavating into weak strata or fault fracture zones may prolong the construction work and endanger workers. A theory for determining the geological boundary of the ground numerically is investigated, employing excavation blasts and their vibration waves as the observation references. Following optimal control theory, a performance function defined as the squared sum of the residuals between computed and observed velocities is minimized; the boundary is determined as the configuration that minimizes this function. The elastic analysis governed by the Navier equation is carried out, treating the ground as an elastic body with linear viscous damping. To identify the boundary, the gradient of the performance function with respect to the geological boundary is calculated using the adjoint equation, and the weighted gradient method is applied as the minimization algorithm. To solve the governing and adjoint equations, the Galerkin finite element method and the average acceleration method are employed for the spatial and temporal discretizations, respectively. Based on the method presented in this paper, the boundaries between three different strata can be identified. For the numerical studies, the Suemune tunnel excavation site is employed. First, the blasting force is identified in order to improve the accuracy of the analysis; the geological boundary is then identified using the estimated blasting force. With this identification procedure, numerical results that closely correspond to the observation data are obtained.
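In symbols (ours, since the abstract gives none), the minimized performance function and the weighted gradient update can be sketched as:

$$J(\boldsymbol{\beta}) = \frac{1}{2}\int_0^T \sum_i \big(v_i(\boldsymbol{\beta},t) - \hat{v}_i(t)\big)^2\,dt, \qquad \boldsymbol{\beta}^{(k+1)} = \boldsymbol{\beta}^{(k)} - \mathbf{W}^{(k)}\,\frac{\partial J}{\partial \boldsymbol{\beta}}\bigg|_{\boldsymbol{\beta}^{(k)}}$$

where $v_i$ and $\hat{v}_i$ are the computed and observed velocities at the observation points, $\boldsymbol{\beta}$ parameterizes the geological boundary, $\mathbf{W}^{(k)}$ is the weighting matrix of the weighted gradient method, and the gradient $\partial J/\partial\boldsymbol{\beta}$ is supplied by the solution of the first-order adjoint equation.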
Keywords: Parameter identification, finite element method, average acceleration method, first order adjoint equation method, weighted gradient method, geological boundary, Navier equation, optimal control theory.
Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique
Authors: Nishant Shrivastava, D. K. Sehgal
Abstract:
In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements computed at the nodes are sufficiently accurate, but the stresses computed at the nodes are not. The accuracy of stress computation in displacement-based FEM models is therefore a matter of concern, particularly for the computational cost of shape optimization in engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. In general, the major optimal stress points are located in the interior of the element, but some points on the element boundary also yield stresses that are fairly accurate compared to the nodal values. It follows that there exist unique points within the element where the stresses have higher accuracy than at other points. The main aim is therefore to develop a generalized procedure for determining the optimal stress locations inside the element as well as on its boundaries, and to verify it against results from numerical experimentation. Results for the quadratic 9-noded serendipity element are presented, and the locations of the distinct optimal stress points are determined both inside the element and on its boundaries. The theoretical results indicate that the optimal stress locations, in local coordinates, are at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation. Five engineering problems were identified, and the numerical results of the 9-noded element were compared with those obtained using 25-noded Lagrangian elements, which were taken as the standard. Root mean square errors were then plotted for the various locations within the element and on its boundaries, and conclusions were drawn. The numerical verification confirms that, in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. Stresses sampled on a boundary edge within the segment enclosed by the ±0.577 points around the midpoint are also very accurate, with very small errors; as the sampling points move away from these locations, the error increases rapidly. It is thus established that there are unique points, including on the element boundary, where the stresses are accurate; these can be utilized in solving various engineering problems and are also useful in shape optimization.
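In the element's natural coordinates $(\xi,\eta)\in[-1,1]^2$, the reported interior sampling points can be written (our notation, reading "distance 0.577 in both directions" as the tensor-product points) as:

$$(\xi,\eta)=(0,0) \quad \text{and} \quad (\xi,\eta)=\Big(\pm\tfrac{1}{\sqrt{3}},\,\pm\tfrac{1}{\sqrt{3}}\Big)\approx(\pm 0.577,\,\pm 0.577)$$

These coincide with the 2×2 Gauss quadrature points, consistent with the classical observation that displacement-based elements exhibit superconvergent stresses at the reduced-integration (Barlow) points.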
Keywords: Finite element, Lagrangian, optimal stress location, serendipity.
Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design
Authors: Emiliano Matta
Abstract:
Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage to the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation has inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, in which damping originates from the variable tangential friction force that develops between the pendulum mass and the 3D surface as a result of a spatially-varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers: its equivalent damping ratio is independent of the amplitude of oscillation, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not require the installation of dampers. This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems with amplitude-independent damping.
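To state the proposed friction law compactly (the symbols are ours; the abstract gives none): if the concave surface is described by the elevation $z = S(x,y)$, the spatially-varying friction coefficient is taken proportional to the modulus of the surface gradient,

$$\mu(x,y) = \mu_0\,\lVert \nabla S(x,y) \rVert$$

For a surface whose slope grows linearly with the distance from its lowest point, both the gravitational restoring force and the tangential friction force then scale linearly with the pendulum displacement. This is the homogeneity property that makes the energy dissipated per cycle scale with the stored energy, so the equivalent damping ratio is independent of the oscillation amplitude.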
Keywords: Amplitude-independent damping, Homogeneous friction, Pendulum nonlinear dynamics, Structural control, Vibration resonant absorbers.
A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxigenic capacity of the strains, as well as the topography, soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, namely dimensionality reduction of the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the k-nearest neighbors learning algorithm (KNN), into an enhanced model with a mean performance of 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
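A rough sketch of such a pipeline is shown below. This is an illustration under assumptions, not the authors' implementation: the dataset file and feature column names are hypothetical (taken from the abstract's list of parameters), and the learned Mahalanobis matrix from MMC is approximated here by the inverse sample covariance of the PCA scores:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: one row per fig orchard, with soil/topography
# features and a measured aflatoxin level as the target.
df = pd.read_csv('fig_orchards.csv')  # hypothetical file name
features = ['altitude', 'aspergillus_population', 'pH', 'SP', 'EC',
            'organic_matter', 'sand', 'silt', 'clay',
            'Ca', 'Mg', 'K', 'Na', 'P', 'B', 'Fe', 'Mn', 'Zn', 'Cu']
X = StandardScaler().fit_transform(df[features])
y = df['aflatoxin_level'].to_numpy()

# Step 1: dimensionality reduction with PCA.
Z = PCA(n_components=5).fit_transform(X)

# Step 2: Mahalanobis metric. The paper learns this matrix with MMC;
# as a stand-in we use the inverse covariance of the reduced data.
VI = np.linalg.inv(np.cov(Z, rowvar=False))

# Step 3: KNN prediction under the Mahalanobis distance.
knn = KNeighborsRegressor(n_neighbors=5, metric='mahalanobis',
                          metric_params={'VI': VI})
knn.fit(Z, y)
pred = knn.predict(Z)
print('in-sample Pearson r: %.3f' % np.corrcoef(y, pred)[0, 1])
```

In practice the Pearson correlation would be evaluated on held-out orchards (e.g., via cross-validation) rather than in-sample as in this sketch.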
Keywords: Aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction.
Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility
Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda
Abstract:
Mobility is one of the most important societal needs, for leisure, business activities and health. Transport demand is therefore continuously increasing, with a consequent rise in traffic congestion and pollution. Aeronautical research aims at smarter use of infrastructure and at introducing greener concepts. A possible solution to the above issues is the development of a Small Air Transport (SAT) system able to guarantee operations from today's underused airfields in an affordable and green way, while also helping to reduce travel time. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact while ensuring good levels of operational safety. Nowadays, most of the key technologies for improving passenger comfort and reducing community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate level of maturity, TRL (Technology Readiness Level) 3/4. These key technologies must therefore be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRLs (5/6). In particular, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) Low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes. 2) Innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m). 3) Affordable small-aircraft manufacturing of metallic fuselages using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition). 4) Affordable fly-by-wire architecture for small aircraft (CS23 certification rules). 5) More-electric systems replacing pneumatic and hydraulic systems (high-voltage Electrical Power Generation and Distribution System (EPGDS), hybrid de-ice system, landing gear and brakes). 6) Advanced avionics for small aircraft, reducing pilot workload. 7) Advanced cabin comfort with new interior materials and more comfortable seats. 8) New generation of turboprop engines with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft. 9) Alternative diesel engine for 9-seat commuter aircraft. To address the abovementioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed with 2014 technologies and an existing engine assuring the requested take-off power; the Green aircraft is designed by integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction of the emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point and about 25% DOC reduction. Detailed descriptions of the studies, analyses and validations performed for each technology, as well as the expected benefits at aircraft level, are reported in the present paper.
Keywords: Affordable, European, green, mobility, technologies development, travel time reduction.