Search results for: MatLab
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 902

152 An Intelligent Controller Augmented with Variable Zero Lag Compensation for Antilock Braking System

Authors: Benjamin Chijioke Agwah, Paulinus Chinaenye Eze

Abstract:

Antilock braking system (ABS) is one of the important contributions of the automobile industry, designed to ensure road safety by keeping vehicles steerable and stable during emergency braking. This paper presents a wheel slip-based intelligent controller with variable zero lag compensation for ABS. The controller is required to achieve fast, accurate wheel slip tracking under hard braking, eliminate chattering, and improve transient and steady-state performance, while shortening the stopping distance using an effective braking torque below the maximum allowable torque to bring the braking vehicle to a stop. The dynamics of a vehicle braking from a velocity of 30 m/s on a straight line were determined and modelled in the MATLAB/Simulink environment to represent a conventional ABS without a controller. Simulation results indicated that the system without a controller was unable to track the desired wheel slip, and the stopping distance was 135.2 m. Hence, an intelligent controller based on a fuzzy logic controller (FLC) was designed, with a variable zero lag compensator (VZLC) added to enhance the FLC control variable by eliminating steady-state error and improving bandwidth, suppressing high-frequency noise effects such as chattering during braking. The simulation results showed that the FLC-VZLC provided fast tracking of the desired wheel slip, eliminated chattering, and reduced the stopping distance by 70.5% (39.92 m), 63.3% (49.59 m), 57.6% (57.35 m) and 50% (69.13 m) on dry, wet, cobblestone and snow road surfaces, respectively. Overall, the proposed system used an effective braking torque below the maximum allowable braking torque to achieve efficient wheel slip tracking and robust control performance on different road surfaces.
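
A minimal quarter-vehicle sketch of the slip dynamics such a controller acts on is given below; the mass, wheel inertia, constant braking torque and the Burckhardt dry-asphalt friction curve are illustrative assumptions, not the paper's model.

```matlab
% Quarter-vehicle braking model sketch: slip builds as braking torque slows
% the wheel faster than the vehicle; all parameters are assumed.
m  = 342;    % quarter-vehicle mass (kg)      -- assumed
J  = 1.13;   % wheel inertia (kg m^2)          -- assumed
R  = 0.33;   % wheel radius (m)                -- assumed
g  = 9.81;
Tb = 1000;   % constant braking torque (N m)   -- assumed
mu = @(lam) 1.28*(1 - exp(-23.99*lam)) - 0.52*lam; % Burckhardt dry asphalt

v = 30; omega = v/R; dt = 1e-4; t = 0; X = [];
while v > 0.5
    lam = max((v - omega*R)/max(v, eps), 0);   % longitudinal wheel slip
    Fx  = mu(lam)*m*g;                         % tyre braking force
    v     = v + dt*(-Fx/m);                    % vehicle deceleration
    omega = max(omega + dt*(Fx*R - Tb)/J, 0);  % wheel spin dynamics
    t = t + dt;  X(end+1,:) = [t v lam];       %#ok<AGROW>
end
plot(X(:,1), X(:,3)); xlabel('t (s)'); ylabel('slip \lambda');
```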

Keywords: ABS, fuzzy logic controller, variable zero lag compensator, wheel slip tracking

Procedia PDF Downloads 146
151 Development of a Fuzzy Logic Based Model for Monitoring Child Pornography

Authors: Mariam Ismail, Kazeem Rufai, Jeremiah Balogun

Abstract:

A study was conducted to apply fuzzy logic to the development of a monitoring model for child pornography based on associated risk factors, which can be used by forensic experts or integrated into forensic systems for the early detection of child pornographic activities. Several methods were adopted in the study: an extensive review of related works was done to identify the factors associated with child pornography, the factors were validated by an expert sex psychologist and guidance counselor, and relevant data were collected. Fuzzy membership functions were used to fuzzify the identified variables alongside the risk of the occurrence of child pornography, based on inference rules provided by the experts consulted, and the fuzzy logic expert system was simulated using the Fuzzy Logic Toolbox in MATLAB Release 2016. The results showed that there were 4 categories of risk factors required for assessing the risk of a suspect committing child pornography offenses, and that triangular membership functions with 2 and 3 labels were used to formulate the risk factors, according to the number of labels assigned to each. Five fuzzy logic models were formulated: the first 4 assess the impact of each category on child pornography, while the last takes the outputs of those 4 models as the inputs required for assessing the overall risk of child pornography. It was concluded that factors related to personal traits, social traits, history of child pornography crimes, and self-regulatory deficiency traits of suspects are required for the assessment of the risk of child pornography crimes committed by a suspect. Using the values of the risk factors selected for this study, the risk of child pornography can be readily assessed in order to determine the likelihood of a suspect perpetrating the crime.
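
A minimal sketch of one category-level Mamdani model with triangular membership functions is shown below, written with the current Fuzzy Logic Toolbox object API (the study used the R2016 toolbox); the ranges, labels and the three rules are illustrative assumptions.

```matlab
% One-input, one-output Mamdani sketch with triangular MFs; stands in for a
% single category-level model, not the study's rule base.
fis = mamfis('Name', 'categoryRisk');
fis = addInput(fis, [0 10], 'Name', 'factorScore');
fis = addMF(fis, 'factorScore', 'trimf', [0 0 5],   'Name', 'low');
fis = addMF(fis, 'factorScore', 'trimf', [0 5 10],  'Name', 'moderate');
fis = addMF(fis, 'factorScore', 'trimf', [5 10 10], 'Name', 'high');
fis = addOutput(fis, [0 1], 'Name', 'risk');
fis = addMF(fis, 'risk', 'trimf', [0 0 0.5], 'Name', 'low');
fis = addMF(fis, 'risk', 'trimf', [0 0.5 1], 'Name', 'medium');
fis = addMF(fis, 'risk', 'trimf', [0.5 1 1], 'Name', 'high');
fis = addRule(fis, [1 1 1 1; 2 2 1 1; 3 3 1 1]); % low->low, moderate->medium, high->high
risk = evalfis(fis, 7.2)   % evaluate one hypothetical factor score
```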

Keywords: fuzzy, membership functions, pornography, risk factors

Procedia PDF Downloads 129
150 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, assist thermoregulation, impart resonance to the voice, and more; thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person, and many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables quantitative analysis. However, this is not always possible in the clinical routine, and when it is, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. The methods currently available for MS segmentation are manual or semi-automatic, and manual methods present inter- and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. The study was conducted with ethical approval from the authors' institutions and national review panels, and involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in MATLAB®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for a region growing (RG) segmentation, and morphological operators are then applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam to obtain the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist, using Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. The linear regression showed a strong association and low dispersion between variables, the Bland-Altman analyses showed no significant differences between the methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation; furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice, where it may be useful in the diagnosis and treatment determination of MS diseases.
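
A minimal sketch of the seeded region-growing and morphological-cleanup stage on a single CT slice is shown below; the file name, seed location, tolerance, structuring element and voxel spacing are placeholders, and the SVM seed detection is omitted.

```matlab
% Seeded region growing plus morphological cleanup on one slice; all
% numeric choices are assumptions, not the authors' values.
slice = dicomread('sinus_slice.dcm');          % hypothetical file name
seed  = [256 180];                             % hypothetical SVM-detected seed (row, col)
mask  = grayconnected(slice, seed(1), seed(2), 120); % region growing, assumed tolerance
mask  = imopen(mask, strel('disk', 2));        % remove false-positive specks
mask  = imfill(mask, 'holes');                 % close interior gaps
voxelVolume = 0.5*0.5*1.0;                     % mm^3 per voxel -- assumed spacing
sliceVolume = nnz(mask) * voxelVolume;         % this slice's contribution (mm^3)
```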

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 504
149 A Thermo-mechanical Finite Element Model to Predict Thermal Cycles and Residual Stresses in Directed Energy Deposition Technology

Authors: Edison A. Bonifaz

Abstract:

In this work, a numerical procedure is proposed to design dense multi-material structures using the Directed Energy Deposition (DED) process, and a thermo-mechanical finite element model to predict thermal cycles and residual stresses is presented. A numerical layer build-up procedure coupled with a moving heat flux was constructed to minimize the strains and residual stresses that result from the multi-layer deposition of an AISI 316 austenitic steel on an AISI 304 austenitic steel substrate. To simulate the DED process, the automated interface of the ABAQUS AM module was used to define element activation and heat input event data as a function of time and position; in this manner, the construction of ABAQUS user-defined subroutines was not necessary. Thermal cycles and thermally induced stresses created during melt pool crystallization in the multi-layer metal AM deposition were predicted and validated, and the results were analyzed in three independent metal layers across three different experiments. The one-way heat and material deposition toolpath used in the analysis was created with a MATLAB path script. An optimal combination of feedstock and heat input printing parameters suitable for fabricating multi-material dense structures in the DED metal AM process was established. At constant power, it can be concluded that the lower the heat input, the lower the peak temperatures and residual stresses; from a design point of view, this means the one-way heat and material deposition toolpath with the higher welding speed should be selected.
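
A minimal sketch of the kind of one-way raster toolpath a MATLAB path script might generate for the moving heat flux is shown below; the deposit dimensions, travel speed, hatch spacing and dwell are assumptions, not the paper's parameters.

```matlab
% One-way (unidirectional) raster toolpath generator sketch; rows are
% [time, x, y] samples of the heat-source position.
L = 40; W = 20;          % deposit length and width (mm)  -- assumed
hatch = 1.0;             % spacing between tracks (mm)    -- assumed
v = 10;                  % travel speed (mm/s)            -- assumed
t = 0; tp = [];
for y = 0:hatch:W
    % one-way deposition: every track runs in +x, with a rapid non-printing return
    tp = [tp; t, 0, y; t + L/v, L, y];   %#ok<AGROW>
    t = t + L/v + 1.0;                   % assumed 1 s dwell between tracks
end
writematrix(tp, 'toolpath_event_series.csv');  % e.g., input for AM event-series data
```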

Keywords: event series, thermal cycles, residual stresses, multi-pass welding, abaqus am modeler

Procedia PDF Downloads 69
148 Active Control Effects on Dynamic Response of Elevated Water Storage Tanks

Authors: Ali Etemadi, Claudia Fernanda Yasar

Abstract:

Elevated water storage tank structures (EWSTs) are tall, top-heavy structural systems that are very vulnerable to seismic vibrations, and in past earthquake events many of these structures exhibited poor performance and experienced severe damage. The dynamic analysis of EWSTs under earthquake loads is therefore of significant importance for the design of the structure and a key issue for the development of modern methods, such as active control design. In this study, a reduced model of the EWST is explained, which is based on a tuned mass damper (TMD) model. A vibration analysis of the structure under seismic excitation is presented and then used to propose an active vibration controller. MATLAB/Simulink is employed for the dynamic analysis of the system and the control of the seismic response. Single degree of freedom (SDOF) and two degree of freedom (2DOF) models of EWSTs are used to study the concept of active vibration control, and lab-scale pendulum-like experimental models are applied to suppress vibrations in the EWST under seismic excitation. One of the most important phenomena in liquid storage tanks is the oscillation of the fluid due to movements of the tank body caused by base motion during an earthquake. Simulation results illustrate that EWST vibration can be reduced by means of an input shaping technique that takes into account the dominant mode shape of the structure. The simulations used to guide many of our designs are presented in detail. A simple and effective real-time controller for seismic vibration damping can therefore be designed and built in practice.
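
A minimal 2DOF structure-plus-TMD sketch under base excitation is given below; all masses, stiffnesses, damping values and the toy ground motion are illustrative assumptions, not the study's model.

```matlab
% 2DOF tank + tuned mass damper under base acceleration ag(t), written in
% relative coordinates: M*a + C*v + K*x = -M*1*ag.
m1 = 5e4; m2 = 2.5e3;              % tank mass, TMD mass (kg)   -- assumed
k1 = 2e6;  c1 = 1e4;               % support stiffness/damping  -- assumed
w1 = sqrt(k1/m1);
k2 = m2*w1^2; c2 = 2*0.1*m2*w1;    % TMD tuned near the structural frequency
M = [m1 0; 0 m2];
C = [c1+c2 -c2; -c2 c2];
K = [k1+k2 -k2; -k2 k2];
ag = @(t) 0.3*9.81*sin(2*pi*1.0*t).*(t < 10);  % toy harmonic "quake" -- assumed
f = @(t,z) [z(3:4); M\(-C*z(3:4) - K*z(1:2) - M*[1;1]*ag(t))];
[t, z] = ode45(f, [0 20], zeros(4,1));
plot(t, z(:,1)); xlabel('t (s)'); ylabel('tank displacement (m)');
```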

Keywords: elevated water storage tank, tuned mass damper model, real time control, shaping control, seismic vibration control, Laplace transform

Procedia PDF Downloads 150
147 Value Index, a Novel Decision Making Approach for Waste Load Allocation

Authors: E. Feizi Ashtiani, S. Jamshidi, M. H. Niksokhan, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually aim to simultaneously minimize two criteria: total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index, which implies that the definition of different scenarios for environmental violations is no longer required; furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested on the Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-Inequity are plotted separately, as in the conventional approach; in the second, the Value-Equity curve is derived. The comparative results show that the solutions lie in a similar range of inequity with lower total costs, owing to the freedom in environmental violation attained by the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may yield a better classification for scenario definition. It is also concluded that decision makers should focus on the value index, weighting its contents, to find the most sustainable alternatives based on their requirements.
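
A minimal sketch of the Streeter-Phelps DO sag computation used in such a simulation is given below; the rate constants, initial load and saturation level are illustrative assumptions.

```matlab
% Classic Streeter-Phelps oxygen-sag solution for the DO deficit D(t):
% D = kd*L0/(ka-kd)*(exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t).
kd = 0.35;  ka = 0.70;      % deoxygenation / reaeration rates (1/day) -- assumed
L0 = 12;  D0 = 1.5;         % initial BOD and DO deficit (mg/L)        -- assumed
DOsat = 9.1;                % saturation DO (mg/L)                     -- assumed
t  = linspace(0, 10, 200);  % travel time downstream (days)
D  = kd*L0/(ka - kd) * (exp(-kd*t) - exp(-ka*t)) + D0*exp(-ka*t);
DO = DOsat - D;
plot(t, DO); xlabel('travel time (days)'); ylabel('DO (mg/L)');
```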

Keywords: waste load allocation (WLA), value index, multi objective particle swarm optimization (MOPSO), Haraz River, equity

Procedia PDF Downloads 422
146 Influence of Glenohumeral Joint Approximation Technique on the Cardiovascular System in the Acute Phase after Stroke

Authors: Iva Hereitova, Miroslav Svatek, Vit Novacek

Abstract:

Background and Aim: Autonomic imbalance is one of the complications in immobilized patients in the acute stage after a stroke, and the predominance of sympathetic activity significantly increases cardiac activity. The technique of glenohumeral joint approximation may contribute in a non-pharmacological way to the regulation of blood pressure and heart rate in this risk group. The aim of the study was to evaluate the effect of glenohumeral joint approximation on the change in heart rate and blood pressure in immobilized patients in the acute phase after a stroke. Methods: The experimental study bilaterally evaluated heart rate and systolic and diastolic pressure values before and after glenohumeral joint approximation in 40 immobilized participants (72.6 ± 10.2 years) in the acute phase after stroke. The experimental group was compared with 40 healthy participants in the control group (68.6 ± 14.2 years). An SpO2 vital signs monitor and a validated Microlife WatchBP Office blood pressure monitor were used for the measurements. Statistical processing and evaluation were performed in MATLAB R2019 (The MathWorks, Inc., Natick, MA, USA). Results: Approximation of the glenohumeral joint resulted in a statistically significant decrease in systolic and diastolic pressure. The average decrease in systolic pressure for the individual groups ranged from 8.2 to 11.3 mmHg (p < 0.001); for diastolic pressure, the average decrease ranged from 5.0 to 14.2 mmHg (p < 0.001). A statistically significant reduction in heart rate (p < 0.01) was found only in patients after ischemic stroke in the inferior cerebral artery, with an average decrease of 3.9 beats per minute (median 4 beats per minute). Conclusion: Approximation of the glenohumeral joint leads to a statistically significant decrease in systolic and diastolic pressure in immobilized patients in the acute phase after stroke.
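
A minimal before/after comparison sketch on paired readings is shown below; the abstract does not name the statistical test used, so the paired t-test and the random placeholder data are assumptions, not the study's analysis.

```matlab
% Paired pre/post comparison on simulated systolic readings.
rng(1);
sysBefore = 150 + 10*randn(40, 1);           % hypothetical pre-approximation SBP (mmHg)
sysAfter  = sysBefore - 9 + 4*randn(40, 1);  % hypothetical post-approximation SBP
[h, p] = ttest(sysBefore, sysAfter);         % paired t-test on the differences
fprintf('mean drop = %.1f mmHg, p = %.3g\n', mean(sysBefore - sysAfter), p);
```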

Keywords: approximation technique, cardiovascular system, glenohumeral joint, stroke

Procedia PDF Downloads 216
145 A Flexible Real-Time Eco-Drive Strategy for Electric Minibus

Authors: Felice De Luca, Vincenzo Galdi, Piera Stella, Vito Calderaro, Adriano Campagna, Antonio Piccolo

Abstract:

Sustainable mobility has become one of the major issues of recent years. The challenge of reducing polluting emissions as much as possible has led to the production and diffusion of vehicles with less polluting internal combustion engines and to the adoption of green energy vectors, such as vehicles powered by natural gas or LPG and, more recently, hybrid and electric ones. While the spread of electric vehicles for private use is becoming a reality, albeit rather slowly, the same is not happening for vehicles used for public transport, especially those operating in the congested areas of cities. Even though the first electric buses are increasingly offered on the market, autonomy remains a central problem for battery-fed vehicles with long daily routes and little time available for recharging; at present, solid-state batteries are still too large, too heavy, and unable to guarantee the required autonomy. Therefore, to maximize energy management on the vehicle, the optimization of driving profiles offers a faster and cheaper contribution to improving vehicle autonomy. In this paper, following the authors' previous works on electric vehicles in public transport and energy management strategies in the electric mobility area, an eco-driving strategy for an electric bus is presented and validated. In particular, the characteristics of the prototype bus are described, and a general-purpose eco-drive methodology is briefly presented. The model is first simulated in MATLAB™ and then implemented on a mobile device installed on board a prototype bus developed by the authors in a previous research project. The implemented solution gives the bus driver suggestions on the driving style to adopt. The results of a test in a real case will be shown to highlight the effectiveness of the proposed solution in terms of energy saving.

Keywords: eco-drive, electric bus, energy management, prototype

Procedia PDF Downloads 141
144 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method

Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya

Abstract:

This paper adopts a top-down approach to mathematical modeling to predict size reduction from the micro to the nano scale through persistent etching, simulating the process with a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two parts: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods; in 1-D, this problem is very similar to the Stefan problem of the moving ice-water boundary. A fixed-grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates; other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary, while at the particle boundary the etchant concentration is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion; the rate of reaction is proportional to the velocity of the moving particle boundary. Modelling of the above reaction was carried out in MATLAB. The initial particle size was taken to be 50 microns, and the density, molecular weight and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60 and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually become constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted against position at different time intervals; initially, a sudden drop is observed at the particle boundary due to the high etch rate, and this change becomes more gradual with time as the etch rate declines.
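
A minimal sketch of the moving-boundary coupling at the heart of such a model is given below, using a quasi-steady closure (diffusion-limited flux to a sphere with zero surface concentration, J = D·Cb/R, so dR/dt = -(Mw/ρ)·J). The bulk concentration is an assumption chosen so the initial rate is near the ~0.02 µm/s reported; this simplification does not reproduce the full transient behaviour of the paper's finite-difference scheme.

```matlab
% Quasi-steady moving-boundary etching sketch: the surface recedes at a
% speed proportional to the diffusion-limited etchant flux.
D   = 1e-5;            % diffusion coefficient (cm^2/s), from the abstract
rho = 2.1; Mw = 60;    % density (g/cm^3) and molecular weight, from the abstract
Cb  = 3.5e-5;          % bulk etchant conc. (mol/cm^3) -- assumed, tuned to ~0.02 um/s
R   = 50e-4;           % initial radius: 50 microns, in cm
dt  = 0.1; T = 0:dt:1200; RR = zeros(size(T));
for i = 1:numel(T)
    RR(i) = R;
    J = D*Cb/R;                         % quasi-steady diffusive flux (mol/cm^2/s)
    R = max(R - dt*(Mw/rho)*J, 1e-6);   % boundary recedes as material dissolves
end
plot(T, RR*1e4); xlabel('t (s)'); ylabel('particle radius (\mum)');
```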

Keywords: particle size reduction, micromixer, FDM modelling, wet etching

Procedia PDF Downloads 429
143 A Leader-Follower Kinematic-Based Control System for a Cable-Driven Hyper-Redundant Manipulator

Authors: Abolfazl Zaraki, Yoshikatsu Hayashi, Harry Thorpe, Vincent Strong, Gisle-Andre Larsen, William Holderbaum

Abstract:

Thanks to their high maneuverability, cable-driven hyper-redundant manipulators (HRMs) have shown a superior capability in highly confined and unstructured space applications. Although the large number of degrees of freedom (DOF) of HRMs enhances motion flexibility and the robot's reachable range, it greatly increases the complexity of the kinematic configuration, which makes the kinematic control problem very challenging or even impossible to solve. This paper presents our current progress on the development of a kinematic-based leader-follower control system, designed to control not only the robot's body posture but also the trajectory of the robot's movement in a semi-autonomous manner (the human operator is retained in the robot's control loop). To obtain the forward kinematic model, the coordinate frames are established by the classical Denavit-Hartenberg (D-H) convention for a hyper-redundant serial manipulator with a controlled cable-driven mechanism. To solve the inverse kinematics of the robot, unlike conventional methods, a leader-follower mechanism based on sequential inverse kinematics is followed: the inverse kinematic problem is solved for all joints sequentially, starting from the head joint and proceeding to the base joint of the robot. To verify the kinematic design and simulate the robot motion, the MATLAB robotics toolbox is used. The simulation results demonstrated the promising capability of the proposed leader-follower control system in controlling the robot's motion and trajectory in our confined space application.
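
A minimal sketch of composing standard Denavit-Hartenberg link transforms into a forward-kinematics chain is given below, written without toolbox dependencies; the four-joint parameter table is an illustrative stand-in, not the HRM's geometry.

```matlab
% Standard D-H link transform A_i(theta, d, a, alpha) and a short serial chain.
dh = @(th,d,a,al) [cos(th) -sin(th)*cos(al)  sin(th)*sin(al) a*cos(th);
                   sin(th)  cos(th)*cos(al) -cos(th)*sin(al) a*sin(th);
                   0        sin(al)          cos(al)         d;
                   0        0                0               1];
q = [0.2 -0.3 0.4 0.1];            % joint angles (rad)        -- assumed
a = 0.05; alpha = pi/2;            % identical short links     -- assumed
T = eye(4);
for i = 1:numel(q)
    T = T * dh(q(i), 0, a, alpha); % compose link transforms, base -> tip
end
tipPosition = T(1:3, 4)            % end-effector position in the base frame
```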

Keywords: hyper-redundant robots, kinematic analysis, semi-autonomous control, serial manipulators

Procedia PDF Downloads 157
142 Control Strategy for a Solar Vehicle Race

Authors: Francois Defay, Martim Calao, Jean Francois Dassieu, Laurent Salvetat

Abstract:

Electric vehicles are one solution for reducing pollution by using green energy. The Shell Eco-marathon provides rules designed to minimize battery use during the race; the use of a solar panel, combined with efficient motor control and race strategy, allows a 60 kg vehicle with one pilot to be driven using only solar energy in the best case. This paper presents a complete model of a solar vehicle used for the Shell Eco-marathon. The project, called Helios, is a cooperation between undergraduate students, academic institutes, and industrial partners. The prototype is an ultra-energy-efficient vehicle based on a one-square-meter solar panel and a custom-built brushless motor controller to optimize the electrical part. The vehicle is equipped with sensors and an embedded system that provide all data in real time in order to evaluate the best strategy for the course. A complete MATLAB/Simulink model is used to test the optimal strategy for increasing overall endurance. Experimental results are presented to validate the different parts of the model: mechanical, aerodynamic, electrical, and solar panel. The major contribution of this study is to provide solutions for identifying the model parameters (rolling resistance coefficient, drag coefficient, motor torque coefficient, etc.) by means of experimental results combined with identification techniques. Once the coefficients are validated, the strategy to optimize consumption and average speed can be tested first in simulation before being implemented for the race. The paper describes all the simulation and experimental parts and provides results for optimizing the overall efficiency of the vehicle. This work was started four years ago, has involved many students in the experimental and theoretical parts, and has built up knowledge on energy-efficient electric vehicles.
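
A minimal sketch of one such identification technique is shown below: estimating the rolling-resistance and drag coefficients from a coast-down log by linear least squares on m·dv/dt = -Crr·m·g - ½·ρ·Cd·A·v². The log file, mass and frontal area are assumptions, and the abstract does not state that this exact procedure was used.

```matlab
% Coast-down parameter identification by linear least squares.
m = 80; g = 9.81; rho = 1.2; A = 0.8;        % mass, air density, frontal area -- assumed
log = readmatrix('coastdown.csv');            % hypothetical [t, v] columns
t = log(:,1); v = log(:,2);
dvdt = gradient(v, t);                        % numerical acceleration
X = [-m*g*ones(size(v)), -0.5*rho*A*v.^2];    % regressors for [Crr; Cd]
theta = X \ (m*dvdt);                         % least-squares estimate
fprintf('Crr = %.4f, Cd = %.3f\n', theta(1), theta(2));
```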

Keywords: electrical vehicle, endurance, optimization, shell eco-marathon

Procedia PDF Downloads 265
141 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Calculations of the net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP), and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle, and its data are easily obtained from MODIS MOD17A3HGF; however, the results are only available yearly. To overcome this data availability limitation, bands 33 to 36 from MODIS MYD021KM (obtained on a daily basis) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites in the Colombian territory where surface mining takes place. Coal, gold, iron, and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance per location. The NPP time series from the satellite images were filtered using two MATLAB filters: first order and discrete transfer. After filtering the NPP time series, comparing the graph results with the satellite image values, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the EPA Greenhouse Gas Equivalencies Calculator was used. The comparison was established in two ways: one by the sum of all the data per point per year, and the other by the average of 46 weeks and the percentage that value represented with respect to NPP; the former underestimated the total CO2 emissions. The results also showed that coal and gold mining over the last 22 years had lower CO2 emissions than limestone, with yearly averages of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal, and 287 kton CO2 eq for iron; limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal, and 310 kton CO2 eq for iron, with limestone varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improves its production technology, limestone could reach a maximum of 318 kton CO2 eq emissions per year, a value very similar to iron. The importance of gathering such data is to establish benchmarks in order to attain the 2050 zero emissions goal.
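
A minimal sketch of the series-processing idea is given below: smooth a weekly band-derived series with a first-order discrete filter, aggregate to years, and regress against NPP. The series are synthetic placeholders, and the smoothing constant is an assumption.

```matlab
% First-order IIR smoothing of a weekly series, then annual regression.
rng(2);
weekly = 5 + sin(2*pi*(1:46*22)/46) + 0.4*randn(1, 46*22);  % 22 years x 46 obs
a = 0.15;                                  % first-order smoothing constant -- assumed
smoothed = filter(a, [1 a-1], weekly);     % y(n) = (1-a)*y(n-1) + a*x(n)
annualMean = mean(reshape(smoothed, 46, []), 1)';   % one value per year
npp = 0.9*annualMean + 0.3*randn(size(annualMean)); % placeholder NPP series
mdl = fitlm(annualMean, npp);              % linear regression; reports R^2
disp(mdl.Rsquared.Ordinary)
```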

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 104
140 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager: while visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based on the multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the method's validity; experiments show that the proposed approach is capable of producing good fusion results. In deploying image fusion, a common trade-off of popular methods is observed: while the high computational cost and complex processing steps of many image fusion algorithms provide accurate fused results, they also make the algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capability. The methods presented in this paper offer good results with minimal time complexity.
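
A minimal sketch of one MST fusion rule is given below: fuse DWT coefficients by averaging the approximation band and taking the maximum-absolute detail coefficients. This is a simplified stand-in for the paper's region-based rule with consistency verification, and the file names (a registered, equal-size image pair) are placeholders.

```matlab
% DWT-domain fusion: average low-pass band, max-abs selection for details.
vis = im2double(imread('visible.png'));    % hypothetical registered pair
ir  = im2double(imread('thermal.png'));
if size(vis,3) == 3, vis = rgb2gray(vis); end
if size(ir,3)  == 3, ir  = rgb2gray(ir);  end
lvl = 3;
[cV, s] = wavedec2(vis, lvl, 'db4');
[cI, ~] = wavedec2(ir,  lvl, 'db4');
nApp = prod(s(1,:));                        % number of approximation coefficients
cF = zeros(size(cV));
cF(1:nApp) = 0.5*(cV(1:nApp) + cI(1:nApp)); % average the approximation band
detail = nApp+1:numel(cV);
pick = abs(cV(detail)) >= abs(cI(detail));  % max-absolute detail rule
cF(detail) = cV(detail).*pick + cI(detail).*(~pick);
fused = waverec2(cF, s, 'db4');
imshow(fused, []);
```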

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 115
139 HPSEC Application as a New Indicator of Nitrification Occurrence in Water Distribution Systems

Authors: Sina Moradi, Sanly Liu, Christopher W. K. Chow, John Van Leeuwen, David Cook, Mary Drikas, Soha Habibi, Rose Amal

Abstract:

In recent years, chloramine has been widely used for both primary and secondary disinfection. However, a major concern with the use of chloramine as a secondary disinfectant is chloramine decay and the occurrence of nitrification. The management of chloramine decay and the prevention of nitrification are critical for water utilities managing chloraminated drinking water distribution systems. The detection and monitoring of nitrification episodes are usually carried out by measuring certain water quality parameters, commonly referred to as indicators of nitrification. The approach taken in this study was to collect water samples from different sites throughout a drinking water distribution system, Tailem Bend – Keith (TBK) in South Australia, and analyse the samples by high performance size exclusion chromatography (HPSEC). We investigated potential associations between the water quality profiles from HPSEC analysis and chloramine decay and/or nitrification occurrence. MATLAB 8.4 was used for processing the HPSEC and chloramine decay data. An increase in the absorbance signal of the HPSEC profiles at λ=230 nm between apparent molecular weights (AMW) of 200 and 1000 Da was observed at sampling sites that experienced rapid chloramine decay and nitrification, while the absorbance signal at λ=254 nm decreased. An increase in absorbance at λ=230 nm and AMW < 500 Da was detected for Raukkan CT (R.C.T), a location that experienced nitrification and had a significantly lower chloramine residual (< 0.1 mg/L); this increase was not detected at sites that did not experience nitrification. Moreover, the UV absorbance at 254 nm of the HPSEC spectra was lower at R.C.T than at other sites. In this study, a chloramine residual index (C.R.I) is introduced as a new indicator of chloramine decay and nitrification occurrence, defined as the ratio of the areas underneath the HPSEC spectra at the two wavelengths of 230 and 254 nm. The C.R.I index is able to indicate distribution system sites that experienced nitrification and rapid chloramine loss, and could be useful for water treatment and distribution system managers to determine whether nitrification is occurring at a specific location in a water distribution system.
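
A minimal sketch of computing such an area-ratio index from two traces is given below; the data file, column layout and integration window are placeholders, and which wavelength forms the numerator is an assumption here.

```matlab
% Area-ratio index from HPSEC traces at two wavelengths.
T = readmatrix('hpsec_site.csv');     % hypothetical columns: [AMW, A230, A254]
amw = T(:,1); a230 = T(:,2); a254 = T(:,3);
win = amw >= 200 & amw <= 1000;       % AMW window highlighted in the abstract
CRI = trapz(amw(win), a230(win)) / trapz(amw(win), a254(win));
fprintf('chloramine residual index = %.2f\n', CRI);
```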

Keywords: nitrification, HPSEC, chloramine decay, chloramine residual index

Procedia PDF Downloads 298
138 Optimization-Based Design Improvement of Synchronizer in Transmission System for Efficient Vehicle Performance

Authors: Sanyka Banerjee, Saikat Nandi, P. K. Dan

Abstract:

The synchronizer, an integral part of the gearbox, is a key element of the transmission system in automotive applications. Synchronizer performance affects transmission efficiency and driving comfort, and the synchronizing mechanism, as a major component of the transmission system, must be capable of preventing vibration and noise in the gears. Improving gear-shifting efficiency, with the aim of achieving smooth, quick, and energy-efficient power transmission, remains a challenge for the automotive industry. The performance of the synchronizer depends on the features and characteristics of its sub-components, so an analysis of the contribution of these characteristics is necessary. An important exercise is to identify all characteristics or factors associated with the modeling and analysis; for this purpose, the literature was reviewed rather extensively to study the mathematical models formulated from such factors. It has been observed that certain factors are common across models, while a few factors have been selected specifically for individual models, as reported. In order to obtain a more realistic model, an attempt has been made here to identify and assimilate practically all factors that may be considered in formulating the model more comprehensively. A simulation study, formulated as a block model, was carried out for this analysis in MATLAB. Lower synchronization time is desirable, so it has been taken as the output factor in the simulation model for evaluating transmission efficiency. An improved synchronizer model requires optimized values of the sub-component design parameters; a parametric optimization utilizing response data from a Taguchi design of experiments, together with their analysis, was carried out for this purpose. The effectiveness of the optimized parameters for improved synchronizer performance was validated by simulating the synchronizer block model with the improved parameter values as inputs, confirming better transmission efficiency and driver comfort.
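
A minimal sketch of the Taguchi-style analysis step is given below, using the smaller-the-better signal-to-noise ratio S/N = -10·log10(mean(y²)) on an assumed L9 array; the synchronization-time responses are placeholders.

```matlab
% Smaller-the-better Taguchi S/N analysis on an assumed L9 design;
% y holds two replicate synchronization times (s) for each of 9 runs.
y = [0.42 0.39; 0.35 0.33; 0.48 0.45;
     0.31 0.30; 0.44 0.41; 0.37 0.36;
     0.40 0.38; 0.29 0.28; 0.46 0.43];
SN = -10*log10(mean(y.^2, 2));      % higher S/N = faster, more robust shift
[~, best] = max(SN);
fprintf('best run of the assumed L9 array: run %d (S/N = %.2f dB)\n', best, SN(best));
```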

Keywords: design of experiments, modeling, parametric optimization, simulation, synchronizer

Procedia PDF Downloads 311
137 Cluster Analysis of Students’ Learning Satisfaction

Authors: Purevdolgor Luvsantseren, Ajnai Luvsan-Ish, Oyuntsetseg Sandag, Javzmaa Tsend, Akhit Tileubai, Baasandorj Chilhaasuren, Jargalbat Puntsagdash, Galbadrakh Chuluunbaatar

Abstract:

One of the indicators of the quality of university services is student satisfaction. Aim: We aimed to study the satisfaction level of first-year premedical students in the Medical Physics course using a clustering method. Materials and Methods: Questionnaires were collected from a total of 324 students who took the Medical Physics course in the first year of the premedical program at the Mongolian National University of Medical Sciences. When determining the level of satisfaction, answers were obtained on five levels: "excellent", "good", "medium", "bad", and "very bad". A total of 39 questionnaire items were collected from students: 8 for course evaluation, 19 for teacher evaluation, and 12 for student evaluation. From this research, a database with 39 fields and 324 records was created. Results: Cluster analysis was performed on this database in MATLAB and R using the k-means method of data mining. The Hopkins statistics calculated on the database are 0.88, 0.87, and 0.97, which shows that cluster analysis methods can be applied. The course evaluation sub-database is divided into three clusters: cluster I has 150 objects with a "good" rating (46.2%), cluster II has 119 objects with a "medium" rating (36.7%), and cluster III has 54 objects (16.6%). The teacher evaluation sub-database is divided into three clusters: cluster II has 179 objects with a "good" rating (55.2%), cluster III has 108 objects with a "medium" rating (33.3%), and cluster I has 36 objects (11.1%). The student evaluation sub-database is divided into two clusters: cluster II has 215 objects (66.3%) and cluster I has 108 objects (33.3%). Evaluating the resulting clusters with the silhouette coefficient gives 0.32 for the course evaluation clustering, 0.31 for the teacher evaluation clustering, and 0.30 for the student evaluation clustering, showing statistical significance. Conclusion: The cluster analysis of the Medical Physics course yields 46.2% "good", 36.7% "medium", and 16.6% "bad" in the course evaluation model; 55.2% "good", 33.3% "medium", and 11.1% "bad" in the teacher evaluation model; and 66.3% "good" and 33.3% "bad" in the student evaluation model.
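
A minimal sketch of the k-means and silhouette workflow is shown below; the matrix of questionnaire scores is a random stand-in for the survey database (324 records by 8 course-evaluation items, 5-level answers).

```matlab
% k-means clustering with silhouette evaluation on placeholder survey data.
rng(3);
X = randi([1 5], 324, 8);               % stand-in for the questionnaire database
k = 3;
idx = kmeans(X, k, 'Replicates', 10);   % k-means with multiple restarts
s = silhouette(X, idx);                 % per-object clustering quality
fprintf('mean silhouette = %.2f\n', mean(s));
counts = histcounts(idx, 0.5:1:k+0.5)   % objects per cluster
```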

Keywords: questionnaire, data mining, k-means method, silhouette coefficient

Procedia PDF Downloads 49
136 Chassis Level Control Using Proportional Integrated Derivative Control, Fuzzy Logic and Deep Learning

Authors: Atakan Aral Ormancı, Tuğçe Arslantaş, Murat Özcü

Abstract:

This study presents the design and implementation of an experimental chassis-level system for various control applications. Specifically, the height level of the chassis is controlled using proportional-integral-derivative (PID), fuzzy logic, and deep learning control methods. Real-time data obtained from height and pressure sensors installed in a 6x2 truck chassis, in combination with pulse-width modulation signal values, are utilized during the tests. A prototype pneumatic system of a 6x2 truck is added to the setup, which enables the smart pneumatic actuators to function as if they were in a real-world setting. An Arduino Nano is utilized to obtain real-time signal data from the height sensors, while a Raspberry Pi processes the data using MATLAB/Simulink and provides the correct output signals to control the smart pneumatic actuator in the truck chassis. The objective of this research is to optimize the time it takes for the chassis to level down and up under various loads. To achieve this, PID control, fuzzy logic control, and deep learning techniques are applied to the system. The results show that the deep learning method is superior in optimizing time for this non-linear system; fuzzy logic control with a triangular membership function as the rule base achieves better outcomes than PID control; and traditional PID control improves the levelling time compared with an uncontrolled system. The findings highlight the superiority of deep learning techniques in optimizing the time response of a non-linear system and the potential of fuzzy logic control. The proposed approach and the experimental results provide a valuable contribution to the field of control, automation, and systems engineering.
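
A minimal discrete PID height-control sketch of the kind compared in the study is given below; the first-order stand-in for the airbag/height response, the gains, limits and sample time are all assumptions, not the study's plant or tuning.

```matlab
% Discrete PID loop driving a toy first-order chassis-height plant.
Ts = 0.01;                               % 100 Hz loop -- assumed
plant = c2d(tf(0.8, [0.5 1]), Ts);       % toy height response to PWM duty
[num, den] = tfdata(plant, 'v');         % num = [0 b], den = [1 a]
Kp = 2.0; Ki = 1.5; Kd = 0.05;           % hand-tuned placeholder gains
target = 0.10;                           % desired level change (m)
h = 0; eInt = 0; ePrev = target; u = 0; H = zeros(1, 500);
for n = 1:500
    e = target - h;
    eInt = eInt + e*Ts;
    uNew = min(max(Kp*e + Ki*eInt + Kd*(e - ePrev)/Ts, 0), 1);  % clamp PWM duty
    h = -den(2)*h + num(2)*u;            % step the plant with the previous input
    u = uNew; ePrev = e;
    H(n) = h;
end
plot((1:500)*Ts, H); xlabel('t (s)'); ylabel('chassis height (m)');
```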

Keywords: automotive, chassis level control, control systems, pneumatic system control

Procedia PDF Downloads 81
135 Automation of Savitsky's Method for Power Calculation of High Speed Vessel and Generating Empirical Formula

Authors: M. Towhidur Rahman, Nasim Zaman Piyas, M. Sadiqul Baree, Shahnewaz Ahmed

Abstract:

The design of high-speed craft has recently become one of the most active areas of naval architecture. Increased speed makes these vehicles more efficient and useful for military, economic, or leisure purposes. The planing hull is designed specifically to achieve relatively high speed on the surface of the water, and speed on the water surface is closely related to the size of the vessel and the installed power. The Savitsky method, first presented in 1964 and later extended to non-monohedric and stepped hulls, is well known as a reliable comparative to CFD analysis of hull resistance. A computer program based on Savitsky's method has been developed using MATLAB, and the power of high-speed vessels has been computed in this research. First, the program reads the principal parameters, such as displacement, LCG, speed, deadrise angle, and the inclination of the thrust line with respect to the keel line, and calculates the resistance of the hull using Savitsky's empirical planing equations. However, some functions used in the empirical equations are available only in graphical form, which is not suitable for automatic computation; we use a digital plotting system to extract data from the nomograms. As a result, the wetted length-beam ratio and trim angle can be determined directly from the input of the initial variables, which makes the power calculation automatic without manual plotting of secondary variables such as p/b and other coefficients; the regression equations of those functions are derived using data from the different charts. Finally, the trim angle, mean wetted length-beam ratio, frictional coefficient, resistance, and power are computed and compared with Savitsky's results, and good agreement has been observed.
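
A minimal sketch of the core Savitsky (1964) planing-lift relations such a program automates is given below; the numeric inputs are illustrative, not a worked design case from the paper.

```matlab
% Savitsky planing-lift relations: zero-deadrise lift coefficient and its
% deadrise correction, with the supported load from the CL definition.
tau  = 4;      % trim angle (deg)               -- assumed
lam  = 3;      % mean wetted length-beam ratio  -- assumed
b    = 2.5;    % beam (m)                       -- assumed
V    = 12;     % speed (m/s)                    -- assumed
beta = 15;     % deadrise angle (deg)           -- assumed
g = 9.81; rho = 1025;                       % seawater density (kg/m^3)
Cv  = V / sqrt(g*b);                        % speed coefficient
CL0 = tau^1.1 * (0.0120*sqrt(lam) + 0.0055*lam^2.5/Cv^2);  % zero-deadrise lift
CLb = CL0 - 0.0065*beta*CL0^0.60;           % deadrise correction
load_N = 0.5*rho*V^2*b^2*CLb;               % supported load (N)
fprintf('Cv = %.2f, CL_beta = %.4f, load = %.1f kN\n', Cv, CLb, load_N/1e3);
```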

Keywords: nomogram, planing hull, principal parameters, regression

Procedia PDF Downloads 404
134 Analysis of Bridge-Pile Foundation System in Multi-layered Non-Linear Soil Strata Using Energy-Based Method

Authors: Arvan Prakash Ankitha, Madasamy Arockiasamy

Abstract:

The increasing demand for pile foundations in bridges has pointed towards the need to constantly improve the existing analytical techniques for a better understanding of the behavior of such foundation systems. This study presents a simplified, energy-based approach to assess the displacement responses of piles subjected to general loading conditions: an axial load, a lateral load, and a bending moment. The governing differential equations and boundary conditions for a bridge pile embedded in multi-layered soil strata subjected to these loading conditions are obtained using Hamilton's principle, employing variational principles and the minimization of energies. The soil non-linearity is incorporated through simple constitutive relationships that account for the degradation of the soil moduli with increasing strain. A simple power law based on the published literature is used, in which the soil is assumed to be nonlinear-elastic and perfectly plastic; a Tresca yield surface is assumed in developing the variation of soil stiffness with strain level, which defines the non-linearity of the soil strata. This numerical technique was applied to a pile foundation in a two-layered soil stratum supporting a bridge pier and solved using MATLAB R2019a. The analysis yields the bridge pile displacements at any depth along the length of the pile. The results of the analysis are in good agreement with published field data and with three-dimensional finite element results obtained using ANSYS 2019 R3. The methodology can be extended to study the response of multi-strata soil supporting group piles underneath bridge piers.

Keywords: pile foundations, deep foundations, multilayer soil strata, energy based method

Procedia PDF Downloads 140
133 Identification of Vehicle Dynamic Parameters by Using Optimized Exciting Trajectory on 3-DOF Parallel Manipulator

Authors: Di Yao, Gunther Prokop, Kay Buttner

Abstract:

Dynamic parameters, including the center of gravity, mass, and moments of inertia of a vehicle, play an essential role in vehicle simulation, collision testing, and the real-time control of vehicle active systems. To identify these important vehicle dynamic parameters, a systematic parameter identification procedure is studied in this work. In the first step of the procedure, a conceptual parallel manipulator (virtual test rig) possessing three rotational degrees of freedom is proposed. To realize the kinematic characteristics of the conceptual parallel manipulator, a kinematic analysis consisting of the inverse kinematics and the singularity architecture is carried out. Based on Euler's rotation equations for rigid body dynamics, the dynamic model of the parallel manipulator and the derivation of the measurement matrix for parameter identification are presented subsequently. In order to reduce the sensitivity of the parameter identification to measurement noise and other unexpected disturbances, a parameter optimization process that searches for an optimal exciting trajectory of the parallel manipulator is then conducted. For this purpose, the 321-Euler angles, defined by a parameterized finite Fourier series, are used to describe the general exciting trajectory of the parallel manipulator. To minimize the condition number of the measurement matrix and thereby achieve better parameter identification accuracy, the unknown coefficients of the parameterized finite Fourier series are estimated by an iterative algorithm implemented in MATLAB®; meanwhile, the algorithm ensures that the parallel manipulator remains in an achievable working state during the execution of the optimal exciting trajectory. It is shown that the proposed procedure and methods can effectively identify the vehicle dynamic parameters, and this could be an important application of parallel manipulators in the fields of parameter identification and test rig development.
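
A minimal sketch of the exciting-trajectory optimization is given below: parameterize one angle as a finite Fourier series and choose coefficients that minimize the condition number of a measurement matrix. The regressor used here is a toy stand-in for the rig's dynamic measurement matrix, and the base frequency is assumed.

```matlab
% Finite-Fourier-series trajectory whose coefficients minimize cond(W).
wf = 2*pi*0.1; t = (0:0.05:10)';     % base frequency and samples -- assumed
ang    = @(c) c(1)*sin(wf*t) + c(2)*cos(wf*t) + c(3)*sin(2*wf*t) + c(4)*cos(2*wf*t);
regmat = @(c) [ang(c), gradient(ang(c), t), gradient(gradient(ang(c), t), t)];
condOf = @(c) cond(regmat(c));       % toy W(q, qdot, qddot)
rng(0); c0 = 0.1*randn(4, 1);
cOpt = fminsearch(condOf, c0);       % iterative minimization of cond(W)
fprintf('cond(W): %.1f -> %.1f\n', condOf(c0), condOf(cOpt));
```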

Keywords: parameter identification, parallel manipulator, singularity architecture, dynamic modelling, exciting trajectory

Procedia PDF Downloads 265
132 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given the substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data, and a variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB with the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity with allowance for the alteration layer). The trained models were then applied to the large-scale industrial experimental data to identify potentially appropriate model parameters. The results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behaviour of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
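
A minimal ensemble-regression sketch for viscosity from composition plus temperature is given below; the file and column names are placeholders standing in for the industrial datasets described in the abstract.

```matlab
% Bagged ensemble regression with 5-fold cross-validation.
T = readtable('glass_viscosity.csv');   % hypothetical: oxide wt% columns, Temp, Visc
X = T(:, setdiff(T.Properties.VariableNames, {'Visc'}));
mdl = fitrensemble(X, T.Visc, 'Method', 'Bag', 'NumLearningCycles', 200);
cvm = crossval(mdl, 'KFold', 5);        % 5-fold cross-validation
fprintf('CV RMSE = %.3f\n', sqrt(kfoldLoss(cvm)));
```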

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 115
131 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP, and the current power tracking performance could be considered unsatisfactory, with significant room for improvement. Hence, a new core power control design is very important for improving the tracking and regulation of reactor power by controlling the movement of the control rods, suited to the demands of highly sensitive nuclear reactor power control. In this paper, a Model Predictive Control (MPC) law is applied to control the core power. The model for core power control is based on mathematical models of the reactor core, MPC, and a control rod selection algorithm; the mathematical models of the reactor core comprise the point kinetics model, thermal hydraulic models, and reactivity models. The proposed MPC is presented in a transfer function model of the reactor core derived according to perturbation theory, and the transfer function model-based predictive control (TFMPC) is developed with predictions based on a T-filter, looking towards the real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal, and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both the tracking and regulating performance between the conventional controller and TFMPC were made in MATLAB and analysed. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power, controlling the nuclear reactor with high reliability and safety.
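
A minimal sketch of driving MPC from a transfer-function plant model with the MATLAB MPC Toolbox is given below; the first-order-with-delay "core" response, sample time and tuning weights are illustrative assumptions, not the RTP model or the paper's TFMPC design.

```matlab
% MPC on a transfer-function plant: track a unit power-demand step.
G  = tf(1.2, [8 1], 'InputDelay', 0.5);   % toy rod-reactivity -> power response
Ts = 0.2;                                 % controller sample time (s) -- assumed
mpcobj = mpc(G, Ts, 20, 4);               % prediction horizon 20, control horizon 4
mpcobj.Weights.OutputVariables = 1;
mpcobj.Weights.ManipulatedVariablesRate = 0.1;  % penalize rod-movement effort
sim(mpcobj, 150, 1);                      % closed-loop step-tracking simulation
```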

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 240
130 Education-based, Graphical User Interface Design for Analyzing Phase Winding Inter-Turn Faults in Permanent Magnet Synchronous Motors

Authors: Emir Alaca, Hasbi Apaydin, Rohullah Rahmatullah, Necibe Fusun Oyman Serteller

Abstract:

In recent years, Permanent Magnet Synchronous Motors (PMSMs) have found extensive applications in various industrial sectors, including electric vehicles, wind turbines, and robotics, due to their high performance and low losses. Accurate mathematical modeling of PMSMs is crucial for advanced studies of electric machines. To enhance the effectiveness of graduate-level education, incorporating virtual or real experiments becomes essential to reinforce acquired knowledge, and virtual laboratories have gained popularity as cost-effective alternatives to physical testing that mitigate the risks associated with electrical machine experiments. This study presents a MATLAB-based Graphical User Interface (GUI) for PMSMs. The GUI offers a visual interface that allows users to observe variations in the motor outputs corresponding to different input parameters, and it enables users to explore healthy motor conditions as well as the effects of short-circuit faults in one phase winding. Additionally, the interface includes menus through which users can access the equivalent circuits of the motor and gain hands-on experience with the mathematical equations used in synchronous motor calculations. The primary objective of this paper is to enhance the learning experience of graduate and doctoral students by providing a GUI-based approach to laboratory studies. This interactive platform empowers students to examine and analyze motor outputs by manipulating input parameters, facilitating a deeper understanding of PMSM operation and control.
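
A minimal sketch of the GUI pattern described, a slider whose value change re-plots a motor output, is given below using uifigure components; the study's GUI layout and full PMSM model are not reproduced, and the toy torque law Te = (3/2)·p·λm·iq (surface PMSM) uses assumed parameters.

```matlab
% Slider-driven plot: change the q-axis current range and redraw the torque.
fig = uifigure('Name', 'PMSM demo');
ax  = uiaxes(fig, 'Position', [20 90 360 280]);
sld = uislider(fig, 'Position', [60 50 300 3], 'Limits', [0 20]);
sld.ValueChangedFcn = @(src, evt) plotTorque(ax, src.Value);

function plotTorque(ax, iqMax)
    p = 4; lambda_m = 0.1;              % pole pairs, PM flux linkage -- assumed
    iq = linspace(0, iqMax, 50);
    plot(ax, iq, 1.5*p*lambda_m*iq);    % electromagnetic torque vs q-axis current
    xlabel(ax, 'i_q (A)'); ylabel(ax, 'T_e (N m)');
end
```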

Keywords: permanent magnet synchronous motor, mathematical modelling, education tools, winding inter-turn fault

Procedia PDF Downloads 51
129 Development of a Computer Aided Diagnosis Tool for Brain Tumor Extraction and Classification

Authors: Fathi Kallel, Abdulelah Alabd Uljabbar, Abdulrahman Aldukhail, Abdulaziz Alomran

Abstract:

The brain is an important organ in the body, since it is responsible for the majority of actions, such as vision and memory. However, different diseases, such as Alzheimer's disease and tumors, can affect the brain and lead to partial or full disorder. Regular examinations are necessary as a preventive measure and can help doctors detect possible trouble early and start the appropriate treatment, especially in the case of brain tumors. Different imaging modalities are used for the diagnosis of brain tumors; the most powerful and widely used modality is Magnetic Resonance Imaging (MRI). MRI images are analyzed by doctors in order to locate any tumor in the brain and determine the appropriate treatment. Diverse image processing methods have also been proposed to help doctors identify and analyze tumors; in fact, a large number of Computer Aided Diagnosis (CAD) tools, including developed image processing algorithms, have been proposed and are exploited by doctors as a second opinion when analyzing and identifying brain tumors. In this paper, we propose a new advanced CAD for brain tumor identification, classification, and feature extraction. Our proposed CAD includes three main parts. First, the brain MRI is loaded. Second, a robust technique for brain tumor extraction is applied, based on both the Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA). The DWT is characterized by its multiresolution analytic property, which is why it was applied to the MRI images at different decomposition levels for feature extraction; nevertheless, this technique suffers from a main drawback, since it necessitates huge storage and is computationally expensive. To decrease the dimensions of the feature vector and the computing time, the PCA technique is applied. In the last stage, based on the extracted features, the brain tumor is classified as either benign or malignant using the Support Vector Machine (SVM) algorithm. A CAD tool for brain tumor detection and classification, including all the above-mentioned stages, was designed and developed using the MATLAB GUIDE user interface.
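
A minimal sketch of the DWT, PCA, SVM pipeline is given below; the dataset folder, labels, wavelet choice, decomposition level and number of retained components are assumptions, and registered, equal-size slices are assumed.

```matlab
% DWT features -> PCA reduction -> SVM classification, all placeholders.
files = dir('mri/*.png');                        % hypothetical dataset folder
labels = categorical(repmat(["benign"; "malignant"], numel(files)/2, 1)); % placeholder
feats = [];
for i = 1:numel(files)
    I = im2double(imread(fullfile(files(i).folder, files(i).name)));
    if size(I,3) == 3, I = rgb2gray(I); end
    [C, S] = wavedec2(I, 2, 'db4');
    A2 = appcoef2(C, S, 'db4', 2);               % level-2 approximation coefficients
    feats(i, :) = A2(:)';                        %#ok<AGROW>
end
[~, score] = pca(feats);                         % reduce feature dimensionality
Xr = score(:, 1:min(20, size(score, 2)));        % keep first 20 PCs -- assumed
mdl = fitcsvm(Xr, labels, 'KernelFunction', 'rbf', 'Standardize', true);
cv = crossval(mdl, 'KFold', 5);
fprintf('CV accuracy = %.1f%%\n', 100*(1 - kfoldLoss(cv)));
```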

Keywords: MRI, brain tumor, CAD, feature extraction, DWT, PCA, classification, SVM

Procedia PDF Downloads 249
128 Study on Capability of the Octocopter Configurations in Finite Element Analysis Simulation Environment

Authors: Jeet Shende, Leonid Shpanin, Misko Abramiuk, Mattew Goodwin, Nicholas Pickett

Abstract:

Energy harvesting on board the Unmanned Aerial Vehicle (UAV) is one of the most rapidly growing emerging technologies. It consists of collecting small amounts of energy, for different applications, from unconventional sources that are incidental to the operation of the parent system or device. Different energy harvesting techniques have already been investigated in multirotor drones, where the energy collected comes from the system's surrounding ambient environment and typically involves the conversion of solar, kinetic, or thermal energy into electrical energy. Energy harvesting from the vibrating propeller using piezoelectric components inside the propeller has also been shown to be feasible. However, the impact of this technology on UAV flight performance has not been investigated. In this contribution, the impact on multirotor drone operation is investigated for different flight control configurations that support the efficient performance of propeller-vibration energy harvesting. The industrially made MANTIS X8-PRO octocopter frame kit was used to explore octocopter operation, modelled using the SolidWorks 3D CAD package for simulation studies. The octocopter flight control strategy is developed through the integration of the SolidWorks 3D CAD software and the MATLAB/Simulink simulation environment, for evaluation of the octocopter's behaviour under different simulated flight modes and octocopter geometries. An analysis of the two modelled octocopter geometries and their flight performance is presented via graphical representation of the simulated parameters. The possibility of omitting the landing gear from the octocopter geometry is demonstrated. The study evaluates the octocopter's flight control technique and its impact on the energy harvesting mechanism developed on board. Finite Element Analysis (FEA) simulation results of the modelled octocopter in operation are presented, exploring the performance of the octocopter's flight control and structural configurations. Applications of both octocopter structures and their flight control strategy are discussed.
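As an illustration of one standard ingredient of a flight control strategy for an eight-rotor frame, the sketch below shows a generic pseudo-inverse control allocation (motor mixer) for a coaxial X8 layout. The arm geometry, spin directions, sign conventions, and thrust/drag coefficients are illustrative assumptions; they are not taken from the MANTIS X8-PRO specification or from the paper's Simulink model.

% Pseudo-inverse control allocation for a coaxial X8 (all values illustrative).
armAngle = deg2rad([45 135 225 315]);   % four arms in an X configuration
L = 0.35; kT = 1.0; kQ = 0.02;          % arm length [m], thrust and drag coeff.

B = zeros(4, 8);                        % maps 8 motor thrusts -> [T; roll; pitch; yaw]
for a = 1:4
    for pair = 0:1                      % top (0) and bottom (1) motor on each arm
        m = 2*a - 1 + pair;
        spin = (-1)^(a + pair);         % alternating rotor spin directions
        B(:, m) = [kT;                              % collective thrust
                   kT * L * sin(armAngle(a));       % roll moment
                  -kT * L * cos(armAngle(a));       % pitch moment
                   spin * kQ];                      % yaw reaction moment
    end
end

cmd = [20; 0.5; 0; 0];                  % desired [thrust N; roll; pitch; yaw]
u = pinv(B) * cmd;                      % least-norm per-motor thrust commands
disp(u')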

Keywords: energy harvesting, flight control modelling, object modeling, unmanned aerial vehicle

Procedia PDF Downloads 76
127 Algorithm for Automatic Real-Time Electrooculographic Artifact Correction

Authors: Norman Sinnigen, Igor Izyurov, Marina Krylova, Hamidreza Jamalabadi, Sarah Alizadeh, Martin Walter

Abstract:

Background: EEG is a non-invasive brain activity recording technique with a high temporal resolution that allows real-time applications, such as neurofeedback. However, EEG data are susceptible to electrooculographic (EOG) and electromyographic (EMG) artifacts (e.g., jaw clenching, teeth squeezing, and forehead movements). Due to their non-stationary nature, these artifacts greatly obscure the information and power spectrum of EEG signals. Many EEG artifact correction methods are too time-consuming even when applied to low-density EEG, and have focused on offline processing or on handling a single type of EEG artifact. A software-only real-time method for correcting multiple types of artifacts in high-density EEG remains a significant challenge. Methods: We demonstrate an improved approach for automatic real-time correction of EOG and EMG artifacts in EEG. The method was tested on three healthy subjects using 64 EEG channels (Brain Products GmbH) and a sampling rate of 1,000 Hz. Captured EEG signals were imported into MATLAB via the Lab Streaming Layer interface, which allows buffering of EEG data. EMG artifacts were detected by channel variance with adaptive thresholding and corrected by channel interpolation. Real-time independent component analysis (ICA) was applied to correct EOG artifacts. Results: Our results demonstrate that the algorithm effectively reduces EMG artifacts, such as jaw clenching, teeth squeezing, and forehead movements, and EOG artifacts (horizontal and vertical eye movements) in high-density EEG while preserving the information in brain neuronal activity. The average computation time of EOG and EMG artifact correction for 80 s (80,000 data points) of 64-channel data is 300–700 ms, depending on the convergence of ICA and on the type and intensity of the artifact. Conclusion: An automatic EEG artifact correction algorithm based on channel variance, adaptive thresholding, and ICA improves high-density EEG recordings contaminated with EOG and EMG artifacts in real-time.
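A simplified sketch of the variance-based EMG detection and interpolation step is shown below, using base MATLAB only. The window length and threshold rule are assumptions, the mean-of-clean-channels replacement is a crude stand-in for true neighbor interpolation, and the real-time ICA stage for EOG is omitted.

% Variance-based EMG detection with an adaptive threshold (values illustrative).
fs = 1000;                              % sampling rate [Hz]
eeg = randn(64, 2*fs);                  % placeholder buffer: 64 channels, 2 s
win = eeg(:, end-fs+1:end);             % most recent 1-s analysis window

chanVar = var(win, 0, 2);               % per-channel variance
medV = median(chanVar);
madV = median(abs(chanVar - medV));     % robust spread estimate (MAD)
bad = chanVar > medV + 3*1.4826*madV;   % adaptive threshold on channel variance

% Replace flagged channels with the mean of the clean channels -- a crude
% stand-in for spherical-spline interpolation over neighboring electrodes.
win(bad, :) = repmat(mean(win(~bad, :), 1), nnz(bad), 1);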

Keywords: EEG, muscle artifacts, ocular artifacts, real-time artifact correction, real-time ICA

Procedia PDF Downloads 178
126 The Comparative Electroencephalogram Study: Children with Autistic Spectrum Disorder and Healthy Children Evaluate Classical Music in Different Ways

Authors: Galina Portnova, Kseniya Gladun

Abstract:

Twenty-seven children with ASD (average age 6.13 years, average CARS score 32.41) and 25 healthy children (average age 6.35 years) participated in our EEG experiment. Six types of musical stimulation were presented, including Gluck, Javier-Naida, Kenny G, Chopin, and other classical musical compositions. Children with autism showed an orientation reaction to the music and gave behavioral responses to the different types of music; some of them were able to rate the stimuli on scales. The participants were instructed to remain calm. Brain electrical activity was recorded using a 19-channel EEG recording device, 'Encephalan' (Russia, Taganrog). EEG epochs lasting 150 s were analyzed using the EEGLab plugin for MATLAB (MathWorks Inc.). For the EEG analysis, we used the Fast Fourier Transform (FFT) and analyzed the peak alpha frequency (PAF), the correlation dimension D2, and the stability of rhythms. To express the dynamics of desynchronization of the different rhythms, we calculated the envelope of the EEG signal, over the whole frequency range and through a set of narrowband filters, using the Hilbert transform. Our data showed that healthy children exhibited similar EEG spectral changes during musical stimulation and could describe the feelings induced by the musical fragments. The exception was the 'Chopin. Prelude' fragment (no. 6). This musical fragment induced different subjective feelings, behavioral reactions, and EEG spectral changes in children with ASD and in healthy children. The correlation dimension D2 was significantly lower in children with ASD than in healthy children during musical stimulation. The Hilbert envelope frequency was reduced in all subjects during musical compositions 1, 3, 5, and 6 compared to the background. During musical fragments 2 and 4 (terrible), a lower Hilbert envelope frequency was observed only in children with ASD and correlated with the severity of the disease. The alpha peak frequency was lower than in the background during this musical composition in healthy children and, conversely, higher in children with ASD.
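The sketch below illustrates how a Hilbert envelope frequency of the kind reported here might be computed for one narrowband-filtered channel. It assumes the Signal Processing Toolbox (butter, filtfilt, hilbert); the sampling rate, band edges, and placeholder signal are illustrative assumptions, not the study's recording parameters.

% Hilbert envelope and envelope frequency for one narrowband channel.
fs = 250;                                % assumed sampling rate [Hz]
t = 0:1/fs:150;                          % 150-s epoch, as in the study
x = randn(size(t));                      % placeholder single-channel EEG

[b, a] = butter(4, [8 12]/(fs/2), 'bandpass');  % one narrowband (alpha) filter
xf = filtfilt(b, a, x);                  % zero-phase narrowband signal
env = abs(hilbert(xf));                  % Hilbert amplitude envelope

E = abs(fft(env - mean(env)));           % spectrum of the envelope
f = (0:numel(env)-1) * fs / numel(env);
[~, k] = max(E(2:floor(end/2)));         % dominant envelope frequency bin
envFreq = f(k + 1)                       % Hilbert envelope frequency [Hz]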

Keywords: electroencephalogram (EEG), emotional perception, ASD, musical perception, childhood Autism rating scale (CARS)

Procedia PDF Downloads 284
125 Computer-Aided Ship Design Approach for Non-Uniform Rational Basis Spline Based Ship Hull Surface Geometry

Authors: Anu S. Nair, V. Anantha Subramanian

Abstract:

This paper presents a surface development and fairing technique that combines the features of a modern computer-aided design tool, namely the Non-Uniform Rational Basis Spline (NURBS), with an algorithm to obtain a rapidly faired hull form. Some of the older series-based designs give a sectional area distribution, such as the Wageningen-Lap series; others, such as FORMDATA, give more comprehensive offset data points. Nevertheless, this basic data still requires fairing to obtain an acceptable faired hull form. The method presented here takes a sectional area distribution as an example input and arrives at the faired form. Characteristic section shapes define any general ship hull form in the entrance, parallel mid-body, and run regions. The method defines a minimum number of control points at each section and, using the golden-section search or the bisection method, converges each section shape to the one with the prescribed sectional area with a minimized error in the area fit. The section shapes are then combined to evolve the faired surface by NURBS, which typically takes 20 iterations. The advantage of the method is that it is fast and robust and evolves the faired hull form through minimal iterations. The curvature criterion check for the hull lines shows the evolution of a smooth faired surface. The method is applicable to hull forms from any parent series, and the evolved form can be evaluated for hydrodynamic performance, as is done in more modern design practice. The method can handle complex shapes such as that of the bulbous bow. The surface patches developed fit together at their common boundaries with curvature continuity and a fairness check. The development is coded in MATLAB, and the example illustrates the method. The most important advantage is speed: the rapid iterative fairing of the hull form.
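The area-fit step can be illustrated with a bisection search on a single fullness parameter, as sketched below. The power-law section family and the dimensions are assumptions made purely for illustration; the paper's method operates on NURBS control points rather than on this toy parametrization.

% Bisection on a fullness exponent to match a prescribed sectional area.
targetArea = 42.0;                       % prescribed sectional area [m^2]
B = 10; T = 6;                           % beam and draft of the section [m]
z = linspace(0, T, 50);

% Toy section family: halfBreadth(z) = (B/2)*(z/T)^n; the enclosed area
% decreases monotonically as n grows (a finer, more V-shaped section).
sectionArea = @(n) 2 * trapz(z, (B/2) * (z/T).^n);

lo = 0.2; hi = 5;                        % bracket for the shape exponent
for it = 1:40
    n = (lo + hi)/2;
    if sectionArea(n) > targetArea       % area too large -> section too full
        lo = n;                          % increase n to make the section finer
    else
        hi = n;
    end
end
fprintf('n = %.4f, area error = %.3e m^2\n', n, sectionArea(n) - targetArea);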

Keywords: computer-aided design, methodical series, NURBS, ship design

Procedia PDF Downloads 169
124 Proportional and Integral Controller-Based Direct Current Servo Motor Speed Characterization

Authors: Adel Salem Bahakeem, Ahmad Jamal, Mir Md. Maruf Morshed, Elwaleed Awad Khidir

Abstract:

Direct Current (DC) servo motors, or simply DC motors, play an important role in many industrial applications, such as the manufacturing of plastics, precise positioning of equipment, and computer-controlled systems where feed speed control, position holding, and maintaining a constant desired output are critical. These parameters can be controlled with the help of control systems such as the Proportional Integral Derivative (PID) controller. The aim of the current work is to investigate the effects of proportional (P) and integral (I) controllers on the steady-state and transient response of a DC motor. The controller gains are varied to observe their effects on the error, damping, and stability of the steady-state and transient motor response. The investigation is conducted experimentally on a CE 110 servo trainer using the CE 120 analog PI controller, and theoretically using Simulink in MATLAB. Both the experimental and the theoretical work involve varying the integral controller gain to obtain the response to a steady-state input; varying, individually, the proportional and integral controller gains to obtain the response to a step input at a certain frequency; and theoretically obtaining the proportional and integral controller gains for desired values of the damping ratio and response frequency. Results reveal that the proportional controller helps reduce the steady-state and transient error between the input signal and the output response and makes the system more stable; it also speeds up the response of the system. The integral controller, on the other hand, eliminates the steady-state error but tends to destabilize the system, inducing oscillations, and is slow to eliminate the error. From the current work, it is desired to achieve a stable response of the servo motor, in terms of its angular velocity, subjected to steady-state and transient input signals by utilizing the strengths of both P and I controllers.
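A minimal sketch of the P-versus-PI comparison on a first-order motor model is given below, assuming the Control System Toolbox (tf, pid, feedback, step). The motor gain, time constant, and controller gains are illustrative assumptions, not the identified parameters of the CE 110 trainer.

% P-only versus PI speed control of a first-order DC motor model.
K = 2; tau = 0.5;                        % motor gain and time constant (assumed)
G = tf(K, [tau 1]);                      % armature voltage -> angular velocity

C_P  = pid(1.5, 0);                      % proportional controller only
C_PI = pid(1.5, 3.0);                    % proportional-plus-integral controller

step(feedback(C_P*G, 1), feedback(C_PI*G, 1), 3);  % closed-loop step responses
legend('P only (residual steady-state error)', 'PI (error eliminated)');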

Keywords: DC servo motor, proportional controller, integral controller, controller gain optimization, Simulink

Procedia PDF Downloads 110
123 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment, and to improve product quality. Information about the variability of different soil attributes within a field is essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, as conventional sampling is expensive and time-consuming. Adaptive sampling has proven to be an accurate and affordable technique for planning the site-specific management of agricultural inputs within a field. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify adaptive re-survey areas in the field. The original dataset was split into validation and calibration groups, and the calibration group was further divided into three sets with different measurement-pass intervals. A conditional simulation was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates using geostatistical techniques. For each set, the high-uncertainty areas were grouped using image segmentation in MATLAB, and the high- and low-uncertainty areas were separated. Finally, an adaptive re-survey was carried out over the high-uncertainty areas. Adding the adaptive re-survey significantly reduced the time required to resample the whole field and resulted in ECa estimates with minimal error. For the most widely spaced transects, the root mean square error (RMSE) obtained from the initial crude sampling survey was reduced after the adaptive re-survey to a value close to that obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was found to be 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling with spatial uncertainty models significantly mitigates sampling cost while maintaining the accuracy of the observations.
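The segmentation of high-uncertainty areas might look like the sketch below, which assumes the Image Processing Toolbox (imgaussfilt, bwareaopen, bwlabel, regionprops). The smoothed random field is a placeholder standing in for the standard-deviation map produced by the conditional simulation, and the threshold value is an assumption.

% Threshold-and-label segmentation of a simulated ECa uncertainty map.
sdMap = imgaussfilt(rand(100, 100), 5);        % placeholder std.-dev. map
sdMap = (sdMap - min(sdMap(:))) / (max(sdMap(:)) - min(sdMap(:)));  % rescale to [0,1]

bw = sdMap > 0.6;                              % flag high-uncertainty pixels
bw = bwareaopen(bw, 25);                       % drop tiny speckle regions
[labels, nRegions] = bwlabel(bw);              % separate candidate re-survey zones
stats = regionprops(labels, 'Area', 'Centroid');  % size and centre of each zone
fprintf('%d candidate re-survey areas identified\n', nRegions);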

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 132