Search results for: computational accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5479

3439 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, aid thermoregulation, impart resonance to the voice, among other roles. Thus, the real function of the MS is still uncertain. Furthermore, the MS anatomy is complex and varies from person to person. Many diseases may affect the development process of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, which enables a quantitative analysis. However, this is not always possible in the clinical routine, and when it is, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic. Additionally, manual methods present inter- and intraindividual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors’ institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for a region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, yielding the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. The linear regression showed a strong association and low dispersion between variables. The Bland-Altman analyses showed no significant differences between the analyzed methods. The Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
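
As a rough illustration of the agreement measures reported above, the sketch below (Python rather than the authors' Matlab® implementation, with synthetic volumes standing in for the 30 exams) computes the Jaccard similarity coefficient for a pair of binary masks and the Bland-Altman bias with 95% limits of agreement for paired volume measurements.

```python
import numpy as np

def jaccard(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Jaccard similarity between two binary segmentation masks."""
    a, m = auto_mask.astype(bool), manual_mask.astype(bool)
    intersection = np.logical_and(a, m).sum()
    union = np.logical_or(a, m).sum()
    return intersection / union if union else 1.0

def bland_altman(auto_vol: np.ndarray, manual_vol: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement for paired volumes."""
    diff = auto_vol - manual_vol
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Synthetic example: 30 exams with volumes in cm^3 (illustrative values only).
rng = np.random.default_rng(0)
manual = rng.uniform(10, 25, size=30)
auto = manual + rng.normal(0, 0.5, size=30)
bias, limits = bland_altman(auto, manual)
print(f"bias = {bias:.3f} cm^3, limits of agreement = {limits}")
```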

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 504
3438 Getting to Know the Enemy: Utilization of Phone Record Analysis Simulations to Uncover a Target’s Personal Life Attributes

Authors: David S. Byrne

Abstract:

The purpose of this paper is to understand how phone record analysis can enable identification of subjects in communication with a target of a terrorist plot. This study also sought to understand the advantages of implementing simulations to develop the skills of future intelligence analysts and to enhance national security. Through the examination of phone reports, which in essence consist of the call traffic of incoming and outgoing numbers (and not of listening to calls or reading the content of text messages), patterns can be uncovered that point toward members of a criminal group and the activities planned. Through temporal and frequency analysis, conclusions were drawn to offer insights into the identity of participants and the potential scheme being undertaken. The challenge lies in the accurate identification of the users of the phones in contact with the target. Often investigators rely on proprietary databases and open sources to accomplish this task; however, it is difficult to ascertain the accuracy of the information found. Thus, this paper poses two research questions: how effective are freely available web sources of information at determining the actual identification of callers? Secondly, does the identity of the callers enable an understanding of the lifestyle and habits of the target? The methodology for this research consisted of the analysis of the call detail records of the author’s personal phone activity spanning the period of a year, combined with the hypothetical premise that the owner of said phone was the leader of a terrorist cell. The goal was to reveal the identity of his accomplices and understand how his personal attributes can further paint a picture of the target’s intentions. The results of the study were interesting: nearly 80% of the calls were identified, with over a 75% accuracy rating, via data mining of open sources. The suspected terrorist’s inner circle was recognized, including relatives and potential collaborators, as well as financial institutions [money laundering], restaurants [meetings], a sporting goods store [purchase of supplies], and airlines and hotels [travel itinerary]. The outcome of this research showed the benefits of cellphone analysis without more intrusive and time-consuming methodologies, while remaining instrumental for potential surveillance, interviews, and developing probable cause for wiretaps. Furthermore, this research highlights the importance of building the skills of future intelligence analysts through phone record analysis via simulations; hands-on learning in this case study emphasizes the development of the competencies necessary to improve investigations overall.
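
The temporal and frequency analysis described above can be sketched with a few lines of pandas. The snippet below is a hypothetical illustration: the column names and the toy call detail records are invented for the example and do not reproduce the author's dataset.

```python
import pandas as pd

# Hypothetical call-detail-record layout: one row per call with the
# counterpart number, a timestamp, and the call duration in seconds.
cdr = pd.DataFrame({
    "number": ["555-0101", "555-0101", "555-0199", "555-0142", "555-0101"],
    "timestamp": pd.to_datetime([
        "2023-01-03 08:15", "2023-01-03 21:40", "2023-01-04 12:05",
        "2023-01-05 09:30", "2023-01-06 22:10"]),
    "duration_s": [120, 45, 300, 60, 15],
})

# Frequency analysis: which numbers are contacted most often and for how long.
frequency = (cdr.groupby("number")
                .agg(calls=("number", "size"), total_s=("duration_s", "sum"))
                .sort_values("calls", ascending=False))

# Temporal analysis: distribution of contacts by hour of day.
by_hour = cdr["timestamp"].dt.hour.value_counts().sort_index()

print(frequency)
print(by_hour)
```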

Keywords: hands-on learning, intelligence analysis, intelligence education, phone record analysis, simulations

Procedia PDF Downloads 17
3437 A Firefly Based Optimization Technique for Optimal Planning of Voltage Controlled Distributed Generators

Authors: M. M. Othman, Walid El-Khattam, Y. G. Hegazy, A. Y. Abdelaziz

Abstract:

This paper presents a method for finding the optimal location and capacity of dispatchable DGs connected to the distribution feeders for optimal planning at a specified power loss without violating the system's practical constraints. The distributed generation units in the proposed algorithm are modeled as voltage-controlled nodes with the flexibility to be converted to constant-power nodes in case of a reactive power limit violation. The proposed algorithm is implemented in MATLAB and tested on the IEEE 37-node feeder. The results, validated by comparison with those obtained from other competing methods, show the effectiveness, accuracy, and speed of the proposed method.
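
For readers unfamiliar with the firefly technique, the sketch below shows a minimal, generic firefly optimization loop over a box-constrained search space. The two-variable "power loss" surrogate, the bounds, and all parameter values are illustrative assumptions; the paper's actual objective, constraint handling, and voltage-controlled node model are not reproduced here.

```python
import numpy as np

def firefly_minimize(loss, bounds, n_fireflies=20, n_iter=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimal firefly algorithm for a box-constrained minimization problem."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_fireflies, len(lo)))
    f = np.array([loss(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:  # firefly j is brighter (lower loss)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    f[i] = loss(x[i])
    best = np.argmin(f)
    return x[best], f[best]

# Toy surrogate for "power loss" as a function of DG capacity and node index.
loss = lambda v: (v[0] - 1.5) ** 2 + (v[1] - 20) ** 2 / 100
bounds = np.array([[0.0, 5.0], [1.0, 37.0]])  # e.g. capacity (MW), node index
print(firefly_minimize(loss, bounds))
```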

Keywords: distributed generators, firefly technique, optimization, power loss

Procedia PDF Downloads 535
3436 Wind Speed Prediction Using Passive Aggregation Artificial Intelligence Model

Authors: Tarek Aboueldahab, Amin Mohamed Nassar

Abstract:

Wind energy is a fluctuating energy source, unlike conventional power plants; thus, it is necessary to accurately predict short-term wind speed in order to integrate wind energy into the electricity supply structure. To do so, we present a hybrid artificial intelligence model for short-term wind speed prediction based on passive aggregation of particle swarm optimization and neural networks. As a result, a clear improvement of the prediction accuracy is obtained compared to the standard artificial intelligence method.
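
A minimal sketch of the underlying idea, training a small neural network with particle swarm optimization for one-step-ahead wind speed prediction, is given below. Note that this uses plain PSO on a synthetic series; the passive aggregation variant proposed in the paper is not implemented here, and the network size, lags, and PSO parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy wind-speed series and lagged inputs for one-step-ahead prediction.
t = np.arange(300)
speed = 8 + 2 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 0.3, t.size)
X = np.column_stack([speed[i:i - 3] for i in range(3)])   # 3 lagged values
y = speed[3:]

def predict(weights, X):
    """Tiny 3-4-1 feedforward network; weights packed into one vector."""
    W1 = weights[:12].reshape(3, 4); b1 = weights[12:16]
    W2 = weights[16:20].reshape(4, 1); b2 = weights[20]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2).ravel() + b2

def mse(weights):
    return np.mean((predict(weights, X) - y) ** 2)

# Plain particle swarm optimization over the 21 network weights.
n_particles, dim = 30, 21
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()
for _ in range(200):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()
print("training MSE:", mse(gbest))
```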

Keywords: artificial intelligence, neural networks, particle swarm optimization, passive aggregation, wind speed prediction

Procedia PDF Downloads 453
3435 Open Jet Testing for Buoyant and Hybrid Buoyant Aerial Vehicles

Authors: A. U. Haque, W. Asrar, A. A. Omar, E. Sulaeman, J. S Mohamed Ali

Abstract:

Open jet testing is a valuable testing technique which provides the desired results with reasonable accuracy. It has been used in the past for airships and has recently been applied to hybrid ones, which derive more of their non-buoyant force from the wings, empennage, and fuselage. In the present review work, an effort has been made to review the challenges involved in open jet testing. In order to shed light on the application of this technique, the experimental results of two different configurations are presented. Although the aerodynamic results of such vehicles are unique to their own designs, they provide a starting point for planning any future testing. A few important testing areas which need more attention are also highlighted. Most hybrid buoyant aerial vehicles are unconventional in shape, and the experimental data generated are unique to each design.

Keywords: open jet testing, aerodynamics, hybrid buoyant aerial vehicles, airships

Procedia PDF Downloads 573
3434 CFD Effect of the Tidal Grating in Opposite Directions

Authors: N. M. Thao, I. Dolguntseva, M. Leijon

Abstract:

Flow blockages, which increase the flow velocity, are considered vital equipment for marine current energy conversion. However, the shape of these devices affects the energy extracted during operation. The present work investigates the effect of two configurations of a grating located upstream, convergent and divergent, on the water flow velocity. A computational fluid dynamics simulation studies the flow characteristics using the ANSYS Fluent solver for these specified arrangements of the grating. The results indicate distinct differences in flow velocity between the “convergent” and “divergent” grating placements in confined conditions. Furthermore, the velocity in the case of the convergent grating is higher than that of the divergent grating.

Keywords: marine current energy, converter, turbine grating, RANS simulation, water flow velocity

Procedia PDF Downloads 409
3433 Numerical Investigation of Flow Past in a Staggered Tube Bundle

Authors: Kerkouri Abdelkadir

Abstract:

Numerical calculations of turbulent flows are among the most prominent modern interests in various engineering applications. Given the difficulty of predicting, tracking, and studying such flows with computational fluid dynamics (CFD), in this paper we present a numerical study of the flow past a staggered tube bundle, using the CFD code ANSYS FLUENT with several turbulence models: the k-ε, k-ω, and SST approaches. The flow is modeled based on experimental studies. The predictions of mean velocities are in very good agreement with detailed LDA (Laser Doppler Anemometry) measurements performed at 8 stations along the depth of the array. The sizes of the recirculation zones behind the cylinders are also predicted. The simulations are conducted for a Reynolds number of 12858. The Reynolds number is set according to the experimental results.

Keywords: flow, tube bundle, ANSYS Fluent, CFD, turbulence, LDA, RANS (k-ε, k-ω, SST)

Procedia PDF Downloads 166
3432 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model

Authors: T. Thein, S. Kalyar Myo

Abstract:

Humans use visual information for understanding speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise and cross talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is to find a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one of them is lip reading. Moreover, it is well known that visual information about speech through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding underlying speech by processing the movement of the lips. Therefore, a lip reading system is one of the supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will build a lip reading system for Myanmar consonants: one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)). In the proposed system, there are three subsystems: the first is the lip localization system, which localizes the lips in the digital inputs; the next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition; and the final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with an Active Contour Model (ACM) will be used for lip movement feature extraction. A Support Vector Machine (SVM) classifier is used to find the class parameters and class numbers in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information of lip movements, which is useful for the visual speech of Myanmar languages. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons as a language-learning application. It can also be useful for normal-hearing persons in noisy environments or in conditions where they need to find out what was said by other people without hearing their voices.
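
A simplified sketch of the planned feature extraction and classification chain (2D-DCT features, LDA, and an SVM) is shown below. It assumes the lip region has already been localized, e.g., by the Active Contour Model, and uses synthetic 32x32 lip regions and an invented number of classes purely for illustration.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def dct_features(lip_roi: np.ndarray, k: int = 8) -> np.ndarray:
    """2D-DCT of a grayscale lip region, keeping the k x k low-frequency block."""
    coeffs = dctn(lip_roi, norm="ortho")
    return coeffs[:k, :k].ravel()

# Synthetic stand-in data: 40 lip regions (32x32) for 4 consonant classes.
rng = np.random.default_rng(0)
rois = rng.random((40, 32, 32))
labels = np.repeat(np.arange(4), 10)

X = np.array([dct_features(r) for r in rois])
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=3), SVC(kernel="rbf"))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```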

Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)

Procedia PDF Downloads 286
3431 A Variable Structural Control for a Flexible Lamina

Authors: Xuezhang Hou

Abstract:

A control problem for a flexible lamina, formulated by partial differential equations with viscoelastic boundary conditions, is studied in this paper. The problem is written in the standard form of a linear infinite-dimensional system in an appropriate energy Hilbert space. The semigroup approach of linear operators is adopted in investigating the well-posedness of the closed-loop system. A variable structural control for the system is proposed, and meanwhile an equivalent control method is applied to the thin plate system. A significant control-theoretic result is obtained: the thin plate can be approximated by an ideal sliding mode to any accuracy in terms of the semigroup approach.

Keywords: partial differential equations, flexible lamina, variable structural control, semigroup of linear operators

Procedia PDF Downloads 87
3430 An Ergonomic Handle Design for Instruments in Laparoscopic Surgery

Authors: Ramon Sancibrian, Carlos Redondo-Figuero, Maria C. Gutierrez-Diez, Esther G. Sarabia, Maria A. Benito-Gonzalez, Jose C. Manuel-Palazuelos

Abstract:

In this paper, the design and evaluation of a handle for laparoscopic surgery are presented. The design of the handle is based on ergonomic principles and seeks to avoid awkward postures for surgeons. The handle combines the so-called power grip and accurate grip in order to provide strength and accuracy in the performance of surgery. The handle is tested using both objective and subjective approaches. The objective approach uses motion capture techniques to obtain the angles of the forearm, arm, wrist, and hand. The muscular effort is obtained with electromyography electrodes. On the other hand, a subjective survey has been carried out using questionnaires. Results confirm that the handle is preferred by the majority of the surgeons.

Keywords: laparoscopic surgery, ergonomics, mechanical design, biomechanics

Procedia PDF Downloads 502
3429 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify the minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters containing different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic-based methods, cost-based methods, and ensembles of classifiers. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing the majority class examples is generally not recommended. Existing methods available for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing between-class imbalance and within-class imbalance simultaneously eliminates the biases of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid for increasing the accuracy of the classifier. In this study, a neural network is used as the classifier, since it is one such classifier where the total error is minimized, and removing the between-class imbalance and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.
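
The core idea, detecting sub-clusters with model-based clustering and oversampling the smaller ones more heavily, can be sketched as follows. This is a simplified illustration, not the authors' method: the BIC-based component selection, the Gaussian sampling, and the allocation rule are assumptions, and the Lowner-John ellipsoid adaptation for unseen test data is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_aware_oversample(X_min, n_target, max_clusters=5, seed=0):
    """Oversample a minority class per sub-cluster found by a Gaussian mixture.

    Sub-clusters are detected with model-based clustering (BIC model selection),
    and new points are drawn from each fitted component so that smaller
    sub-clusters receive proportionally more synthetic examples.
    Assumes n_target >= len(X_min).
    """
    rng = np.random.default_rng(seed)
    models = [GaussianMixture(k, random_state=seed).fit(X_min)
              for k in range(1, min(max_clusters, len(X_min)) + 1)]
    gmm = min(models, key=lambda m: m.bic(X_min))
    labels = gmm.predict(X_min)
    sizes = np.bincount(labels, minlength=gmm.n_components)
    need = max(n_target - len(X_min), 0)
    inv = np.where(sizes > 0, 1.0 / np.maximum(sizes, 1), 0.0)
    alloc = np.floor(need * inv / inv.sum()).astype(int)
    samples = [rng.multivariate_normal(gmm.means_[k], gmm.covariances_[k], alloc[k])
               for k in range(gmm.n_components) if alloc[k] > 0]
    return np.vstack([X_min] + samples) if samples else X_min

# Example: 2-D minority class with two sub-clusters of unequal size.
rng = np.random.default_rng(1)
X_min = np.vstack([rng.normal([0, 0], 0.3, (40, 2)),
                   rng.normal([3, 3], 0.3, (10, 2))])
print(cluster_aware_oversample(X_min, n_target=120).shape)
```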

Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
3428 Nitrogen Effects on Ignition Delay Time in Supersonic Premixed and Diffusion Flames

Authors: A. M. Tahsini

Abstract:

A computational study of two-dimensional supersonic reacting hydrogen-air flows is performed to investigate the nitrogen effects on ignition delay time for premixed and diffusion flames. The chemical reaction is treated using detailed kinetics, and the advection upstream splitting method is used to calculate the numerical inviscid fluxes. The results show that only in the stoichiometric condition, for both premixed and diffusion flames, is there a monotone dependency of the ignition delay time on the nitrogen addition. In other situations, the optimal condition from the ignition viewpoint should be found using numerical investigations.

Keywords: diffusion flame, ignition delay time, mixing layer, numerical simulation, premixed flame, supersonic flow

Procedia PDF Downloads 463
3427 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach

Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier

Abstract:

Emotion plays a key role in many applications, such as healthcare, where it helps to capture patients’ emotional behavior. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data. The existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim’s cube, which is a 3-dimensional projection of emotions. Monoamine neurotransmitters are a type of chemical messenger in the brain that transmit signals on perceiving emotions. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The emotion representations learnt by the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
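
The mapping of learned representations onto Lovheim's cube via a three-component PCA can be sketched as below. The random 128-dimensional embeddings are a stand-in for the Emo-CNN penultimate-layer activations (the dimensionality is an assumed placeholder), so the snippet only illustrates the projection step, not the trained network or the stress model itself.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for Emo-CNN activations: one embedding per utterance (the real
# embeddings would come from the trained network).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 128))

# Three-component PCA: the resulting axes are interpreted as the coordinates
# of Lovheim's cube, on which the stress model then operates.
pca = PCA(n_components=3)
cube_coords = pca.fit_transform(embeddings)
print(cube_coords.shape, pca.explained_variance_ratio_)
```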

Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube

Procedia PDF Downloads 156
3426 Comparison of the Effectiveness of Tree Algorithms in Classification of Spongy Tissue Texture

Authors: Roza Dzierzak, Waldemar Wojcik, Piotr Kacejko

Abstract:

Analysis of the texture of medical images consists of determining the parameters and characteristics of the examined tissue. The main goal is to assign the analyzed area to one of two basic groups: healthy tissue or tissue with pathological changes. CT images of the thoracic and lumbar spine from 15 healthy patients and 15 with confirmed osteoporosis were used for the analysis. As a result, 120 samples with dimensions of 50x50 pixels were obtained. The set of features was derived from the histogram, gradient, run-length matrix, co-occurrence matrix, autoregressive model, and Haar wavelet. As a result of the image analysis, 290 textural feature descriptors were obtained. The dimension of the feature space was reduced by the use of three selection methods: the Fisher coefficient (FC), mutual information (MI), and the minimization of classification error probability combined with the average correlation coefficients between the chosen features (POE + ACC). Each of them returned ten features occupying the initial places in the ranking devised according to its own coefficient. The Fisher coefficient and mutual information selections yielded the same features arranged in a different order. In both rankings, the 50% percentile (Perc.50%) was found in first place. The next selected features come from the co-occurrence matrix. The sets of features selected in the selection process were evaluated using six classification tree methods: decision stump (DS), Hoeffding tree (HT), logistic model trees (LMT), random forest (RF), random tree (RT), and reduced error pruning tree (REPT). In order to assess the accuracy of the classifiers, the following parameters were used: overall classification accuracy (ACC), true positive rate (TPR, classification sensitivity), true negative rate (TNR, classification specificity), positive predictive value (PPV), and negative predictive value (NPV). Taking the classification results into account, the best results were obtained for the Hoeffding tree and logistic model trees classifiers using the set of features selected by the POE + ACC method. In the case of the Hoeffding tree classifier, the highest values of three parameters were obtained: ACC = 90%, TPR = 93.3%, and PPV = 93.3%. Additionally, the values of the other two parameters, i.e., TNR = 86.7% and NPV = 86.6%, were close to the maximum values obtained for the LMT classifier. In the case of the logistic model trees classifier, the same ACC value was obtained (ACC = 90%) together with the highest values of TNR = 88.3% and NPV = 88.3%. The values of the other two parameters remained at a level close to the highest: TPR = 91.7% and PPV = 91.6%. The results obtained in the experiment show that the use of classification trees is an effective method for the classification of texture features. This allows identifying the condition of the spongy tissue for healthy cases and those with osteoporosis.
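
The evaluation parameters listed above follow directly from the binary confusion matrix; a small sketch is given below, with an invented split of false positives and false negatives used only to exercise the function.

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """ACC, TPR, TNR, PPV and NPV from binary labels (1 = osteoporosis)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "ACC": (tp + tn) / (tp + tn + fp + fn),
        "TPR": tp / (tp + fn),   # sensitivity
        "TNR": tn / (tn + fp),   # specificity
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Example: 60 test samples, 30 per class, with a few misclassifications.
y_true = np.array([1] * 30 + [0] * 30)
y_pred = y_true.copy()
y_pred[[2, 5]] = 0             # two false negatives
y_pred[[31, 40, 45, 50]] = 1   # four false positives
print(binary_metrics(y_true, y_pred))
```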

Keywords: classification, feature selection, texture analysis, tree algorithms

Procedia PDF Downloads 180
3425 Reliability Based Topology Optimization: An Efficient Method for Material Uncertainty

Authors: Mehdi Jalalpour, Mazdak Tootkaboni

Abstract:

We present a computationally efficient method for reliability-based topology optimization under material property uncertainty, where the properties are assumed to be lognormally distributed and correlated within the domain. Computational efficiency is achieved by estimating the response statistics with second-order stochastic perturbation, using these statistics to fit an appropriate distribution that follows the empirical distribution of the response, and employing an efficient gradient-based optimizer. The proposed algorithm is utilized for the design of new structures, and the changes in the optimized topology are discussed for various levels of target reliability and correlation strength. Predictions were verified through comparison with results obtained using Monte Carlo simulation.

Keywords: material uncertainty, stochastic perturbation, structural reliability, topology optimization

Procedia PDF Downloads 606
3424 Developing an AI-Driven Application for Real-Time Emotion Recognition from Human Vocal Patterns

Authors: Sayor Ajfar Aaron, Mushfiqur Rahman, Sajjat Hossain Abir, Ashif Newaz

Abstract:

This study delves into the development of an artificial intelligence application designed for real-time emotion recognition from human vocal patterns. Utilizing advanced machine learning algorithms, including deep learning and neural networks, the paper highlights both the technical challenges and potential opportunities in accurately interpreting emotional cues from speech. Key findings demonstrate the critical role of diverse training datasets and the impact of ambient noise on recognition accuracy, offering insights into future directions for improving robustness and applicability in real-world scenarios.

Keywords: artificial intelligence, convolutional neural network, emotion recognition, vocal patterns

Procedia PDF Downloads 57
3423 Aerodynamic Analysis of a Frontal Deflector for Vehicles

Authors: C. Malça, N. Alves, A. Mateus

Abstract:

This work was one of the tasks of the Manufacturing2Client project, whose objective was to develop a frontal deflector to be commercialized in the automotive industry, using new design and manufacturing methods. In this task, in particular, it was proposed to develop the ability to computationally predict the aerodynamic influence of the flow around vehicles, in an effort to reduce fuel consumption in vehicles from class 3 to 8. With this aim, two deflector models were developed and their aerodynamic performance analyzed. The aerodynamic study was done using the Computational Fluid Dynamics (CFD) software Ansys CFX and allowed the calculation of the drag coefficient caused by the vehicle motion for the different configurations considered. Moreover, the reduction of diesel consumption and carbon dioxide (CO2) emissions associated with the optimized deflector geometry could be assessed.

Keywords: aerodynamic analysis, CFD, CO2 emissions, drag coefficient, frontal deflector, fuel consumption

Procedia PDF Downloads 407
3422 Concentric Circle Detection based on Edge Pre-Classification and Extended RANSAC

Authors: Zhongjie Yu, Hancheng Yu

Abstract:

In this paper, we propose an effective method to detect concentric circles with imperfect edges. First, the gradient of each edge pixel is coded, and a 2-D lookup table is built to speed up normal generation. Then an accumulator is used to estimate the rough center and collect plausible edges of concentric circles through gradient and distance. Later, a contour-based step, which uses the contour and edge intersections, pre-classifies the edges. Finally, the extended RANSAC method is used to find all the candidate circles. The center of the concentric circles is determined by the two circles with the highest concentricity. Experimental results demonstrate that the proposed method achieves both good performance and accuracy in the detection of concentric circles.
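
A bare-bones version of the RANSAC stage, fitting a single circle from three-point samples of noisy edge points, is sketched below. The sample count, tolerance, and synthetic edge points are illustrative assumptions; the gradient coding, lookup table, and edge pre-classification steps of the proposed method are not reproduced.

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumscribed circle (center, radius) through three edge points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if abs(d) < 1e-12:
        return None  # collinear points
    s1, s2, s3 = x1**2 + y1**2, x2**2 + y2**2, x3**2 + y3**2
    cx = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    cy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return (cx, cy), np.hypot(x1 - cx, y1 - cy)

def ransac_circle(points, n_iter=500, tol=1.5, seed=0):
    """Fit one circle to noisy edge points with a basic RANSAC loop."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        fit = circle_from_3pts(*sample)
        if fit is None:
            continue
        (cx, cy), r = fit
        dist = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        inliers = np.sum(dist < tol)
        if inliers > best_inliers:
            best, best_inliers = fit, inliers
    return best, best_inliers

# Synthetic edge points on a circle of radius 40 centered at (100, 100).
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([100 + 40 * np.cos(theta), 100 + 40 * np.sin(theta)])
pts += rng.normal(0, 0.5, pts.shape)
print(ransac_circle(pts))
```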

Keywords: concentric circle detection, gradient, contour, edge pre-classification, RANSAC

Procedia PDF Downloads 131
3421 Face Recognition Using Discrete Orthogonal Hahn Moments

Authors: Fatima Akhmedova, Simon Liao

Abstract:

One of the most critical decision points in the design of a face recognition system is the choice of an appropriate face representation. Effective feature descriptors are expected to convey sufficient, invariant, and non-redundant facial information. In this work, we propose a set of Hahn moments as a new approach for feature description. Hahn moments have been widely used in image analysis due to their invariance, non-redundancy, and ability to extract features either globally or locally. To assess the applicability of Hahn moments to face recognition, we conduct two experiments on the Olivetti Research Laboratory (ORL) database and the University of Notre Dame (UND) X1 biometric collection. Fusion of the global features with the features from local facial regions is used as input to a conventional k-NN classifier. The method achieves a recognition accuracy of 93% correctly recognized subjects for the ORL database and 94% for the UND database.

Keywords: face recognition, Hahn moments, recognition-by-parts, time-lapse

Procedia PDF Downloads 377
3420 Modeling and Control of a 4DoF Robotic Assistive Device for Hand Rehabilitation

Authors: Christopher Spiewak, M. R. Islam, Mohammad Arifur Rahaman, Mohammad H. Rahman, Roger Smith, Maarouf Saad

Abstract:

For those who have lost the ability to move their hand, going through repetitious motions with the assistance of a therapist is the main method of recovery. We have developed a robotic assistive device to rehabilitate hand motions in place of the traditional therapy. The developed assistive device (RAD-HR) comprises four degrees of freedom, enables basic movements and hand function, and assists in supporting the hand during rehabilitation. We used a nonlinear computed torque control technique to control the RAD-HR. The accuracy of the controller was evaluated in simulations (MATLAB/Simulink environment). To assess the robustness of the controller, external disturbances representing modelling uncertainty (±10% of the joint torques) were added to each joint.
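
The nonlinear computed torque law used for the controller can be sketched in a few lines. The inertia, Coriolis, and gravity terms below are placeholder values (the real M, C, and G would come from the RAD-HR dynamic model), and the PD gains are illustrative assumptions; the ±10% torque disturbance mirrors the robustness check described above.

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, G, Kp, Kd):
    """Nonlinear computed torque control law for an n-DoF manipulator:
    tau = M(q) (ddq_des + Kd*(dq_des - dq) + Kp*(q_des - q)) + C(q, dq) dq + G(q)
    """
    e, de = q_des - q, dq_des - dq
    return M @ (ddq_des + Kd @ de + Kp @ e) + C @ dq + G

# Toy 4-DoF example with placeholder dynamics terms (illustrative values only).
n = 4
M = np.diag([0.8, 0.6, 0.4, 0.2])          # inertia matrix
C = np.zeros((n, n))                        # Coriolis/centrifugal matrix
G = np.array([1.2, 0.8, 0.3, 0.1])          # gravity vector
Kp, Kd = 100 * np.eye(n), 20 * np.eye(n)    # PD gains

q, dq = np.zeros(n), np.zeros(n)
q_des = np.radians([30, 20, 10, 5])
dq_des, ddq_des = np.zeros(n), np.zeros(n)
tau = computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, G, Kp, Kd)

# Robustness check from the abstract: +10% torque disturbance on each joint.
print(tau, tau * 1.10)
```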

Keywords: biorobotics, rehabilitation, robotic assistive device, exoskeleton, nonlinear control

Procedia PDF Downloads 480
3419 Morphological Analysis of English L1-Persian L2 Adult Learners’ Interlanguage: From the Perspective of SLA Variation

Authors: Maassoumeh Bemani Naeini

Abstract:

Studies on interlanguage have long been engaged in describing the phenomenon of variation in SLA. Pursuing the same goal, and particularly addressing the role of linguistic features, this study describes the use of Persian morphology in the interlanguage of two adult English-speaking learners of Persian L2. Taking the general approach of a combination of contrastive analysis, error analysis, and interlanguage analysis, this study focuses on the identification and prediction of some possible instances of transfer from English L1 to Persian L2 across six elicitation tasks, aiming to investigate whether any contextual features may variably influence the learners’ order of morpheme accuracy in the areas of copula, possessives, articles, demonstratives, plural forms, personal pronouns, and genitive cases. Results describe the existence of task variation in the interlanguage system of Persian L2 learners.

Keywords: English L1, Interlanguage Analysis, Persian L2, SLA variation

Procedia PDF Downloads 317
3418 A Microwave Heating Model for Endothermic Reaction in the Cement Industry

Authors: Sofia N. Gonçalves, Duarte M. S. Albuquerque, José C. F. Pereira

Abstract:

Microwave technology has been gaining importance in contributing to decarbonization processes in high-energy-demand industries. Despite the several numerical models presented in the literature, a proper Verification and Validation exercise is still lacking. This is important and required to evaluate the accuracy and adequacy of the physical process model. Another issue addressed is impedance matching, which is an important mechanism used in microwave experiments to increase electromagnetic efficiency. Such a mechanism is not available in current computational tools, thus requiring an external numerical procedure. A numerical model was implemented to study the continuous processing of limestone with microwave heating. This process requires the material to be heated to a certain temperature that will prompt a highly endothermic reaction. Both a 2D and a 3D model were built in COMSOL Multiphysics to solve the two-way coupling between the Maxwell and energy equations, along with the coupling between both heat transfer phenomena and the limestone endothermic reaction. The 2D model was used to study and evaluate the required numerical procedure, also serving as a benchmark test that allows other authors to implement impedance matching procedures. To achieve this goal, a controller built in MATLAB was used to continuously match the cavity impedance and predict the required energy for the system, thus successfully avoiding energy inefficiencies. The 3D model reproduces realistic results and therefore supports the main conclusions of this work. Limestone was modeled as a continuous flow under the transport of concentrated species, whose material and kinetic properties were taken from the literature. Verification and Validation of the coupled model was carried out separately from the chemical kinetic model. The chemical kinetic model was found to correctly describe the chosen kinetic equation by comparing numerical results with experimental data. A solution verification was made for the electromagnetic interface, where second-order and fourth-order accurate schemes were found for linear and quadratic elements, respectively, with a numerical uncertainty lower than 0.03%. Regarding the coupled model, it was demonstrated that the numerical error would diverge for the heat transfer interface with the mapped mesh. Results showed numerical stability for the triangular mesh, and the numerical uncertainty was less than 0.1%. This study evaluated the influence of limestone velocity, heat transfer, and load on thermal decomposition and overall process efficiency. The velocity and heat transfer coefficient were studied with the 2D model, while different loads of material were studied with the 3D model. Both models proved to be highly unstable when solving non-linear temperature distributions. High-velocity flows exhibited a propensity to thermal runaways, and the thermal efficiency showed a tendency to stabilize for the higher velocities and higher filling ratios. Microwave efficiency exhibited an optimal velocity for each heat transfer coefficient, pointing out that electromagnetic efficiency is a consequence of energy distribution uniformity. The 3D results indicated inefficient development of the electric field for low filling ratios. Thermal efficiencies higher than 90% were found for the higher loads, and microwave efficiencies up to 75% were accomplished. The 80% fill ratio was demonstrated to be the optimal load, with an associated global efficiency of 70%.

Keywords: multiphysics modeling, microwave heating, verification and validation, endothermic reactions modeling, impedance matching, limestone continuous processing

Procedia PDF Downloads 140
3417 CFD Simulations to Study the Cooling Effects of Different Greening Modifications

Authors: An-Shik Yang, Chih-Yung Wen, Chiang-Ho Cheng, Yu-Hsuan Juan

Abstract:

The objective of this study is to conduct computational fluid dynamics (CFD) simulations for evaluating the cooling efficacy of vegetation planted in a public park in Taipei, Taiwan. In probing the impacts of park renewal, by means of adding three pavilions and supplementary green areas, on the urban microclimate, the simulated results revealed that the park with a higher green coverage ratio (GCR) tended to experience a better cooling effect. These findings can be used to explore the effects of different greening modifications on urban environments for achieving effective thermal comfort in urban public spaces.

Keywords: CFD simulations, Green Coverage Ratio, Urban heat island, Urban Public Park

Procedia PDF Downloads 493
3416 A Generalization of Option Pricing with Discrete Dividends to Markets with Daily Price Limits

Authors: Jiahau Guo, Yihe Zhang

Abstract:

This paper proposes solutions for pricing options on stocks paying discrete dividends in markets with daily price limits. We first extend the intraday density function of Guo and Chang (2020) to a multi-day one and use the framework of Haug et al. (2003) to value European options on stocks paying discrete dividends. Next, we adopt the fast Fourier transform (FFT) to derive accurate and efficient formulae for American options and further employ the three-point Richardson extrapolation to accelerate the computation. Finally, the accuracy of our proposed methods is verified by simulations.
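
The three-point Richardson extrapolation step can be written compactly, assuming the Bermudan prices with one, two, and three exercise dates come from the FFT-based pricer; the numerical values below are invented placeholders used only to show the weighting.

```python
def richardson_3pt(p1: float, p2: float, p3: float) -> float:
    """Three-point Richardson extrapolation of an American option price.

    p1, p2, p3 are Bermudan prices with 1, 2 and 3 exercise dates; assuming
    the price behaves like P + b*h + c*h^2 in the exercise spacing h = T/n,
    the extrapolated limit is 0.5*p1 - 4*p2 + 4.5*p3.
    """
    return 0.5 * p1 - 4.0 * p2 + 4.5 * p3

# Hypothetical Bermudan put prices (these would come from the FFT-based
# pricer with the multi-day density that respects the daily price limits).
p1, p2, p3 = 5.102, 5.311, 5.384
print(richardson_3pt(p1, p2, p3))
```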

Keywords: daily price limit, discrete dividend, early exercise, fast Fourier transform, multi-day density function, Richardson extrapolation

Procedia PDF Downloads 165
3415 A Method for Improving the Embedded Runge Kutta Fehlberg 4(5)

Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim

Abstract:

In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method comprises two equations, one for the solution and one for the error. This solution and error are obtained by solving an initial value problem whose solution carries the information of the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously and offers good computational performance compared to the original method. To assess its effectiveness, the EULR problem is solved numerically.
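
For context, the sketch below runs a standard embedded Runge-Kutta integrator with adaptive step-size control on a simple test problem and inspects the accepted step sizes. It uses SciPy's RK45 (the Dormand-Prince pair) as a stand-in baseline; it is not the Fehlberg 4(5) pair nor the improved method proposed in the paper, and the test equation and tolerances are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    return -2.0 * y + np.sin(t)   # simple test problem, illustrative only

sol = solve_ivp(f, (0.0, 10.0), [1.0], method="RK45",
                rtol=1e-6, atol=1e-9, dense_output=True)

# The accepted step sizes show how the embedded error estimate adapts h.
steps = np.diff(sol.t)
print(f"{sol.t.size - 1} steps, min h = {steps.min():.3e}, max h = {steps.max():.3e}")
```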

Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step

Procedia PDF Downloads 464
3414 Nanoparticles Using in Chiral Analysis with Different Methods of Separation

Authors: Bounoua Nadia, Rebizi Mohamed Nadjib

Abstract:

Chiral molecules are stereoselective in relation to their particular biological roles. Enantiomers differ significantly in their biochemical responses in a biological environment. Despite the current advancement in drug discovery and pharmaceutical biotechnology, the chiral separation of some racemic mixtures continues to be one of the greatest challenges, because the available techniques are too costly and time-consuming for the assessment of therapeutic drugs in the early stages of development worldwide. Various nanoparticles have become some of the most investigated and explored nanotechnology-derived nanostructures, especially in chirality, where several studies report improved enantiomeric separation of different racemic mixtures. The production of surface-modified nanoparticles has helped to address these limitations, since sensitivity, accuracy, and enantioselectivity can be optimized, which makes these surface-modified nanoparticles convenient for enantiomeric identification and separation.

Keywords: chirality, enantiomeric recognition, selectors, analysis, surface-modified nanoparticles

Procedia PDF Downloads 95
3413 Urban Land Cover from GF-2 Satellite Images Using Object Based and Neural Network Classifications

Authors: Lamyaa Gamal El-Deen Taha, Ashraf Sharawi

Abstract:

China launched the GF-2 satellite in 2014. This study compares nearest neighbor object-based classification and neural network classification methods for classification of the fused GF-2 image. Firstly, rectification of the GF-2 image was performed. Secondly, a comparison between nearest neighbor object-based classification and neural network classification of the fused GF-2 image was performed. Thirdly, the overall accuracy of classification and the kappa index were calculated. Results indicate that nearest neighbor object-based classification is better than neural network classification for urban mapping.
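
The overall accuracy and kappa index mentioned above are computed from the classification confusion matrix; a short sketch with a hypothetical four-class matrix (the class names and counts are invented for illustration) is given below.

```python
import numpy as np

def accuracy_and_kappa(confusion: np.ndarray):
    """Overall accuracy and kappa index from a classification confusion matrix."""
    total = confusion.sum()
    observed = np.trace(confusion) / total
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 4-class confusion matrix (rows = reference, cols = classified),
# e.g. buildings, roads, vegetation, bare soil from the fused GF-2 image.
cm = np.array([[50,  3,  1,  2],
               [ 4, 45,  2,  3],
               [ 1,  2, 55,  1],
               [ 2,  4,  2, 48]])
acc, kappa = accuracy_and_kappa(cm)
print(f"overall accuracy = {acc:.3f}, kappa = {kappa:.3f}")
```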

Keywords: GF-2 images, feature extraction-rectification, nearest neighbour object based classification, segmentation algorithms, neural network classification, multilayer perceptron

Procedia PDF Downloads 389
3412 Optimization of Structures Subjected to Earthquake

Authors: Alireza Lavaei, Alireza Lohrasbi, Mohammadali M. Shahlaei

Abstract:

To reduce the overall time of structural optimization for earthquake loads, two strategies are adopted. In the first strategy, a neural system consisting of a self-organizing map and radial basis function neural networks is utilized to predict the time history responses. In this case, the input space is classified by employing a self-organizing map neural network. Then a distinct RBF neural network is trained in each class. In the second strategy, an improved genetic algorithm is employed to find the optimum design. A 72-bar space truss is designed for optimal weight using exact and approximate analyses for the El Centro (S-E 1940) earthquake loading. The numerical results demonstrate the computational advantages and effectiveness of the proposed method.

Keywords: optimization, genetic algorithm, neural networks, self-organizing map

Procedia PDF Downloads 314
3411 DeClEx-Processing Pipeline for Tumor Classification

Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba

Abstract:

Health issues are increasing significantly, putting a substantial strain on healthcare services. This has accelerated the integration of machine learning in healthcare, particularly following the COVID-19 pandemic, and the utilization of machine learning in healthcare has grown significantly. We introduce DeClEx, a pipeline that ensures data mirrors real-world settings by incorporating Gaussian noise and blur and that employs autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. We integrate denoising and deblurring, classification, and explainability in a single pipeline called DeClEx.
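
The degradation step of the pipeline, adding Gaussian noise and blur so the data mirrors real-world settings, can be sketched as follows. The noise and blur parameters and the synthetic image are assumptions for illustration; the autoencoder and the spatial-attention CNN stages are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image: np.ndarray, noise_sigma=0.05, blur_sigma=1.0, seed=0):
    """Degradation step of the pipeline: additive Gaussian noise plus blur,
    so that training data better mirrors real-world acquisition conditions."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, noise_sigma, image.shape)
    blurred = gaussian_filter(noisy, sigma=blur_sigma)
    return np.clip(blurred, 0.0, 1.0)

# Example on a synthetic grayscale scan normalised to [0, 1]; the degraded
# images would then be fed to the denoising/deblurring autoencoder.
scan = np.random.default_rng(1).random((128, 128))
print(degrade(scan).shape)
```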

Keywords: machine learning, healthcare, classification, explainability

Procedia PDF Downloads 58
3410 CFD Simulations to Examine Natural Ventilation of a Work Area in a Public Building

Authors: An-Shik Yang, Chiang-Ho Cheng, Jen-Hao Wu, Yu-Hsuan Juan

Abstract:

Natural ventilation has played an important role in many low-energy building designs. It has also been recognized as an essential means of persistently bringing fresh, cool air from outside into a building. This study carried out computational fluid dynamics (CFD)-based simulations to examine the natural ventilation development of a work area in a public building. The simulated results can be useful for better understanding the indoor microclimate and the interaction of wind with buildings. Besides, this CFD simulation procedure can serve as an effective analysis tool to characterize the airing performance and thereby optimize the building ventilation, supporting architects, planners, and other decision makers in improving the natural ventilation design of public buildings.

Keywords: CFD simulations, natural ventilation, microclimate, wind environment

Procedia PDF Downloads 575