Search results for: fault classification
1567 Speech Disorders as Predictors of Social Participation of Children with Cerebral Palsy in the Primary Schools of the Czech Republic
Authors: Marija Zulić, Vanda Hájková, Nina Brkić–Jovanović, Srećko Potić, Sanja Tomić
Abstract:
The name cerebral palsy comes from the word cerebrum, meaning the brain, and the word palsy, meaning seizure, and essentially refers to a movement disorder. In the clinical picture of cerebral palsy, the basic neuromotor disorders are associated with various other disorders: behavioural, intellectual, speech and sensory disorders, epileptic seizures, and bone and joint deformities. Motor speech disorders are among the most common difficulties present in people with cerebral palsy. Social participation represents an interaction between an individual and their social environment. The quality of social participation of students with cerebral palsy at school is an important indicator of their successful participation in adulthood, and one of the most important skills for undisturbed social participation is the ability to communicate well. The aim of the study was to determine the relation between the social participation of students with cerebral palsy and the presence of speech impairment in primary schools in the Czech Republic. The study was performed in the Czech Republic in mainstream schools and in schools established for pupils with special education needs. We analysed 75 children with cerebral palsy aged between six and twelve years, attending up to the sixth grade, using the first and third parts of the School Function Assessment questionnaire as the main instrument. The other instrument used in the research was the Gross Motor Function Classification System, a five-level classification system that measures the degree of motor function of children and youth with cerebral palsy. Funding for this study was provided by the Grant Agency of Charles University in Prague. Keywords: cerebral palsy, social participation, speech disorders, the Czech Republic, the school function assessment
Procedia PDF Downloads 285
1566 Modelling of Geotechnical Data Using Geographic Information System and MATLAB for Eastern Ahmedabad City, Gujarat
Authors: Rahul Patel
Abstract:
Ahmedabad, a city located in western India, is experiencing rapid growth due to urbanization and industrialization. It is projected to become a metropolitan city in the near future, resulting in various construction activities. Soil testing is necessary before construction can commence, requiring construction companies and contractors to periodically conduct soil testing. The focus of this study is on the process of creating a spatial database that is digitally formatted and integrated with geotechnical data and a Geographic Information System (GIS). Building a comprehensive geotechnical (Geo)-database involves three steps: collecting borehole data from reputable sources, verifying the accuracy and redundancy of the data, and standardizing and organizing the geotechnical information for integration into the database. Once the database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-Values, Soil Classification, Φ-Values, and Bearing Capacity (T/m2). Various interpolation techniques were cross-validated to ensure information accuracy. This GIS map enables the calculation of SPT N-Values, Φ-Values, and bearing capacities for different footing widths and various depths. This study highlights the potential of GIS in providing an efficient solution to complex phenomena that would otherwise be tedious to achieve through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, this system serves as a decision support tool for geotechnical engineers.Keywords: ArcGIS, borehole data, geographic information system, geo-database, interpolation, SPT N-value, soil classification, Φ-Value, bearing capacity
Procedia PDF Downloads 74
1565 Probabilistic Safety Assessment of Koeberg Spent Fuel Pool
Authors: Sibongiseni Thabethe, Ian Korir
Abstract:
The effective management of spent fuel pool (SFP) safety has been raised as one of the emerging issues for further enhancing nuclear installation safety after the Fukushima accident on March 11, 2011. Before then, SFP safety-related issues had mainly focused on (a) controlling the configuration of the fuel assemblies in the pool with no loss of pool coolant and (b) ensuring adequate pool storage space to prevent fuel criticality owing to chain reactions of the fission products, together with adequate neutron absorption to keep the fuel cool. A probabilistic safety assessment (PSA) was performed using the Systems Analysis Program for Hands-on Integrated Reliability Evaluations (SAPHIRE) computer code. Event and fault tree analyses were carried out to develop a PSA model for the Koeberg SFP. We present preliminary PSA results for events that lead to boiling and cause fuel uncovering, resulting in possible fuel damage in the Koeberg SFP. Keywords: computer code, fuel assemblies, probabilistic risk assessment, spent fuel pool
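As a rough illustration of the event and fault tree quantification described above, the sketch below combines independent basic events through AND/OR gates to estimate the probability of a loss-of-cooling sequence that ends in fuel uncovering. The basic-event probabilities and the tree structure are illustrative assumptions, not Koeberg data or SAPHIRE output.

```python
# Illustrative fault-tree quantification for an SFP loss-of-cooling sequence.
# Basic-event probabilities are invented placeholders, not Koeberg data.

def or_gate(*p):
    # P(A or B or ...) assuming independent basic events
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    # P(A and B and ...) assuming independence
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical basic events (purely illustrative values)
p_pump_fail   = 1e-3   # cooling pump fails to run
p_power_loss  = 5e-4   # loss of offsite power
p_diesel_fail = 2e-2   # emergency diesel fails to start
p_makeup_fail = 1e-2   # makeup water / operator action fails

# Loss of forced cooling: pump failure OR (power loss AND diesel failure)
p_loss_of_cooling = or_gate(p_pump_fail, and_gate(p_power_loss, p_diesel_fail))

# Boiling and fuel uncovering: loss of cooling AND failed makeup
p_fuel_uncovering = and_gate(p_loss_of_cooling, p_makeup_fail)

print(f"P(loss of cooling) = {p_loss_of_cooling:.3e}")
print(f"P(fuel uncovering) = {p_fuel_uncovering:.3e}")
```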
Procedia PDF Downloads 169
1564 Effects of Occupational Therapy on Children with Unilateral Cerebral Palsy
Authors: Sedef Şahin, Meral Huri
Abstract:
Cerebral Palsy (CP) represents the most frequent cause of physical disability in children, with a rate of 2.9 per 1000 live births. Activity-focused intervention is known to improve function and reduce activity limitations and barriers to participation of children with disabilities. The aim of the study was to assess the effects of occupational therapy on the level of fatigue, activity performance and satisfaction in children with unilateral cerebral palsy. Twenty-two children with hemiparetic cerebral palsy (mean age: 9.3 ± 2.1 years; Gross Motor Function Classification System (GMFCS) level I to V: I = 54%, II = 23%, III = 14%, IV = 9%, V = 0%; Manual Ability Classification System (MACS) level I to V: I = 40%, II = 32%, III = 14%, IV = 10%, V = 4%) were assigned to an occupational therapy program for 6 weeks. A Visual Analogue Scale (VAS) was used for the intensity of the fatigue they experienced at the time, on a 10-point Likert scale (1-10). Activity performance and satisfaction were measured with the Canadian Occupational Performance Measure (COPM). A client-centered occupational therapy intervention was designed according to the results of the COPM. The results before and after the intervention were compared with the nonparametric Wilcoxon test. Thirteen of the children were right-handed, whereas nine were left-handed. Six weeks of intervention showed a statistically significant difference in the level of fatigue compared to the first assessment (p < 0.05). The mean first and second activity performance scores were 4.51 ± 1.70 and 7.35 ± 2.51, respectively; the difference between performance scores was statistically significant (p < 0.01). The mean first and second activity satisfaction scores were 2.30 ± 1.05 and 5.51 ± 2.26, respectively; the difference between satisfaction assessments was statistically significant (p < 0.01). Occupational therapy is an evidence-based approach, and occupational therapy interventions implemented individually by therapists over 6 weeks were clinically effective on severity of fatigue, activity performance and satisfaction. Keywords: activity performance, cerebral palsy, fatigue, occupational therapy
Procedia PDF Downloads 237
1563 Remote Sensing of Urban Land Cover Change: Trends, Driving Forces, and Indicators
Authors: Wei Ji
Abstract:
This study was conducted in the Kansas City metropolitan area of the United States, which has experienced significant urban sprawling in recent decades. The remote sensing of land cover changes in this area spanned over four decades from 1972 through 2010. The project was implemented in two stages: the first stage focused on detection of long-term trends of urban land cover change, while the second one examined how to detect the coupled effects of human impact and climate change on urban landscapes. For the first-stage study, six Landsat images were used with a time interval of about five years for the period from 1972 through 2001. Four major land cover types, built-up land, forestland, non-forest vegetation land, and surface water, were mapped using supervised image classification techniques. The study found that over the three decades the built-up lands in the study area were more than doubled, which was mainly at the expense of non-forest vegetation lands. Surprisingly and interestingly, the area also saw a significant gain in surface water coverage. This observation raised questions: How have human activities and precipitation variation jointly impacted surface water cover during recent decades? How can we detect such coupled impacts through remote sensing analysis? These questions led to the second stage of the study, in which we designed and developed approaches to detecting fine-scale surface waters and analyzing coupled effects of human impact and precipitation variation on the waters. To effectively detect urban landscape changes that might be jointly shaped by precipitation variation, our study proposed “urban wetscapes” (loosely-defined urban wetlands) as a new indicator for remote sensing detection. The study examined whether urban wetscape dynamics was a sensitive indicator of the coupled effects of the two driving forces. To better detect this indicator, a rule-based classification algorithm was developed to identify fine-scale, hidden wetlands that could not be appropriately detected based on their spectral differentiability by a traditional image classification. Three SPOT images for years 1992, 2008, and 2010, respectively were classified with this technique to generate the four types of land cover as described above. The spatial analyses of remotely-sensed wetscape changes were implemented at the scales of metropolitan, watershed, and sub-watershed, as well as based on the size of surface water bodies in order to accurately reveal urban wetscape change trends in relation to the driving forces. The study identified that urban wetscape dynamics varied in trend and magnitude from the metropolitan, watersheds, to sub-watersheds in response to human impacts at different scales. The study also found that increased precipitation in the region in the past decades swelled larger wetlands in particular while generally smaller wetlands decreased mainly due to human development activities. These results confirm that wetscape dynamics can effectively reveal the coupled effects of human impact and climate change on urban landscapes. As such, remote sensing of this indicator provides new insights into the relationships between urban land cover changes and driving forces.Keywords: urban land cover, human impact, climate change, rule-based classification, across-scale analysis
Procedia PDF Downloads 308
1562 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require location information of these devices to operate or deliver better services to the users. Although there are several ways of acquiring location data of mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative technique of describing the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is of major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device, when supplied with the mobile’s RSSIs. The developed system takes as input the RSSIs relating to the mobile device, and outputs parameters that describe the location of the device such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is meant to be modelled; hence, subsequent locations of mobile devices can be predicted using the developed model. It is obvious that describing mathematical relationships between the RSSIs measurements and localization parameters is one option to modelling the problem, but the complexity of such an approach is a serious turn-off. In contrast, we propose an intelligent system that can learn the mapping of such RSSIs measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired. The most appealing consideration to using such a system for this task is that complicated mathematical analysis and theoretical frameworks are excluded or not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. These localization parameters to be predicted are of two different tasks: Longitude and latitude of mobile devices are real values (regression problem), while the floor and building of the mobile devices are of integer values or categorical (classification problem). This research work presents artificial neural network based intelligent systems to model the relationship between the RSSIs predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of trained systems on achieved Mean Absolute Error (MAE) and error rates for the regression and classification tasks involved therein.Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
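A minimal sketch of the two prediction tasks described above, using scikit-learn multilayer perceptrons in place of the trained networks from the study: one regressor for longitude/latitude, evaluated by MAE, and one classifier for the floor, evaluated by error rate. The RSSI matrix, targets, and layer sizes are synthetic assumptions, not the WLAN fingerprint database used in this work.

```python
# Sketch of the regression and classification tasks for WLAN fingerprinting.
# Data shapes and values are synthetic stand-ins for a fingerprint database.
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(0)
n_samples, n_aps = 1000, 50                          # fingerprints x access points
X = rng.uniform(-100, 0, size=(n_samples, n_aps))    # RSSI levels in dBm
lon_lat = rng.uniform(0, 200, size=(n_samples, 2))   # regression targets (m)
floor = rng.integers(0, 4, size=n_samples)           # classification target

# Regression: longitude and latitude
reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
reg.fit(X[:800], lon_lat[:800])
print("MAE (m):", mean_absolute_error(lon_lat[800:], reg.predict(X[800:])))

# Classification: floor (building would be handled the same way)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X[:800], floor[:800])
print("Floor error rate:", 1 - accuracy_score(floor[800:], clf.predict(X[800:])))
```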
Procedia PDF Downloads 347
1561 A Use Case-Oriented Performance Measurement Framework for AI and Big Data Solutions in the Banking Sector
Authors: Yassine Bouzouita, Oumaima Belghith, Cyrine Zitoun, Charles Bonneau
Abstract:
Performance measurement framework (PMF) is an essential tool in any organization to assess the performance of its processes. It guides businesses to stay on track with their objectives and benchmark themselves from the market. With the growing trend of the digital transformation of business processes, led by innovations in artificial intelligence (AI) & Big Data applications, developing a mature system capable of capturing the impact of digital solutions across different industries became a necessity. Based on the conducted research, no such system has been developed in academia nor the industry. In this context, this paper covers a variety of methodologies on performance measurement, overviews the major AI and big data applications in the banking sector, and covers an exhaustive list of relevant metrics. Consequently, this paper is of interest to both researchers and practitioners. From an academic perspective, it offers a comparative analysis of the reviewed performance measurement frameworks. From an industry perspective, it offers exhaustive research, from market leaders, of the major applications of AI and Big Data technologies, across the different departments of an organization. Moreover, it suggests a standardized classification model with a well-defined structure of intelligent digital solutions. The aforementioned classification is mapped to a centralized library that contains an indexed collection of potential metrics for each application. This library is arranged in a manner that facilitates the rapid search and retrieval of relevant metrics. This proposed framework is meant to guide professionals in identifying the most appropriate AI and big data applications that should be adopted. Furthermore, it will help them meet their business objectives through understanding the potential impact of such solutions on the entire organization.Keywords: AI and Big Data applications, impact assessment, metrics, performance measurement
Procedia PDF Downloads 198
1560 Decision Support System for Fetus Status Evaluation Using Cardiotocograms
Authors: Oyebade K. Oyedotun
Abstract:
The cardiotocogram is a technical recording of the fetal heartbeat rate and uterine contractions during pregnancy. During pregnancy, several complications can occur to both the mother and the fetus; hence, it is crucial that medical experts have technical means to check the health of the mother and especially of the fetus. It is very important that the fetus develops as expected through the stages of pregnancy; however, monitoring the health status of the fetus is not easily achieved, as the fetus is not wholly physically available to medical experts for inspection. Doctors therefore have to resort to other tests that can give an indication of the status of the fetus. One such diagnostic test is to obtain cardiotocograms of the fetus. From the analysis of the cardiotocograms, medical experts can determine the status of the fetus and, therefore, the necessary medical interventions. Generally, medical experts classify examined cardiotocograms as ‘normal’, ‘suspect’, or ‘pathological’. This work presents an artificial neural network based decision support system which can filter cardiotocogram data, producing the corresponding statuses of the fetuses. The capability of artificial neural networks to explore cardiotocogram data and learn features that distinguish one class from the others has been exploited in this research. Feedforward and radial basis neural networks were trained on a publicly available database to classify the processed cardiotocogram data into one of the three classes: ‘normal’, ‘suspect’, or ‘pathological’. Classification accuracies of 87.8% and 89.2% were achieved during the test phase of the trained networks for the feedforward and radial basis neural networks, respectively. It is hoped that, while the system described in this work may not be a complete replacement for a medical expert in fetus status evaluation, it can significantly reinforce the confidence in the medical diagnosis reached by experts. Keywords: decision support, cardiotocogram, classification, neural networks
Procedia PDF Downloads 332
1559 Electromagnetic Simulation of Underground Cable Perforation by Nail
Authors: Ahmed Nour El Islam Ayad, Tahar Rouibah, Wafa Krika, Houari Boudjella, Larab Moulay, Farid Benhamida, Selma Benmoussa
Abstract:
The purpose of this study is to evaluate the electromagnetic field of a very-high-voltage underground cable perforated by a nail. This work presents a numerical simulation of the electromagnetic field of a 400 kV line after perforation by a ferrous nail in four positions, with the pinch pin at different distances. From the results for a longitudinal section, we observe and evaluate the distribution and variation of the electromagnetic field in the cable and the earth. When the nail approaches the underground power cable, the distribution of the magnetic field changes and takes several forms; the magnetic field increases and becomes very large when the nail breaks the metal screen, producing a significant leakage of the electric field, characterized by a large electric arc and/or an electric discharge to earth and then a fault in the electrical network. These electromagnetic analysis results help to detect defects in underground cables. Keywords: underground, electromagnetic, nail, defect
Procedia PDF Downloads 231
1558 Assessment of Forest Above Ground Biomass Through Linear Modeling Technique Using SAR Data
Authors: Arjun G. Koppad
Abstract:
The study was conducted in Joida taluk of Uttara Kannada district, Karnataka, India, to assess land use land cover (LULC) and forest aboveground biomass using L-band SAR data. The study area contains dense, moderately dense, and sparse forests. The sampled area was 0.01 percent of the forest area, with 30 sampling plots selected randomly. The point-centred quarter (PCQ) method was used to select the trees, and the tree growth parameters, viz. tree height, diameter at breast height (DBH), and diameter at the tree base, were collected. Tree crown density was measured with a densitometer. The biomass of each sample plot was estimated using the standard formula. In this study, the LULC classification was done using the Freeman-Durden, Yamaguchi and Pauli polarimetric decompositions. It was observed that the Freeman-Durden decomposition gave the best LULC classification, with an accuracy of 88 percent. An attempt was made to estimate aboveground biomass from SAR backscatter, using fully polarimetric quad-pol ALOS-2 PALSAR-2 L-band data (HH, HV, VV and VH). A SAR backscatter-based regression model was implemented to retrieve forest aboveground biomass for the study area. Cross-polarization (HV) showed a good correlation with forest aboveground biomass. Multiple linear regression analysis was done to estimate the aboveground biomass of the natural forest areas of Joida taluk. Among the polarization combinations (HH & HV, VV & HH, HV & VH, VV & VH), the combination of HH and HV showed a good correlation between field and predicted biomass. The RMSE and R² values for HH & HV and HH & VV were 78 t/ha and 0.861, and 81 t/ha and 0.853, respectively. Hence the model can be recommended for estimating AGB for dense, moderately dense, and sparse forests. Keywords: forest, biomass, LULC, back scatter, SAR, regression
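A minimal sketch of the multiple linear regression of aboveground biomass on HH and HV backscatter described above, reporting R² and RMSE as in the abstract. The backscatter values, plot biomass, and coefficients are synthetic placeholders, not the Joida field data or the ALOS-2 PALSAR-2 measurements.

```python
# Sketch of an HH & HV multiple linear regression for aboveground biomass (AGB).
# Backscatter (dB) and biomass values are synthetic, not the study's field data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
n = 30                                   # number of sample plots
hh = rng.uniform(-12, -5, n)             # sigma0 HH (dB)
hv = rng.uniform(-20, -12, n)            # sigma0 HV (dB)
agb = 600 + 25 * hh + 35 * hv + rng.normal(0, 60, n)  # t/ha, illustrative

X = np.column_stack([hh, hv])
model = LinearRegression().fit(X, agb)
pred = model.predict(X)

print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 :", r2_score(agb, pred))
print("RMSE:", mean_squared_error(agb, pred) ** 0.5, "t/ha")
```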
Procedia PDF Downloads 26
1557 The Effects of Functionality Level on Gait in Subjects with Low Back Pain
Authors: Vedat Kurt, Tansel Koyunoglu, Gamze Kurt, Ozgen Aras
Abstract:
Low back pain is one of the most common health problems in the general public. Common symptoms that can be associated with low back pain include pain, functional disability, and gait disturbances. The aim of the study was to investigate the differences between disability scores and gait parameters in subjects with low back pain. Sixty participants were included in our study (35 men, 25 women; mean age: 37.65 ± 10.02 years). Demographic characteristics of the participants were recorded. Pain (visual analog scale) and disability level (Oswestry Disability Index (ODI)) were evaluated. Gait parameters were measured with the Zebris FDM-2 platform. An independent-samples t-test was used to analyse the differences between subjects scoring under 40 points (n = 31, mean age: 35.8 ± 11.3) and above 40 points (n = 29, mean age: 39.6 ± 8.1) on the ODI. The significance level in the statistical analysis was accepted as 0.05. There was no significant difference between the two ODI groups in age. A statistically significant difference was found in step width between subjects with ODI scores under 40 points and those above 40 points (p < 0.05), but the differences in the other gait parameters were non-significant (p > 0.05). The differences between gait parameters and pain scores were not statistically significant (p > 0.05). Researchers generally agree that individuals with LBP walk slower, take shorter steps, and have asymmetric step lengths when compared with their age-matched pain-free counterparts. Perceived general disability may also have a moderate correlation with walking performance. In the current study, the patients were classified as having minimal/moderate or severe disability using ODI scores. As a result, a patient with LBP who has a higher disability level tends to increase the support surface. On the other hand, we did not find any relation between pain intensity and gait parameters, which may be caused by the classification system of the pain scores. Additional research is needed to investigate the effects of functionality level and pain intensity on gait in subjects with low back pain under different classification types. Keywords: functionality, low back pain, gait, pain
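A minimal sketch of the group comparison described above: an independent-samples t-test on step width between the ODI < 40 and ODI ≥ 40 groups. The step-width values below are synthetic placeholders, not the Zebris FDM-2 measurements.

```python
# Sketch of an independent-samples t-test on step width by ODI group.
# Values are synthetic placeholders, not the study's gait platform data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
step_width_low_odi  = rng.normal(10.5, 1.5, 31)   # cm, ODI < 40 group (n = 31)
step_width_high_odi = rng.normal(11.8, 1.6, 29)   # cm, ODI >= 40 group (n = 29)

t, p = stats.ttest_ind(step_width_low_odi, step_width_high_odi)
print(f"t = {t:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```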
Procedia PDF Downloads 285
1556 Review of Numerical Models for Granular Beds in Solar Rotary Kilns for Thermal Applications
Authors: Edgar Willy Rimarachin Valderrama, Eduardo Rojas Parra
Abstract:
Thermal energy from solar radiation is widely present in power plants, food drying, chemical reactors, heating and cooling systems, water treatment processes, hydrogen production, and others. In the case of power plants, one of the technologies available to transform solar energy into thermal energy is by solar rotary kilns where a bed of granular matter is heated through concentrated radiation obtained from an arrangement of heliostats. Numerical modeling is a useful approach to study the behavior of granular beds in solar rotary kilns. This technique, once validated with small-scale experiments, can be used to simulate large-scale processes for industrial applications. This study gives a comprehensive classification of numerical models used to simulate the movement and heat transfer for beds of granular media within solar rotary furnaces. In general, there exist three categories of models: 1) continuum, 2) discrete, and 3) multiphysics modeling. The continuum modeling considers zero-dimensional, one-dimensional and fluid-like models. On the other hand, the discrete element models compute the movement of each particle of the bed individually. In this kind of modeling, the heat transfer acts during contacts, which can occur by solid-solid and solid-gas-solid conduction. Finally, the multiphysics approach considers discrete elements to simulate grains and a continuous modeling to simulate the fluid around particles. This classification allows to compare the advantages and disadvantages for each kind of model in terms of accuracy, computational cost and implementation.Keywords: granular beds, numerical models, rotary kilns, solar thermal applications
Procedia PDF Downloads 34
1555 Transient Stability Improvement in Multi-Machine System Using Power System Stabilizer (PSS) and Static Var Compensator (SVC)
Authors: Khoshnaw Khalid Hama Saleh, Ergun Ercelebi
Abstract:
Increasingly complex modern power systems require stability, especially under transient and small disturbances. Transient stability plays a major role in stability during faults and large disturbances. This paper compares a power system stabilizer (PSS) and a static VAR compensator (SVC) for damping oscillations and enhancing transient stability. The effectiveness of a PSS connected to the exciter and/or governor in damping the electromechanical oscillations of an isolated synchronous generator was tested. The SVC device is a member of the shunt FACTS (flexible alternating current transmission system) family utilized in power transmission systems. The designed model was tested on a multi-machine system consisting of four machines and six buses, using MATLAB/SIMULINK software. The results obtained indicate that the SVC solution is better than the PSS. Keywords: FACTS, MATLAB/SIMULINK, multi-machine system, PSS, SVC, transient stability
Procedia PDF Downloads 455
1554 Stabilization of Lateritic Soil Sample from Ijoko with Cement Kiln Dust and Lime
Authors: Akinbuluma Ayodeji Theophilus, Adewale Olutaiwo
Abstract:
When building roads and paved surfaces, a strong foundation is always essential. The foundation must be built with a durable material that can withstand years of traffic while remaining reliable. A frequent problem in the construction of roads and pavements is the lack of high-quality, long-lasting materials for the pavement structure (base, subbase, and subgrade). Hence, this study examined the stabilization of a lateritic soil sample from Ijoko with cement kiln dust and lime. The study adopted an experimental design. Laboratory tests, including classification, swelling potential, compaction, California bearing ratio (CBR), and unconfined compressive strength tests, among others, were conducted on the laterite sample treated with cement kiln dust (CKD) and lime in increments of 2% up to 10% of the dry weight of the soil sample. The test results showed that the studied soil could be classified as an A-7-6 and CL soil using the American Association of State Highway and Transportation Officials (AASHTO) and the Unified Soil Classification System (USCS) classifications, respectively. The plasticity index (PI) of the studied soil was reduced from 30.5% to 29.9% on application of CKD. The maximum dry density was reduced from 1.97 Mg/m³ to 1.86 Mg/m³ on application of CKD, and lime application yielded a reduction from 1.97 Mg/m³ to 1.88 Mg/m³. The swell potential on CKD application was reduced from 0.05% to 0.039%. The study concluded that soil stabilization is an effective and economical way of improving road pavements for engineering benefit, and that the degree of effectiveness of stabilization in pavement construction depends on the type of soil to be stabilized. The study therefore recommended that stabilized soil mixtures be used as subbase material for flexible pavements, since they are suitable for this purpose. Keywords: lateritic soils, sand, cement, stabilization, road pavement
Procedia PDF Downloads 90
1553 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers in an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator, based on the ID-POS database, for measuring the customer loyalty of a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to get detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze correlations between distance and purchasing behaviors using only the POS database of a supermarket chain. During the modeling process, three primary problems existed: the incomparability of customer values, multicollinearity between the customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In the numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the most suitable for evaluating the influence of other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy. Keywords: customer value, Huff's gravity model, POS, retailer
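A minimal sketch of Huff's gravity model as used above: the probability that a customer patronizes store j is the store's attractiveness divided by distance raised to a decay exponent, normalized over all stores. The floor areas, distances, and decay exponent below are illustrative assumptions, not the study's data.

```python
# Sketch of Huff's gravity model: probability that a customer patronizes each
# supermarket, given store attractiveness and home-to-store distance.
# Attractiveness values, distances and the exponent are illustrative.
import numpy as np

def huff_probabilities(attractiveness, distances, lam=2.0):
    """P_ij = (S_j / d_ij**lam) / sum_k (S_k / d_ik**lam)."""
    utility = np.asarray(attractiveness, float) / np.asarray(distances, float) ** lam
    return utility / utility.sum()

store_floor_area = [2500, 1200, 1800]      # m^2, attractiveness proxy (assumed)
distance_from_home = [1.2, 0.5, 2.4]       # km (assumed)

p = huff_probabilities(store_floor_area, distance_from_home)
print(dict(zip(["store_A", "store_B", "store_C"], p.round(3))))
```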
Procedia PDF Downloads 123
1552 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI
Authors: James Rigor Camacho, Wansu Lim
Abstract:
Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be a source of emotion recognition signals with the highest classification accuracy among physiological signals. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms. They are capable of collecting, processing, and storing data on their own, and they can analyze and apply complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated into the open-source brain-computer interface platform (OpenBCI), is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning based classifiers were used to perform graphical spectrogram categorization of the EEG signals and to predict emotional states based on the properties of the input data. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal has been received in real time, the Fast Fourier Transform (FFT) technique is utilized to translate it from the time domain to the frequency domain and observe its frequency bands. To appropriately represent the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed. The next stage is to use the chosen features to predict emotion in the EEG data with the K-Nearest Neighbors (KNN) technique. Arousal and valence datasets are used to train the parameters defined by the KNN technique. Because classification and recognition of specific classes, as well as emotion prediction, are conducted both online and locally on the edge, the KNN technique improved the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the NVIDIA Jetson Nano. EEG-based emotion identification on the edge can be employed in applications that can rapidly expand research and industrial adoption in this area. Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors
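A minimal sketch of the feature pipeline described above: FFT band powers plus the mean and standard deviation of each epoch, fed to a K-Nearest Neighbors classifier. The sampling rate, band edges, epochs, and labels are synthetic assumptions; the cEEGrid/OpenBCI acquisition and the Jetson Nano deployment are not reproduced here.

```python
# Sketch of FFT band-power features per EEG epoch followed by a KNN classifier.
# Signals and labels are synthetic; acquisition hardware is not modeled.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250                      # sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs=FS):
    freqs = np.fft.rfftfreq(epoch.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    feats = [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]
    return feats + [epoch.mean(), epoch.std()]   # mean and std, as in the text

rng = np.random.default_rng(3)
epochs = rng.normal(0, 1, size=(200, FS * 2))    # 200 two-second epochs
labels = rng.integers(0, 4, size=200)            # arousal/valence quadrants

X = np.array([band_power_features(e) for e in epochs])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:150], labels[:150])
print("hold-out accuracy:", knn.score(X[150:], labels[150:]))
```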
Procedia PDF Downloads 105
1551 Timely Detection and Identification of Abnormalities for Process Monitoring
Authors: Hyun-Woo Cho
Abstract:
The detection and identification of abnormalities in multivariate manufacturing processes are quite important in order to maintain good product quality. Unusual behaviors or events encountered during operation can have a serious impact on the process and on product quality; thus, they should be detected and identified as soon as possible. This paper focuses on the efficient representation of process measurement data for detecting and identifying abnormalities. This qualitative method is effective in representing the fault patterns of process data. In addition, it is not overly sensitive to measurement noise, so reliable outcomes can be obtained. To evaluate its performance, a simulation process was utilized, and the effect of adopting linear and nonlinear methods in the detection and identification was tested with different simulation data. The results show that the use of a nonlinear technique produced more satisfactory and more robust results for the simulation data sets. This monitoring framework can help operating personnel detect the occurrence of process abnormalities and identify their assignable causes on an on-line or real-time basis. Keywords: detection, monitoring, identification, measurement data, multivariate techniques
Procedia PDF Downloads 236
1550 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, and mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, the theories of collective modes address superconductivity and long-range quantum entanglement. However, these models are still mainly for specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane – the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: the normalized momentum and the nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i [-t, t], where σ and t are real numbers, and [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which manifest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergent domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with the canonical distributions and address the impacts on existing applications. Keywords: Blaschke, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
Procedia PDF Downloads 309
1549 An Autonomous Passive Acoustic System for Detection, Tracking and Classification of Motorboats in Portofino Sea
Authors: A. Casale, J. Alessi, C. N. Bianchi, G. Bozzini, M. Brunoldi, V. Cappanera, P. Corvisiero, G. Fanciulli, D. Grosso, N. Magnoli, A. Mandich, C. Melchiorre, C. Morri, P. Povero, N. Stasi, M. Taiuti, G. Viano, M. Wurtz
Abstract:
This work describes a real-time algorithm for detecting, tracking and classifying single motorboats, developed using the acoustic data recorded by a hydrophone array within the framework of EU LIFE + project ARION (LIFE09NAT/IT/000190). The project aims to improve the conservation status of bottlenose dolphins through a real-time simultaneous monitoring of their population and surface ship traffic. A Passive Acoustic Monitoring (PAM) system is installed on two autonomous permanent marine buoys, located close to the boundaries of the Marine Protected Area (MPA) of Portofino (Ligurian Sea- Italy). Detecting surface ships is also a necessity in many other sensible areas, such as wind farms, oil platforms, and harbours. A PAM system could be an effective alternative to the usual monitoring systems, as radar or active sonar, for localizing unauthorized ship presence or illegal activities, with the advantage of not revealing its presence. Each ARION buoy consists of a particular type of structure, named meda elastica (elastic beacon) composed of a main pole, about 30-meter length, emerging for 7 meters, anchored to a mooring of 30 tons at 90 m depth by an anti-twist steel wire. Each buoy is equipped with a floating element and a hydrophone tetrahedron array, whose raw data are send via a Wi-Fi bridge to a ground station where real-time analysis is performed. Bottlenose dolphin detection algorithm and ship monitoring algorithm are operating in parallel and in real time. Three modules were developed and commissioned for ship monitoring. The first is the detection algorithm, based on Time Difference Of Arrival (TDOA) measurements, i.e., the evaluation of angular direction of the target respect to each buoy and the triangulation for obtaining the target position. The second is the tracking algorithm, based on a Kalman filter, i.e., the estimate of the real course and speed of the target through a predictor filter. At last, the classification algorithm is based on the DEMON method, i.e., the extraction of the acoustic signature of single vessels. The following results were obtained; the detection algorithm succeeded in evaluating the bearing angle with respect to each buoy and the position of the target, with an uncertainty of 2 degrees and a maximum range of 2.5 km. The tracking algorithm succeeded in reconstructing the real vessel courses and estimating the speed with an accuracy of 20% respect to the Automatic Identification System (AIS) signals. The classification algorithm succeeded in isolating the acoustic signature of single vessels, demonstrating its temporal stability and the consistency of both buoys results. As reference, the results were compared with the Hilbert transform of single channel signals. The algorithm for tracking multiple targets is ready to be developed, thanks to the modularity of the single ship algorithm: the classification module will enumerate and identify all targets present in the study area; for each of them, the detection module and the tracking module will be applied to monitor their course.Keywords: acoustic-noise, bottlenose-dolphin, hydrophone, motorboat
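A minimal sketch of the tracking module's idea: a constant-velocity Kalman filter that smooths the triangulated (x, y) positions and estimates course and speed. The state model, noise covariances, and measurements below are illustrative assumptions, not the ARION system's actual tuning or data.

```python
# Minimal constant-velocity Kalman filter over triangulated (x, y) positions,
# in the spirit of the tracking module above. Noise settings are illustrative.
import numpy as np

dt = 1.0                                    # seconds between position fixes
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)         # only position is observed
Q = np.eye(4) * 0.05                        # process noise (assumed)
R = np.eye(2) * 25.0                        # measurement noise, ~5 m std (assumed)

x = np.zeros(4)                             # initial state
P = np.eye(4) * 100.0                       # initial uncertainty

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z = [x_meas, y_meas]
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([10.0, 5.0]), np.array([14.2, 6.1]), np.array([18.1, 7.0])]:
    x, P = kalman_step(x, P, z)
print("estimated position:", x[:2], "estimated velocity:", x[2:])
```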
Procedia PDF Downloads 173
1548 Fault Diagnosis of Squirrel-Cage Induction Motor by a Neural Network Multi-Models
Authors: Yahia Kourd, N. Guersi, D. Lefebvre
Abstract:
In this paper, we propose to study fault diagnosis in squirrel-cage induction motors using MLP neural networks. We use neural models of the healthy and faulty behavior in order to detect and isolate some faults in the machine. In the first part of this work, we created a neural model of the healthy state using MATLAB and a motor located in LGEB, by acquiring the input and output data of this machine. We then detected faults in the machine by residual generation. These residuals alone are not sufficient to isolate the existing faults; for this reason, we propose additive neural networks to represent the faulty behaviors. From the analysis of these residuals and the choice of a threshold, we propose a method capable of performing the detection and diagnosis of some faults in asynchronous machines with a squirrel-cage rotor. Keywords: fault diagnosis, neural networks, multi-models, squirrel-cage induction motor
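A minimal sketch of the residual-generation scheme described above: a neural model trained on healthy-machine data predicts the output, and residuals above a threshold flag a fault. All signals and the threshold rule are synthetic assumptions, not the LGEB motor data.

```python
# Sketch of residual-based fault detection with a healthy-state neural model.
# Signals are synthetic placeholders, not induction-motor measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X_healthy = rng.uniform(-1, 1, size=(500, 3))             # e.g. voltages / load
y_healthy = X_healthy @ np.array([1.5, -0.7, 0.3]) + rng.normal(0, 0.02, 500)

healthy_model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                             random_state=0).fit(X_healthy, y_healthy)

# Threshold chosen from residuals on healthy data (mean + 3*std, assumed rule)
res_train = np.abs(y_healthy - healthy_model.predict(X_healthy))
threshold = res_train.mean() + 3 * res_train.std()

X_new = rng.uniform(-1, 1, size=(5, 3))
y_faulty = X_new @ np.array([1.5, -0.7, 0.3]) + 0.5       # simulated fault offset
residual = np.abs(y_faulty - healthy_model.predict(X_new))
print("fault detected:", residual > threshold)
```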
Procedia PDF Downloads 636
1547 Multivariate Data Analysis for Automatic Atrial Fibrillation Detection
Authors: Zouhair Haddi, Stephane Delliaux, Jean-Francois Pons, Ismail Kechaf, Jean-Claude De Haro, Mustapha Ouladsine
Abstract:
Atrial fibrillation (AF) has been considered the most common cardiac arrhythmia and a major public health burden associated with significant morbidity and mortality. Nowadays, telemedical approaches targeting cardiac outpatients place AF among the most challenging medical issues. Automatic, early, and fast AF detection is still a major concern for healthcare professionals. Several algorithms based on univariate analysis have been developed to detect atrial fibrillation; however, the published results do not show satisfactory classification accuracy. This work aimed at resolving this shortcoming by proposing multivariate data analysis methods for automatic AF detection. Four publicly accessible sets of clinical data (the AF Termination Challenge Database, the MIT-BIH AF Database, the Normal Sinus Rhythm RR Interval Database, and the MIT-BIH Normal Sinus Rhythm Database) were used for assessment. All time series were segmented into 1-min RR-interval windows, and then four specific features were calculated. Two pattern recognition methods, i.e., Principal Component Analysis (PCA) and a Learning Vector Quantization (LVQ) neural network, were used to develop classification models. PCA, as a feature reduction method, was employed to find the important features for discriminating between AF and normal sinus rhythm. Despite its very simple structure, the results show that the LVQ model performs better on the analyzed databases than existing algorithms, with high sensitivity and specificity (99.19% and 99.39%, respectively). The proposed AF detector has several interesting properties and can be implemented with just a few arithmetical operations, which makes it a suitable choice for telecare applications. Keywords: atrial fibrillation, multivariate data analysis, automatic detection, telemedicine
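A minimal sketch of the two-stage approach described above: PCA to reduce the RR-interval features, followed by a simple LVQ1 classifier with one prototype per class (scikit-learn has no LVQ implementation, so the update rule is written out by hand). The features and class distributions are synthetic placeholders, not the PhysioNet databases.

```python
# Sketch of PCA feature reduction followed by a minimal LVQ1 classifier.
# Features are synthetic stand-ins for 1-min RR-interval window features.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),      # "normal sinus rhythm"
               rng.normal(2.0, 1.5, (200, 4))])     # "atrial fibrillation"
y = np.array([0] * 200 + [1] * 200)

X_pca = PCA(n_components=2).fit_transform(X)

# LVQ1: move the winning prototype toward samples of its own class and away
# from samples of the other class.
prototypes = np.array([X_pca[y == c].mean(axis=0) for c in (0, 1)])
proto_labels = np.array([0, 1])
lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X_pca, y):
        w = np.argmin(np.linalg.norm(prototypes - xi, axis=1))
        sign = 1.0 if proto_labels[w] == yi else -1.0
        prototypes[w] += sign * lr * (xi - prototypes[w])

pred = proto_labels[np.argmin(np.linalg.norm(
    X_pca[:, None, :] - prototypes[None, :, :], axis=2), axis=1)]
print("training accuracy:", (pred == y).mean())
```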
Procedia PDF Downloads 267
1546 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks
Authors: Bahareh Golchin, Nooshin Riahi
Abstract:
One of the most significant issues that has attracted much attention in recent years is the recognition of sentiments and emotions in social media texts. The analysis of sentiments and emotions aims to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features in written text, which indicates the size of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions, and orientations of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. On this social network, users can share their information and opinions about personal issues, policies, products, events, etc., and the availability of its data makes it suitable for classifying emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage, given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional long short-term memory networks and a convolutional network. The results obtained from this study, with an average accuracy of 93%, show that the proposed framework performs well and improves accuracy compared to previous work. Keywords: emotion classification, sentiment analysis, social networks, deep neural networks
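A minimal sketch, in Keras, of a convolutional plus stacked bidirectional LSTM classifier for four emotion classes, in the spirit of the architecture described above. The vocabulary size, sequence length, layer widths, and training call are assumptions, not the authors' exact configuration.

```python
# Sketch of a CNN + two stacked bidirectional LSTM layers for four-class
# tweet emotion classification. Hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 60, 4   # assumed values

model = tf.keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),                      # token embeddings
    layers.Conv1D(64, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then look like:
# model.fit(padded_token_ids, labels, validation_split=0.1, epochs=5)
```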
Procedia PDF Downloads 137
1545 Locus of Control, Metacognitive Knowledge, Metacognitive Regulation, and Student Performance in an Introductory Economics Course
Authors: Ahmad A. Kader
Abstract:
In the Principles of Microeconomics course taught during the Fall Semester 2019, 158 out of 179 students participated by completing two questionnaires and a survey describing their demographic and academic profiles. The two questionnaires comprise the 29 items of the Rotter Locus of Control Scale and the 52 items of the Schraw and Dennison Metacognitive Awareness Scale. The 52 items consist of 17 items describing knowledge of cognition and 35 items describing the regulation of cognition. The paper is intended to show the combined influence of locus of control, metacognitive knowledge, and metacognitive regulation on student performance. The survey covers variables that have been tested and recognized in the economic education literature, which include GPA, gender, age, course level, race, student classification, whether the course was required or elective, employment, whether a high school economics course was taken, and attendance. Regression results show that, of the economic education variables, GPA, classification, whether the course was required or elective, and attendance are the only significant variables in their influence on student grade. Of the educational psychology variables, the regression results show that the locus of control variable has a negative and significant effect, while the metacognitive knowledge variable has a positive and significant effect on student grade. Also, the adjusted R-squared value increased markedly with the addition of the locus of control, metacognitive knowledge, and metacognitive regulation variables to the regression equation. The t-test results also show that students who are internally oriented and high on the metacognitive knowledge scale significantly outperform students who are externally oriented and low on the metacognitive knowledge scale. The implications of these results for educators are discussed in the paper. Keywords: locus of control, metacognitive knowledge, metacognitive regulation, student performance, economic education
Procedia PDF Downloads 120
1544 Classification of Coughing and Breathing Activities Using Wearable and a Light-Weight DL Model
Authors: Subham Ghosh, Arnab Nandi
Abstract:
Background: The proliferation of Wireless Body Area Networks (WBAN) and Internet of Things (IoT) applications demonstrates the potential for continuous monitoring of physical changes in the body. These technologies are vital for health monitoring tasks, such as identifying coughing and breathing activities, which are necessary for disease diagnosis and management. Monitoring activities such as coughing and deep breathing can provide valuable insights into a variety of medical issues. Wearable radio-based antenna sensors, which are lightweight and easy to incorporate into clothing or portable goods, provide continuous monitoring. This mobility gives it a substantial advantage over stationary environmental sensors like as cameras and radar, which are constrained to certain places. Furthermore, using compressive techniques provides benefits such as reduced data transmission speeds and memory needs. These wearable sensors offer more advanced and diverse health monitoring capabilities. Methodology: This study analyzes the feasibility of using a semi-flexible antenna operating at 2.4 GHz (ISM band) and positioned around the neck and near the mouth to identify three activities: coughing, deep breathing, and idleness. Vector network analyzer (VNA) is used to collect time-varying complex reflection coefficient data from perturbed antenna nearfield. The reflection coefficient (S11) conveys nuanced information caused by simultaneous variations in the nearfield radiation of three activities across time. The signatures are sparsely represented with gaussian windowed Gabor spectrograms. The Gabor spectrogram is used as a sparse representation approach, which reassigns the ridges of the spectrogram images to improve their resolution and focus on essential components. The antenna is biocompatible in terms of specific absorption rate (SAR). The sparsely represented Gabor spectrogram pictures are fed into a lightweight deep learning (DL) model for feature extraction and classification. Two antenna locations are investigated in order to determine the most effective localization for three different activities. Findings: Cross-validation techniques were used on data from both locations. Due to the complex form of the recorded S11, separate analyzes and assessments were performed on the magnitude, phase, and their combination. The combination of magnitude and phase fared better than the separate analyses. Various sliding window sizes, ranging from 1 to 5 seconds, were tested to find the best window for activity classification. It was discovered that a neck-mounted design was effective at detecting the three unique behaviors.Keywords: activity recognition, antenna, deep-learning, time-frequency
Procedia PDF Downloads 9
1543 Design and Implementation of Testable Reversible Sequential Circuits Optimized Power
Authors: B. Manikandan, A. Vijayaprabhu
Abstract:
Conservative reversible gates are used to design reversible sequential circuits, namely flip-flops and latches. The conservative logic gates considered are the Feynman, Toffoli, and Fredkin gates. This paper presents the design of two-vector testable sequential circuits based on conservative logic gates: all sequential circuits based on conservative logic gates can be tested for classical unidirectional stuck-at faults using only two test vectors, all 1s and all 0s. The designs of two-vector testable latches, master-slave flip-flops, and double edge triggered (DET) flip-flops are presented. We also show the application of the proposed approach toward 100% fault coverage for single missing/additional cell defects in the quantum-dot cellular automata (QCA) layout of the Fredkin gate. The conservative logic gates are also evaluated in terms of complexity, speed, and area. Keywords: DET, QCA, reversible logic gates, POS, SOP, latches, flip flops
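A minimal sketch of the gates named above and of the two-vector testing idea: conservative gates preserve the number of 1s at their outputs, so applying the all-0s and all-1s vectors exposes unidirectional stuck-at faults. The Python functions below model the logic only, not the QCA layout or the sequential circuit designs.

```python
# Sketch of the reversible gates used above and a check of the conservative
# property on the two test vectors (all 0s, all 1s).

def feynman(a, b):            # CNOT: (a, b) -> (a, a XOR b)
    return a, a ^ b

def toffoli(a, b, c):         # CCNOT: c flips only when a AND b are both 1
    return a, b, c ^ (a & b)

def fredkin(c, x, y):         # controlled swap: conservative (preserves 1s count)
    return (c, y, x) if c else (c, x, y)

# Two-vector testability idea: a conservative gate must reproduce the number
# of 1s at its outputs, so the all-0s and all-1s inputs expose unidirectional
# stuck-at faults at the outputs.
for vec in [(0, 0, 0), (1, 1, 1)]:
    out = fredkin(*vec)
    assert sum(out) == sum(vec), "conservative property violated"
    print(f"Fredkin{vec} -> {out}")
```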
Procedia PDF Downloads 304
1542 Impact of Forgiveness Therapy on Quality of Life of Parents of Children with Intellectual Disability
Authors: Prajakta Bhadgaonkar
Abstract:
Forgiveness has been taught from birth in the Indian tradition. However, delivering a disabled child is a trauma for the parents: they keep blaming themselves for a fault for which they are not responsible. Hence, due to this lack of self-forgiveness, the quality of life of both parent and child is affected. In forgiveness, a person tries to relieve themselves of feelings of hatred towards themselves or another person. Forgiveness helps one move ahead in life and handle problems more efficiently, resulting in a better quality of life. In this study, 30 parents of children with intellectual disability were contacted to assess their quality of life. They were administered a standardized measure of quality of life (QOL). The children were between 6 and 8 years of age. Out of these 30 parents, 12 parents (7 females and 5 males) were given forgiveness therapy over a three-month span. The QOL scale was administered after every month. At the end of three months, a significant difference was observed in the quality of life of the parents of children with intellectual disability. Gender-wise, there was no significant difference between males and females in quality of life. Keywords: children with intellectual disability, forgiveness, parents, quality of life
Procedia PDF Downloads 330
1541 Investigating the Morphological Patterns of Lip Prints and Their Effectiveness in Individualization and Gender Determination in Pakistani Population
Authors: Makhdoom Saad Wasim Ghouri, Muneeba Butt, Mohammad Ashraf Tahir, Rashid Bhatti, Akbar Ali, Abdul Rehman, Abdul Basit, Muzzamel Rehman, Shahbaz Aslam, Farakh Mansoor, Ahmad Fayyaz, Hadia Siddiqui
Abstract:
Lip print analysis (cheiloscopy) is a newly emerging technique that might be the guardian angel in establishing personal identity. Cheiloscopy is basically the study of the elevations and depressions present on the external surface of the lips. In our study, 600 lip print samples were taken (300 males and 300 females). The lip prints of each individual were divided into four quadrants and the upper middle portion. For general classification, the middle part of the lower lip, almost 10 mm wide, was taken into consideration. After analysis of the lip prints, our results show that lip prints are a unique and permanent character of every individual; no two lip prints matched each other, even those of identical twins. Our study reveals that there is an equal distribution of lip print patterns among all four quadrants of the lips and the upper middle portion; these distributions were statistically analyzed by applying the chi-square test, which showed significant results. In the general classification, 5 lip print types/patterns were studied: Type 1 (vertical lines), Type 2 (branched pattern), Type 3 (intersected pattern), Type 4 (reticular pattern) and Type 5 (undetermined). Type 1 and Type 2 were found to be the most frequent patterns in the female population, while Type 3 and Type 4 were most commonly found in the male population. These results were also analyzed by applying the chi-square test and were statistically significant, thus establishing sex determination on the basis of lip print types. Type 5 was the least common pattern in both genders. Keywords: cheiloscopy, distribution, quadrants, sex determination
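A minimal sketch of the chi-square test of independence between gender and lip print type mentioned above. The contingency table below is an illustrative stand-in, not the study's actual counts for the 600 samples.

```python
# Sketch of a chi-square test of independence between gender and lip print type.
# The observed counts are invented placeholders (each row sums to 300).
from scipy.stats import chi2_contingency

#              Type1  Type2  Type3  Type4  Type5
observed = [[  55,    50,    95,    85,    15],   # males   (n = 300)
            [  95,    90,    55,    45,    15]]   # females (n = 300)

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4g}")
print("association between gender and pattern:", p < 0.05)
```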
Procedia PDF Downloads 298
1540 A Generic Metamodel for Dependability Analysis
Authors: Moomen Chaari, Wolfgang Ecker, Thomas Kruse, Bogdan-Andrei Tabacaru
Abstract:
In our daily life, we frequently interact with complex systems which facilitate our mobility, enhance our access to information, and sometimes help us recover from illnesses or diseases. The reliance on these systems is motivated by the established evaluation and assessment procedures which are performed during the different phases of the design and manufacturing flow. Such procedures are aimed to qualify the system’s delivered services with respect to their availability, reliability, safety, and other properties generally referred to as dependability attributes. In this paper, we propose a metamodel based generic characterization of dependability concepts and describe an automation methodology to customize this characterization to different standards and contexts. When integrated in concrete design and verification environments, the proposed methodology promotes the reuse of already available dependability assessment tools and reduces the costs and the efforts required to create consistent and efficient artefacts for fault injection or error simulation.Keywords: dependability analysis, model-driven development, metamodeling, code generation
Procedia PDF Downloads 486
1539 On the Relation between λ-Symmetries and μ-Symmetries of Partial Differential Equations
Authors: Teoman Ozer, Ozlem Orhan
Abstract:
This study deals with the symmetry group properties and conservation laws of partial differential equations. We give a geometrical interpretation of the notion of μ-prolongations of vector fields and of the related concept of μ-symmetry for partial differential equations. We show that these are effective in providing symmetry reductions of partial differential equations and systems, as well as invariant solutions. Keywords: λ-symmetry, μ-symmetry, classification, invariant solution
Procedia PDF Downloads 319
1538 Prevalence of Malocclusion and Assessment of Orthodontic Treatment Needs in Malay Transfusion-Dependent Thalassemia Patients
Authors: Mohamed H. Kosba, Heba A. Ibrahim, H. Rozita
Abstract:
Statement of the Problem: The life expectancy of transfusion-dependent thalassemia patients has increased dramatically with iron-chelation therapy and other modern management modalities. In these patients, the most dominant maxillofacial manifestations are protrusion of the zygomatic bones and premaxilla due to hyperplasia of the bone marrow. The purpose of this study is to determine the prevalence of malocclusion and orthodontic treatment needs according to the Dental Aesthetic Index (DAI) among Malay transfusion-dependent thalassemia patients. Orientation: This is a cross-sectional study consisting of 43 Malay transfusion-dependent thalassemia patients, 22 males and 19 females, with a mean age of 15.9 years (SD 3.58). The subjects were selected randomly from patients attending the Paediatrics and Internal Medicine Clinics at Hospital USM and Hospital Sultana Bahiyah. The subjects were assessed for malocclusion according to Angle's classification and for orthodontic treatment needs using the DAI. The results show that 22 of the subjects (51.1%) have Class II malocclusion, 12 subjects (28%) have Class I, while 9 subjects (20.9%) have Class III. The assessment of orthodontic treatment needs revealed that 22 cases (51.1%) fall in the normal/minor needs category, 12 subjects (28%) in the severe and very severe category, and 9 subjects (20.9%) in the definite category. Conclusion & Significance: Half of the Malay transfusion-dependent thalassemia patients have Class II malocclusion, and about 28% had malocclusion and required orthodontic treatment. This research shows that Malay transfusion-dependent thalassemia patients may require orthodontic management and earlier intervention to reduce the complexity of treatment later; a functional appliance is suggested as a suitable treatment option for them, namely a twin block appliance together with headgear to restrict maxillary growth. The current protocol implemented by the Malaysian Ministry of Health for the management of these patients seems to be sufficient, since the results show that about 28% require orthodontic treatment according to the DAI. Keywords: prevalence, DAI, thalassaemia, angle classification
Procedia PDF Downloads 143