Search results for: detecting of envelope modulation on noise
522 A Method for Evaluating the Mechanical Stress on Mandibular Advancement Devices
Authors: Tsung-yin Lin, Yi-yu Lee, Ching-hua Hung
Abstract:
Snoring, the lay term for obstructive breathing during sleep, is one of the most prevalent of obnoxious human habits. Loud snoring is noisy and unpleasant for others, and it also degrades the sleep quality of snorers’ bed partners, who cannot fall asleep easily because of the noise. The resulting reduction in sleep quality leads to several medical problems, such as excessive daytime sleepiness, high blood pressure, and increased risk of cardiovascular disease and cerebrovascular accident. There are many non-prescription devices offered for sale on the market, but very limited data are available to support a beneficial effect of these devices on snoring or their use in treating obstructive sleep apnea (OSA). Mandibular advancement devices (MADs), also termed mandibular repositioning devices (MRDs), are removable devices worn at night during sleep. Most devices require a dental impression, bite registration, and fabrication by a dental laboratory. These devices are fixed to the upper and lower teeth and are adjusted to advance the mandible. The amount of protrusion is adjusted to meet the therapeutic requirements, comfort, and tolerance. Many devices have a fixed degree of advancement; some are adjustable within a limited range. This study focuses on the stress analysis of MADs, which are promoted by the American Academy of Sleep Medicine (AASM) as a standard treatment for snoring. This paper proposes a new MAD design, and finite element analysis (FEA) is introduced to perform the stress simulation for this MAD.
Keywords: finite element analysis, mandibular advancement devices, mechanical stress, snoring
Procedia PDF Downloads 356
521 The Role of Autophagy Modulation in Angiotensin-II Induced Hypertrophy
Authors: Kitti Szoke, Laszlo Szoke, Attila Czompa, Arpad Tosaki, Istvan Lekli
Abstract:
Autophagy plays an important role in cardiac hypertrophy, which is one of the most common causes of heart failure in the world. This self-degradative catabolic process is responsible for protein quality control, balancing sources of energy at critical times, and eliminating damaged organelles. Autophagic activity can be triggered by starvation, oxidative stress, or pharmacological agents such as rapamycin, and this induced autophagy can promote cell survival during starvation or pathological stress. In this study, the effect of the induced autophagic process on angiotensin-induced hypertrophy was investigated in H9c2 cells, which served as an in vitro model. To induce hypertrophy, cells were treated with 10000 nM angiotensin-II (AT-II), and to activate autophagy, 100 nM rapamycin treatment was used. The following groups were formed: 1: control, 2: 10000 nM AT-II, 3: 100 nM rapamycin, 4: 100 nM rapamycin pretreatment then 10000 nM AT-II. Cell viability was examined via the MTT cell proliferation assay. The cells were stained with rhodamine-conjugated phalloidin and DAPI to visualize F-actin filaments and cell nuclei, and cell size alteration was then examined under a fluorescence microscope. Furthermore, the expression levels of autophagic and apoptotic proteins such as Beclin-1, p62, LC3B-II, and Cleaved Caspase-3 were evaluated by Western blot. The MTT assay results suggest that the pharmaceutical agents used at the tested concentrations did not have a toxic effect; however, in group 3, a slight decrease in cell viability was detected. In response to AT-II treatment, a significant increase in cell size was detected; cells became hypertrophic. However, rapamycin pretreatment slightly reduced cell size compared to group 2. Western blot results showed that AT-II treatment induced autophagy, as increased expression of Beclin-1, p62, and LC3B-II was observed. However, due to incomplete autophagy, expression of the apoptotic marker Cleaved Caspase-3 also increased. Rapamycin pretreatment up-regulated Beclin-1 and LC3B-II and down-regulated p62 and Cleaved Caspase-3, indicating that rapamycin-induced autophagy can restore normal autophagic flux. Taken together, our results suggest that rapamycin-activated autophagy reduces angiotensin-II induced hypertrophy.
Keywords: angiotensin-II, autophagy, H9c2 cell line, hypertrophy, rapamycin
Procedia PDF Downloads 147
520 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System
Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi
Abstract:
Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance compared to the conventional system without pilot-assisted channel estimation. Simulation results also show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than the LS-based PACE-DCO-OFDM and the traditional system without PACE. At the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ for LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
Keywords: channel estimation, OFDM, pilot-assist, VLC
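To make the two estimators concrete, below is a minimal sketch of pilot-assisted LS and LMMSE channel estimation on a toy OFDM link. All parameters (subcarrier count, pilot spacing, channel taps, SNR) are illustrative assumptions, not the paper's simulation settings; the Hermitian-symmetry and DC-bias steps of DCO-OFDM are omitted, and the LMMSE smoother builds its correlation matrix from a single channel realization for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                       # subcarriers (assumed)
pilots = np.arange(0, N, 8)  # comb-type pilot positions (assumed)
X = np.ones(N)               # unit-power pilot/data symbols (assumed)

h = (rng.normal(size=4) + 1j * rng.normal(size=4)) / np.sqrt(8)  # 4-tap channel
H = np.fft.fft(h, N)
snr_db = 25.0
noise_var = 10 ** (-snr_db / 10)
Y = H * X + np.sqrt(noise_var / 2) * (rng.normal(size=N) + 1j * rng.normal(size=N))

# LS estimate at the pilot positions, then linear interpolation across subcarriers.
H_ls_p = Y[pilots] / X[pilots]
H_ls = np.interp(np.arange(N), pilots, H_ls_p.real) + \
       1j * np.interp(np.arange(N), pilots, H_ls_p.imag)

# LMMSE smoothing of the LS estimate: W = R_hh (R_hh + sigma^2 I)^{-1}.
# R_hh is formed from the true response here purely for illustration;
# in practice it comes from channel statistics.
R_hh = np.outer(H, H.conj()) / N
W = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(N))
H_lmmse = W @ H_ls

print("LS    MSE:", np.mean(np.abs(H - H_ls) ** 2))
print("LMMSE MSE:", np.mean(np.abs(H - H_lmmse) ** 2))  # lower than LS
```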
Procedia PDF Downloads 180
519 Anticancer Activity of Milk Fat Rich in Conjugated Linoleic Acid Against Ehrlich Ascites Carcinoma Cells in Female Swiss Albino Mice
Authors: Diea Gamal Abo El-Hassan, Salwa Ahmed Aly, Abdelrahman Mahmoud Abdelgwad
Abstract:
The major conjugated linoleic acid (CLA) isomers have an anticancer effect, especially against breast cancer cells, inhibiting cell growth and inducing cell death. CLA also has several health benefits in vivo, including antiatherogenesis, antiobesity, and modulation of immune function. The present study aimed to assess the safety and anticancer effects of milk fat CLA against in vivo Ehrlich ascites carcinoma (EAC) in female Swiss albino mice. This was based on an acute toxicity study, detection of tumor growth, the life span of EAC-bearing hosts, and simultaneous alterations in the hematological, biochemical, and histopathological profiles. Materials and Methods: One hundred and fifty adult female mice were equally divided into five groups. Groups (1-2) were normal controls, and Groups (3-5) were tumor-transplanted mice (TTM) inoculated intraperitoneally with EAC cells (2×10⁶/0.2 mL). Group (3) was the TTM positive control. Group (4) TTM were fed orally on a balanced diet supplemented with milk fat CLA (40 mg CLA/kg body weight). Group (5) TTM were fed orally on a balanced diet supplemented with the same level of CLA for 28 days before tumor cell inoculation. Blood samples and specimens from the liver and kidney were collected from each group. The effect of milk fat CLA on tumor growth, the life span of TTM, and simultaneous alterations in the hematological, biochemical, and histopathological profiles were examined. Results: For CLA-treated TTM, a significant decrease in tumor weight, ascitic volume, and viable Ehrlich cells, accompanied by an increase in life span, was observed. Hematological and biochemical profiles reverted to more or less normal levels, and histopathology showed minimal effects. Conclusion: The present study proved the safety and anticancer efficacy of milk fat CLA and provides a scientific basis for its medicinal use as an anticancer agent, attributable to the additive or synergistic effects of its isomers.
Keywords: anticancer activity, conjugated linoleic acid, Ehrlich ascites carcinoma, % increase in life span, mean survival time, tumor transplanted mice
Procedia PDF Downloads 90
518 Development of a Few-View Computed Tomographic Reconstruction Algorithm Using Multi-Directional Total Variation
Authors: Chia Jui Hsieh, Jyh Cheng Chen, Chih Wei Kuo, Ruei Teng Wang, Woei Chyn Chu
Abstract:
Compressed sensing (CS) based computed tomographic (CT) reconstruction algorithms utilize total variation (TV) to transform the CT image into a sparse domain and minimize the L1-norm of the sparse image for reconstruction. Different from traditional CS-based reconstruction, which only calculates x-coordinate and y-coordinate TV to transform CT images into the sparse domain, we propose a multi-directional TV to transform the tomographic image into the sparse domain for low-dose reconstruction. Our method considers all possible directions of TV calculation around a pixel, so the sparse transform for CS-based reconstruction is more accurate. In 2D CT reconstruction, we use eight-directional TV to transform the CT image into the sparse domain; for 3D reconstruction, we use 26-directional TV. This multi-directional sparse transform makes the CS-based reconstruction algorithm more powerful in reducing noise and increasing image quality. To validate and evaluate the performance of this multi-directional sparse transform, we use both the Shepp-Logan phantom and a head phantom as the targets for reconstruction, with the corresponding simulated sparse projection data (angular sampling intervals of 5 deg and 6 deg, respectively). The results show that the multi-directional TV method can reconstruct images with relatively fewer artifacts than the traditional CS-based reconstruction algorithm that only calculates x-coordinate and y-coordinate TV. We also chose RMSE, PSNR, and UQI as the parameters for quantitative analysis; on all three metrics, the proposed multi-directional TV method performs better.
Keywords: compressed sensing (CS), low-dose CT reconstruction, total variation (TV), multi-directional gradient operator
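As an illustration of the idea, the sketch below computes an eight-directional TV measure for a 2D image by summing absolute differences toward all eight neighbours of each pixel. It is a plain reading of the description above, not the authors' reconstruction code, and the test image is invented.

```python
import numpy as np

def tv_8dir(img: np.ndarray) -> float:
    """Sum of absolute gradients toward the 8 neighbours of each pixel."""
    shifts = [(-1, -1), (-1, 0), (-1, 1),
              ( 0, -1),          ( 0, 1),
              ( 1, -1), ( 1, 0), ( 1, 1)]
    tv = 0.0
    for dy, dx in shifts:
        diff = img - np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        tv += np.abs(diff).sum()
    return tv

phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0          # piecewise-constant test image
noisy = phantom + 0.1 * np.random.default_rng(1).normal(size=phantom.shape)
print(tv_8dir(phantom), tv_8dir(noisy))  # noise raises the TV value sharply
```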
Procedia PDF Downloads 256
517 Setting the Baseline for a Sentinel System for the Identification of Occupational Risk Factors in Africa
Authors: Menouni Aziza, Chbihi Kaoutar, Duca Radu Corneliu, Gilissen Liesbeth, Bounou Salim, Godderis Lode, El Jaafari Samir
Abstract:
In Africa, environmental and occupational health risks are mostly underreported. The aim of this research is to develop and implement a sentinel surveillance system comprising training and guidance of occupational physicians (OCs) who will report new work-related diseases in African countries. A group of 30 OCs is recruited and trained in each of the partner countries (Morocco, Benin and Ethiopia). Each committed OC is asked to recruit 50 workers during consultations within a time frame of 6 months (1500 workers per country). Workers are asked to fill out an online questionnaire about their health status and work conditions, including exposure to 20 chemicals. Urine and blood samples are then collected for human biomonitoring of common exposures. Preliminary results showed that 92% of the employees surveyed are exposed to physical constraints, 44% to chemical agents, and 24% to biological agents. The most common physical constraints are manual handling of loads, noise pollution and thermal pollution. The most frequent chemical risks are exposure to pesticides and fuels. This project will allow a better understanding of effective sentinel systems as a promising method to gather high-quality data, which can support policy-making in terms of preventing emerging work-related diseases.
Keywords: sentinel system, occupational diseases, human biomonitoring, Africa
Procedia PDF Downloads 82
516 Opto-Thermal Frequency Modulation of Phase Change Micro-Electro-Mechanical Systems
Authors: Syed A. Bukhari, Ankur Goswmai, Dale Hume, Thomas Thundat
Abstract:
Here we demonstrate mechanical detection of the photo-induced insulator-to-metal transition (MIT) in ultra-thin vanadium dioxide (VO₂) micro strings using < 100 µW of optical power. A highly focused laser beam heats the string locally, resulting in through-plane and axial heat diffusion; this localized heating can produce a temperature rise > 60 ºC. The heated region of VO₂ can transform from the insulating (monoclinic) to the conducting (rutile) phase, leading to lattice compression and a stiffness increase in the resonator. The mechanical frequency of the resonator can be tuned by changing the optical power and wavelength. The first-mode resonance frequency was tuned in three different ways: a decrease in frequency below a critical optical power, a large increase between 50-120 µW, followed by a large decrease in frequency for optical powers greater than 120 µW. The dynamic mechanical response was studied as a function of incident optical power and gas pressure. The resonance frequency and amplitude of vibration were found to decrease with increasing laser power from 25-38 µW and to increase by 1-2% when the laser power was further increased to 52 µW. The transition in the films was induced and detected both by a single pump-and-probe source and by employing external optical sources of different wavelengths. This trend in the dynamic parameters of the strings can be correlated with the reversible insulator-to-metal transition in VO₂ films, which changes the density of the material and hence the overall stiffness of the strings, leading to changes in string dynamics. The increase in frequency at a particular optical power manifests a transition to a more ordered metallic phase, which exerts tensile stress on the string. The decrease in frequency at higher optical powers can be correlated with the poor phonon thermal conductivity of VO₂ in the conducting phase, which can force in-plane penetration of heat into the SiN layer supporting the VO₂ and result in a decrease in resonance frequency. This noninvasive, non-contact laser-based excitation and detection of the insulator-to-metal transition using micro string resonators, at room temperature and with laser powers of a few µW, is important for low-power electronics and optical switching applications.
Keywords: thermal conductivity, vanadium dioxide, MEMS, frequency tuning
Procedia PDF Downloads 120
515 Mitigation of Interference in Satellite Communications Systems via a Cross-Layer Coding Technique
Authors: Mario A. Blanco, Nicholas Burkhardt
Abstract:
An important problem in satellite communication systems operating in the Ka and EHF frequency bands is the overall degradation in link performance of mobile terminals due to various types of degradation in the link/channel, such as fading, blockage of the link to the satellite (especially in urban environments), and intentional as well as other types of interference. In this paper, we focus primarily on the interference problem, and we develop a very efficient and cost-effective solution based on the use of fountain codes. We first introduce a satellite communications (SATCOM) terminal uplink interference channel model that is classically used against communication systems employing spread-spectrum waveforms. We then consider the use of fountain codes, with a focus on Raptor codes, as our main mitigation technique to combat the degradation in link/receiver performance due to the interference signal. The performance of the receiver is obtained in terms of the average probability of bit and message error rate as a function of the bit energy-to-noise density ratio, Eb/N0, and other parameters of interest, via a combination of analysis and computer simulations, and we show that the use of fountain codes is extremely effective in overcoming the effects of intentional interference on the performance of the receiver and associated communication links. We then show that this technique can be extended to mitigate other types of SATCOM channel degradation, such as those caused by channel fading, shadowing, and hard blockage of the uplink signal.
Keywords: SATCOM, interference mitigation, fountain codes, turbo codes, cross-layer
Procedia PDF Downloads 361
514 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This results in the separation of two agencies: the building envelope (skin) separate from the structure. From a material performance position, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, yet its enormous effectiveness within wood-framed construction has seldom led to serious questioning of, and challenges to, what it means to build. There are several downsides to this method that are less widely discussed. The first, and perhaps biggest, is waste. The second is that its reliance on wood assemblies forming walls, floors and roofs, conventionally nailed together through simple plate surfaces, is structurally inefficient; it requires additional material through plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, the history of wood construction in the airplane and boat manufacturing industries shows a significant transformation in the relationship of structure to skin. Boat construction transformed from indigenous wood practices of birch bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the evolution of the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, and often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates to ‘mono or single shell,’ is a structural system that supports loads and transfers them through an external enclosure system. Monocoques have largely existed outside the domain of architecture. However, this uniting of divergent systems has been demonstrated to be lighter, utilizing less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat and airplane building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique wood construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing.
Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 78
513 Detecting Hate Speech And Cyberbullying Using Natural Language Processing
Authors: Nádia Pereira, Paula Ferreira, Sofia Francisco, Sofia Oliveira, Sidclay Souza, Paula Paulino, Ana Margarida Veiga Simão
Abstract:
Social media has become a platform for hate speech among its users, and thus there is an increasing need to develop automatic detection classifiers of offense and conflict to help decrease the prevalence of such incidents. Online communication can be used to intentionally harm someone, which is why such classifiers could be essential in social networks. A possible application of these classifiers is the automatic detection of cyberbullying. Even though identifying the aggressive language used in online interactions is important for building cyberbullying datasets, other criteria must also be considered: being able to capture language indicative of the intent to harm others in a specific context of online interaction is fundamental. Offense and hate speech may be the foundation of online conflicts, which have become common in social media and are an emergent research focus in machine learning and natural language processing. This study presents two Portuguese-language offense-related datasets, which serve as examples for future research and extend the study of the topic. The first is similar to other offense-detection datasets and is entitled the Aggressiveness dataset. The second is a novelty because of its use of the history of interaction between users, and is entitled the Conflicts/Attacks dataset. Both datasets were developed in different phases. Firstly, we performed a content analysis of verbal aggression witnessed by adolescents in situations of cyberbullying. Secondly, we computed frequency analyses from the previous phase to gather lexical and linguistic cues used to identify potentially aggressive conflicts and attacks posted on Twitter. Thirdly, thorough annotation of real tweets was performed by independent postgraduate educational psychologists with experience in cyberbullying research. Lastly, we benchmarked these datasets with several machine learning classifiers.
Keywords: aggression, classifiers, cyberbullying, datasets, hate speech, machine learning
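For readers unfamiliar with how such datasets get used, below is a minimal sketch of the kind of offense classifier they enable: TF-IDF features with logistic regression. The tiny English corpus and labels are invented placeholders; the actual datasets are in Portuguese and far larger, and the paper's benchmark classifiers are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are a genius", "everyone hates you, loser",
         "great game yesterday", "shut up, nobody wants you here"]
labels = [0, 1, 0, 1]  # 0 = neutral, 1 = aggressive (toy annotation)

# Word and bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["nobody wants you"]))  # typically [1] on this toy corpus
```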
Procedia PDF Downloads 228
512 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained on the results of dynamic connectivity analysis between different brain regions are used for classification. The dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods: the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding-window analysis with a wide window, and the high susceptibility to noise encountered in constant sliding-window analysis with a narrow window. It does so by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed based on the same analysis method. As far as we know, through this research, we have shown for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.
Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
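A minimal sketch of the connectivity measure itself — the imaginary part of the complex Pearson correlation between the analytic signals of two channels — is given below. The fixed window and synthetic signals are simplifying assumptions; the RICI-based adaptive window-size rule from the paper is omitted.

```python
import numpy as np
from scipy.signal import hilbert

def im_cpcc(x: np.ndarray, y: np.ndarray) -> float:
    """Imaginary part of the complex Pearson correlation of two signals."""
    ax, ay = hilbert(x), hilbert(y)   # analytic signals
    ax = ax - ax.mean()
    ay = ay - ay.mean()
    corr = np.sum(ax * np.conj(ay)) / np.sqrt(
        np.sum(np.abs(ax) ** 2) * np.sum(np.abs(ay) ** 2))
    # The imaginary part suppresses zero-lag (volume-conduction) coupling.
    return float(corr.imag)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * 10 * t + np.pi / 4) + 0.3 * rng.normal(size=t.size)
print(im_cpcc(x, y))  # clearly non-zero for phase-lagged coupling
```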
Procedia PDF Downloads 214
511 Detecting Local Clusters of Childhood Malnutrition in the Island Province of Marinduque, Philippines Using Spatial Scan Statistic
Authors: Novee Lor C. Leyso, Maylin C. Palatino
Abstract:
Under-five malnutrition continues to persist in the Philippines, particularly in the island province of Marinduque, with the prevalence of some forms of malnutrition even worsening in recent years. Local spatial cluster detection provides a spatial perspective for understanding this phenomenon and is key to analyzing patterns of geographic variation, identifying community-appropriate programs and interventions, and focusing targeting on high-risk areas. Using data from a province-wide household-based census conducted in 2014–2016, this study aimed to determine and evaluate spatial clusters of under-five malnutrition, across the province and within each municipality, at the individual level using household location. Malnutrition was defined as a weight-for-age z-score falling outside 2 standard deviations from the median of the WHO reference population. Kulldorff's elliptical spatial scan statistic with a binomial model was used to locate clusters at high risk of malnutrition, while adjusting for age and membership in the government conditional cash transfer program as a proxy for socio-economic status. One large significant cluster of under-five malnutrition was found in the southwest of the province; living in these areas at least doubles the risk of malnutrition. Additionally, at least one significant cluster was identified within each municipality, mostly located along the coastal areas. All these indicate apparent geographical variation across and within municipalities in the province. There were also similarities and disparities in the patterns of risk of malnutrition in each cluster across municipalities, and even within municipalities, suggesting underlying causes at work that warrant further investigation. Therefore, community-appropriate programs and interventions should be identified and focused on high-risk areas to maximize limited government resources. Further studies are also recommended to determine factors affecting variation in childhood malnutrition, considering the evidence of spatial clustering found in this study.
Keywords: binomial model, Kulldorff's elliptical spatial scan statistic, Philippines, under-five malnutrition
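The scan statistic scores each candidate cluster with a log-likelihood ratio; a minimal sketch of that score under the Bernoulli/binomial model is below. The counts are invented for illustration, and in practice the maximum LLR over all scanned ellipses is tested for significance against Monte Carlo replications.

```python
import numpy as np

def kulldorff_llr(c: int, n: int, C: int, N: int) -> float:
    """LLR for c cases among n children inside a candidate cluster,
    C cases among N children overall (Bernoulli/binomial model)."""
    def xlogy(x, y):
        return 0.0 if x == 0 else x * np.log(y)
    inside = xlogy(c, c / n) + xlogy(n - c, 1 - c / n)
    outside = xlogy(C - c, (C - c) / (N - n)) + \
              xlogy((N - n) - (C - c), 1 - (C - c) / (N - n))
    null = xlogy(C, C / N) + xlogy(N - C, 1 - C / N)
    # Score only high-risk clusters (rate inside > rate outside).
    return inside + outside - null if c / n > (C - c) / (N - n) else 0.0

print(kulldorff_llr(c=60, n=200, C=300, N=3000))  # invented counts
```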
Procedia PDF Downloads 140
510 Ending Wars Over Water: Evaluating the Extent to Which Artificial Intelligence Can Be Used to Predict and Prevent Transboundary Water Conflicts
Authors: Akhila Potluru
Abstract:
Worldwide, more than 250 bodies of water are transboundary, meaning they cross the political boundaries of multiple countries. This creates a system of hydrological, economic, and social interdependence between communities reliant on these water sources, and transboundary water conflicts can occur as a result of this intense interdependence. Many factors contribute to the sparking of transboundary water conflicts, ranging from natural hydrological factors to hydro-political interactions. Previous attempts to predict transboundary water conflicts by analysing changes or trends in the contributing factors have typically failed because patterns in the data are hard to identify. However, there is potential for artificial intelligence and machine learning to fill this gap and identify future 'hotspots' up to a year in advance by finding patterns in data where humans can't. This research determines the extent to which AI can be used to predict and prevent transboundary water conflicts, via a critical literature review of previous case studies and datasets where AI was deployed to predict water conflict. This research not only delivered a more nuanced understanding of previously undervalued factors that contribute to transboundary water conflicts (in particular, culture and disinformation) but also showed that, by detecting conflict early, governance bodies can engage in processes to de-escalate conflict by providing pre-emptive solutions. Looking forward, this gives rise to significant implications for policy and water-sharing agreements, which may be able to prevent water conflicts from developing into wide-scale disasters. Additionally, AI can be used to gain a fuller picture of water-based conflicts in areas where security concerns mean it is not possible to have staff on the ground. AI therefore enhances not only the depth of our knowledge about transboundary water conflicts but also its breadth. With demand for water constantly growing, competition between countries over shared water will increasingly lead to water conflict. There has never been a more significant time to be able to accurately predict, and take precautions to prevent, global water conflicts.
Keywords: artificial intelligence, machine learning, transboundary water conflict, water management
Procedia PDF Downloads 105
509 Design and Analysis of Crankshaft Using Al-Al2O3 Composite Material
Authors: Palanisamy Samyraj, Sriram Yogesh, Kishore Kumar, Vaishak Cibi
Abstract:
This project concerns the design and analysis of a crankshaft using Al-Al2O3 composite material. The project is concentrated on two areas: one is to design and analyze the composite material, and the other is to work on the practical model. Growing competition and growing concern for the environment have forced automobile manufacturers to meet conflicting demands such as increased power and performance, lower fuel consumption, lower pollution emissions, and decreased noise and vibration. Metal matrix composites offer good properties for a number of automotive components. This work reports on studies of Al-Al2O3 as a possible alternative material for a crankshaft. These materials have been considered for use in various engine components due to their high strength-to-weight ratio, and they are significantly valued for their light weight, high strength, high specific modulus, low coefficient of thermal expansion, and good air resistance properties. In addition, the high specific stiffness, superior high-temperature mechanical properties, and oxidation resistance of Al2O3 have led to the development of advanced Al-Al2O3 composites. Crankshafts are used throughout the automobile industry: the crankshaft is connected through the connecting rod to the piston and is subjected to high stresses that cause wear. Hence, using composite material in the crankshaft gives good fuel efficiency, low manufacturing cost, and less weight.
Keywords: metal matrix composites, Al-Al2O3, high specific modulus, strength to weight ratio
Procedia PDF Downloads 275
508 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene
Authors: Jigg Pelayo, Ricardo Villar
Abstract:
Building on the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper presents an application of a robust pattern-matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed on the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the rate of feature recognition. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
Keywords: algorithm, LiDAR, object recognition, OBIA
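A minimal sketch of the NCC criterion itself is shown below: the zero-mean template is slid over the image and each window is scored in [-1, 1]. The brute-force loop is for clarity only; the arrays are invented, and a production pipeline would use an FFT-based or OpenCV implementation (cv2.matchTemplate with TM_CCOEFF_NORMED computes the same score).

```python
import numpy as np

def ncc_match(image: np.ndarray, template: np.ndarray) -> np.ndarray:
    """Normalized cross-correlation score map for every window position."""
    th, tw = template.shape
    t = template - template.mean()
    out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
            out[i, j] = (w * t).sum() / denom if denom > 0 else 0.0
    return out  # peaks mark candidate detections

rng = np.random.default_rng(2)
img = rng.random((32, 32))
tpl = img[10:18, 12:20].copy()   # plant the template in the image
scores = ncc_match(img, tpl)
print(np.unravel_index(scores.argmax(), scores.shape))  # -> (10, 12)
```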
Procedia PDF Downloads 244
507 The Use of Correlation Difference for the Prediction of Leakage in Pipeline Networks
Authors: Mabel Usunobun Olanipekun, Henry Ogbemudia Omoregbee
Abstract:
Anomalies such as leakages and bursts in water, hydraulic, or petrochemical pipeline networks have significant implications for the economy and the environment. To ensure pipeline systems are reliable, they must be efficiently controlled. Wireless sensor networks (WSNs) have become a powerful tool for critical infrastructure monitoring systems for water, oil and gas pipelines. The loss of water, oil and gas is inevitable and is strongly linked to financial costs and environmental problems, and its avoidance often saves economic resources: substantial repair costs and the loss of precious natural resources are part of the financial impact of leaking pipes. Pipeline systems experts have implemented various methodologies in recent decades to identify and locate leakages in water, oil and gas supply networks, including, among others, acoustic sensors, measurements, and statistical analysis of abrupt changes. The task of leak quantification is to estimate, given some observations about a network, the size and location of one or more leaks in a water pipeline network. In detecting background leakage, however, there is greater uncertainty in using these methodologies, since their output is not so reliable. In this work, we present a scalable concept and simulation in which a pressure-driven model (PDM) was used to determine water pipeline leakage in a network. Pressure data were collected with acoustic sensors located at node points a predetermined distance apart. Using the correlation difference, we were able to determine the leakage point, which was locally introduced at a predetermined location between two consecutive nodes and caused a substantial pressure difference in the pipeline network. After de-noising the signals from the sensors at the nodes, we successfully obtained the exact point where the local leakage was introduced, using the correlation difference model we developed.
Keywords: leakage detection, acoustic signals, pipeline network, correlation, wireless sensor networks (WSNs)
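The principle behind correlation-based localization can be sketched as follows: the lag of the peak cross-correlation between two node signals, combined with the wave propagation speed, places the leak between the sensors. The sampling rate, wave speed, sensor spacing, and signals below are all illustrative assumptions, not values from the study.

```python
import numpy as np

fs = 10_000.0   # sampling rate, Hz (assumed)
c = 1200.0      # acoustic wave speed in the pipe, m/s (assumed)
d = 100.0       # distance between the two sensors, m (assumed)

rng = np.random.default_rng(3)
leak = rng.normal(size=5000)       # broadband leak noise
delay = 80                          # samples: leak is closer to sensor 1
s1 = leak + 0.2 * rng.normal(size=leak.size)
s2 = np.roll(leak, delay) + 0.2 * rng.normal(size=leak.size)

xc = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = xc.argmax() - (leak.size - 1)  # samples; positive => s2 lags s1
dt = lag / fs
# With the leak x metres from sensor 1: dt = (d - 2x) / c  =>  x = (d - c*dt) / 2
x = (d - c * dt) / 2
print(f"estimated leak position: {x:.1f} m from sensor 1")  # ~45.2 m here
```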
Procedia PDF Downloads 109
506 Voice Liveness Detection Using Kolmogorov Arnold Networks
Authors: Arth J. Shah, Madhu R. Kamble
Abstract:
Voice biometric liveness detection certifies that the voice data presented during authentication is genuine and not a recording or a synthetic voice. With the rise of deepfakes and other equally sophisticated spoofing-generation techniques, it is becoming challenging to ensure that the person on the other end is a live speaker. A Voice Liveness Detection (VLD) system is a group of security measures that detect and prevent voice spoofing attacks. Motivated by the recent development of the Kolmogorov-Arnold Network (KAN), based on the Kolmogorov-Arnold theorem, we propose KAN for the VLD task. To date, multilayer perceptron (MLP) based classifiers have been used for such classification tasks. We aim to capture not only the compositional structure of the model but also to optimize the values of its univariate functions. This study presents mathematical as well as experimental analyses of KAN for VLD tasks, thereby opening a new perspective for scientists working on speech and signal processing tasks. The study combines traditional signal processing with new deep learning models, which proves to be a better combination for VLD tasks. Experiments are performed on the POCO and ASVspoof 2017 V2 databases. We used constant-Q transform (CQT), Mel, and short-time Fourier transform (STFT) based front-end features, with CNN, BiLSTM, and KAN as back-end classifiers. The best accuracy is 91.26% on the POCO database, using STFT features with the KAN classifier. On the ASVspoof 2017 V2 database, the lowest EER we obtained was 26.42%, using CQT features and KAN as the classifier.
Keywords: Kolmogorov Arnold networks, multilayer perceptron, pop noise, voice liveness detection
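To make the front-end concrete, here is a minimal sketch of extracting an STFT log-magnitude feature map from a waveform, the kind of input the back-end classifiers consume. The synthetic tone, sampling rate, and frame sizes are assumptions; the paper's exact feature configuration is not reproduced.

```python
import numpy as np
from scipy.signal import stft

fs = 16_000
t = np.arange(fs) / fs
wave = np.sin(2 * np.pi * 220 * t) + 0.05 * np.random.default_rng(4).normal(size=fs)

# STFT with 32 ms frames and 50% overlap (assumed settings).
f, frames, Z = stft(wave, fs=fs, nperseg=512, noverlap=256)
log_mag = np.log1p(np.abs(Z))   # log-magnitude spectrogram feature map
print(log_mag.shape)             # (freq bins, time frames) fed to CNN/BiLSTM/KAN
```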
Procedia PDF Downloads 39
505 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach
Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed
Abstract:
Sign languages (SLs) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one interestingly tied to their lexical and syntactic levels of organization. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition; consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on the Type-2 Fuzzy HMM (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD is an extension of eigendecomposition to non-square matrices, used here to reduce multi-attribute hand gesture data to feature vectors; SVD optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate type-2 fuzzy operators, which permit us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both the random and the fuzzy uncertainties existing universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, besides giving better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model
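The SVD front-end can be sketched in a few lines: the leading singular values of the (generally non-square) image matrix form a compact observation vector for the HMM. The image size, feature length, and random stand-in image are assumptions for illustration.

```python
import numpy as np

def svd_features(image: np.ndarray, k: int = 10) -> np.ndarray:
    """Top-k singular values of the image, normalized, as a feature vector."""
    s = np.linalg.svd(image, compute_uv=False)  # singular values, descending
    return s[:k] / s[:k].sum()

gesture = np.random.default_rng(5).random((120, 160))  # stand-in for a hand image
obs = svd_features(gesture)
print(obs)  # observation vector fed to the (type-2 fuzzy) HMM
```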
Procedia PDF Downloads 462
504 Detecting Memory-Related Gene Modules in sc/snRNA-seq Data by Deep-Learning
Authors: Yong Chen
Abstract:
Understanding the detailed molecular mechanisms of memory formation in engram cells is one of the most fundamental questions in neuroscience. Recent single-cell RNA-seq (scRNA-seq) and single-nucleus RNA-seq (snRNA-seq) techniques have allowed us to explore the sparsely activated engram ensembles, enabling access to the molecular mechanisms that underlie experience-dependent memory formation and consolidation. However, the absence of specific and powerful computational methods to detect memory-related genes (modules) and their regulatory relationships in sc/snRNA-seq datasets has strictly limited the analysis of the underlying mechanisms and memory coding principles in mammalian brains. Here, we present a deep-learning method named SCENTBOX to detect memory-related gene modules and causal regulatory relationships among them from sc/snRNA-seq datasets. SCENTBOX first constructs a co-differential expression gene network (CEGN) from case-versus-control sc/snRNA-seq datasets. It then detects highly correlated modules of differentially expressed genes (DEGs) in the CEGN. Deep network embedding and attention-based convolutional neural network strategies are employed to precisely detect regulatory relationships among DEGs in a module. We applied SCENTBOX to scRNA-seq datasets of TRAP;Ai14 mouse neurons with fear memory and detected not only known memory-related genes but also modules and potential causal regulations. Our results revealed novel regulatory relationships within an interesting module including Arc, Bdnf, Creb, Dusp1, Rgs4, and Btg2. Overall, our method provides a general computational tool for processing sc/snRNA-seq data from case-versus-control studies and a systematic investigation of fear-memory-related gene modules.
Keywords: sc/snRNA-seq, memory formation, deep learning, gene module, causal inference
Procedia PDF Downloads 120
503 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death
Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar
Abstract:
In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender, or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles, and analysis of this variability is carried out using time-domain, frequency-domain, and non-linear parameters. This paper presents an HRV analysis of an online dataset for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the root mean square of successive differences between adjacent RR intervals (RMSSD), and the mean of R-to-R intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF), and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values of all stated parameters for SCD subjects as compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is thereby verified, with an accuracy of 95%, that the proposed algorithm can identify a patient's mortality risk one hour before death. Identifying a patient's mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.
Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death
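The time-domain parameters named above reduce to a few lines of arithmetic on the RR-interval sequence; a minimal sketch follows. The RR values are invented, and the frequency-domain (VLF/LF/HF) and Poincaré computations are omitted.

```python
import numpy as np

# Toy RR-interval sequence in milliseconds; real values come from ECG R-peak detection.
rr_ms = np.array([812, 790, 805, 821, 798, 840, 816, 795, 830, 808], float)

mean_rr = rr_ms.mean()                # mean RR interval
sdnn = rr_ms.std(ddof=1)              # SDNN: std of normal-to-normal intervals
diff = np.diff(rr_ms)
rmssd = np.sqrt(np.mean(diff ** 2))   # RMSSD: root mean square of successive differences

print(f"mean RR = {mean_rr:.1f} ms, SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
```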
Procedia PDF Downloads 342
502 Autistic Traits and Multisensory Integration–Using a Size-Weight Illusion Paradigm
Authors: Man Wai Lei, Charles Mark Zaroff
Abstract:
Objective: A majority of studies suggest that people with Autism Spectrum Disorder (ASD) have multisensory integration deficits. However, normal and even supranormal multisensory integration abilities have also been reported. Additionally, little of this work has been undertaken utilizing a dimensional conceptualization of ASD, i.e., a broader autism phenotype. Utilizing methodology that controls for common potential confounds, the current study aimed to examine whether deficits in multisensory integration are associated with ASD traits in a non-clinical population. The contribution of affective versus non-affective components of sensory hypersensitivity to multisensory integration was also examined. Methods: Participants were 147 undergraduate university students in Macau, a Special Administrative Region of China, of Chinese ethnicity, aged 16 to 21 (mean age = 19.13; SD = 1.07). Participants completed the Autism-Spectrum Quotient, the Sensory Perception Quotient, and the Adolescent/Adult Sensory Profile, in order to measure ASD traits and the non-affective and affective aspects of sensory/perceptual hypersensitivity, respectively. In order to explore multisensory integration across the visual and haptic domains, participants were asked to judge which of two equally weighted but different-sized cylinders was heavier, as a means of detecting the presence of the size-weight illusion (SWI). Results: ASD trait level was significantly and negatively correlated with susceptibility to the SWI (p < 0.05); this correlation was not associated with either accuracy in weight discrimination or gender. Examining the top decile of the non-normally distributed SWI scores revealed a significant negative association with sensation avoiding, but not with other aspects of affective or non-affective sensory hypersensitivity. Conclusion and Implications: Within the normal population, a greater degree of ASD traits is associated with a lower likelihood of multisensory integration, echoing what is often found in individuals with a clinical diagnosis of ASD and providing further evidence for the dimensional nature of this disorder. This tendency appears to be associated with dysphoric emotional reactions to sensory input.
Keywords: Autism Spectrum Disorder, dimensional, multisensory integration, size-weight illusion
Procedia PDF Downloads 482
501 Nonlinear Passive Shunt for Electroacoustic Absorbers Using Nonlinear Energy Sink
Authors: Diala Bitar, Emmanuel Gourdon, Claude H. Lamarque, Manuel Collet
Abstract:
Acoustic absorber devices play an important role in reducing noise along the propagation and reception paths. An electroacoustic absorber consists of a loudspeaker coupled to an electric shunt circuit, where the membrane plays the role of an absorber/reflector of sound. Although the use of linear shunt resistors at the transducer terminals has been shown to improve the performance of dynamic absorbers, it is only efficient in a narrow frequency band. Therefore, since nonlinear phenomena are promising for their ability to absorb vibrations and sound over a larger frequency range, we propose to couple a nonlinear electric shunt circuit to the loudspeaker terminals. The equivalent model can then be described by a two-degree-of-freedom system, consisting of a primary linear oscillator describing the dynamics of the loudspeaker membrane, linearly coupled to a cubic nonlinear energy sink (NES). The system is treated analytically for the case of 1:1 resonance, using an invariant manifold approach at different time scales. The proposed methodology enables us to detect the equilibrium points and fold singularities at the first slow time scale, providing a predictive tool for designing the nonlinear shunt circuit during the energy exchange process. The preliminary results are promising: significant improvements in acoustic absorption performance are obtained.
Keywords: electroacoustic absorber, multiple-time-scale with small finite parameter, nonlinear energy sink, nonlinear passive shunt
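The equivalent model can be sketched numerically: a forced, damped linear oscillator coupled to a small mass through a cubic spring and damper. All coefficients below are illustrative assumptions rather than identified loudspeaker parameters, and the analytical multiple-time-scale treatment is replaced by direct integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m1, k1, c1 = 1.0, 1.0, 0.01    # primary mass, stiffness, damping (assumed)
m2, c2, k3 = 0.05, 0.02, 1.0   # NES mass, damping, cubic stiffness (assumed)
F, w = 0.1, 1.0                 # harmonic forcing near 1:1 resonance (assumed)

def rhs(t, y):
    x1, v1, x2, v2 = y
    r, vr = x1 - x2, v1 - v2
    f_nes = k3 * r**3 + c2 * vr              # cubic restoring + damping force
    a1 = (F * np.cos(w * t) - k1 * x1 - c1 * v1 - f_nes) / m1
    a2 = f_nes / m2
    return [v1, a1, v2, a2]

sol = solve_ivp(rhs, (0, 500), [0, 0, 0, 0], max_step=0.05)
print(np.abs(sol.y[0][-2000:]).max())  # steady-state primary amplitude with NES active
```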
Procedia PDF Downloads 220
500 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis
Authors: Mohamed El Morsy, Gabriela Achtenová
Abstract:
Research on damage in gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signal of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands, which is useful when more than one family of sidebands is present at the same time. The concepts of measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers: the input dynamometer serves as the internal combustion engine, and the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect was manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the diagnosis of gear faults based on the order cepstrum is also presented, and the procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of gear condition.
Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals
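The core operation is compact enough to sketch: the real cepstrum is the inverse FFT of the log-magnitude spectrum, and a family of sidebands spaced by the fault frequency collapses into a peak at the matching quefrency. The signal below, a gear-mesh tone amplitude-modulated at 25 Hz, is synthetic and only illustrative.

```python
import numpy as np

def real_cepstrum(x: np.ndarray) -> np.ndarray:
    """Inverse FFT of the log-magnitude spectrum."""
    spectrum = np.fft.fft(x)
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)))

fs = 10_000
t = np.arange(fs) / fs
mesh = np.sin(2 * np.pi * 1000 * t)              # gear-mesh tone (assumed)
fault = 1 + 0.5 * np.sin(2 * np.pi * 25 * t)     # 25 Hz modulation from a damaged tooth
x = mesh * fault + 0.05 * np.random.default_rng(6).normal(size=t.size)

ceps = real_cepstrum(x)
q = np.arange(t.size) / fs                        # quefrency axis, seconds
peak = q[100 + ceps[100:fs // 2].argmax()]        # skip the low-quefrency region
print(f"dominant quefrency ≈ {peak * 1000:.1f} ms (1/25 Hz = 40 ms)")
```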
Procedia PDF Downloads 379
499 Resonant Fluorescence in a Two-Level Atom and the Terahertz Gap
Authors: Nikolai N. Bogolubov, Andrey V. Soldatov
Abstract:
Terahertz radiation occupies a range of frequencies from roughly 100 GHz to approximately 10 THz, between microwaves and infrared waves. This range of frequencies holds promise for many useful applications in experimental applied physics and technology. At the same time, reliable, simple techniques for the generation, amplification, and modulation of electromagnetic radiation in this range are far from being developed enough to meet the requirements of practical use, especially in comparison to the technological abilities already achieved for other domains of the electromagnetic spectrum. This situation of relative underdevelopment of a potentially very important range of the electromagnetic spectrum is known by the name of the 'terahertz gap.' Among other things, technological progress in the terahertz area has been impeded by the lack of compact, low-energy-consumption, easily controlled, continuously radiating terahertz sources. Therefore, the development of new techniques serving this purpose, as well as of various devices based on them, is an obvious necessity, and it would be highly advantageous to employ the simplest suitable physical systems as the critical components in such techniques and devices. The purpose of the present research was to show, by means of conventional methods of non-equilibrium statistical mechanics and the theory of open quantum systems, that a thoroughly studied two-level quantum system, also known as a one-electron two-level 'atom', driven by an external classical monochromatic high-frequency (e.g. laser) field, can radiate continuously at a much lower (e.g. terahertz) frequency in the fluorescent regime if the transition dipole moment operator of this 'atom' possesses permanent, non-equal diagonal matrix elements. This contradicts the conventional assumption, routinely made in quantum optics, that only the non-diagonal matrix elements persist. The conventional assumption is pertinent to natural atoms and molecules and stems from the spatial inversion symmetry of their eigenstates. At the same time, such an assumption is no longer justified for artificially manufactured quantum systems of reduced dimensionality, such as quantum dots, which are often nicknamed 'artificial atoms' due to the striking similarity of their optical properties to those of real atoms. Possible ways toward experimental observation and practical implementation of the predicted effect are discussed as well.
Keywords: terahertz gap, two-level atom, resonant fluorescence, quantum dot
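In the atom's energy eigenbasis, the dipole operator assumed here can be written as follows — a minimal sketch of the key hypothesis, with the conventional quantum-optics case recovered when the diagonal (permanent) elements vanish or coincide:

```latex
\hat{d} = d_{11}\,\lvert 1\rangle\langle 1\rvert
        + d_{22}\,\lvert 2\rangle\langle 2\rvert
        + d_{12}\left(\lvert 1\rangle\langle 2\rvert + \lvert 2\rangle\langle 1\rvert\right),
\qquad d_{11} \neq d_{22}.
```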
Procedia PDF Downloads 271
498 An Intelligent Scheme Switching for MIMO Systems Using Fuzzy Logic Technique
Authors: Robert O. Abolade, Olumide O. Ajayi, Zacheaus K. Adeyemo, Solomon A. Adeniran
Abstract:
Link adaptation is an important strategy for achieving robust wireless multimedia communications based on quality-of-service (QoS) demand. Scheme switching in multiple-input multiple-output (MIMO) systems is an aspect of link adaptation: it involves selecting among different MIMO transmission schemes or modes so as to adapt to varying radio channel conditions for the purpose of achieving QoS delivery. However, finding the most appropriate switching method in MIMO links is still a challenge, as existing methods are either computationally complex or not always accurate. This paper presents an intelligent switching method for a MIMO system consisting of two schemes - transmit diversity (TD) and spatial multiplexing (SM) - using a fuzzy logic technique. In this method, two channel quality indicators (CQIs), namely the average received signal-to-noise ratio (RSNR) and the received signal strength indicator (RSSI), are measured and passed as inputs to the fuzzy logic system, which then gives a decision - an inference. The switching decision of the fuzzy logic system is fed back to the transmitter to switch between the TD and SM schemes. Simulation results show that the proposed fuzzy-logic-based switching technique outperforms the conventional static switching technique in terms of bit error rate and spectral efficiency.
Keywords: channel quality indicator, fuzzy logic, link adaptation, MIMO, spatial multiplexing, transmit diversity
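A minimal sketch of such a two-input fuzzy switch is given below, using triangular membership functions and two hand-written rules. The breakpoints and rules are invented for illustration; the paper's actual membership functions and rule base are not reproduced.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def switch_decision(rsnr_db: float, rssi_dbm: float) -> str:
    snr_low, snr_high = tri(rsnr_db, -5, 5, 15), tri(rsnr_db, 10, 25, 40)
    rssi_low, rssi_high = tri(rssi_dbm, -110, -95, -80), tri(rssi_dbm, -90, -70, -50)
    td = max(snr_low, rssi_low)    # Rule 1: poor channel -> favour TD (robustness)
    sm = min(snr_high, rssi_high)  # Rule 2: good channel -> favour SM (rate)
    return "SM" if sm > td else "TD"

print(switch_decision(rsnr_db=28, rssi_dbm=-60))   # -> SM
print(switch_decision(rsnr_db=3, rssi_dbm=-100))   # -> TD
```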
Procedia PDF Downloads 152
497 Lithuanian Sign Language Literature: Metaphors at the Phonological Level
Authors: Anželika Teresė
Abstract:
In order to solve issues in sign language linguistics, address matters pertaining to maintaining high quality in sign language (SL) translation, contribute to dispelling misconceptions about SL and deaf people, and raise awareness and understanding of the deaf community's heritage, this presentation discusses literature in Lithuanian Sign Language (LSL) and its inherent metaphors, which are created using the phonological parameters: handshape, location, movement, palm orientation, and nonmanual features. The study covered in this presentation is twofold, involving both micro-level analysis of metaphors in terms of phonological parameters as a sub-lexical feature and macro-level analysis of the poetic context. Cognitive theories underlie research on metaphors in sign language literature across a range of SLs, and this study follows that practice. The presentation covers a qualitative analysis of 34 pieces of LSL literature. The analysis employs the ELAN software widely used in SL research. The target is to examine how specific types of each phonological parameter are used for the creation of metaphors in LSL literature and what metaphors are created. The results show that LSL literature employs a range of metaphors created by using classifier signs and by modifying established signs. The study also reveals that LSL literature tends to create reference metaphors indicating status and power. As the study shows, LSL poets metaphorically encode status by encoding another meaning in the same sign, which results in double metaphors. A metaphor of identity has also been determined; notably, the poetic context has revealed that this metaphor can also be identified as a metaphor for life. The study goes on to note that deaf poets create metaphors related to the importance of various phenomena and the significance of the lyrical subject. Notably, the study detected locations, nonmanual features, etc., never mentioned in previous SL research as being used for the creation of metaphors.
Keywords: Lithuanian sign language, sign language literature, sign language metaphor, metaphor at the phonological level, cognitive linguistics
Procedia PDF Downloads 136
496 Intrusion Detection in SCADA Systems
Authors: Leandros A. Maglaras, Jianmin Jiang
Abstract:
The protection of national infrastructures from cyberattacks is one of the main issues for national and international security. The funded European Framework-7 (FP7) research project CockpitCI introduces intelligent intrusion detection, analysis and protection techniques for Critical Infrastructures (CIs). The paradox is that CIs massively rely on the newest interconnected, and vulnerable, Information and Communication Technology (ICT), whilst the control equipment, legacy software/hardware, is typically old. Such a combination of factors may lead to very dangerous situations, exposing systems to a wide variety of attacks. To overcome such threats, the CockpitCI project combines machine learning techniques with ICT technologies to produce advanced intrusion detection, analysis and reaction tools that provide intelligence to field equipment. This allows the field equipment to make local decisions in order to self-identify and self-react to abnormal situations introduced by cyberattacks. In this paper, an intrusion detection module capable of detecting malicious network traffic in a Supervisory Control and Data Acquisition (SCADA) system is presented. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. The one-class support vector machine (OCSVM) is an intrusion detection mechanism that needs neither labeled data for training nor any information about the kind of anomaly it should expect during detection. This feature makes it ideal for processing SCADA environment data and automates SCADA performance monitoring. The OCSVM module developed is trained offline on network traces and detects anomalies in the system in real time. The module is part of an IDS (intrusion detection system) developed under the CockpitCI project and communicates with the other parts of the system by exchanging IDMEF messages that carry information about the source of the incident, the time, and a classification of the alarm.
Keywords: cyber-security, SCADA systems, OCSVM, intrusion detection
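A minimal sketch of the OCSVM approach is shown below with scikit-learn: the model is fitted only on normal-traffic feature vectors and then flags deviations. The two toy features (think packet rate and mean packet size) and all numbers are invented stand-ins for real SCADA trace features.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
normal = rng.normal(loc=[100.0, 200.0], scale=[5.0, 10.0], size=(500, 2))
attack = rng.normal(loc=[160.0, 80.0], scale=[5.0, 10.0], size=(20, 2))

# Train on normal traffic only; no attack labels are needed.
model = OneClassSVM(kernel="rbf", nu=0.01, gamma="scale").fit(normal)
pred = model.predict(np.vstack([normal[:5], attack[:5]]))  # +1 normal, -1 anomaly
print(pred)  # expected: +1 for the first five samples, -1 for the last five
```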
Procedia PDF Downloads 552
495 Optimum Design of Hybrid (Metal-Composite) Mechanical Power Transmission System under Uncertainty by Convex Modelling
Authors: Sfiso Radebe
Abstract:
Design models dealing with flawless composite structures are in abundance, with the mechanical properties of the composite structures assumed to be known a priori. However, if the worst-case scenario is assumed, where material defects combined with processing anomalies in composite structures are expected, a different solution is attained. Furthermore, if the system being designed combines in series hybrid elements, each individually affected by material constant variations, a different approach needs to be taken. In the body of literature, there is a compendium of research that investigates the different modes of failure affecting hybrid metal-composite structures, covering areas pertaining to the failure of hybrid joints, structural deformation, transverse displacement, and the suppression of vibration and noise. In the present study, a system employing a combination of two or more hybrid power-transmitting elements is explored for the least favourable dynamic loads as well as weight minimization, subject to uncertain material properties. Elastic constants are assumed to be uncertain-but-bounded quantities varying slightly around their nominal values, and the solution is determined using convex models of uncertainty. Convex analysis of the problem leads to the computation of the least favourable solution and ultimately to a robust design. This approach contrasts with a deterministic analysis, where the average values of the elastic constants are employed in the calculations, neglecting the variations in the material properties.
Keywords: convex modelling, hybrid, metal-composite, robust design
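The idea of an uncertain-but-bounded convex model can be sketched as follows: each elastic constant lives in a small interval around its nominal value, and the least favourable response is searched over the vertices of the resulting box (sufficient here because the toy response is monotone in each constant). The response function and all numbers are invented stand-ins, not the paper's model.

```python
import numpy as np
from itertools import product

E_nom = np.array([70e9, 150e9])  # nominal moduli of two hybrid elements (assumed)
eps = 0.05                       # +/-5% uncertain-but-bounded variation (assumed)

def response(E):
    # Toy series compliance of two power-transmitting elements.
    return 1.0 / E[0] + 1.0 / E[1]

# Least favourable (maximum) compliance over the vertices of the uncertainty box.
worst = max(
    response(E_nom * (1 + eps * np.array(signs)))
    for signs in product([-1, 1], repeat=E_nom.size)
)
print(f"least favourable compliance: {worst:.3e}")
```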
Procedia PDF Downloads 211
494 A Case Study on Post-Occupancy Evaluation of User Satisfaction in Higher Educational Buildings
Authors: Yuanhong Zhao, Qingping Yang, Andrew Fox, Tao Zhang
Abstract:
Post-occupancy evaluation (POE) is a systematic approach to assessing actual building performance after the building has been occupied for some time. In this paper, a structured POE assessment was conducted using the building use survey (BUS) methodology in two higher educational buildings in the United Kingdom. This study aims to help close the building performance gap, provide optimized building operation suggestions, and improve occupants' satisfaction level. The questionnaire survey investigated the influence of environmental factors on user satisfaction with respect to overall building design, thermal comfort, perceived control, and indoor environment quality for noise, lighting and ventilation, as well as non-environmental factors such as background information about age, sex, time in the buildings, and workgroup size. The results indicate that occupant satisfaction with the main aspects of overall building design, indoor environment quality, and thermal comfort in summer and winter in both buildings is lower than the benchmark data. The feedback from this POE assessment has been reported to the building management team to allow managers to develop high-performance building operation plans. Finally, this research provides improvement suggestions for the building operation system to narrow the performance gap and improve user satisfaction and productivity.
Keywords: building performance assessment systems, higher educational buildings, post-occupancy evaluation, user satisfaction
Procedia PDF Downloads 152
493 The Clustering of Multiple Sclerosis Subgroups through L2 Norm Multifractal Denoising Technique
Authors: Yeliz Karaca, Rana Karabudak
Abstract:
Multifractal denoising techniques are used in the identification of significant attributes by removing noise from the dataset. Magnetic resonance imaging (MRI) is the most sensitive method for identifying chronic disorders of the nervous system such as multiple sclerosis (MS). MRI and Expanded Disability Status Scale (EDSS) data belonging to 120 individuals who have one of the subgroups of MS (Relapsing Remitting MS (RRMS), Secondary Progressive MS (SPMS), Primary Progressive MS (PPMS)), as well as 19 healthy individuals in the control group, have been used in this study. The study comprised the following stages: (i) the L2-norm multifractal denoising technique, one of the multifractal techniques, was applied to the MS data (MRI and EDSS), yielding a new dataset; (ii) this new MS dataset was fed to the K-Means and Fuzzy C-Means (FCM) clustering algorithms, which are among the unsupervised methods, and the clustering performances were compared; (iii) in identifying significant attributes in the MS dataset through the multifractal denoising (L2-norm) technique, using the K-Means and FCM algorithms on the MS subgroups and the control group of healthy individuals, excellent performance was achieved. According to the clustering results based on the MS subgroups, successful clustering was obtained with both K-Means and FCM by applying the L2-norm multifractal denoising technique to the MS dataset. Clustering performance was more successful with the denoised dataset (L2_Norm MS Data Set), in which significant attributes are obtained by applying the L2-norm denoising technique.
Keywords: clinical decision support, clustering algorithms, multiple sclerosis, multifractal techniques
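The clustering stage can be sketched with scikit-learn, as below: K-Means with k = 4 (three MS subgroups plus the healthy control group) applied to a feature matrix standing in for the denoised MRI/EDSS attributes. The synthetic features are invented, and the Fuzzy C-Means step is omitted since it is not part of scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
centers = rng.normal(size=(4, 6)) * 3   # 4 groups, 6 features (assumed)
X = np.vstack([c + rng.normal(scale=0.5, size=(35, 6)) for c in centers])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_))          # cluster sizes, ~35 each on this toy data
```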
Procedia PDF Downloads 168