Search results for: artificial intelligence and genetic algorithms
1516 Global Developmental Delay and Its Association with Risk Factors: Validation by Structural Equation Modelling
Authors: Bavneet Kaur Sidhu, Manoj Tiwari
Abstract:
Global Developmental Delay (GDD) is a common pediatric condition. Etiologies of GDD may, however, differ in developing countries. In the last decade, sporadic families have been reported in various countries. To the authors' best knowledge, many risk factors and their correlation with the prevalence of GDD have been studied, but a statistical correlation analysis has not been performed. We therefore propose the present study, targeting the risk factors, prevalence, and their statistical correlation with GDD. The FMR1 gene was studied to confirm the disease and its penetrance. A complete questionnaire was designed for the statistical studies, covering personal, past, and present medical history along with socio-economic status. Methods: The children were divided into four age groups at five-year intervals, and structural equation modeling (SEM) techniques, Spearman's rank correlation coefficient, Karl Pearson's correlation coefficient, and the chi-square test were applied. Results: A total of 1100 families were enrolled in this study; among them, 330 were clinically and biologically confirmed (radiological studies) for the disease, of whom 204 were males (61.82%) and 126 were females (38.18%). We found that 27.87% of cases were genetic and 72.12% were sporadic; of the sporadic cases, 43.27% came from urban and 56.72% from rural localities. The mothers' literacy rate was 32.12%, and 41.21% of the mothers were working women. Conclusions: There is a significant association between mothers' age and GDD prevalence, followed by mothers' literacy rate and mothers' occupation, whereas there was no association between fathers' age and GDD.
Keywords: global developmental delay, FMR1 gene, Spearman's rank correlation coefficient, structural equation modeling
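The three association statistics named in the abstract can be sketched with scipy.stats; the data below are hypothetical, not the study's.

```python
# Illustrative sketch (hypothetical data, not the study's): the three
# association statistics named in the abstract, via scipy.stats.
import numpy as np
from scipy import stats

# Hypothetical paired observations: mother's age and a GDD severity score.
mothers_age = np.array([19, 22, 25, 28, 31, 34, 37, 40])
severity = np.array([1, 1, 2, 2, 3, 3, 4, 4])

rho, p_rho = stats.spearmanr(mothers_age, severity)   # rank correlation
r, p_r = stats.pearsonr(mothers_age, severity)        # linear correlation
print(round(rho, 3), round(r, 3))

# Chi-square test of independence on a hypothetical 2x2 table:
# rows = maternal literacy (yes/no), columns = GDD (present/absent).
table = np.array([[30, 70], [60, 40]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(dof)
```

For a 2x2 table the test has one degree of freedom; a small p-value here would indicate an association between literacy and GDD status.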
Procedia PDF Downloads 135
1515 A Case Study of Deep Learning for Disease Detection in Crops
Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell
Abstract:
In the precision agriculture area, one of the main tasks is the automated detection of diseases in crops. Machine learning algorithms have been studied in recent decades for such tasks in view of the economic gains that automated disease detection may bring over crop fields. The latest generation of deep convolutional neural networks has achieved significant results in image classification. Accordingly, this work tested an implementation of a deep convolutional neural network architecture for the detection of diseases in different types of crops. A data augmentation strategy was used to meet the data requirements of the algorithm, implemented with a deep learning framework. Two test scenarios were deployed. The first trained the network on images taken from a controlled environment only, while the second used images from both the field and the controlled environment. The results evaluate the generalisation capacity of the neural networks with respect to the two types of images. The general classification accuracy was 59% in scenario 1 and 96% in scenario 2.
Keywords: convolutional neural networks, deep learning, disease detection, precision agriculture
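The data-augmentation step mentioned above can be sketched with numpy alone; real pipelines in a deep learning framework would add crops, colour jitter, and so on, but geometric variants are the core idea.

```python
# Minimal, hypothetical sketch of the image data-augmentation step:
# expand each labelled image into flipped and rotated variants.
import numpy as np

def augment(image: np.ndarray) -> list:
    """Return the original image plus flipped and rotated variants."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

# A toy 4x4 single-channel "leaf image".
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
augmented = augment(img)
print(len(augmented))  # 6 variants per input image
```

Each training image yields six views here, which helps a data-hungry convolutional network generalise from a small field dataset.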
Procedia PDF Downloads 259
1514 Genome-Wide Analysis of Long Terminal Repeat (LTR) Retrotransposons in Rabbit (Oryctolagus cuniculus)
Authors: Zeeshan Khan, Faisal Nouroz, Shumaila Noureen
Abstract:
The European or common rabbit (Oryctolagus cuniculus) belongs to class Mammalia, order Lagomorpha, family Leporidae. Rabbits are distributed worldwide and are native to Europe (France, Spain, and Portugal) and Africa (Morocco and Algeria). LTR retrotransposons are the major Class I mobile genetic elements of eukaryotic genomes and play a crucial role in genome expansion, evolution, and diversification. They have mostly been annotated in various genomes by conventional homology searches, which restricted the annotation of novel elements. The present work involved de novo identification of LTR retrotransposons by LTR_FINDER in the haploid genome of the rabbit (2247.74 Mb), distributed over 22 chromosomes, in which 7,933 putative full-length or partial copies were identified, containing 69.38 Mb of elements and accounting for 3.08% of the genome. The highest copy number (731) was found on chromosome 7, followed by chromosome 12 (705), while the lowest copy number (27) was detected on chromosome 19; no elements were identified on chromosome 21, owing to the partially sequenced chromosome, unidentified nucleotides (N), and repeated simple sequence repeats (SSRs). The identified elements ranged in size from 1.2 to 25.8 Kb, with average sizes between 2 and 10 Kb. The highest percentage of elements (4.77%) was found on chromosome 15 and the lowest (0.55%) on chromosome 19. The most frequent tRNA type was arginine, present in the majority of the elements. Based on these results, it was estimated that the rabbit exhibits 15,866 copies comprising 137.73 Mb of elements, accounting for 6.16% of the diploid genome (44 chromosomes). Further molecular analyses will be helpful for the chromosomal localization and distribution of these elements.
Keywords: rabbit, LTR retrotransposons, genome, chromosome
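The genome-coverage figure quoted above is a simple ratio, which can be checked back-of-the-envelope from the numbers given in the abstract.

```python
# Back-of-the-envelope check of the coverage figure quoted above:
# total element length divided by assembly size gives the percentage.
haploid_genome_mb = 2247.74   # rabbit haploid assembly size (Mb), per abstract
ltr_total_mb = 69.38          # summed length of identified LTR elements (Mb)

coverage_pct = 100 * ltr_total_mb / haploid_genome_mb
print(round(coverage_pct, 2))  # consistent with the reported 3.08%, up to rounding
```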
Procedia PDF Downloads 149
1513 Computational, Human, and Material Modalities: An Augmented Reality Workflow for Building Form-Found Textile Structures
Authors: James Forren
Abstract:
This research paper details a recent demonstrator project in which digital form found textile structures were built by human craftspersons wearing augmented reality (AR) head-worn displays (HWDs). The project utilized a wet-state natural fiber / cementitious matrix composite to generate minimal bending shapes in tension which, when cured and rotated, performed as minimal-bending compression members. The significance of the project is that it synthesizes computational structural simulations with visually guided handcraft production. Computational and physical form-finding methods with textiles are well characterized in the development of architectural form. One difficulty, however, is physically building computer simulations: often requiring complicated digital fabrication workflows. However, AR HWDs have been used to build a complex digital form from bricks, wood, plastic, and steel without digital fabrication devices. These projects utilize, instead, the tacit knowledge motor schema of the human craftsperson. Computational simulations offer unprecedented speed and performance in solving complex structural problems. Human craftspersons possess highly efficient complex spatial reasoning motor schemas. And textiles offer efficient form-generating possibilities for individual structural members and overall structural forms. This project proposes that the synthesis of these three modalities of structural problem-solving – computational, human, and material - may not only develop efficient structural form but offer further creative potentialities when the respective intelligence of each modality is productively leveraged. The project methodology pertains to its three modalities of production: 1) computational, 2) human, and 3) material. A proprietary three-dimensional graphic statics simulator generated a three-legged arch as a wireframe model. This wireframe was discretized into nine modules, three modules per leg. 
Each module was modeled as a woven matrix of one-inch diameter chords, and each woven matrix was transmitted to a holographic engine running on the HWDs. Craftspersons wearing the HWDs then wove wet cementitious chords within a simple falsework frame to match the minimal bending form displayed in front of them. Once the woven components had cured, they were demounted from the frame. The components were then assembled into a full structure using the holographically displayed computational model as a guide. The assembled structure was approximately eighteen feet in diameter and ten feet in height and matched the holographic model to under an inch of tolerance. The construction validated the computational simulation of the minimal bending form, as it was dimensionally stable for a ten-day period, after which it was disassembled. The demonstrator illustrated the facility with which a computationally derived, structurally stable form could be achieved through the holographically guided, complex three-dimensional motor schema of the human craftsperson. However, the workflow traveled unidirectionally from computer to human to material, failing to fully leverage the intelligence of each modality. Subsequent research, including a workshop testing human interaction with a physics-engine simulation of string networks and work on the use of HWDs to capture hand gestures in weaving, seeks to develop further interactivity with rope and chord toward a bi-directional workflow within full-scale building environments.
Keywords: augmented reality, cementitious composites, computational form finding, textile structures
Procedia PDF Downloads 175
1512 Daylightophil Approach towards High-Performance Architecture for Hybrid-Optimization of Visual Comfort and Daylight Factor in BSk
Authors: Mohammadjavad Mahdavinejad, Hadi Yazdi
Abstract:
Much of what we take in from the world is shaped through visual form; light is thus an inseparable element of human life. The use of daylight in visual perception and environment readability is an important issue for users. With regard to the hazards of greenhouse gas emissions from fossil fuels, and in line with efforts to reduce energy consumption, the correct use of daylight results in lower energy consumption by artificial lighting, heating, and cooling systems. Windows are usually the starting points for analysis and simulations to achieve visual comfort and energy optimization; therefore, attention should be paid to the orientation of buildings to minimize electrical energy use and maximize the use of daylight. In this paper, using the DesignBuilder software, the effect of the orientation of an 18 m² (3 m × 6 m) room with a 3 m ceiling height in the city of Tehran has been investigated, considering the design constraints. In these simulations, the orientation of the building was changed in one-degree increments, with the window located on the smaller (3 m × 3 m) face of the building at an 80% window-to-wall ratio. The results indicate that building orientation has a great deal to do with energy efficiency in meeting high-performance architecture and planning goals and objectives.
Keywords: daylight, window, orientation, energy consumption, design builder
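An orientation sweep of the kind described can be caricatured with a cosine-incidence score; this is a deliberately crude stand-in (the study itself uses DesignBuilder, and the fixed sun position below is an assumption for illustration).

```python
# Hedged illustration of a one-degree orientation sweep: score each
# hypothetical window azimuth by the cosine of the angle between the
# window normal and a fixed sun direction. Real studies use full
# building-energy simulation, not this toy score.
import math

sun_azimuth_deg = 180.0  # assumed: sun due south (northern hemisphere, midday)

def incidence_score(window_azimuth_deg: float) -> float:
    """Cosine of the azimuthal angle between window normal and sun, clipped at 0."""
    diff = math.radians(window_azimuth_deg - sun_azimuth_deg)
    return max(0.0, math.cos(diff))

best = max(range(360), key=incidence_score)
print(best)  # the south-facing orientation maximises this toy score
```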
Procedia PDF Downloads 234
1511 Erythrophagocytic Role of Mast Cells in vitro and in vivo during Oxidative Stress
Authors: Priyanka Sharma, Niti Puri
Abstract:
Anemia develops when blood lacks enough healthy erythrocytes. Past studies indicate that anemia, the inflammatory process, and oxidative stress are interconnected. Erythrocytes are continuously exposed to reactive oxygen species (ROS) during circulation, owing to normal aerobic cellular metabolism and also to the pathology of inflammatory diseases. Systemic mastocytosis and genetic depletion of mast cells have been shown to affect anemia. In the present study, we attempted to reveal whether mast cells have a direct role in the clearance, or erythrophagocytosis, of normal or oxidatively damaged erythrocytes. Murine erythrocytes were treated with tert-butyl hydroperoxide (t-BHP), an agent that induces oxidative damage and mimics in vivo oxidative stress. Normal and oxidatively damaged erythrocytes were labeled with carboxyfluorescein succinimidyl ester (CFSE) to track erythrophagocytosis. We show, for the first time, direct erythrophagocytosis of oxidatively damaged erythrocytes in vitro by RBL-2H3 mast cells as well as in vivo by murine peritoneal mast cells. Also, activated mast cells, as may be present in inflammatory conditions, showed a significantly greater uptake of oxidatively damaged erythrocytes than resting mast cells. This suggests the involvement of mast cells in erythrocyte clearance during oxidative stress or inflammatory disorders. Partial inhibition of phagocytosis by various inhibitors indicated that this process may be controlled by several pathways. Hence, our study provides important evidence for the involvement of mast cells in severe anemia due to inflammation and oxidative stress and may be helpful in circumventing adverse anemic disorders.
Keywords: mast cells, anemia, erythrophagocytosis, oxidatively damaged erythrocytes
Procedia PDF Downloads 254
1510 Optimal Design Solution in "The Small Module" Within the Possibilities of Ecology, Environmental Science/Engineering, and Economics
Authors: Hassan Wajid
Abstract:
We propose an environmentally friendly architectural module that is outwardly common and usual but whose features make it a sustainable space. In this experiment, the natural and artificial built space is proposed in such a way that it addresses environmental, ecological, and economic criteria under different climatic conditions. Moreover, the ecology-environment-economics criteria are reflected in the different modules being experimented with and analyzed by multiple research groups. The ecological, environmental, and economic services are used as units of production side by side, resulting in local job creation and resource savings, for instance, conservation of rainwater, soil formation or protection, lower energy consumption to achieve Net Zero, and a stable climate as a whole. The synthesized results from the collected data suggest several aspects to consider when designing buildings and when beginning the design process under the supervision of the instructors/directors responsible for developing curricula and sustainable goals. Hence, the results of the research and its suggestions will benefit sustainable design through multiple results, heat analyses of the different small modules, and comparisons. Resources are otherwise depleted, either because they are consumed or because pollution contaminates them.
Keywords: optimization, ecology, environment, sustainable solution
Procedia PDF Downloads 73
1509 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, more so when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of cases with non-gray, non-diffusive surfaces, there is always a trade-off between the simplicity and the accuracy of the problem. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The histories of some randomly sampled photon bundles were recorded to train an artificial neural network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and total flux in both cases. A significant reduction in variance as well as a faster rate of convergence was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce computational cost once trained successfully. Multiple ways of selecting the input data as well as various architectures will be tried so that the concerned environment can be fully addressed by the ANN model. Better results can be achieved in this as-yet unexplored domain.
Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low discrepancy sequences, artificial neural networks
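A Halton sequence of the kind named above reduces, in one dimension, to the van der Corput radical inverse; the sketch below builds one and uses it on a toy integral (the integrand and sample size are illustrative choices, not the paper's).

```python
# Sketch of a 1-D low-discrepancy (Halton / van der Corput) sampler,
# compared with uniform pseudo-random numbers on a toy integral.
import random

def radical_inverse(n: int, base: int) -> float:
    """Van der Corput radical inverse of n in the given base."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n: int, base: int = 2) -> list:
    return [radical_inverse(i, base) for i in range(1, n + 1)]

pts = halton(4)  # first four base-2 points: 0.5, 0.25, 0.75, 0.125
print(pts)

# Estimate the integral of x^2 on [0, 1] (exact value 1/3) both ways.
N = 1024
qmc_est = sum(x * x for x in halton(N)) / N
random.seed(0)
mc_est = sum(random.random() ** 2 for _ in range(N)) / N
```

The quasi-random estimate converges roughly as O(log N / N) rather than the O(1/sqrt(N)) of plain Monte Carlo, which is the variance reduction the abstract reports.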
Procedia PDF Downloads 223
1508 Association of Gln223Arg Polymorphism of Gene LEPR, Levels of Leptin and Nourishing Habits in Mexican Adolescents with Morbid Obesity
Authors: Claudia Patricia Beltran Miranda, Mónica López Anaya, Mónica Navarro Meza, Maria Leonor Valderrama Chairez
Abstract:
Background: Mexico occupies second place worldwide in morbid obesity (10-12 million people). Factors that predispose to the development of morbid obesity (MO) are genetic, environmental, physiological, psycho-social, and behavioral (nourishing habits). Objective: To associate the Gln223Arg polymorphism of the LEPR gene, leptin levels, and nourishing habits with the presence of morbid obesity in adolescents of southern Jalisco (Mexico). Methods: The study included 41 adolescents (18 of normal weight and 23 with morbid obesity), aged 12 to 19 years, of both sexes, whose weight and height were measured with a Tanita scale and a stadiometer to determine BMI. Morbid obesity was determined by WHO tables and was established as a standard deviation >3. The Gln223Arg polymorphism was identified by PCR and leptin levels by ELISA. Nourishing habits were evaluated with the Adolescent Food Habits Checklist questionnaire. The statistical analysis compared the mean scores obtained from the questionnaire between morbidly obese and normal-weight adolescents, with p = 0.03 at a 95% level of significance. Results: Allelic and genic frequencies were not statistically significant (p = 0.011 and p = 0.279, respectively) when compared between normal-weight and morbidly obese adolescents. Leptin levels and nourishing habits were associated with morbid obesity; the polymorphism showed no significant association with morbid obesity. Conclusions: Dietary habits and leptin levels in adolescents are important factors that predispose to the development of obesity. The presence of the polymorphism is not associated with morbid obesity in these subjects.
Keywords: leptin, nourishing habits, morbid obesity, polymorphism
Procedia PDF Downloads 574
1507 Comparing the Detection of Autism Spectrum Disorder within Males and Females Using Machine Learning Techniques
Authors: Joseph Wolff, Jeffrey Eilbott
Abstract:
Autism Spectrum Disorders (ASD) are a spectrum of social disorders characterized by deficits in social communication, verbal ability, and interaction that can vary in severity. In recent years, researchers have used magnetic resonance imaging (MRI) to help detect how neural patterns in individuals with ASD differ from those of neurotypical (NT) controls for classification purposes. This study analyzed the classification of ASD within males and females using functional MRI data. Functional connectivity (FC) correlations among brain regions were used as feature inputs for machine learning algorithms. Analysis was performed on 558 cases from the Autism Brain Imaging Data Exchange (ABIDE) I dataset. When trained specifically on females, the algorithm underperformed in classifying the ASD subset of our testing population. Although the subject size was relatively smaller in the female group, the manual matching of both male and female training groups helps explain the algorithm's bias, suggesting sex-related differences in functional brain networks compared to typically developing peers. These results highlight the importance of taking sex into account when considering how generalizations of findings on males with ASD apply to females.
Keywords: autism spectrum disorder, machine learning, neuroimaging, sex differences
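The FC feature-extraction step described above is commonly done by correlating regional time series and vectorising the upper triangle of the correlation matrix; a minimal sketch on synthetic data (the region and time-point counts are arbitrary, not ABIDE's):

```python
# Minimal sketch of the functional-connectivity feature step: correlate
# region time series and keep the upper triangle as a feature vector.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 5, 100
timeseries = rng.standard_normal((n_regions, n_timepoints))  # toy fMRI data

fc = np.corrcoef(timeseries)            # n_regions x n_regions correlation matrix
iu = np.triu_indices(n_regions, k=1)    # upper triangle, diagonal excluded
features = fc[iu]                       # one feature per region pair
print(features.shape)                   # 5 * 4 / 2 = 10 pairs
```

One such vector per subject then becomes a row in the design matrix fed to the classifier.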
Procedia PDF Downloads 209
1506 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes
Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales
Abstract:
In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: kernel recursive least squares (KRLS) and support vector regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the medical guidewire end product, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.
Keywords: calibration, data modeling, industrial processes, machine learning
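KRLS and SVR both fit functions from a kernel-based class; as a hedged stand-in, a tiny RBF kernel ridge regression in numpy shows the shared idea (the toy input/output variables below are invented, not the paper's process data).

```python
# Hedged stand-in for the kernel regressors benchmarked above: a small
# RBF kernel ridge regression in numpy. KRLS and SVR are more elaborate
# but draw predictions from the same kernel-based function class.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
# Toy process data: one hypothetical input (e.g. pull speed, normalised)
# mapped to one output (e.g. guidewire diameter, mm).
X = np.linspace(0, 1, 30)[:, None]
y = 0.5 + 0.1 * np.sin(4 * X[:, 0])

lam = 1e-6                                    # ridge regularisation
K = rbf_kernel(X, X, gamma=20.0)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.array([[0.5]])
y_pred = rbf_kernel(X_test, X, gamma=20.0) @ alpha
print(float(y_pred))
```

Inverting the calibrated model (finding the input that yields a target diameter) is then a one-dimensional search over such predictions.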
Procedia PDF Downloads 299
1505 Hydro-Gravimetric ANN Model for Prediction of Groundwater Level
Authors: Jayanta Kumar Ghosh, Swastik Sunil Goriwale, Himangshu Sarkar
Abstract:
Groundwater is one of the most valuable natural resources that society consumes for its domestic, industrial, and agricultural water supply. Its bulk and indiscriminate consumption affects the groundwater resource. Often, the groundwater recharge rate has been found to be much lower than demand. Thus, to maintain water and food security, monitoring and management of groundwater storage are necessary. However, it is challenging to estimate groundwater storage (GWS) using existing hydrological models. To overcome these difficulties, machine learning (ML) models are being introduced for the evaluation of groundwater level (GWL). Thus, the objective of this research work is to develop an ML-based model for the prediction of GWL. This objective has been realized through the development of an artificial neural network (ANN) model based on hydro-gravimetry. The model has been developed using training samples from field observations spread over 8 months. The developed model has been tested for the prediction of GWL in an observation well. The root mean square error (RMSE) for the test samples has been found to be 0.390 meters. Thus, it can be concluded that the hydro-gravimetric-based ANN model can be used for the prediction of GWL. However, to improve the accuracy, more hydro-gravimetric parameters may be considered and tested in the future.
Keywords: machine learning, hydro-gravimetry, groundwater level, predictive model
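The RMSE quoted above is the standard error metric for such a regression; a minimal sketch with hypothetical observed versus predicted water levels:

```python
# RMSE as used to score the GWL predictions above, on hypothetical
# observed vs. predicted groundwater levels (metres).
import math

observed = [12.1, 12.4, 11.8, 12.0, 12.6]
predicted = [12.0, 12.5, 12.0, 11.9, 12.4]

rmse = math.sqrt(
    sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
)
print(round(rmse, 3))
```

A lower RMSE on held-out samples indicates the ANN tracks the well observations more closely.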
Procedia PDF Downloads 127
1504 Intelligent Platform for Photovoltaic Park Operation and Maintenance
Authors: Andreas Livera, Spyros Theocharides, Michalis Florides, Charalambos Anastassiou
Abstract:
A main challenge in ensuring quality of operation, especially for photovoltaic (PV) systems, is to safeguard reliability and optimal performance by detecting and diagnosing potential failures and performance losses at early stages, or before their occurrence, through real-time monitoring, supervision, fault detection, and predictive maintenance. The purpose of this work is to present the functionalities and results related to the development and validation of a software platform for PV asset diagnosis and maintenance. The platform brings together proprietary hardware sensors and software algorithms to enable the early detection and prediction of the most common and critical faults in PV systems. It was validated using field measurements from operating PV systems. The results showed the effectiveness of the platform in detecting faults and losses (e.g., inverter failures, string disconnections, and potential-induced degradation) at early stages and in forecasting PV power production, while also providing recommendations for maintenance actions. Increased PV energy yield and revenue can thus be achieved while minimizing operation and maintenance (O&M) costs.
Keywords: failure detection and prediction, operation and maintenance, performance monitoring, photovoltaic, platform, recommendations, predictive maintenance
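One simple rule a platform of this kind could apply is to flag sustained underperformance against a model-expected power; the function, thresholds, and data below are hypothetical, not the platform's actual algorithms.

```python
# Hedged sketch of a residual-based fault rule: flag a fault when
# measured power stays persistently below the model-expected power
# (as might happen after, e.g., a string disconnection).
def detect_underperformance(expected, measured, threshold=0.8, persistence=3):
    """True if measured/expected < threshold for `persistence` consecutive samples."""
    run = 0
    for e, m in zip(expected, measured):
        run = run + 1 if e > 0 and m / e < threshold else 0
        if run >= persistence:
            return True
    return False

expected = [5.0, 5.2, 5.1, 5.0, 4.9]   # kW, from an irradiance-based model
healthy  = [4.9, 5.1, 5.0, 4.8, 4.8]
faulty   = [4.9, 3.4, 3.3, 3.2, 3.1]   # sustained ~35% loss

print(detect_underperformance(expected, healthy))  # False
print(detect_underperformance(expected, faulty))   # True
```

The persistence requirement suppresses one-off dips from passing clouds, so only sustained losses trigger a maintenance recommendation.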
Procedia PDF Downloads 49
1503 Characterization of Bacteriophage for Biocontrol of Pseudomonas syringae, Causative Agent of Canker in Prunus spp.
Authors: Mojgan Rabiey, Shyamali Roy, Billy Quilty, Ryan Creeth, George Sundin, Robert W. Jackson
Abstract:
Bacterial canker is a major disease of Prunus species such as cherry (Prunus avium). It is caused by Pseudomonas syringae species, including P. syringae pv. syringae (Pss) and P. syringae pv. morsprunorum race 1 (Psm1) and race 2 (Psm2). Concerns over the environmental impact of, and developing resistance to, copper controls call for alternative approaches to disease management. One method of control could be achieved using naturally occurring bacteriophages (phages) infective to the bacterial pathogens. Phages were isolated from the soil, leaves, and bark of cherry trees at five locations in the South East of England. The phages were assessed for their host range against strains of Pss, Psm1, and Psm2. The phages exhibited a differential ability to infect and lyse different Pss and Psm isolates as well as some other P. syringae pathovars. However, the phages were unable to infect beneficial bacteria such as Pseudomonas fluorescens. A subset of 18 of these phages was further characterised genetically (Random Amplification of Polymorphic DNA-PCR fingerprinting and sequencing) and by electron microscopy. The phages are tentatively identified as belonging to the order Caudovirales and the families Myoviridae, Podoviridae, and Siphoviridae, with the genetic material being dsDNA. Future research will fully sequence the phage genomes. The efficacy of the phages, both individually and in cocktails, in reducing disease progression in vivo will be investigated to understand the potential for practical use of these phages as biocontrol agents.
Keywords: bacteriophage, Pseudomonas, bacterial canker, biological control
Procedia PDF Downloads 151
1502 Clustering Categorical Data Using the K-Means Algorithm and the Attribute's Relative Frequency
Authors: Semeh Ben Salem, Sami Naouali, Moetez Sallami
Abstract:
Clustering is a well-known data mining technique used in pattern recognition and information retrieval. The initial dataset to be clustered can contain either categorical or numeric data, and each type of data has its own specific clustering algorithm. In this context, two algorithms are commonly used: k-means for clustering numeric datasets and k-modes for categorical datasets. A frequently encountered problem in data mining applications is the clustering of categorical datasets. One way to achieve clustering on categorical values is to transform the categorical attributes into numeric measures and directly apply the k-means algorithm instead of the k-modes. In this paper, we propose an approach based on this idea, transforming the categorical values into numeric ones using the relative frequency of each modality in the attributes. The proposed approach is compared with a previous method based on transforming the categorical datasets into binary values. The scalability and accuracy of the two methods are evaluated experimentally. The obtained results show that our proposed method outperforms the binary method in all cases.
Keywords: clustering, unsupervised learning, pattern recognition, categorical datasets, knowledge discovery, k-means
Procedia PDF Downloads 259
1501 Raising Test of English for International Communication (TOEIC) Scores through Purpose-Driven Vocabulary Acquisition
Authors: Edward Sarich, Jack Ryan
Abstract:
In contrast to learning new vocabulary incidentally in one's first language, foreign language vocabulary is often acquired purposefully, because a lack of natural exposure requires it to be studied in an artificial environment. It follows, then, that foreign language vocabulary may be more efficiently acquired if it is purpose-driven, or linked to a clear and desirable outcome. The research described in this paper relates to the early stages of what is seen as a long-term effort to measure the effectiveness of a methodology for purpose-driven foreign language vocabulary instruction, specifically by analyzing whether directed studying from high-frequency vocabulary lists leads to an improvement in Test of English for International Communication (TOEIC) scores. The research was carried out in two sections of a first-year university English composition class at a small university in Japan. The results seem to indicate that purposeful study from relevant high-frequency vocabulary lists can contribute to raising TOEIC scores and that the test preparation methodology used in this study was thought by students to be beneficial in helping them prepare for this high-stakes test.
Keywords: corpus vocabulary, language assessment, second language vocabulary acquisition, TOEIC test preparation
Procedia PDF Downloads 149
1500 Decision Making Approach through Generalized Fuzzy Entropy Measure
Authors: H. D. Arora, Anjali Dhiman
Abstract:
Uncertainty is found everywhere, and understanding it is central to decision making. Uncertainty emerges when one has less than the total information required to describe a system and its environment. Uncertainty and information are so closely associated that the information provided by an experiment, for example, is equal to the amount of uncertainty removed. It may be pertinent to point out that uncertainty manifests itself in several forms, and various kinds of uncertainty may arise from random fluctuations, incomplete information, imprecise perception, vagueness, etc. For instance, one encounters uncertainty due to vagueness in communication through natural language. Uncertainty in this sense is represented by fuzziness resulting from the imprecision of the meaning of a concept expressed by linguistic terms. The fuzzy set concept provides an appropriate mathematical framework for dealing with such vagueness. Both information theory, proposed by Shannon (1948), and fuzzy set theory, given by Zadeh (1965), play an important role in human intelligence and various practical problems such as image segmentation and medical diagnosis. Numerous approaches and theories dealing with inaccuracy and uncertainty have been proposed by different researchers. In the present communication, we generalize the fuzzy entropy proposed by De Luca and Termini (1972), corresponding to Shannon entropy (1948). Further, some of the basic properties of the proposed measure are examined. We also apply the proposed measure to a real-life decision making problem.
Keywords: entropy, fuzzy sets, fuzzy entropy, generalized fuzzy entropy, decision making
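The De Luca-Termini measure being generalized here is, for membership values u_i, H = -k * sum(u_i ln u_i + (1 - u_i) ln(1 - u_i)); a direct sketch:

```python
# De Luca-Termini fuzzy entropy: zero for crisp sets (all memberships
# 0 or 1), maximal when every membership equals 0.5.
import math

def fuzzy_entropy(memberships, k=1.0):
    h = 0.0
    for u in memberships:
        if 0.0 < u < 1.0:          # terms vanish at u = 0 and u = 1
            h -= u * math.log(u) + (1 - u) * math.log(1 - u)
    return k * h

print(fuzzy_entropy([0.0, 1.0]))       # crisp set: entropy 0
print(round(fuzzy_entropy([0.5]), 4))  # single maximally fuzzy element: ln 2
```

These endpoint and maximality behaviours are exactly the "basic properties" such a measure is expected to satisfy, mirroring Shannon entropy on a two-outcome distribution.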
Procedia PDF Downloads 450
1499 [Keynote Talk]: Analysis of Intelligent Based Fault Tolerant Capability System for Solar Photovoltaic Energy Conversion
Authors: Albert Alexander Stonier
Abstract:
Due to fossil fuel exhaustion and environmental pollution, renewable energy sources, especially solar photovoltaic systems, play a predominant role in providing energy to consumers. It has been estimated that by 2050 renewable energy sources will satisfy 50% of the total energy requirement of the world. In this context, faults in the conversion process require special attention and are considered a major problem. A fault that persists even for a few seconds will cause undesirable effects in the system. The presentation comprises the analysis, causes, effects, and mitigation methods of various faults occurring in the entire solar photovoltaic energy conversion process. In order to overcome faults in the system, intelligent techniques based on artificial neural networks and fuzzy logic are proposed, which can significantly mitigate them. Hence, the presentation intends to identify the problems in renewable energy conversion and to provide possible solutions, supported by simulation and experimental results. The work was performed on a 3 kWp solar photovoltaic plant, and the results show improvements in reliability, availability, power quality, and fault-tolerant ability.
Keywords: solar photovoltaic, power electronics, power quality, PWM
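As a rough illustration of the fuzzy-logic side of such fault mitigation, the sketch below grades fault severity from per-unit deviations of string voltage and current; the membership functions, two-rule base, and output centroids (0.1 for "low", 0.9 for "high") are invented for illustration and are not taken from the 3 kWp plant study:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fault_severity(v_dev, i_dev):
    """Mamdani-style sketch: two rules, weighted-centroid defuzzification.
    v_dev, i_dev are per-unit deviations from nominal voltage/current."""
    low_v, high_v = tri(v_dev, -0.2, 0.0, 0.4), tri(v_dev, 0.2, 0.6, 1.2)
    low_i, high_i = tri(i_dev, -0.2, 0.0, 0.4), tri(i_dev, 0.2, 0.6, 1.2)
    w_low = min(low_v, low_i)     # rule 1: both near nominal -> low severity
    w_high = max(high_v, high_i)  # rule 2: either deviates strongly -> high severity
    if w_low + w_high == 0.0:
        return 0.0
    return (0.1 * w_low + 0.9 * w_high) / (w_low + w_high)
```

A real fault classifier would add rules per fault class (line-to-ground, arc, partial shading) and tune the membership functions against plant measurements.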
Procedia PDF Downloads 281
1498 Data Clustering in Wireless Sensor Network Implemented on Self-Organization Feature Map (SOFM) Neural Network
Authors: Krishan Kumar, Mohit Mittal, Pramod Kumar
Abstract:
Wireless sensor networks are among the most promising communication networks for monitoring remote environmental areas. In such a network, all the sensor nodes communicate with each other via radio signals. The sensor nodes have the capability of sensing, data storage, and processing. They collect information from neighboring nodes and relay it to a particular node, and data collection and processing are carried out by data aggregation techniques. For data aggregation in the sensor network, a clustering technique is implemented by means of a self-organizing feature map (SOFM) neural network. Some of the sensor nodes are selected as cluster head nodes. Information is aggregated at the cluster head nodes from the non-cluster-head nodes and is then transferred to the base station (or sink nodes). The aim of this paper is to manage the huge amount of data with the help of the SOM neural network. Clustered data are selected for transfer to the base station instead of the whole of the information aggregated at the cluster head nodes. This reduces the battery consumption involved in managing the huge volume of data and enhances the network lifetime to a great extent.
Keywords: artificial neural network, data clustering, self organization feature map, wireless sensor network
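A minimal 1-D self-organising feature map can be sketched in pure Python to show how such clustering works on sensor readings; the unit count, linearly decaying learning rate, and Gaussian neighbourhood below are illustrative assumptions, not parameters from the paper:

```python
import math
import random

def best_matching_unit(x, weights):
    """Index of the map unit (cluster) closest to reading x."""
    return min(range(len(weights)),
               key=lambda i: sum((wj - xj) ** 2 for wj, xj in zip(weights[i], x)))

def train_sofm(data, n_units=2, epochs=100, lr0=0.5, sigma0=1.0):
    """Train a 1-D self-organising feature map on a list of sensor
    readings (each a list of floats); returns the unit weight vectors."""
    random.seed(0)                       # reproducible initial weights
    dim = len(data[0])
    weights = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    t, t_max = 0, epochs * len(data)
    for _ in range(epochs):
        for x in data:
            bmu = best_matching_unit(x, weights)
            lr = lr0 * (1.0 - t / t_max)                 # decaying learning rate
            sigma = max(sigma0 * (1.0 - t / t_max), 0.3)  # shrinking neighbourhood
            for i, w in enumerate(weights):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                weights[i] = [wj + lr * h * (xj - wj) for wj, xj in zip(w, x)]
            t += 1
    return weights
```

After training, `best_matching_unit` assigns each node's reading to a cluster, so only per-cluster summaries need to travel to the sink.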
Procedia PDF Downloads 517
1497 GPU Accelerated Fractal Image Compression for Medical Imaging in Parallel Computing Platform
Authors: Md. Enamul Haque, Abdullah Al Kaisan, Mahmudur R. Saniat, Aminur Rahman
Abstract:
In this paper, we have implemented both sequential and parallel versions of fractal image compression algorithms using the CUDA (Compute Unified Device Architecture) programming model, parallelizing the program on the Graphics Processing Unit for medical images, as they are highly self-similar. Several improvements have been made in the implementation of the algorithm as well. Fractal image compression is based on the self-similarity of an image, meaning the image has similarity across the majority of its regions. We take this opportunity to implement the compression algorithm and monitor its effect using both parallel and sequential implementations. Fractal compression has the properties of a high compression rate and a dimensionless scheme. The compression scheme for fractal images is of two kinds: one is encoding and the other is decoding. Encoding is computationally very expensive; decoding, on the other hand, is much cheaper. The application of fractal compression to medical images would allow obtaining much higher compression ratios, while fractal magnification, an inseparable feature of fractal compression, would be very useful in presenting the reconstructed image in a highly readable form. However, like all irreversible methods, fractal compression is connected with the problem of information loss, which is especially troublesome in medical imaging. A very time-consuming encoding process, which can last even several hours, is another bothersome drawback of fractal compression.
Keywords: accelerated GPU, CUDA, parallel computing, fractal image compression
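The expensive range-to-domain search that dominates encoding can be illustrated with a toy 1-D version of the encode/decode pair; block sizes, the contrast clamp, and the iteration count are illustrative choices, and the paper's implementation operates on 2-D medical images under CUDA:

```python
def encode(signal, r=4):
    """Toy 1-D fractal encoder: map each length-r range block to the
    best-fitting affine copy s*D + o of a downsampled length-2r domain block."""
    n, code = len(signal), []
    for rs in range(0, n, r):
        rng = signal[rs:rs + r]
        best = None
        for ds in range(0, n - 2 * r + 1):  # exhaustive search = the costly part
            # downsample the domain block by pairwise averaging
            dom = [(signal[ds + 2 * i] + signal[ds + 2 * i + 1]) / 2.0 for i in range(r)]
            md, mr = sum(dom) / r, sum(rng) / r
            var = sum((d - md) ** 2 for d in dom)
            s = sum((d - md) * (x - mr) for d, x in zip(dom, rng)) / var if var else 0.0
            s = max(-0.9, min(0.9, s))      # clamp contrast for a contractive decoder
            o = mr - s * md                  # least-squares brightness offset
            err = sum((s * d + o - x) ** 2 for d, x in zip(dom, rng))
            if best is None or err < best[0]:
                best = (err, ds, s, o)
        code.append(best[1:])
    return code

def decode(code, n, r=4, iters=30):
    """Iterate the affine maps from an arbitrary start; the attractor
    approximates the original signal."""
    img = [0.0] * n
    for _ in range(iters):
        nxt = []
        for ds, s, o in code:
            dom = [(img[ds + 2 * i] + img[ds + 2 * i + 1]) / 2.0 for i in range(r)]
            nxt.extend(s * d + o for d in dom)
        img = nxt
    return img
```

Every range block's search is independent, which is exactly why encoding maps so well onto one-thread-per-range-block GPU parallelism.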
Procedia PDF Downloads 336
1496 Heuristic Search Algorithm (HSA) for Enhancing the Lifetime of Wireless Sensor Networks
Authors: Tripatjot S. Panag, J. S. Dhillon
Abstract:
The lifetime of a wireless sensor network can be effectively increased by using scheduling operations. Once the sensors are randomly deployed, the task at hand is to find the largest number of disjoint sets of sensors such that every sensor set provides complete coverage of the target area. At any instant, only one of these disjoint sets is switched on, while all others are switched off. This paper proposes a heuristic search method to find the maximum number of disjoint sets that completely cover the region. A population of randomly initialized members is made to explore the solution space. A set of heuristics has been applied to guide the members to a possible solution in their neighborhood, and these heuristics accelerate the convergence of the algorithm. The best solution explored by the population is recorded and continuously updated. The proposed algorithm has been tested on applications that require sensing of multiple target points, referred to as point coverage applications. Results show that the proposed algorithm outclasses the existing algorithms: it always finds the optimum solution, and it does so with fewer fitness function evaluations than the existing approaches.
Keywords: coverage, disjoint sets, heuristic, lifetime, scheduling, wireless sensor networks, WSN
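The paper's method is population-based; a much simpler greedy baseline for the same disjoint-set-cover problem can be sketched as follows (the sensor/target identifiers in the usage are hypothetical, and greedy selection is a stand-in rather than the proposed heuristic search):

```python
def disjoint_covers(coverage, targets):
    """Greedily extract disjoint sensor sets, each covering every target.

    coverage: dict mapping sensor id -> set of target points it senses.
    Returns a list of sensor lists; no sensor is reused across sets, so
    the sets can be switched on one at a time to extend network lifetime."""
    available = dict(coverage)
    covers = []
    while True:
        uncovered, chosen, pool = set(targets), [], dict(available)
        while uncovered:
            # pick the sensor covering the most still-uncovered targets
            best = max(pool, key=lambda s: len(pool[s] & uncovered), default=None)
            if best is None or not pool[best] & uncovered:
                return covers            # no full cover remains -> done
            chosen.append(best)
            uncovered -= pool.pop(best)
        covers.append(chosen)
        for s in chosen:                 # retire the sensors used in this set
            del available[s]
```

The number of sets returned is a lower bound on the optimum; the heuristic search in the paper aims to close that gap with fewer fitness evaluations.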
Procedia PDF Downloads 452
1495 A Neural Network Model to Simulate Urban Air Temperatures in Toulouse, France
Authors: Hiba Hamdi, Thomas Corpetti, Laure Roupioz, Xavier Briottet
Abstract:
Air temperatures are generally higher in cities than in their rural surroundings. The overheating of cities is a direct consequence of increasing urbanization, characterized by the artificial sealing of soils, the release of anthropogenic heat, and the complexity of urban geometry. This phenomenon, referred to as the urban heat island (UHI), is more prevalent during heat waves, which have increased in frequency and intensity in recent years. In the context of global warming and urban population growth, helping urban planners implement UHI mitigation and adaptation strategies is critical. In practice, the study of UHI requires air temperature information at the street canyon level, which is difficult to obtain. Many urban air temperature simulation models have been proposed (mostly based on physics or statistics), all of which require a variety of input parameters related to urban morphology, land use, material properties, or meteorological conditions. In this paper, we build and evaluate a neural network model, based on Urban Weather Generator (UWG) model simulations and data from meteorological stations, that simulates air temperature over Toulouse, France, on days favourable to UHI.
Keywords: air temperature, neural network model, urban heat island, urban weather generator
Procedia PDF Downloads 91
1494 Reducing Total Harmonic Content of 9-Level Inverter by Use of Cuckoo Algorithm
Authors: Mahmoud Enayati, Sirous Mohammadi
Abstract:
In this paper, a novel procedure to find the firing angles of multilevel inverters and, consequently, to reduce the total harmonic distortion (THD) of the supply voltage is presented. In order to eliminate more harmonics in a multilevel inverter, either its number of levels can be increased, or a pulse-width-modulation waveform, in which more than one switching occurs in each level, can be used. Both cases complicate the resulting non-algebraic equations, whose solution cannot be obtained by the conventional methods for the numerical solution of nonlinear equations, such as the Newton-Raphson method. In this paper, the Cuckoo algorithm is used to compute the optimal firing angles of the pulse-width-modulation voltage waveform in the multilevel inverter. These angles should be calculated in such a way that the desired voltage amplitude at the fundamental frequency is generated while the total harmonic distortion of the output voltage remains small. The simulation and theoretical results for the 9-level inverter demonstrate the applicability of the proposed algorithm for identifying firing angles that suppress the low-order harmonics and generate a waveform whose total harmonic distortion is very small, i.e., almost sinusoidal.
Keywords: evolutionary algorithms, multilevel inverters, total harmonic content, Cuckoo Algorithm
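A minimal cuckoo-search kernel (Lévy flights via Mantegna's algorithm) is sketched below on a stand-in objective; in the paper's setting the objective would be the residual of the selective-harmonic equations in the firing angles, which is not reproduced here, and the nest count, step scale, and abandonment fraction are generic defaults:

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step size (Mantegna's algorithm)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0.0, sigma) / abs(random.gauss(0.0, 1.0)) ** (1 / beta)

def cuckoo_search(f, dim, lo, hi, n_nests=15, iters=300, pa=0.25):
    """Minimise f over the box [lo, hi]^dim; returns (best_point, best_value)."""
    random.seed(1)                       # reproducible run for the sketch
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    best = min(range(n_nests), key=fit.__getitem__)
    for _ in range(iters):
        for i in range(n_nests):
            # Levy flight around nest i, scaled by its distance to the best nest
            step = 0.01 * levy_step()
            cand = [min(max(x + step * (x - nests[best][j]), lo), hi)
                    for j, x in enumerate(nests[i])]
            fc = f(cand)
            j = random.randrange(n_nests)   # replace a random nest if improved
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # abandon a fraction pa of the worst nests (host discovers the egg)
        worst = sorted(range(n_nests), key=fit.__getitem__, reverse=True)[:int(pa * n_nests)]
        for i in worst:
            nests[i] = [random.uniform(lo, hi) for _ in range(dim)]
            fit[i] = f(nests[i])
        best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]
```

For firing-angle optimisation, `dim` would be the number of angles per quarter-wave, the box would be (0, pi/2) with an ordering constraint, and `f` would penalise both fundamental-amplitude error and low-order harmonic magnitudes.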
Procedia PDF Downloads 532
1493 Effects of Chemicals in Elderly
Authors: Ali Kuzu
Abstract:
There are about 800 thousand chemicals in our environment, and the number increases by more than a thousand every year. While most of these chemicals are used as components in various consumer products, some are encountered as industrial waste in the environment. Unfortunately, many of these chemicals are hazardous and affect humans. According to the “International Programme on Chemical Safety” of the World Health Organization, among the chronic health effects of chemicals, cancer is of major concern. Many substances have been found in recent years to be carcinogenic in one or more species of laboratory animals. Especially with respect to long-term effects, the response to a chemical may vary, quantitatively or qualitatively, in different groups of individuals depending on predisposing conditions, such as nutritional status, disease status, current infection, climatic extremes, genetic features, sex, and age of the individuals. Understanding the response of such specific risk groups is an important area of toxicology research. People aged 65 and over are defined as “aged” (or “elderly”). The elderly population of the world is about 600 million, which corresponds to roughly 8 percent of the world population. While one in every four people in Japan is elderly, the elderly population is quite close to 20 percent in many developed countries, and in these countries it is growing more rapidly than the total population. The negative effects of chemicals on the elderly have taken an important place among health-care-related issues in recent decades. The aged population is more susceptible to the harmful effects of environmental chemicals: because of the declining function of organ systems in the elderly, the ability of the body to eliminate harmful chemical substances is also poor. With increasing life expectancy, more and more people will face problems associated with chemical residues.
Keywords: elderly, chemicals’ effects, aged care, care need
Procedia PDF Downloads 456
1492 Detectability Analysis of Typical Aerial Targets from Space-Based Platforms
Authors: Yin Zhang, Kai Qiao, Xiyang Zhi, Jinnan Gong, Jianming Hu
Abstract:
In order to achieve effective detection of aerial targets over long distances from space-based platforms, the mechanisms of interaction between the radiation characteristics of aerial targets and the complex scene environment, including sunlight conditions, underlying surfaces, and the atmosphere, are analyzed. A large simulated database of space-based radiance images is constructed, considering several typical aerial targets, target working modes (flight velocity and altitude), illumination and observation angles, background types (cloud, ocean, and urban areas), and sensor spectra ranging from visible to thermal infrared. Target detectability is characterized by the signal-to-clutter ratio (SCR) extracted from the images. The influence of the detection band and instantaneous field of view (IFOV) on target detectability is discussed. Furthermore, optimal center wavelengths and widths of the detection bands are suggested, and minimum IFOV requirements are proposed. The research can provide theoretical support and scientific guidance for the design of space-based detection systems and on-board information processing algorithms.
Keywords: space-based detection, aerial targets, detectability analysis, scene environment
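The signal-to-clutter ratio used here to characterise detectability can be computed as sketched below for a single-band radiance image; the rectangular target-window convention and the use of the whole remainder of the image as background are simplifying assumptions, not the paper's exact definition:

```python
import statistics

def signal_to_clutter(image, box):
    """SCR = (mean target radiance - mean background radiance) / background std.

    image: 2-D list of per-pixel radiance values.
    box: (r0, c0, r1, c1), a half-open rectangle marking the target pixels;
    every pixel outside the box is treated as background clutter."""
    r0, c0, r1, c1 = box
    target = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    background = [image[r][c]
                  for r in range(len(image)) for c in range(len(image[0]))
                  if not (r0 <= r < r1 and c0 <= c < c1)]
    mu_t = sum(target) / len(target)
    mu_b = sum(background) / len(background)
    # population std of the clutter; higher clutter variance lowers detectability
    return (mu_t - mu_b) / statistics.pstdev(background)
```

Sweeping this metric across simulated bands and IFOVs is what lets one rank detection-band choices: a band with higher SCR for the same target is the better design point.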
Procedia PDF Downloads 144
1491 Prediction of Survival Rate after Gastrointestinal Surgery Based on The New Japanese Association for Acute Medicine (JAAM Score) With Neural Network Classification Method
Authors: Ayu Nabila Kusuma Pradana, Aprinaldi Jasa Mantau, Tomohiko Akahoshi
Abstract:
Disseminated intravascular coagulation (DIC) following gastrointestinal surgery carries a poor prognosis. Therefore, it is important to determine the factors that can predict its outcome. This study will investigate the factors that may influence the outcome of DIC in patients after gastrointestinal surgery. Eighty-one patients were admitted to the intensive care unit after gastrointestinal surgery at Kyushu University Hospital from 2003 to 2021. Acute DIC scores were estimated using the new Japanese Association for Acute Medicine (JAAM) score before surgery and on postoperative days 1, 3, and 7. The acute DIC scores will be compared with the Sequential Organ Failure Assessment (SOFA) score, platelet count, lactate level, and a variety of biochemical parameters. This study applies machine learning algorithms to predict the prognosis of DIC after gastrointestinal surgery. The results of this study are expected to be used as an indicator for evaluating patient prognosis, so as to increase life expectancy and reduce mortality in DIC patients after gastrointestinal surgery.
Keywords: survival rate, gastrointestinal surgery, JAAM score, neural network, machine learning, disseminated intravascular coagulation (DIC)
Procedia PDF Downloads 260
1490 Active Space Debris Removal by Extreme Ultraviolet Radiation
Authors: A. Anandha Selvan, B. Malarvizhi
Abstract:
In recent years, the problem of space debris has become very serious. The mass of artificial objects in orbit has increased quite steadily at a rate of about 145 metric tons annually, leading to a total tally of approximately 7,000 metric tons. About 97% of space debris objects now orbit in the LEO region, where catastrophic collisions are most likely to occur and where such collisions generate new debris. We therefore propose a concept for cleaning up space debris in the thermosphere region by directing extreme ultraviolet (EUV) radiation from a re-orbiter to the region in front of a space debris object. In our concept, the EUV radiation causes a local expansion of the thermosphere by interacting with atmospheric gas particles, so that drag is produced in front of the space debris object. This drag force is high enough to slow down the debris object's relative velocity; the object therefore gradually loses altitude and finally enters the Earth's atmosphere. After the first target is removed, the re-orbiter can move on to the next target. This method removes space debris objects without catching them, and it can thus be applied to a wide range of debris objects regardless of their shape or rotation. This paper discusses the operation of the re-orbiter for removing space debris in the thermosphere region.
Keywords: active space debris removal, space debris, LEO, extreme ultraviolet, re-orbiter, thermosphere
Procedia PDF Downloads 462
1489 Optimizing E-commerce Retention: A Detailed Study of Machine Learning Techniques for Churn Prediction
Authors: Saurabh Kumar
Abstract:
In the fiercely competitive landscape of e-commerce, understanding and mitigating customer churn has become paramount for sustainable business growth. This paper presents a thorough investigation into the application of machine learning techniques for churn prediction in e-commerce, aiming to provide actionable insights for businesses seeking to enhance customer retention strategies. We conduct a comparative study of various machine learning algorithms, including traditional statistical methods and ensemble techniques, leveraging a rich dataset sourced from Kaggle. Through rigorous evaluation, we assess the predictive performance, interpretability, and scalability of each method, elucidating their respective strengths and limitations in capturing the intricate dynamics of customer churn. We identified the XGBoost classifier as the best-performing model. Our findings not only offer practical guidelines for selecting suitable modeling approaches but also contribute to the broader understanding of customer behavior in the e-commerce domain. Ultimately, this research equips businesses with the knowledge and tools necessary to proactively identify and address churn, thereby fostering long-term customer relationships and sustaining competitive advantage.
Keywords: customer churn, e-commerce, machine learning techniques, predictive performance, sustainable business growth
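As a minimal, dependency-free stand-in for the traditional statistical baselines in such a comparison, a logistic-regression churn model can be sketched as follows; the toy features (scaled recency and order frequency) are invented for illustration, and the paper's best performer, XGBoost, requires the external xgboost library and is not reproduced here:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic regression by stochastic gradient descent on log-loss.
    X: list of feature vectors, y: 0/1 churn labels."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted churn probability
            g = p - yi                        # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict_churn(x, w, b):
    """1 = predicted churner (probability >= 0.5), else 0."""
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1 if z >= 0.0 else 0
```

A linear baseline like this is the natural yardstick: gains that ensemble methods such as XGBoost show over it quantify how much nonlinearity the churn signal actually contains.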
Procedia PDF Downloads 29
1488 Apolipoprotein A1 -75 G to A Substitution and Its Relationship with Serum ApoA1 Levels among Indian Punjabi Population
Authors: Savjot Kaur, Mridula Mahajan, AJS Bhanwer, Santokh Singh, Kawaljit Matharoo
Abstract:
Background: Disorders of lipid metabolism and genetic predisposition are CAD risk factors. ApoA1 is the apolipoprotein component of anti-atherogenic high-density lipoprotein (HDL) particles. The protective action of HDL and ApoA1 is attributed to their central role in reverse cholesterol transport (RCT). Aim: This study was aimed at identifying sequence variation in ApoA1 (-75G>A) and its association with serum ApoA1 levels. Methods: A total of 300 CAD patients and 300 normal individuals (controls) were analyzed. The PCR-RFLP method was used to determine the DNA polymorphism in the ApoA1 gene; PCR products were digested with the restriction enzyme MspI, followed by agarose gel electrophoresis. Serum apolipoprotein A1 concentration was estimated with an immunoturbidimetric method. Results: Deviation from Hardy-Weinberg equilibrium (HWE) was observed for this gene variant. The A allele frequency was higher among coronary artery disease patients (53.8) compared to controls (45.5), p = 0.004, OR = 1.38 (1.11-1.75). Under recessive model analysis (AA vs. GG+GA), the AA genotype of the ApoA1 G>A substitution conferred an approximately 1.7-fold increased risk of CAD (p = 0.002, OR = 1.72 (1.2-2.43)). Among individuals with serum ApoA1 levels < 107, the A allele frequency was higher in CAD cases (50) than in controls (43.4) [p = 0.23, OR = 1.2 (0.84-2)], and the A allele did not occur at all in individuals with ApoA1 levels > 177. Conclusion: Serum ApoA1 levels were associated with the ApoA1 promoter region variation and influence CAD risk. Individuals with the ApoA1 -75 A allele carry an excess risk of developing CAD as a result of the variant's effect in lowering serum concentrations of ApoA1.
Keywords: apolipoprotein A1 (G>A) gene polymorphism, coronary artery disease (CAD), reverse cholesterol transport (RCT)
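The allelic odds ratios and confidence intervals reported above follow the standard 2x2-table computation, sketched here with hypothetical counts; the Woolf log-OR standard error is an assumed choice of method, not stated in the abstract:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and (default 95%) confidence interval from a 2x2 table:
        a = exposed cases,   b = exposed controls,
        c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    # Woolf's method: standard error of ln(OR)
    se = math.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For allele-level analyses each subject contributes two alleles, so a-d are allele counts; a CI that excludes 1 corresponds to a significant association, as with the OR of 1.38 (1.11-1.75) reported here.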
Procedia PDF Downloads 317
1487 Design Intelligence in Garment Design Between Technical Creativity and Artistic Creativity
Authors: Kanwar Varinder Pal Singh
Abstract:
Art is one of the five secondary sciences next to the social sciences. As per the single essential concept in garment design, it is the coexistence and co-creation of two aspects of reality: ultimate reality and apparent (or conventional) reality. All phenomena possess two natures: that which is revealed by correct perception and that which is induced by deceptive perception. The object of correct perception is the ultimate reality; the object of deceptive perception is conventional reality. The same phenomenon, therefore, may be perceived according to its ultimate nature or its apparent nature. Ultimate reality is also called ‘emptiness’. Emptiness does not mean that all phenomena are nothing, but that they do not exist in themselves. Although phenomena, the universe, thoughts, beings, time, and so on seem very real in themselves, ultimately, they are not. Each one of us can perceive the changing and unpredictable nature of existence. This transitory nature of phenomena, impermanence, is the first sign of emptiness. Sometimes, the interdependence of phenomena leads to ultimate reality, which is nothing but emptiness, e.g., a rainbow, which is an effect arising from the conjunction of ‘sun rays,’ ‘rain,’ and ‘time.’ In light of the above, to achieve decision-making for the global desirability of garment design, the coexistence of artistic and technical creativity must achieve an object of correct perception, i.e., ultimate reality. This paper presents semiotic engineering, both subjective and objective, as the decision-making technique.
Keywords: global desirability, social desirability, comfort desirability, handle desirability, overall desirability
Procedia PDF Downloads 11