Search results for: neural networking algorithm
672 Distances over Incomplete Diabetes and Breast Cancer Data Based on Bhattacharyya Distance
Authors: Loai AbdAllah, Mahmoud Kaiyal
Abstract:
Missing values in real-world datasets are a common problem. Many algorithms have been developed to deal with this problem, and most of them replace the missing values with a fixed value computed from the observed values. In our work, we use a distance function based on the Bhattacharyya distance, which measures the similarity of two probability distributions, to measure the distance between objects with missing values. The proposed distance distinguishes between known and unknown values: the distance between two known values is the Mahalanobis distance, while when one of them is missing, the distance is computed based on the distribution of the known values for the coordinate that contains the missing value. This method was integrated with Wikaya, a digital health company developing a platform that helps improve the prevention of chronic diseases such as diabetes and cancer. For Wikaya’s recommendation system to work, distances between users need to be measured, and since there are missing values in the collected data, a distance function for incomplete user profiles is needed. To evaluate the accuracy of the proposed distance function in reflecting the actual similarity between different objects when some of them contain missing values, we integrated it within the framework of the k nearest neighbors (kNN) classifier, since its computation is based only on the similarity between objects. To validate this, we ran the algorithm over the diabetes and breast cancer datasets, standard benchmark datasets from the UCI repository. Our experiments show that the kNN classifier using our proposed distance function outperforms kNN using other existing methods.
Keywords: missing values, incomplete data, distance, incomplete diabetes data
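A minimal sketch of one way such a known/unknown-aware distance could be implemented. The use of NaN markers, the diagonal-covariance (standardized) Mahalanobis term, and the expected-squared-distance fallback are illustrative assumptions, not the authors' exact Bhattacharyya-based formulation:

```python
import numpy as np

def hybrid_distance(x, y, mean, var):
    """Per-coordinate distance for vectors with missing entries (NaN).

    - both values known: standardized squared difference (diagonal Mahalanobis)
    - one value missing: expected squared distance of the known value to the
      coordinate's distribution, E[(v - V)^2] = (v - mean)^2 + var
    - both missing: expected distance of two independent draws, 2 * var
    """
    d = 0.0
    for xi, yi, m, v in zip(x, y, mean, var):
        if not np.isnan(xi) and not np.isnan(yi):
            d += (xi - yi) ** 2 / v
        elif np.isnan(xi) and np.isnan(yi):
            d += 2.0
        else:
            known = yi if np.isnan(xi) else xi
            d += ((known - m) ** 2 + v) / v
    return np.sqrt(d)

# toy usage: column statistics estimated from the observed values only
X = np.array([[1.0, 2.0], [np.nan, 3.0], [2.0, np.nan]])
mean = np.nanmean(X, axis=0)
var = np.nanvar(X, axis=0) + 1e-9
print(hybrid_distance(X[0], X[1], mean, var))
```

Plugged into kNN as a custom metric, such a distance avoids imputing fixed values while still comparing every pair of profiles.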
Procedia PDF Downloads 225
671 Parkinson’s Disease Detection Analysis through Machine Learning Approaches
Authors: Muhtasim Shafi Kader, Fizar Ahmed, Annesha Acharjee
Abstract:
Machine learning and data mining are crucial in health care, as well as in medical information and detection. Machine learning approaches are now being utilized to improve awareness of a variety of critical health issues, including diabetes detection, neuron cell tumor diagnosis, COVID-19 identification, and so on. Parkinson’s disease primarily affects senior citizens in Bangladesh. Parkinson's disease symptoms are typically progressive and get worse with time; affected people have trouble walking and communicating as the condition advances. Patients can also have psychological and social changes, sleep problems, depression, memory loss, and fatigue. Parkinson's disease can occur in both men and women, though men are affected at roughly twice the rate of women. In this research, we aim to identify the most accurate ML algorithm for detecting the disease on an available dataset by modeling the following machine learning classifiers. Therefore, nine ML classifiers are used in this study: Naive Bayes, Adaptive Boosting, Bagging Classifier, Decision Tree Classifier, Random Forest Classifier, XGB Classifier, K Nearest Neighbor Classifier, Support Vector Machine Classifier, and Gradient Boosting Classifier.
Keywords: naive bayes, adaptive boosting, bagging classifier, decision tree classifier, random forest classifier, XGB classifier, k nearest neighbor classifier, support vector classifier, gradient boosting classifier
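The comparison such a study implies can be sketched as a cross-validation loop over the nine classifier families. The dataset below is a stand-in (scikit-learn's bundled breast cancer data, since the Parkinson's dataset used in the paper isn't shown), and the xgboost dependency is an assumption:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from xgboost import XGBClassifier  # pip install xgboost

X, y = load_breast_cancer(return_X_y=True)  # placeholder for the Parkinson's data

models = {
    "Naive Bayes": GaussianNB(),
    "AdaBoost": AdaBoostClassifier(),
    "Bagging": BaggingClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "XGBoost": XGBClassifier(),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Gradient Boosting": GradientBoostingClassifier(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:18s} accuracy: {scores.mean():.3f}")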
Procedia PDF Downloads 130
670 Global Navigation Satellite System and Precise Point Positioning as Remote Sensing Tools for Monitoring Tropospheric Water Vapor
Authors: Panupong Makvichian
Abstract:
Global Navigation Satellite Systems (GNSS) are nowadays a common technology that improves navigation functions in our lives. Additionally, GNSS is also being employed as an accurate atmospheric sensor these days. Meteorology is a practical application of GNSS that goes unnoticed in the background of people’s lives. GNSS Precise Point Positioning (PPP) is a positioning method that requires data from a single dual-frequency receiver and precise information about satellite positions and satellite clocks. In addition, careful attention to mitigating various error sources is required. All the above data are combined in a sophisticated mathematical algorithm. This research demonstrates how GNSS and the PPP method are capable of providing high-precision estimates, such as 3D positions or zenith tropospheric delays (ZTDs). ZTDs combined with pressure and temperature information allow us to estimate the water vapor in the atmosphere as precipitable water vapor (PWV). If the process is replicated for a network of GNSS sensors, we can create thematic maps that allow extracting water content information at any location within the network area. All of the above is possible thanks to advances in GNSS data processing. Therefore, we are able to use GNSS data for climatic trend analysis and for acquiring further knowledge about the atmospheric water content.
Keywords: GNSS, precise point positioning, Zenith tropospheric delays, precipitable water vapor
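The ZTD-to-PWV step the abstract describes can be sketched compactly: subtract a Saastamoinen hydrostatic delay from the ZTD, then scale the wet delay by a dimensionless factor. The conversion constants are the values commonly attributed to Bevis et al., and the sample inputs are assumptions, not numbers from the paper:

```python
import numpy as np

def ztd_to_pwv(ztd_m, pressure_hpa, temp_k, lat_deg, h_km):
    # Saastamoinen zenith hydrostatic delay (m)
    zhd = 0.0022768 * pressure_hpa / (
        1 - 0.00266 * np.cos(2 * np.radians(lat_deg)) - 0.00028 * h_km)
    zwd = ztd_m - zhd                      # zenith wet delay (m)
    tm = 70.2 + 0.72 * temp_k              # weighted mean temperature (K), Bevis
    rho_w, r_v = 1000.0, 461.5             # kg/m^3, J/(kg K)
    k2p, k3 = 0.221, 3.739e3               # K/Pa and K^2/Pa (22.1 K/hPa, 3.739e5 K^2/hPa)
    pi = 1e6 / (rho_w * r_v * (k2p + k3 / tm))   # dimensionless, ~0.15
    return pi * zwd * 1000.0               # PWV in mm

print(ztd_to_pwv(ztd_m=2.45, pressure_hpa=1013.0, temp_k=293.0,
                 lat_deg=15.0, h_km=0.1))   # ~22 mm, a plausible tropical value
```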
Procedia PDF Downloads 199
669 Finite Volume Method for Flow Prediction Using Unstructured Meshes
Authors: Juhee Lee, Yongjun Lee
Abstract:
In designing low-energy-consuming buildings, the heat transfer through a large glass or wall becomes critical. Multiple layers of window glasses and walls are employed for high insulation. The gravity-driven air flow between window glasses or wall layers is a natural heat convection phenomenon and a key part of the heat transfer. As a first step toward the natural heat transfer analysis, this study presents the development and application of a finite volume method for the numerical computation of viscous incompressible flows. It will become part of a natural convection analysis with a high-order scheme, multi-grid method, and dual-time stepping in the future. A finite volume method based on a fully implicit second-order scheme is used to discretize and solve the fluid flow on unstructured grids composed of arbitrary-shaped cells. The integrations of the governing equations are discretized in the finite volume manner using a collocated arrangement of variables. The convergence of the SIMPLE segregated algorithm for the solution of the coupled nonlinear algebraic equations is accelerated by using a sparse matrix solver such as BiCGSTAB. The method used in the present study is verified by applying it to flows for which either the numerical solution is known or the solution can be obtained using another numerical technique available in other studies. The accuracy of the method is assessed through grid refinement.
Keywords: finite volume method, fluid flow, laminar flow, unstructured grid
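As a small illustration of the kind of sparse solve that accelerates the SIMPLE iterations, here is a sketch using SciPy's BiCGSTAB on a toy Poisson-like system; the matrix is a placeholder, not the paper's actual discretization:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# toy 1-D Poisson-like system standing in for a pressure-correction solve
n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = bicgstab(A, b)   # info == 0 means the solver converged
print(info, np.linalg.norm(A @ x - b))
```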
Procedia PDF Downloads 286
668 Thermoelectric Blanket for Aiding the Treatment of Cerebral Hypoxia and Other Related Conditions
Authors: Sarayu Vanga, Jorge Galeano-Cabral, Kaya Wei
Abstract:
Cerebral hypoxia refers to a condition in which there is a decrease in oxygen supply to the brain. Patients suffering from this condition experience a decrease in their body temperature. While there is no cure for cerebral hypoxia to date, certain procedures are utilized to aid in the treatment of the condition; regulating the body temperature is one example. Hypoxia is well known to reduce the body temperature of mammals, although the neural origins of this response remain uncertain. In order to speed recovery from this condition, it is necessary to maintain a stable body temperature. In this study, we present an approach to regulating body temperature for patients who suffer from cerebral hypoxia or other similar conditions. After a thorough literature study, we propose the use of thermoelectric blankets, which are temperature-controlled thermal blankets based on thermoelectric devices. These blankets are capable of heating up and cooling down the patient to stabilize body temperature. This feature is possible through the reversible effect that thermoelectric devices offer while behaving as a thermal sensor, and it is an effective way to stabilize temperature. Thermoelectricity is the direct conversion of thermal to electrical energy and vice versa; this effect is known as the Seebeck effect and is characterized by the Seebeck coefficient. In such a configuration, the device has cooling and heating sides whose temperatures can be interchanged by simply switching the direction of the current input in the system. The design integrates various aspects, including a humidifier, ventilation machine, IV-administered medication, air conditioning, a circulation device, and a body temperature regulation system. The proposed design includes thermocouples that will trigger the blanket to increase or decrease a set temperature through a medical temperature sensor. Additionally, the proposed design allows an efficient way to control fluctuations in body temperature while being cost-friendly, with an expected cost of 150 dollars. We are currently developing a prototype of the design to collect thermal and electrical data under different conditions, and we also intend to perform an optimization analysis to improve the design even further. While this proposal was developed for treating cerebral hypoxia, it can also aid in the treatment of other related conditions, as fluctuations in body temperature are a common symptom of many illnesses.
Keywords: body temperature regulation, cerebral hypoxia, thermoelectric, blanket design
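A toy sketch of the control idea: because the sign of the drive current through a thermoelectric element selects heating versus cooling, a simple controller only needs the sensor reading and a setpoint. The setpoint, deadband, and current limit below are illustrative assumptions, not the proposed device's specifications:

```python
def drive_current(temp_c, setpoint_c=37.0, deadband_c=0.3, i_max=2.0):
    """Bang-bang controller: the sign of the current sets heating vs cooling.

    Positive current -> heating face toward the patient; negative -> cooling.
    """
    error = setpoint_c - temp_c
    if abs(error) <= deadband_c:
        return 0.0                      # within tolerance: no drive
    return i_max if error > 0 else -i_max

for reading in [35.8, 36.9, 37.2, 38.1]:
    print(reading, "C ->", drive_current(reading), "A")
```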
Procedia PDF Downloads 161
667 Automatic Registration of Rail Profile Based Local Maximum Curvature Entropy
Authors: Hao Wang, Shengchun Wang, Weidong Wang
Abstract:
To address the influence of train vibration and environmental noise on the measurement of track wear, we propose a method for the automatic extraction of the circular arc on the inner or outer side of the rail waist and achieve high-precision registration of the rail profile. Firstly, a polynomial fitting method based on a truncated residual histogram is proposed to find the optimal fitting curve of the profile and reduce the influence of noise on profile curve fitting. Then, based on the curvature distribution characteristics of the fitted curve, an interval search algorithm based on a dynamic window’s maximum curvature entropy is proposed to realize the automatic segmentation of the small circular arc. Finally, we fit two circle centers as matching reference points based on the small circular arcs on both sides and realize the alignment from the measured profile to the standard designed profile. Static experimental results show that the mean and standard deviation of the method are controlled within 0.01 mm, with small measurement errors and high repeatability. A dynamic test also verified the repeatability of the method in the train-running environment, and the dynamic measurement deviation of rail wear is within 0.2 mm with high repeatability.
Keywords: curvature entropy, profile registration, rail wear, structured light, train-running
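The curvature-entropy search can be sketched in a few lines: compute the curvature of a fitted polynomial, then scan windows for the one whose curvature distribution has maximum Shannon entropy. The synthetic profile, the fixed (rather than dynamic) window, and the omission of the truncated-residual fitting step are all simplifying assumptions:

```python
import numpy as np

def curvature(coeffs, x):
    """Curvature of a fitted polynomial y = p(x): |y''| / (1 + y'^2)^(3/2)."""
    p = np.poly1d(coeffs)
    d1, d2 = p.deriv(1)(x), p.deriv(2)(x)
    return np.abs(d2) / (1.0 + d1**2) ** 1.5

def window_curvature_entropy(kappa, bins=16):
    """Shannon entropy of the curvature distribution inside one window."""
    hist, _ = np.histogram(kappa, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# toy profile: fit a polynomial, then scan for the max-entropy interval
x = np.linspace(0, 1, 500)
y = np.sin(3 * x) + 0.01 * np.random.randn(x.size)
coeffs = np.polyfit(x, y, deg=7)
kappa = curvature(coeffs, x)

win = 60
scores = [window_curvature_entropy(kappa[i:i + win])
          for i in range(0, x.size - win)]
print("max-entropy window starts at index", int(np.argmax(scores)))
```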
Procedia PDF Downloads 263
666 ANOVA-Based Feature Selection and Machine Learning System for IoT Anomaly Detection
Authors: Muhammad Ali
Abstract:
Cyber-attacks and anomaly detection on Internet of Things (IoT) infrastructure are an emerging concern in the domain of data-driven intrusion. Rapidly increasing IoT risk is now making headlines around the world. Denial of service, malicious control, data type probing, malicious operation, DDoS, scan, spying, and wrong setup are attacks and anomalies that can cause an IoT system failure. Everyone talks about cyber security, connectivity, smart devices, and real-time data extraction, yet IoT devices expose a wide variety of new cyber security attack vectors in network traffic. For further IoT development, and mainly for smart and IoT applications, there is a necessity for intelligent processing and analysis of data. Our approach is to secure such systems: we train and compare several machine learning models for accurately predicting attacks and anomalies on IoT systems, considering IoT applications, with ANOVA-based feature selection yielding leaner prediction models for evaluating network traffic to help protect IoT devices. The machine learning (ML) algorithms used here are KNN, SVM, NB, D.T., and R.F., targeting the most satisfactory test accuracy with fast detection. The evaluated ML metrics include precision, recall, F1 score, FPR, NPV, G.M., MCC, and AUC & ROC. The Random Forest algorithm achieved the best results with less prediction time, with an accuracy of 99.98%.
Keywords: machine learning, analysis of variance, Internet of Things, network security, intrusion detection
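The ANOVA-based selection step maps directly onto scikit-learn's F-test selector; the sketch below chains it to one of the classifiers mentioned. The synthetic data stands in for the IoT traffic features, and the choice of k and the Random Forest settings are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# synthetic stand-in for labeled IoT network-traffic features
X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(SelectKBest(f_classif, k=10),   # ANOVA F-test selection
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```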
Procedia PDF Downloads 126
665 Identification of Hepatocellular Carcinoma Using Supervised Learning Algorithms
Authors: Sagri Sharma
Abstract:
Analysis of diseases integrating multiple factors increases the complexity of the problem, and therefore the development of frameworks for the analysis of diseases is currently a topic of intense research. Due to the inter-dependence of the various parameters, the use of traditional methodologies has not been very effective; consequently, newer methodologies are being sought to deal with the problem. Supervised learning algorithms are commonly used for performing prediction on previously unseen data. These algorithms are used in applications ranging from image analysis to protein structure and function prediction, and they are trained on a known dataset to come up with a predictor model that generates reasonable predictions for the response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses spanning entire genomes. The well-known machine learning algorithm Support Vector Machine is therefore applied to analyze the expression levels of thousands of genes simultaneously in a timely, automated, and cost-effective way. The objectives of the presented work are the development of a methodology to identify genes relevant to Hepatocellular Carcinoma (HCC) from gene expression datasets utilizing supervised learning algorithms and statistical evaluations, along with the development of a predictive framework that can perform classification tasks on new, unseen data.
Keywords: artificial intelligence, biomarker, gene expression datasets, hepatocellular carcinoma, machine learning, supervised learning algorithms, support vector machine
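A minimal sketch of the classification setup such work implies: a linear-kernel SVM on a wide, few-samples/many-features matrix of the kind gene expression experiments produce. The synthetic matrix, kernel, and C value are assumptions standing in for the HCC data and the paper's tuning:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for a gene-expression matrix: few samples, many features
X, y = make_classification(n_samples=120, n_features=5000, n_informative=40,
                           random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```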
Procedia PDF Downloads 429
664 Information Management Approach in the Prediction of Acute Appendicitis
Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki
Abstract:
This research presents a predictive data mining model for the accurate diagnosis of acute appendicitis, with the purpose of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is a very common disease that requires timely, accurate diagnosis and surgical intervention. Although the treatment of acute appendicitis is simple and straightforward, its diagnosis is still difficult because no single sign, symptom, laboratory test, or image examination accurately confirms the diagnosis in all cases; this contributes to increased morbidity and negative appendectomy rates. In this study, the authors propose to generate an accurate model for predicting patients with acute appendicitis based, firstly, on a segmentation technique associated with the ABC algorithm to segment the patients; secondly, on applying fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cell count, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among the data in a comprehensive manner; and thirdly, on applying a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on a set of benchmark classification problems of osteoporosis, diabetes, and heart disease obtained from the UCI data and other data sources.
Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree
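To make the fuzzy logic step concrete on one noisy attribute, here is a minimal triangular membership sketch; the temperature cut-offs and set names are illustrative assumptions, not the study's actual fuzzy sets:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: rises from a to peak b, falls to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# illustrative fuzzy sets for body temperature in degrees Celsius
temp = 38.2
print("normal :", triangular(temp, 35.5, 36.8, 37.5))
print("febrile:", triangular(temp, 37.0, 38.5, 40.0))
```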
Procedia PDF Downloads 352
663 Bayesian Inference for High Dimensional Dynamic Spatio-Temporal Models
Authors: Sofia M. Karadimitriou, Kostas Triantafyllopoulos, Timothy Heaton
Abstract:
Reduced-dimension Dynamic Spatio-Temporal Models (DSTMs) jointly describe the spatial and temporal evolution of a function observed subject to noise. A basic state space model is adopted for the discrete temporal variation, while a continuous autoregressive structure describes the continuous spatial evolution. Application of such a DSTM relies upon the pre-selection of a suitable reduced set of basis functions, and this can present a challenge in practice. In this talk, we propose an online estimation method for high dimensional spatio-temporal data based upon a DSTM, and we attempt to resolve this issue by allowing the basis to adapt to the observed data. Specifically, we present a wavelet decomposition in order to obtain a parsimonious approximation of the spatial continuous process. This parsimony can be achieved by placing a Laplace prior distribution on the wavelet coefficients. The aim of using the Laplace prior is to filter out wavelet coefficients with low contribution, and thus achieve the dimension reduction with significant computational savings. We then propose a hierarchical Bayesian state space model, for the estimation of which we offer an appropriate particle filter algorithm. The proposed methodology is illustrated using real environmental data.
Keywords: multidimensional Laplace prior, particle filtering, spatio-temporal modelling, wavelets
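The Laplace-prior filtering can be sketched in one dimension: the MAP estimate under a Laplace prior corresponds to soft-thresholding the wavelet coefficients, which zeroes out low-contribution terms. The signal, wavelet family, and threshold below are assumptions for illustration (using PyWavelets):

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 256)
signal = np.sin(8 * np.pi * x) * np.exp(-3 * x)
noisy = signal + 0.2 * rng.standard_normal(x.size)

# wavelet decomposition of a 1-D stand-in for the spatial field
coeffs = pywt.wavedec(noisy, "db4", level=4)

# MAP estimate under a Laplace prior = soft thresholding, which zeroes
# low-contribution coefficients (this is the dimension reduction)
lam = 0.3
shrunk = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]

kept = sum(int(np.count_nonzero(c)) for c in shrunk)
total = sum(c.size for c in shrunk)
print(f"non-zero coefficients kept: {kept}/{total}")
recon = pywt.waverec(shrunk, "db4")
```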
Procedia PDF Downloads 429
662 Optimisation of B2C Supply Chain Resource Allocation
Authors: Firdaous Zair, Zoubir Elfelsoufi, Mohammed Fourka
Abstract:
The allocation of resources is an issue that arises at the strategic, tactical, and operational levels. This work considers the allocation of resources in the case of pure players, manufacturers, and click-and-mortars that have launched online sales. The aim is to improve the level of customer satisfaction, maintain the benefits of the e-retailer and of its cooperators, and reduce costs and risks. Our contribution is a decision support system and tool for improving the allocation of resources in B2C e-commerce logistics chains. We first modeled the B2C chain with all the operations it integrates and the possible scenarios, since online retailers offer a wide selection of personalized services. The personalized services that online shopping companies offer to clients can be embodied in many aspects, such as customization of payment, distribution methods, and after-sales service choices; in addition, every aspect of customized service has several modes. We then analyzed the optimization problems of supply chain resource allocation in the customized online shopping service mode, which differs from supply chain resource allocation under traditional manufacturing or service circumstances. Finally, we developed an optimization model and algorithm based on this analysis of the allocation of B2C supply chain resources. It is a multi-objective optimization that considers the collaboration of resources in operations, time, and costs, but also the risks and the quality of services, as well as the dynamic and uncertain characteristics of the requests.
Keywords: e-commerce, supply chain, B2C, optimisation, resource allocation
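One standard way to fold several objectives into a single solvable model is weighted-sum scalarization. The toy allocation below (channels, costs, delivery times, weights, and capacities are all invented for illustration, and the paper's actual model is richer) shows the pattern with SciPy's linear programming solver:

```python
from scipy.optimize import linprog

# x1, x2 = units fulfilled via two logistics channels
# two objectives: unit cost (2, 3) and delivery time (4, 1), weights 0.6 / 0.4
w_cost, w_time = 0.6, 0.4
c = [w_cost * 2 + w_time * 4, w_cost * 3 + w_time * 1]

res = linprog(c,
              A_ub=[[-1, -1]], b_ub=[-100],   # demand: x1 + x2 >= 100 units
              bounds=[(0, 80), (0, 80)])      # per-channel capacity
print(res.x, res.fun)   # optimal split and weighted objective value
```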
Procedia PDF Downloads 274
661 Provision of Afterschool Programs: Understanding the Educational Needs and Outcomes of Newcomer and Refugee Students in Canada
Authors: Edward Shizha, Edward Makwarimba
Abstract:
Newcomer and refugee youth feel excluded in the education system in Canada, and the formal education environment does not fully cater to their learning needs. The objective of this study was to build knowledge and understanding of the educational needs and experiences of these youth in Canada and of how available afterschool programs can most effectively support their learning needs and academic outcomes. Employment and Social Development Canada (ESDC), which funded this research, enables and empowers students to advance their educational experience through targeted investments in services that are delivered by youth-serving organizations outside the formal education system through afterschool initiatives. A literature review and a provincial/territorial internet scan were conducted to determine the availability of services and programs that serve the educational needs and academic outcomes of newcomer youth in the 10 provinces and 3 territories of Canada. The goal was to identify intersectional factors (e.g., gender, sexuality, culture, social class, race, etc.) that influence the educational outcomes of newcomer/refugee students and to recommend ways the ESDC could complement settlement services to enhance students’ educational success. First, data were collected through a literature search of various databases, including PubMed, Web of Science, Scopus, Google docs, ACADEMIA, and grey literature, including government documents, to inform our analysis. Second, a provincial/territorial internet scan was conducted using a template created by ESDC staff with the input of the researchers. The objective of the web-search scan was to identify afterschool programs, projects, and initiatives offered to newcomer/refugee youth by service provider organizations. The method for the scan included both qualitative and quantitative data gathering. Both the literature review and the provincial/territorial scan revealed gender disparities in the educational outcomes of newcomer and refugee youth. High school completion rates by gender show that boys are at higher risk of not graduating than girls, and that girls are more likely than boys to have at least a high school diploma and to proceed to postsecondary education. Findings from the literature reveal that afterschool programs are required for refugee youth who experience mental health challenges and miss out on significant periods of schooling, which affects attendance, participation, and graduation from high school. However, some refugee youth use their resilience and ambition to succeed in their educational outcomes. Another finding showed that some immigrant/refugee students, through ethnic organizations and familial affiliation, maintain aspects of their cultural values and parental expectations, as well as ambitious expectations for their own careers, to succeed in both high school and postsecondary education. The study found a significant combination of afterschool programs that include academic support, scholarships, bursaries, homework support, career readiness, internships, mentorship, tutoring, non-clinical counselling, mental health and social well-being support, language skills, volunteering opportunities, community connections, peer networking, culturally relevant services, etc. These programs assist newcomer youth in developing self-confidence and preparing for academic success and future career development. The study concluded that the advantages of afterschool programs are greatest for youth at risk of poor educational outcomes, such as Latino and Black youth, including 2SLGBTQI+ immigrant youth.
Keywords: afterschool programs, educational outcomes, newcomer youth, refugee youth, youth-serving organizations
Procedia PDF Downloads 76
660 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups and then tries to find the most important features among the four groups that maximize the weighted average F-score of a certain classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER through the feature selection module. On top of them, another classifier is used to give the final decision based on the output decisions and confidence scores from each of the underlying classifiers. Different classification algorithms have been examined; the best setup found uses the decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all published systems to date when evaluated against the exact same SEER data (period of 1973-2002), giving an 87.39% weighted average F-score compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period of 1973-2014), the overall weighted average F-score jumps to 92.4% on the held-out unseen test set.
Keywords: classifier ensemble, breast cancer survivability, data mining, SEER
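The two-level ensemble can be sketched with scikit-learn's stacking API. SEER data requires registration, so sklearn's bundled breast cancer set stands in; the Bayesian network base learner is omitted (sklearn has no built-in one), and the per-classifier feature groups are collapsed to a shared feature set — all assumptions that simplify the paper's design:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in for SEER records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# base learners, with Naive Bayes as the combining classifier on top
ensemble = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=5)),
                ("nb", GaussianNB())],
    final_estimator=GaussianNB())
ensemble.fit(X_tr, y_tr)
print("weighted F1:", f1_score(y_te, ensemble.predict(X_te), average="weighted"))
```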
Procedia PDF Downloads 329
659 Machine Learning in Gravity Models: An Application to International Recycling Trade Flow
Authors: Shan Zhang, Peter Suechting
Abstract:
Predicting trade patterns is critical to decision-making in the public and private domains, especially in the current context of trade disputes among major economies. In the past, U.S. recycling relied heavily on strong demand for recyclable materials overseas. However, starting in 2017, a series of new recycling policies (bans and higher inspection standards) was enacted by multiple countries that had been the primary importers of recyclables from the U.S. prior to that point. As the global trade flow of recycling shifts, some new importers, mostly developing countries in South and Southeast Asia, have been overwhelmed by the sheer quantities of scrap materials they have received. As the leading exporter of recyclable materials, the U.S. now has a pressing need to build its recycling industry domestically. With respect to the global trade in scrap materials used for recycling, the interest of this paper is (1) predicting how the export of recyclable materials from the U.S. might vary over time, and (2) predicting how international trade flows for recyclables might change in the future. Focusing on three major recyclable materials with a history of trade, this study uses data-driven and machine learning (ML) algorithms---supervised (shrinkage and tree methods) and unsupervised (neural network method)---to decipher the international trade pattern of recycling. Forecasting the potential trade values of recyclables could help importing countries, to which those materials will shift next, to prepare related trade policies. Such policies can assist policymakers in minimizing negative environmental externalities and in finding the optimal amount of recyclables needed by each country. Such forecasts can also help exporting countries, like the U.S., understand the importance of a healthy domestic recycling industry. The preliminary results suggest that gravity models---in addition to a particular selection of macroeconomic predictor variables---are appropriate predictors of the total export value of recyclables. With the inclusion of variables measuring aspects of political conditions (trade tariffs and bans), predictions show that recyclable materials are shifting from more policy-restricted countries to less policy-restricted countries in international recycling trade. Those countries also tend to have high manufacturing activity as a percentage of their GDP.
Keywords: environmental economics, machine learning, recycling, international trade
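The gravity model at the core of such work is T_ij = G · GDP_i^a · GDP_j^b / D_ij^c, usually fit in log-linear form. The sketch below uses entirely synthetic data (all numbers invented) just to show that the elasticities are recoverable by ordinary least squares:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# log-linearized gravity model: ln T = ln G + a ln GDP_i + b ln GDP_j - c ln D
rng = np.random.default_rng(0)
n = 500
gdp_i, gdp_j = rng.uniform(1, 100, n), rng.uniform(1, 100, n)
dist = rng.uniform(100, 10000, n)
trade = 0.5 * gdp_i**0.9 * gdp_j**0.8 / dist**1.1 * rng.lognormal(0, 0.1, n)

X = np.column_stack([np.log(gdp_i), np.log(gdp_j), np.log(dist)])
model = LinearRegression().fit(X, np.log(trade))
print("elasticities (a, b, -c):", model.coef_)   # ~ [0.9, 0.8, -1.1]
```

Policy variables such as tariffs and bans enter the same regression as additional columns of X.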
Procedia PDF Downloads 170
658 Research on the Optimization of the Facility Layout of Efficient Cafeterias for Troops
Authors: Qing Zhang, Jiachen Nie, Yujia Wen, Guanyuan Kou, Peng Yu, Kun Xia, Qin Yang, Li Ding
Abstract:
BACKGROUND: The facility layout problem (FLP) is an NP-complete (non-deterministic polynomial) problem for which it is hard to obtain an exact optimal solution. FLP has been widely studied in various limited spaces and workflows; for example, cafeterias for troops with many types of equipment suffer chaotic processes during dining. OBJECTIVE: This article tried to optimize the layout of a troops’ cafeteria and to improve the overall efficiency of the dining process. METHODS: First, the original cafeteria layout design scheme was analyzed from an ergonomic perspective and two new design schemes were generated. Next, three facility layout models were designed, and simulation was applied to compare the total time and density of troops under each scheme. Last, an experiment on the dining process with video observation and analysis verified the simulation results. RESULTS: In simulation, the dining time under the second new layout is shortened by 2.25% and 1.89% (p<0.0001, p=0.0001) compared with the other two layouts, while troop-flow density and interference are both greatly reduced in the two new layouts. In the experiment, process completion time and the number of interferences were reduced as well, which verified the corresponding simulation results. CONCLUSIONS: Our two new layout schemes are shown to be optimal by a series of simulations and space experiments. In future research, similar approaches could be applied while taking layout-design algorithm calculation into consideration.
Keywords: layout optimization, dining efficiency, troops’ cafeteria, anylogic simulation, field experiment
Procedia PDF Downloads 143
657 PathoPy2.0: Application of Fractal Geometry for Early Detection and Histopathological Analysis of Lung Cancer
Authors: Rhea Kapoor
Abstract:
Fractal dimension provides a way to characterize non-geometric shapes like those found in nature. The purpose of this research is to estimate the Minkowski fractal dimension of human lung images for early detection of lung cancer. Lung cancer is the leading cause of death among all types of cancer, and early histopathological analysis will help reduce deaths primarily due to late diagnosis. A Python application program, PathoPy2.0, was developed for analyzing medical images in pixelated format and estimating the Minkowski fractal dimension using a new box-counting algorithm that allows windowing of images for more accurate calculation in the suspected areas of cancerous growth. Benchmark geometric fractals were used to validate the accuracy of the program, and changes in the fractal dimension of lung images were used to indicate the presence of issues in the lung. The accuracy of the program on the benchmark examples was between 93-99% of the known values of the fractal dimensions. Fractal dimension values were then calculated for lung images, from the National Cancer Institute, taken over time to correctly detect the presence of cancerous growth. For example, as the fractal dimension for a given lung increased from 1.19 to 1.27 due to cancerous growth, it represented a significant change in fractal dimension, which lies between 1 and 2 for 2-D images. Based on the results obtained on many lung test cases, it was concluded that the fractal dimension of human lungs can be used to diagnose lung cancer early. The ideas behind PathoPy2.0 can also be applied to study patterns in the electrical activity of the human brain and in DNA matching.
Keywords: fractals, histopathological analysis, image processing, lung cancer, Minkowski dimension
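A minimal box-counting estimator in the spirit of the one described, without the windowing refinement; the binarization threshold and the set of box sizes are assumptions:

```python
import numpy as np

def minkowski_dimension(img, threshold=0.5):
    """Box-counting estimate of the Minkowski fractal dimension.

    img: 2-D array; pixels above `threshold` form the structure of interest.
    """
    pixels = (img > threshold).astype(int)
    n = min(pixels.shape)
    sizes = [s for s in (2, 4, 8, 16, 32, 64) if s < n]
    counts = []
    for s in sizes:
        # count boxes of side s containing at least one foreground pixel
        boxes = np.add.reduceat(
            np.add.reduceat(pixels, np.arange(0, pixels.shape[0], s), axis=0),
            np.arange(0, pixels.shape[1], s), axis=1)
        counts.append((boxes > 0).sum())
    # slope of log(count) vs log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# sanity check on a filled square: expected dimension ~2
img = np.ones((256, 256))
print(minkowski_dimension(img))
```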
Procedia PDF Downloads 179
656 Multimodal Database of Retina Images for Africa: The First Open Access Digital Repository for Retina Images in Sub Saharan Africa
Authors: Simon Arunga, Teddy Kwaga, Rita Kageni, Michael Gichangi, Nyawira Mwangi, Fred Kagwa, Rogers Mwavu, Amos Baryashaba, Luis F. Nakayama, Katharine Morley, Michael Morley, Leo A. Celi, Jessica Haberer, Celestino Obua
Abstract:
Purpose: The main aim in creating the Multimodal Database of Retinal Images for Africa (MoDRIA) was to provide a publicly available repository of retinal images for responsible researchers to conduct algorithm development, in a bid to curb the challenges of ophthalmic artificial intelligence (AI) in Africa. Methods: Data and retina images were ethically sourced from sites in Uganda and Kenya. Data on medical history, visual acuity, ocular examination, blood pressure, and blood sugar were collected. Retina images were captured using fundus cameras (Foru3-nethra and Canon CR-Mark-1). Images were stored on a secure online database. Results: The database consists of 7,859 retinal images in portable network graphics format from 1,988 participants. Images from patients with human immunodeficiency virus made up 18.9%, 18.2% of images were from hypertensive patients, 12.8% from diabetic patients, and the rest from ‘normal’ participants. Conclusion: Publicly available data repositories are a valuable asset in the development of AI technology. Therefore, there is a need for the expansion of MoDRIA so as to provide larger datasets that are more representative of Sub-Saharan data.
Keywords: retina images, MoDRIA, image repository, African database
Procedia PDF Downloads 129
655 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on an encoding scheme (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. For scene classification, there are scattered objects with different sizes, categories, layouts, numbers, and so on; it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales. This produces dense local activations. By analyzing the performance of different CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the scales are then merged into a single vector using a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each one. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets can boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which proves that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
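A small sketch of the scale-wise normalization and average pooling step, applied to pre-computed per-scale Fisher vectors. The vectors here are random stand-ins, and the power-normalization step is a common companion technique assumed here, not necessarily the paper's exact recipe:

```python
import numpy as np

def scale_wise_merge(fisher_vectors):
    """Merge per-scale Fisher vectors into one representation.

    Each scale's vector is power- and L2-normalized first, so scales that
    produce more local activations do not dominate, then average-pooled.
    """
    merged = []
    for fv in fisher_vectors:
        fv = np.sign(fv) * np.sqrt(np.abs(fv))      # power normalization
        fv = fv / (np.linalg.norm(fv) + 1e-12)      # scale-wise L2 normalization
        merged.append(fv)
    return np.mean(merged, axis=0)                  # average pooling

rng = np.random.default_rng(0)
fvs = [rng.standard_normal(4096) * s for s in (1.0, 5.0, 0.2)]  # three scales
print(scale_wise_merge(fvs).shape)
```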
Procedia PDF Downloads 333
654 Research on Detection of Web Page Visual Salience Region Based on Eye Tracker and Spectral Residual Model
Authors: Xiaoying Guo, Xiangyun Wang, Chunhua Jia
Abstract:
The web page has become one of the most important ways of knowing the world; humans catch a lot of information from it every day. Thus, understanding where humans look when they surf web pages is rather important. In normal scenes, bottom-up features and top-down tasks significantly affect humans’ eye movements. In this paper, we investigated whether a conventional visual salience algorithm can properly predict humans’ visually attractive regions when they view web pages. First, we obtained eye movement data while the participants viewed web pages using an eye tracker. By analyzing the eye movement data, we studied the influence of visual saliency and of the way of thinking on eye-movement patterns. The analysis showed that the way of thinking affects humans’ eye-movement patterns much more than visual saliency. Second, we compared the web page visual salience regions extracted by the Itti model and by the Spectral Residual (SR) model. The results showed that the Spectral Residual (SR) model performs better than the Itti model when compared against the heat maps from eye movements. Considering the influence of mind habits on humans’ visual regions of interest, we introduced one of the most important cues in mind habits, the fixation position, to improve the SR model. The results showed that the improved SR model can better predict the human visual region of interest in web pages.
Keywords: web page salience region, eye-tracker, spectral residual, visual salience
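For reference, the Spectral Residual detector itself is only a few lines. This NumPy sketch follows Hou and Zhang's formulation; the 3x3 averaging filter and the test image are illustrative choices, and the final Gaussian blur usually applied to the map is omitted for brevity:

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Spectral Residual saliency map for a 2-D float image."""
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)

    # spectral residual = log amplitude minus its local average (3x3 box blur)
    kernel = np.ones((3, 3)) / 9.0
    pad = np.pad(log_amp, 1, mode="edge")
    avg = sum(pad[i:i + gray.shape[0], j:j + gray.shape[1]] * kernel[i, j]
              for i in range(3) for j in range(3))
    residual = log_amp - avg

    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

rng = np.random.default_rng(0)
img = rng.random((64, 64))
img[20:30, 20:30] += 2.0          # a conspicuous block
print(spectral_residual_saliency(img).argmax())  # flat index of the peak
```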
Procedia PDF Downloads 277
653 Improved Classification Procedure for Imbalanced and Overlapped Situations
Authors: Hankyu Lee, Seoung Bum Kim
Abstract:
The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (i.e., the major class) heavily exceeds the number of observations of the other class (i.e., the minor class). An overlapped dataset is the case where many observations are shared between the two classes. Imbalanced and overlapped data can frequently be found in many real examples, including fraud and abuse detection in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is a challenging issue because this situation degrades the performance of most standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts: non-overlapping, lightly overlapping, and severely overlapping, and applying the classification algorithm in each part. These three parts are determined based on the Hausdorff distance and the margin of a modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
Keywords: classification, imbalanced data with class overlap, split data space, support vector machine
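A rough sketch of the splitting idea using only the SVM margin; the paper additionally uses the Hausdorff distance and a modified SVM, both omitted here, and the two thresholds are arbitrary assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# imbalanced, overlapped toy data (90/10 split with label noise)
X, y = make_classification(n_samples=1000, weights=[0.9], flip_y=0.1,
                           class_sep=0.5, random_state=0)

svm = SVC(kernel="linear").fit(X, y)
margin = np.abs(svm.decision_function(X))   # distance from the boundary

# partition the data space by margin into severe / light / non-overlapping
severe = margin < 0.5
light = (margin >= 0.5) & (margin < 1.0)
clean = margin >= 1.0
print(severe.sum(), light.sum(), clean.sum())
```

A separate classifier would then be trained inside each of the three regions.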
Procedia PDF Downloads 308
652 Patent on Brain: Brain Waves Stimulation
Authors: Jalil Qoulizadeh, Hasan Sadeghi
Abstract:
Brain waves are electrical wave patterns produced in the human brain. Knowing these waves and activating them can have a positive effect on brain function and ultimately help create an ideal life. The brain is able to produce waves from 0.1 to above 65 Hz, and the Beta One device produces exactly this range, so the waves produced by the device match the waves produced by the brain. The function and method of this device are based on magnetic stimulation of the brain. The technology used in the design and production of this device strengthens and improves the frequencies of brain waves with a pre-defined algorithm according to the type of requested function, so that the person can perform better in the expected functions of daily life. To evaluate the effect of the field created by the device on neurons and their stimulation, electroencephalography was conducted before and after stimulation, and the two baselines were compared by quantitative electroencephalography (qEEG) using a paired t-test in 39 subjects; this confirms the significant effect of the field on the change in electrical activity recorded after 30 minutes of stimulation in all subjects. The Beta One device is able to induce the appropriate pattern of the expected functions softly and effectively, in accordance with the harmony of brain waves, guiding brain activity first to a normal state and then to an enhanced one. Applications include the production of inexpensive neuroscience equipment (compared to existing rTMS equipment) and magnetic brain stimulation for clinics, homes, factories and companies, and professional sports clubs.
Keywords: stimulation, brain, waves, betaOne
Procedia PDF Downloads 82
651 Getting Out of the Box: Tangible Music Production in the Age of Virtual Technological Abundance
Authors: Tim Nikolsky
Abstract:
This paper seeks to explore the different ways in which music producers choose to embrace various levels of technology based on musical values, objectives, affordability, access, and workflow benefits. The current digital audio production workflow is questioned. Engineers and music producers of today are increasingly divorced from the tangibility of music production: making music no longer requires you to reach over and turn a knob. Ideas of authenticity in music production are being redefined. Calculations from the mathematical algorithm with the pretty pictures are increasingly being chosen over hardware containing transformers and tubes. Are mouse clicks and movements equivalent or inferior to the master brush strokes we are seeking to conjure? We are making audio production decisions visually, by constantly looking at a screen, rather than by listening. Have we compromised our musical objectives and values by removing the ‘hands-on’ nature of music making? DAW interfaces are making our musical decisions for us, not necessarily in our best interests. Technological innovation has presented opportunities as well as challenges for education. What do music production students actually need to learn in a formalised education environment, and to what extent do they need to know it? In this brave new world of omnipresent music creation tools, do we still need tangibility in music production? Interviews with prominent Australian music producers who work in a variety of fields are featured in this paper; they provide insight into answering these questions and move towards developing an understanding of how tangibility can be rediscovered in the next generation of music production.
Keywords: analogue, digital, digital audio workstation, music production, plugins, tangibility, technology, workflow
Procedia PDF Downloads 272
650 Increasing of Gain in Unstable Thin Disk Resonator
Authors: M. Asl. Dehghan, M. H. Daemi, S. Radmard, S. H. Nabavi
Abstract:
Thin disk lasers are engineered for efficient thermal cooling and exhibit superior performance for this task. However, the disk thickness and large pumped area make the use of this gain format in a resonator difficult when constructing a single-mode laser. Choosing an unstable resonator design is beneficial for this purpose. On the other hand, the low-gain medium restricts the application of unstable resonators to low magnifications and therefore to poor beam quality. A promising idea to enable the application of unstable resonators to wide-aperture, low-gain lasers is to couple a fraction of the out-coupled radiation back into the resonator. The output coupling becomes dependent on the ratio of the back reflection and can be adjusted independently of the magnification. The excitation of the converging wave can be achieved by the use of an external reflector. The resonator performance is numerically predicted. First of all, the threshold conditions of linear, V-shaped, and 2V-shaped resonators are investigated. Results show that the maximum magnification is 1.066, which is very low for high-beam-quality purposes. Inserting an additional reflector compensates for the low gain. The reflectivity and the related magnification of a 350-micron Yb:YAG disk are calculated. The theoretical model is based on coupled Kirchhoff integrals and solved numerically by the Fox and Li algorithm. Results show that with the back-reflection mechanism, in combination with increasing the number of beam incidences on the disk, high gain and high magnification can be achieved.
Keywords: unstable resonators, thin disk lasers, gain, external reflector
Procedia PDF Downloads 413
649 Noise Reduction in Web Data: A Learning Approach Based on Dynamic User Interests
Authors: Julius Onyancha, Valentina Plekhanova
Abstract:
One of the significant issues facing web users is the amount of noise in web data, which hinders the process of finding useful information related to their dynamic interests. Current research works consider noise to be any data that does not form part of the main web page and propose noise web data reduction tools that mainly focus on eliminating noise with respect to the content and layout of web data. This paper argues that not all data forming part of the main web page is of user interest, and not all noise data is actually noise to a given user. Therefore, learning the noise web data allocated to user requests ensures not only a reduction of the noisiness level in a web user profile, but also a decrease in the loss of useful information, hence improving the quality of the web user profile. A Noise Web Data Learning (NWDL) tool/algorithm capable of learning noise web data in a web user profile is proposed. The proposed work considers the elimination of noise data in relation to dynamic user interest. In order to validate the performance of the proposed work, an experimental design setup is presented. The results obtained are compared with current algorithms applied in the noise web data reduction process. The experimental results show that the proposed work considers the dynamic change of user interest prior to the elimination of noise data. The proposed work contributes towards improving the quality of a web user profile by reducing the amount of useful information eliminated as noise.
Keywords: web log data, web user profile, user interest, noise web data learning, machine learning
Procedia PDF Downloads 265
648 Coding and Decoding versus Space Diversity for Rayleigh Fading Radio Frequency Channels
Authors: Ahmed Mahmoud Ahmed Abouelmagd
Abstract:
Diversity is the usual remedy for transmitted signal level variations (fading phenomena) in radio frequency channels. Diversity techniques utilize two or more copies of a signal and combine those signals to combat fading. The basic concept of diversity is to transmit the signal via several independent diversity branches to get independent signal replicas in the time, frequency, space, and polarization diversity domains. Coding and decoding processes can be an alternative remedy for fading phenomena: they cannot increase the channel capacity, but they can improve the error performance. In this paper, we propose the use of replication decoding with the BCH code class, and the Viterbi decoding algorithm with convolutional coding, as examples of coding and decoding processes. The results are compared to those obtained from two optimized selection space diversity techniques. The performance of the Rayleigh fading channel, as the model considered for radio frequency channels, is evaluated for each case. The evaluation results show that the coding and decoding approaches, especially the BCH coding approach with the replication decoding scheme, give better performance than the selection space diversity optimization approaches. An approach combining the coding and decoding diversity as well as the space diversity is also considered; its main disadvantage is its complexity, but it yields good performance results.
Keywords: Rayleigh fading, diversity, BCH codes, replication decoding, convolutional coding, Viterbi decoding, space diversity
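A rough Monte Carlo illustration of the comparison (not the paper's setup): two-branch selection diversity versus a rate-1/3 repetition code with majority decoding over flat Rayleigh fading. The repetition code is a toy stand-in for the BCH/convolutional schemes, and the SNR and sample count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr_db = 200_000, 10.0
noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
bits = rng.integers(0, 2, n)
sym = 1.0 - 2.0 * bits                       # BPSK: 0 -> +1, 1 -> -1

def rayleigh(shape):
    """Unit-average-power Rayleigh channel magnitudes."""
    return np.abs(rng.normal(0, np.sqrt(0.5), shape)
                  + 1j * rng.normal(0, np.sqrt(0.5), shape))

# (a) two-branch selection diversity: keep the branch with the stronger gain
h = rayleigh((2, n))
r = h * sym + noise_std * rng.normal(size=(2, n))
best = np.argmax(h, axis=0)
picked = r[best, np.arange(n)]               # h > 0, so sign is the decision
ber_div = np.mean((picked < 0) != (bits == 1))

# (b) each bit sent three times over independent fades, majority-vote decoding
h3 = rayleigh((3, n))
r3 = h3 * sym + noise_std * rng.normal(size=(3, n))
votes = r3 < 0
ber_code = np.mean((votes.sum(axis=0) >= 2) != (bits == 1))
print(f"selection diversity BER: {ber_div:.4f}, repetition-code BER: {ber_code:.4f}")
```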
Procedia PDF Downloads 443
647 Development of Precise Ephemeris Generation Module for Thaichote Satellite Operations
Authors: Manop Aorpimai, Ponthep Navakitkanok
Abstract:
In this paper, the development of the ephemeris generation module used for Thaichote satellite operations is presented. It is a vital part of the flight dynamics system, which comprises the orbit determination, orbit propagation, event prediction, and station-keeping maneuver modules. In the generation of the spacecraft ephemeris data, the estimated orbital state vector from the orbit determination module is used as the initial condition. The equations of motion are then integrated forward in time to predict the satellite states. The higher geopotential harmonics, as well as other disturbing forces, are taken into account to resemble the environment in low-earth orbit. Using a highly accurate numerical integrator based on the Bulirsch-Stoer algorithm, the ephemeris data can be generated for long-term predictions with a relatively small computational burden and short calculation time. Events occurring during the prediction that are related to mission operations, such as the satellite’s rise/set as viewed from the ground station, Earth and Moon eclipses, the drift in the ground track, and the drift in the local solar time of the orbital plane, are all detected and reported. When combined with the other modules to form a flight dynamics system, this application is intended for the Thaichote satellite and successive Thailand Earth-observation missions.
Keywords: flight dynamics system, orbit propagation, satellite ephemeris, Thailand’s Earth Observation Satellite
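A minimal propagation sketch with two-body gravity plus the dominant J2 geopotential term. SciPy's DOP853 integrator stands in for the Bulirsch-Stoer scheme, and the near-circular initial state is invented for illustration, not Thaichote's actual orbit:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14          # Earth GM, m^3/s^2
J2, RE = 1.08262668e-3, 6378137.0

def accel(t, s):
    r = s[:3]
    rn = np.linalg.norm(r)
    a = -MU * r / rn**3                        # two-body term
    z2 = (r[2] / rn) ** 2                      # J2 zonal-harmonic perturbation
    f = 1.5 * J2 * MU * RE**2 / rn**4
    a += f * np.array([r[0] / rn * (5 * z2 - 1),
                       r[1] / rn * (5 * z2 - 1),
                       r[2] / rn * (5 * z2 - 3)])
    return np.concatenate([s[3:], a])

# illustrative near-circular LEO state at ~820 km altitude
r0 = [RE + 820e3, 0.0, 0.0]
v0 = [0.0, 5262.0, 5262.0]
sol = solve_ivp(accel, (0, 86400), np.array(r0 + v0), method="DOP853",
                rtol=1e-10, atol=1e-9, dense_output=True)
print("position at t=3600 s:", sol.sol(3600.0)[:3])
```

Event prediction (rise/set, eclipses, ground-track drift) is then a matter of evaluating geometric conditions along the dense solution.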
Procedia PDF Downloads 377
646 Introduce a New Model of Anomaly Detection in Computer Networks Using Artificial Immune Systems
Authors: Mehrshad Khosraviani, Faramarz Abbaspour Leyl Abadi
Abstract:
Computer networks are a fundamental component of the modern information society, and these networks are generally connected to the Internet. Since security was not among the Internet's original design goals, protecting these networks against attacks has become very important in recent decades. Today, to provide security, different security tools and systems, including intrusion detection systems, are used in networks. In this work, an anomaly detection system based on artificial immunity is designed and evaluated. The idea of using artificial immune methods for diagnosing abnormalities in computer networks is motivated by their specific properties, which match the needs of such security systems: for example, such methods can detect previously unseen abnormalities and a variety of attacks, and the memory, learning ability, and self-organization of artificial immune algorithms can be pointed out. The detection system offered in this paper requires only normal samples for training; no additional data about the types of attacks is needed. In the proposed system, positive selection and negative selection processes are used to select samples that create a distinction between the normal population and attacks. Evaluation on real data collections indicates that the false alarm rate of the proposed system is often low compared to other methods, while the detection rate varies.
Keywords: artificial immune system, abnormality detection, intrusion detection, computer networks
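A tiny negative-selection sketch of the detector generation such systems use: candidate detectors are kept only if they lie far from all normal (self) samples, so they cover non-self regions. The 2-D feature space, radii, and detector count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_detectors(normal, n_detectors=500, self_radius=0.2):
    """Negative selection: keep random detectors far from every self sample."""
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.random(normal.shape[1])
        if np.min(np.linalg.norm(normal - d, axis=1)) > self_radius:
            detectors.append(d)
    return np.array(detectors)

def is_anomalous(x, detectors, match_radius=0.1):
    """Flag x if any detector matches it (distance within match_radius)."""
    return bool(np.any(np.linalg.norm(detectors - x, axis=1) <= match_radius))

# normal traffic occupies a small region of the normalized feature space
normal = 0.3 + 0.1 * rng.random((300, 2))
det = train_detectors(normal)
print(is_anomalous(np.array([0.35, 0.35]), det))  # self-like -> False
print(is_anomalous(np.array([0.90, 0.90]), det))  # far from self -> True
```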
Procedia PDF Downloads 355
645 Comparison of Two Maintenance Policies for a Two-Unit Series System Considering General Repair
Authors: Seyedvahid Najafi, Viliam Makis
Abstract:
In recent years, maintenance optimization has attracted special attention due to the growth of industrial systems’ complexity. Maintenance costs are high for many systems, and preventive maintenance is effective when it increases operational reliability and safety at a reduced cost. The novelty of this research is to consider general repair in the modeling of multi-unit series systems and to solve the maintenance problem for such systems using the semi-Markov decision process (SMDP) framework. We propose an opportunistic maintenance policy for a series system composed of two main units. Unit 1, which is more expensive than unit 2, is subjected to condition monitoring, and its deterioration is modeled using a gamma process. Unit 1’s hazard rate is estimated by the proportional hazards model (PHM), and two hazard rate control limits are considered as the thresholds of maintenance interventions for unit 1. Maintenance is performed on unit 2 based on an age control limit. The objective is to find the optimal control limits and minimize the long-run expected average cost per unit time. The proposed algorithm is applied to a numerical example to compare the effectiveness of the proposed policy (policy Ⅰ) with policy Ⅱ, which is similar to policy Ⅰ except that replacement is performed instead of general repair. Results show that policy Ⅰ leads to a lower average cost than policy Ⅱ.
Keywords: condition-based maintenance, proportional hazards model, semi-Markov decision process, two-unit series systems
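To make the deterioration-plus-PHM machinery concrete, here is a toy simulation: gamma-distributed increments accumulate into a degradation path, and a Weibull baseline hazard is modulated by that covariate. Every parameter and the control limit are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# gamma-process deterioration for unit 1: independent gamma increments
shape_per_step, scale = 0.8, 0.5
steps = 200
path = np.cumsum(rng.gamma(shape_per_step, scale, size=steps))

# PHM-style hazard: Weibull baseline modulated by the deterioration covariate
beta, eta, gamma_cov = 2.0, 150.0, 0.02
t = np.arange(1, steps + 1)
hazard = (beta / eta) * (t / eta) ** (beta - 1) * np.exp(gamma_cov * path)

threshold = 0.05   # hazard-rate control limit triggering an intervention
trigger = np.argmax(hazard > threshold)   # 0 if the limit is never crossed
print("maintenance triggered at step", int(trigger))
```

The SMDP layer then searches over such control limits for the pair minimizing long-run average cost.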
Procedia PDF Downloads 125
644 Balance Control Mechanisms in Individuals With Multiple Sclerosis in Virtual Reality Environment
Authors: Badriah Alayidi, Emad Alyahya
Abstract:
Background: Most people with Multiple Sclerosis (MS) report worsening balance as the condition progresses. Poor balance control is also well known to be a significant risk factor for both falling and fear of falling. The increased risk of falls with disease progression thus makes balance control an essential target of gait rehabilitation amongst people with MS. Intervention programs have developed various methods to improve balance control, and accumulating evidence suggests that exercise programs may help people with MS improve their balance. Among these methods, virtual reality (VR) is growing in popularity as a balance-training technique owing to its potential benefits, including better compliance and greater user satisfaction. However, it is not clear whether a VR environment induces different balance control mechanisms in MS compared to healthy individuals or traditional environments. Therefore, this study aims to examine how individuals with MS control their balance in a VR setting. Methodology: The proposed study takes an empirical approach to estimate and determine the role of balance responses in persons with MS using a VR environment. It will use primary data collected through patient observations, physiological and biomechanical evaluation of balance, and data analysis. Results: A preliminary systematic review and meta-analysis indicated that there is variability in the outcomes used to assess balance responses in people with MS. The preliminary results of these assessments have the potential to provide essential indicators of the progression of MS and to contribute to the individualization of treatment and the evaluation of interventions’ effectiveness. The literature describes patients who have had the opportunity to experiment in VR settings and then used what they learned in the real world, suggesting that VR settings could be more appealing than conventional settings. The findings of the proposed study will be beneficial in estimating and determining the effect of VR on balance control in persons with MS. In previous studies, VR was shown to be an interesting approach to neurological rehabilitation, but more data are needed to support this approach in MS. Conclusions: The proposed study enables an assessment of balance and the evaluation of a variety of physiological implications related to neural activity, as well as biomechanical implications related to movement analysis.
Keywords: multiple sclerosis, virtual reality, postural control, balance
Procedia PDF Downloads 76
643 The Use of Remotely Sensed Data to Extract Wetlands Area in the Cultural Park of Ahaggar, South of Algeria
Authors: Y. Fekir, K. Mederbal, M. A. Hammadouche, D. Anteur
Abstract:
The cultural park of the Ahaggar, occupying a large area of Algeria, is characterized by rich wetlands that need to be preserved and managed both in time and space. Managing such a large area is complex and needs large amounts of data, which for the most part are spatially localized (DEM, satellite images, socio-economic information, etc.), making the use of conventional and traditional methods quite difficult. Remote sensing, with its efficiency in environmental applications, has become an indispensable solution for this kind of study. Remote sensing imaging data have been very useful in the last decade in many interesting applications; they can aid in several domains, such as the detection and identification of diverse wetland surface targets, topographical details, and geological features. In this work, we try to extract wetland areas automatically using multispectral data acquired on board the Earth Observing-1 (EO-1) and Landsat satellites. Both carry high-resolution multispectral imagers with a 30 m resolution, covering an interesting surface area. We used images acquired over several areas of interest in the cultural park of the Ahaggar in the south of Algeria. An extraction algorithm is applied to several spectral indices, obtained from combinations of different spectral bands, to extract the wetlands fraction of land use. The obtained results show the accuracy of distinguishing wetland areas from the other land use themes using a fine exploitation of the spectral indices.
Keywords: multispectral data, EO1, landsat, wetlands, Ahaggar, Algeria
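One spectral index commonly used for this purpose is McFeeters' NDWI, computed from the green and near-infrared bands. The index choice, the zero threshold, and the toy reflectance values below are illustrative assumptions; the study combines several indices rather than relying on one:

```python
import numpy as np

def ndwi(green, nir):
    """NDWI = (green - NIR) / (green + NIR); water/wetland pixels tend > 0."""
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + 1e-9)

# toy reflectance bands standing in for EO-1 / Landsat 30 m pixels
rng = np.random.default_rng(0)
green = rng.uniform(0.05, 0.3, (4, 4))
nir = rng.uniform(0.05, 0.4, (4, 4))
mask = ndwi(green, nir) > 0.0     # candidate wetland pixels
print(mask.sum(), "of", mask.size, "pixels flagged")
```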
Procedia PDF Downloads 378