Search results for: predictive accuracy
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4539


1299 Functions and Pathophysiology of the Ventricular System: Review of the Underlying Basic Physics

Authors: Mohamed Abdelrahman Abdalla

Abstract:

Apart from producing CSF, the brain ventricles have long been regarded as mere remnants of the embryological neural tube with no clear role. The lack of a proper definition of the function of the brain ventricles and the central spinal canal has made it difficult to ascertain the pathophysiology of their various disease conditions or to treat them. This study aims to review the simple physics that could explain the basic function of the CNS ventricular system and to suggest new ways of approaching its pathology. There are probably more physical factors to consider than pressure alone. The Monro-Kellie hypothesis focuses on volume, and subsequently pressure, to direct surgical management in different disease conditions. However, the enlarged ventricular volume in normal pressure hydrocephalus does not displace any blood or brain outside the skull. Likewise, in idiopathic intracranial hypertension, the very high intracranial pressure rarely causes brain herniation. Moreover, the continuity of the intracranial cavity with the spinal canal makes them a single unit, which exposes a defect in the theory. In this study, adding factors such as brain and CSF density and the position of the brain in space to the volume-pressure equation aims to clarify the role of the ventricles in CNS homeostasis. Increasing the number of variables we analyze when treating CSF pathologies should improve both our understanding and the accuracy of treatment of such conditions.
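The density and head-position factors invoked above can be illustrated with elementary hydrostatics: the pressure difference across a fluid column is ΔP = ρ·g·h. A minimal sketch, assuming textbook values for CSF density and an illustrative column height (these numbers are not data from the study):

```python
# Hydrostatic pressure change along a CSF column when the body is upright.
# Illustrative sketch only: values are textbook approximations, not study data.

RHO_CSF = 1007.0   # kg/m^3, approximate CSF density
G = 9.81           # m/s^2, gravitational acceleration

def hydrostatic_delta_p(height_m: float, rho: float = RHO_CSF) -> float:
    """Pressure difference (Pa) across a fluid column of the given height."""
    return rho * G * height_m

# Pressure difference over an assumed ~0.6 m head-to-lumbar column when upright,
# converted to mmHg (1 mmHg = 133.322 Pa).
delta_pa = hydrostatic_delta_p(0.6)
delta_mmhg = delta_pa / 133.322
print(f"{delta_pa:.0f} Pa ~ {delta_mmhg:.1f} mmHg")
```

Even this toy calculation shows that posture alone can shift CSF pressure by tens of mmHg, supporting the argument that volume and pressure are not the only variables worth modeling.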

Keywords: communicating hydrocephalus, functions of the ventricles, idiopathic intracranial hypertension, physics of CSF

Procedia PDF Downloads 106
1298 Serological IgG Testing to Diagnose Alimentary Induced Diseases and Monitoring Efficacy of an Individual Defined Diet in Dogs

Authors: Anne-Margré C. Vink

Abstract:

Background: Food-related allergies and intolerances occur frequently in dogs. Diagnosis, and monitoring of elimination-diet efficacy according to the 'gold standard', are time-consuming, expensive, and require an expert clinical setting. To enable rapid, robust, quantitative testing of intolerance and identification of the individual offending foods, a serological test is indicated. Method: Having previously developed the Medisynx IgG Human Screening Test ELISA, and given that the dog's immune system closely resembles the human one, we were able to develop the Medisynx IgG Dog Screening Test ELISA as well. In this study, 47 dogs suffering from canine atopic dermatitis (CAD) and several secondary induced reactions were tested with the serological Medisynx IgG Dog Screening Test ELISA (within < 0.02% SD). Results were expressed as titers relative to standard OD readings to diagnose alimentary induced diseases and to monitor the efficacy of an individual elimination diet. Split-sample analysis was performed by independently sending 3 ml of serum twice, under two unique codes. Results: The veterinarian monitored these dogs at 3, 7, 21, 49 and 70 days and after periods of 6 and 12 months on an individual negative diet, with a positive challenge (retrospectively) at 6 months. Data for each dog were recorded in a screening form; complete recovery of all clinical manifestations was observed at or before 70 days (between 50 and 70 days) in the majority of dogs (44 of 47 dogs, 93.6%). Conclusion: Challenge results showed 100% specificity and a 100% positive predictive value, while sensitivity and negative predictive value were both 95.7%.
In conclusion, an individual diet based on the IgG ELISA in dogs provides a significant improvement of atopic dermatitis and pruritus, including other non-specific allergic skin reactions such as erythema, itching, and biting and gnawing at the toes, as well as several secondary manifestations such as chronic diarrhoea, chronic constipation, otitis media, obesity, laziness or inactive behaviour, pain and muscular stiffness causing movement disorders, excessive lacrimation, hyperactive or nervous behaviour, inability to stay home alone, anxiety, biting and aggressive behaviour, and disobedience. Furthermore, we conclude that a relatively more severe systemic candidiasis, as shown by a relatively higher titer (class 3 and 4 IgG reactions to Candida albicans), influences the duration of recovery from clinical manifestations in affected dogs. These findings are consistent with our preliminary human clinical studies.
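The sensitivity, specificity, and predictive values quoted above follow directly from a 2x2 confusion matrix. A minimal sketch; the TP/FP/TN/FN counts below are hypothetical and merely chosen to reproduce figures like those reported (100% specificity/PPV, ~95.7% sensitivity/NPV), not the study's raw data:

```python
# Standard diagnostic-test metrics from a 2x2 confusion matrix.
# The example counts are hypothetical, chosen only to illustrate how
# 100%/95.7% figures arise from TP/FP/TN/FN counts.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

m = diagnostic_metrics(tp=22, fp=0, tn=22, fn=1)
print({k: round(v, 3) for k, v in m.items()})
# With no false positives, specificity and PPV are exactly 1.0;
# sensitivity and NPV are 22/23, about 0.957.
```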

Keywords: allergy, canine atopic dermatitis, CAD, food allergens, IgG-ELISA, food-incompatibility

Procedia PDF Downloads 321
1297 A Machine Learning Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

There has been a growing need in recent years to predict students' academic achievement prior to graduation, in order to help them improve their grades, especially those who have struggled in the past. The purpose of this research is to use supervised learning techniques to create a model that predicts student academic progress. Many scholars have developed models that predict academic achievement from characteristics including smoking, demography, culture, social media use, parents' educational background, parents' finances, and family background, to mention a few. Such features, as well as the models used, may misclassify students' academic achievement. To predict whether a student will perform well on related courses in the future, this model is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student covers per semester. With 96.7 percent accuracy, the model outperformed other classifiers such as Naive Bayes, support vector machine (SVM), decision tree, random forest, and AdaBoost. The model is offered as a desktop application with user-friendly interfaces through which both teachers and students can forecast academic progress. As a result, both students and professors are encouraged to use this technique to better predict outcomes.
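The abstract does not publish its dataset or hyperparameters, so the following is only a sketch of the kind of logistic-regression classifier described, trained by gradient descent on synthetic records with the four named features; the feature values, labels, and labeling rule are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic student records:
# [previous score, attendance, participation, materials covered], scaled to [0, 1].
n = 200
X = rng.uniform(0, 1, size=(n, 4))
# Invented rule: students strong on score + attendance tend to pass (label 1).
y = (X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=n) > 1.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic regression fitted by batch gradient descent on the log-loss.
w = np.zeros(4)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * (X.T @ (p - y)) / n
    b -= lr * np.mean(p - y)

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

A production system would hold out a test split and compare against the other classifiers named in the abstract; this sketch only shows the core model.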

Keywords: artificial intelligence, ML, logistic regression, performance, prediction

Procedia PDF Downloads 109
1296 Efficient Model Order Reduction of Descriptor Systems Using Iterative Rational Krylov Algorithm

Authors: Muhammad Anwar, Ameen Ullah, Intakhab Alam Qadri

Abstract:

This study presents a technique utilizing the Iterative Rational Krylov Algorithm (IRKA) to reduce the order of large-scale descriptor systems. Descriptor systems, which incorporate both differential and algebraic components, pose unique challenges in Model Order Reduction (MOR). The proposed method partitions the descriptor system into its polynomial and strictly proper parts to minimize approximation errors, applying IRKA exclusively to the strictly proper component. This approach circumvents the unbounded errors that arise when IRKA is applied directly to the entire system. A comparative analysis demonstrates the high accuracy of the reduced model and a significant reduction in computational burden. The reduced model enables more efficient simulations and streamlined controller designs. The study highlights the effectiveness of IRKA-based MOR in optimizing the performance of complex systems across various engineering applications. The proposed methodology offers a promising solution for reducing the complexity of large-scale descriptor systems while maintaining their essential characteristics and facilitating their analysis, simulation, and control design.
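As a rough illustration of the IRKA fixed-point iteration that the method applies to the strictly proper part, here is a minimal SISO sketch on an ordinary stable state-space system (not the authors' descriptor formulation; the polynomial-part handling and convergence safeguards are omitted):

```python
import numpy as np

def irka(A, b, c, r, iters=100, tol=1e-8):
    """Sketch of SISO IRKA: iterate the interpolation points to the
    mirror images of the reduced-order poles until they stabilize."""
    n = A.shape[0]
    sigma = np.logspace(-1, 1, r)  # initial interpolation points
    for _ in range(iters):
        V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b) for s in sigma])
        W = np.column_stack([np.linalg.solve(s * np.eye(n) - A.T, c) for s in sigma])
        Ar = np.linalg.solve(W.T @ V, W.T @ A @ V)   # oblique (Petrov-Galerkin) projection
        new_sigma = np.sort(-np.linalg.eigvals(Ar).real)
        converged = np.max(np.abs(new_sigma - sigma)) < tol
        sigma = new_sigma
        if converged:
            break
    return Ar, sigma

# Stable diagonal test system (poles at -1..-10), reduced to order 3.
n = 10
A = -np.diag(np.arange(1.0, n + 1))
b = np.ones(n)
c = np.ones(n)
Ar, shifts = irka(A, b, c, r=3)
print("reduced order:", Ar.shape[0], "shifts:", np.round(shifts, 3))
```

For a stable symmetric system like this one, the iteration converges and the final shifts are the mirror images of the reduced model's poles, which is the H2-optimality condition IRKA targets.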

Keywords: model order reduction, descriptor systems, iterative rational Krylov algorithm, interpolatory model reduction, computational efficiency, projection methods, H₂-optimal model reduction

Procedia PDF Downloads 31
1295 Multistage Data Envelopment Analysis Model for Malmquist Productivity Index Using Grey's System Theory to Evaluate Performance of Electric Power Supply Chain in Iran

Authors: Mesbaholdin Salami, Farzad Movahedi Sobhani, Mohammad Sadegh Ghazizadeh

Abstract:

Evaluation of organizational performance is among the most important measures that help organizations and entities continuously improve their efficiency. Organizations can use existing data and the results of comparisons between the units under investigation to estimate their performance. The Malmquist Productivity Index (MPI) is an important index for evaluating overall productivity, as it considers technological development and technical efficiency at the same time. This article proposes a model based on a multistage MPI that accounts for limited data (Grey's theory). The model can evaluate the performance of units using limited and uncertain data in a multistage process. It was applied by the electricity market manager to Iran's electric power supply chain (EPSC), which involves uncertain data, to evaluate the performance of its actors. Solving the model showed improved accuracy in assessing the future performance of the units under investigation when Grey's system theory was used. The model can be applied to any case study in which the MPI is used and the data are limited or uncertain.
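The MPI itself is the geometric mean of two efficiency ratios measured against the period-t and period-t+1 frontiers, and it factors into an efficiency-change (catch-up) term and a technical-change (frontier-shift) term. A toy sketch; the four distance-function values below are invented, not the study's DEA results:

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist Productivity Index from four distance-function values,
    where d_a_b = efficiency of period-b data against the period-a frontier."""
    return math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))

def decompose(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Split the MPI into efficiency change and technical change."""
    eff_change = d_t1_t1 / d_t_t
    tech_change = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))
    return eff_change, tech_change

# Hypothetical DEA scores for one unit in periods t and t+1.
mpi = malmquist(d_t_t=0.80, d_t_t1=0.95, d_t1_t=0.75, d_t1_t1=0.90)
eff, tech = decompose(0.80, 0.95, 0.75, 0.90)
print(round(mpi, 4), round(eff * tech, 4))  # MPI equals eff_change * tech_change
```

An MPI above 1 indicates productivity growth between the two periods; the grey extension in the paper replaces the crisp scores with interval (grey) numbers, which is not reproduced here.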

Keywords: Malmquist index, Grey's theory, CCR model, network data envelopment analysis, Iran electric power supply chain

Procedia PDF Downloads 165
1294 Major Depressive Disorder: Diagnosis based on Electroencephalogram Analysis

Authors: Wajid Mumtaz, Aamir Saeed Malik, Syed Saad Azhar Ali, Mohd Azhar Mohd Yasin

Abstract:

In this paper, a technique based on electroencephalogram (EEG) analysis is presented for diagnosing major depressive disorder (MDD) in a population of MDD patients and healthy controls. EEG is a recognized clinical modality in applications such as seizure diagnosis, anesthesia monitoring, and detection of brain death or stroke; however, its usability for psychiatric illnesses such as MDD is less studied. Therefore, two groups of participants were recruited for this study: 1) MDD patients and 2) healthy controls. EEG data acquired from both groups were analyzed for inter-hemispheric asymmetry and the composite permutation entropy index (CPEI). To automate the process, the quantities derived from the EEG were used as inputs to classifiers such as logistic regression (LR) and support vector machine (SVM). The classification models were evaluated on a held-out test dataset, and their accuracy in separating MDD patients from controls, along with their sensitivities and specificities, is reported (LR = 81.7% and SVM = 81.5%). Based on the results, it is concluded that the derived measures are useful indicators for diagnosing MDD in a population that includes normal controls. The results also motivate exploring further measures for the same purpose.
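The CPEI builds on ordinal-pattern (permutation) entropy. A minimal sketch of plain permutation entropy, assuming embedding order m and lag 1 (the composite index combines such values across parameter settings, which is not reproduced here):

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, m=3):
    """Normalized permutation entropy of a 1-D signal for order m, lag 1.
    Returns 0 for perfectly ordered signals and values near 1 for white noise."""
    x = np.asarray(x)
    # Count the ordinal pattern (ranking) of every length-m window.
    patterns = Counter(
        tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)
    )
    total = sum(patterns.values())
    probs = [count / total for count in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)          # Shannon entropy of patterns
    return h / math.log(math.factorial(m))            # normalize by log(m!)

rng = np.random.default_rng(1)
print(permutation_entropy(np.arange(100.0)))       # monotonic signal -> 0.0
print(permutation_entropy(rng.normal(size=5000)))  # white noise -> near 1.0
```

Lower permutation entropy indicates a more regular EEG; measures of this family are what the classifiers in the study take as input features.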

Keywords: major depressive disorder, diagnosis based on EEG, EEG derived features, CPEI, inter-hemispheric asymmetry

Procedia PDF Downloads 546
1293 Towards an Environmental Knowledge System in Water Management

Authors: Mareike Dornhoefer, Madjid Fathi

Abstract:

Water supply and water quality are key problems for mankind now and, due to the increasing population, in the future. Management disciplines such as water, environment, and quality management therefore need to interact closely to establish a high level of water quality and to guarantee water supply in all parts of the world. Groundwater remediation is one aspect of this process. From a knowledge management perspective, complex ecological or environmental problems can only be solved if the different factors, the expert knowledge of various stakeholders, and the formal regulations on water, waste, or chemical management are interconnected in the form of a knowledge base. In general, knowledge management focuses on the processes of gathering and representing existing and new knowledge in a way that allows for the inference or deduction of knowledge where a problem solution or decision support is required. A knowledge base is not a mere data repository but a key element of a knowledge-based system, providing inference mechanisms to deduce further knowledge from existing facts and, in consequence, decision support. This paper introduces an environmental knowledge system for water management. The proposed system is part of a research concept called Green Knowledge Management. It applies semantic technologies and concepts such as ontologies and linked open data to interconnect different data and information sources about environmental aspects, in this case water quality, as well as background material enriching an established knowledge base. Examples of the aforementioned ecological or environmental factors threatening water quality include industrial pollution (e.g., leakage of chemicals), environmental changes (e.g., a rise in temperature), and floods, in which all kinds of waste are merged and transferred into natural water environments.
Water quality is usually determined by measuring different indicators (e.g., chemical or biological), which are gathered through laboratory testing, continuous monitoring equipment, or other measuring processes. During all of these processes, data are gathered and stored in different databases. The knowledge base is then established by interconnecting the data from these different sources and enriching their semantics. Experts may add their knowledge or experience of previous incidents and influencing factors. Querying and inference mechanisms are then applied to deduce coherence between indicators, predictive developments, or environmental threats. Relevant processes or steps of action may be modeled in the form of a rule-based approach. Overall, the environmental knowledge system supports the interconnection of information and the addition of semantics to create environmental knowledge about the water environment, its supply chain, and its quality. The proposed concept is a holistic approach that links to associated disciplines such as environmental and quality management. Quality indicators and quality management steps need to be considered, e.g., in the process and inference layers of the environmental knowledge system, thus integrating the aforementioned management disciplines in one water management application.
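The rule-based deduction step described above can be pictured as threshold rules firing over measured indicators. A toy sketch; the indicator names, thresholds, and conclusions are invented for illustration, and a real system would reason over ontologies and linked data rather than a Python dict:

```python
# Toy rule-based inference over water-quality indicators.
# Indicator names, thresholds, and conclusions are illustrative only.

RULES = [
    ("nitrate_mg_l", lambda v: v > 50, "possible agricultural runoff"),
    ("ph", lambda v: v < 6.5 or v > 8.5, "pH outside drinking-water range"),
    ("temperature_c", lambda v: v > 25, "elevated temperature: algae-bloom risk"),
]

def infer(measurements: dict) -> list:
    """Return the conclusion of every rule whose condition fires."""
    findings = []
    for indicator, condition, conclusion in RULES:
        value = measurements.get(indicator)
        if value is not None and condition(value):
            findings.append(conclusion)
    return findings

sample = {"nitrate_mg_l": 62.0, "ph": 7.1, "temperature_c": 27.5}
print(infer(sample))
```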

Keywords: water quality, environmental knowledge system, green knowledge management, semantic technologies, quality management

Procedia PDF Downloads 221
1292 Blood Volume Pulse Extraction for Non-Contact Photoplethysmography Measurement from Facial Images

Authors: Ki Moo Lim, Iman R. Tayibnapis

Abstract:

According to WHO estimates, 38 of the 56 million (68%) global deaths in 2012 were due to noncommunicable diseases (NCDs). One way to avert NCDs is early detection of disease. To that end, we developed the 'U-Healthcare Mirror', which measures vital signs such as heart rate (HR) and respiration rate without physical contact or the user's conscious involvement. To measure HR in the mirror, we used a digital camera that records red, green, and blue (RGB) discoloration from sequences of the user's facial images. We extracted the blood volume pulse (BVP) from the RGB discoloration, because the discoloration of the facial skin follows the BVP. We used blind source separation (BSS) to extract the BVP from the RGB discoloration and adaptive filters to remove noise, implementing both the BSS and the adaptive filters with the singular value decomposition (SVD) method. HR was estimated from the obtained BVP. We compared HR measurements obtained with our method and with a previous method based on independent component analysis (ICA) against a commercial oximeter. The experiment was conducted at distances of 30-110 cm and light intensities of 5-2000 lux, with seven measurements per condition. The estimated HR showed a mean error of 2.25 bpm and a Pearson correlation coefficient of 0.73, an improvement in accuracy over previous work. The optimal distance between the mirror and the user for HR measurement was 50 cm at medium light intensity, around 550 lux.
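The SVD-based separation and spectral HR estimate can be sketched on synthetic data. This is not the authors' pipeline (the sampling rate, signal model, and lack of adaptive filtering are assumptions); it only shows how a pulse component shared across three RGB traces yields an HR estimate:

```python
import numpy as np

fs = 30.0                       # assumed camera frame rate (Hz)
t = np.arange(0, 20, 1 / fs)    # 20 s of synthetic RGB traces
pulse = np.sin(2 * np.pi * 1.2 * t)   # 1.2 Hz pulse -> 72 bpm
rng = np.random.default_rng(2)

# Three channels: the pulse mixed in with different gains plus noise.
rgb = np.stack([g * pulse + 0.2 * rng.normal(size=t.size) for g in (0.7, 1.0, 0.5)])
rgb -= rgb.mean(axis=1, keepdims=True)   # remove per-channel DC level

# SVD as blind source separation: the leading right singular vector
# captures the component shared by all three channels, i.e. the BVP.
U, s, Vt = np.linalg.svd(rgb, full_matrices=False)
bvp = Vt[0]

# HR = frequency of the spectral peak within a plausible 0.7-3 Hz band.
freqs = np.fft.rfftfreq(bvp.size, d=1 / fs)
spec = np.abs(np.fft.rfft(bvp))
band = (freqs >= 0.7) & (freqs <= 3.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spec[band])]
print(f"estimated HR: {hr_bpm:.1f} bpm")
```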

Keywords: blood volume pulse, heart rate, photoplethysmography, independent component analysis

Procedia PDF Downloads 329
1291 Unstructured-Data Content Search Based on Optimized EEG Signal Processing and Multi-Objective Feature Extraction

Authors: Qais M. Yousef, Yasmeen A. Alshaer

Abstract:

Over the last few years, the amount of data available around the globe has increased rapidly. This has given rise to recent concepts such as big data and the Internet of Things, which have furnished a suitable solution for the availability of data all over the world. However, managing this massive amount of data remains a challenge due to its large variety of types and its wide distribution. Locating a required file on the first attempt has therefore become a difficult task, owing to the great similarity of names among different files distributed on the web, which negatively affects the accuracy and speed of search. This work presents a method that uses electroencephalography (EEG) signals to locate files based on their contents. Building on the concept of natural mind-wave processing, this work records the mind-wave signals of different people, analyzes them, extracts their most appropriate features using a multi-objective metaheuristic algorithm, and then classifies them using an artificial neural network to distinguish among files with similar names. The aim of this work is to provide the ability to find files based on their contents using human thoughts alone. Implementing this approach and testing it on real people proved its ability to find the desired files accurately within a noticeably shorter time and to retrieve them as the first choice for the user.

Keywords: artificial intelligence, data contents search, human active memory, mind wave, multi-objective optimization

Procedia PDF Downloads 175
1290 Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelength Transform

Authors: Bamidele Samson Alobalorun, Ifedotun Roseline Idowu

Abstract:

Biometric technologies use human body parts for unique and reliable identification based on physiological traits. The iris recognition system is a biometric method of identification; the human iris has discriminating characteristics that make the method efficient. To achieve this efficiency, the distinct features of the human iris must be extracted in order to generate accurate authentication of persons. In this study, an approach to iris recognition using a 2D Gabor filter for feature extraction is applied to iris templates. The 2D Gabor filter formulated the patterns that were used for training and were likewise passed to the Hamming distance matching technique for recognition. Results are compared for two iris image subjects with matching filter indices of 1, 2, 3, 4, and 5, based on the CASIA iris image database. By comparing the results for the two subjects, the computational time of the developed models, measured as training time and average testing time of the Hamming distance classifier, is reported, with a best recognition accuracy of 96.11%. Iris localization and segmentation are performed using Daugman's integro-differential operator, and normalization is confined to Daugman's rubber sheet model.
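The Hamming-distance matching stage can be sketched directly: two binary iris codes match when the fraction of disagreeing bits falls below a threshold. A minimal sketch; the 2048-bit code length and 0.32 threshold are common choices in the iris literature, not values from this study:

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two equal-length binary iris codes."""
    assert code_a.shape == code_b.shape
    return np.count_nonzero(code_a != code_b) / code_a.size

rng = np.random.default_rng(3)
enrolled = rng.integers(0, 2, size=2048)            # hypothetical 2048-bit iris code

same_eye = enrolled.copy()
flip = rng.choice(2048, size=100, replace=False)    # ~5% bit noise between captures
same_eye[flip] ^= 1

other_eye = rng.integers(0, 2, size=2048)           # unrelated iris code

THRESHOLD = 0.32   # typical accept/reject threshold in the iris literature
print(hamming_distance(enrolled, same_eye))    # well below threshold -> accept
print(hamming_distance(enrolled, other_eye))   # near 0.5 -> reject
```

Codes from the same eye differ only by capture noise, while codes from different eyes agree about half the time by chance, which is why a threshold around 0.3 separates the two cases so cleanly.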

Keywords: Daugman rubber sheet, feature extraction, Hamming distance, iris recognition system, 2D Gabor wavelet transform

Procedia PDF Downloads 65
1289 Applicability of Cameriere’s Age Estimation Method in a Sample of Turkish Adults

Authors: Hatice Boyacioglu, Nursel Akkaya, Humeyra Ozge Yilanci, Hilmi Kansu, Nihal Avcu

Abstract:

The strong relationship between the reduction in the size of the pulp cavity and increasing age has been reported in the literature. This relationship can be exploited to estimate an individual's age by measuring the pulp cavity size on dental radiographs, a non-destructive method. The purpose of this study is to develop a population-specific regression model for age estimation in a sample of Turkish adults by applying Cameriere's method to panoramic radiographs. The sample consisted of 100 panoramic radiographs of Turkish patients (40 men, 60 women) aged between 20 and 70 years. Pulp and tooth area ratios (AR) of the maxillary canines were measured by two maxillofacial radiologists, and the results were subjected to regression analysis. There were no statistically significant intra-observer or inter-observer differences. The correlation coefficient between age and the AR of the maxillary canines was -0.71, and the following regression equation was derived: Estimated Age = 77.365 - (351.193 × AR). The mean prediction error was 4 years, which is within acceptable error limits for age estimation. This shows that the pulp/tooth area ratio is a useful variable for assessing age with reasonable accuracy. Based on these results, it was concluded that Cameriere's method is suitable for dental age estimation and can be used in forensic procedures for Turkish adults.
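The derived regression equation can be applied directly; a minimal sketch using the coefficients reported above (the example AR value is invented, and the ±4-year mean prediction error applies to any estimate):

```python
def estimate_age(area_ratio: float) -> float:
    """Age estimate from the pulp/tooth area ratio (AR) of a maxillary canine,
    using the regression equation derived in this study."""
    return 77.365 - 351.193 * area_ratio

# Hypothetical measurement: pulp area is 10% of the tooth area.
print(round(estimate_age(0.10), 2))  # -> 42.25
```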

Keywords: age estimation by teeth, forensic dentistry, panoramic radiograph, Cameriere's method

Procedia PDF Downloads 450
1288 Classification of Potential Biomarkers in Breast Cancer Using Artificial Intelligence Algorithms and Anthropometric Datasets

Authors: Aref Aasi, Sahar Ebrahimi Bajgani, Erfan Aasi

Abstract:

Breast cancer (BC) continues to be the most frequent cancer in females and causes the highest number of cancer-related deaths among women worldwide. Inspired by recent advances in studying the relationship between different patient attributes and the disease, in this paper we investigate different classification methods for better diagnosis of BC in its early stages. Datasets from the University Hospital Centre of Coimbra were chosen, and several machine learning (ML)-based and neural network (NN)-based classifiers were studied. We selected favorable features among the nine attributes provided in the clinical dataset using a random forest algorithm. The dataset contains both healthy controls and BC patients, and glucose, BMI, resistin, and age proved to be the most important features, in that order. We then analyzed these features with various ML-based classifiers, including decision tree (DT), k-nearest neighbors (KNN), eXtreme Gradient Boosting (XGBoost), logistic regression (LR), Naive Bayes (NB), and support vector machine (SVM), along with an NN-based multi-layer perceptron (MLP) classifier. The results revealed that, among the different techniques, the SVM and MLP classifiers achieved the highest accuracies, 96% and 92%, respectively. These results indicate that the adopted procedure can be used effectively for the classification of cancer cells, and they encourage further experimental investigation, with more collected data, of other types of cancer.

Keywords: breast cancer, diagnosis, machine learning, biomarker classification, neural network

Procedia PDF Downloads 136
1287 Routing Medical Images with Tabu Search and Simulated Annealing: A Study on Quality of Service

Authors: Mejía M. Paula, Ramírez L. Leonardo, Puerta A. Gabriel

Abstract:

In telemedicine, the image repository service is important for increasing the accuracy of diagnostic support for medical personnel. This study compares two routing algorithms with respect to quality of service (QoS) in order to analyze their performance when uploading and/or downloading medical images. The study focuses on comparing Tabu Search with other heuristic and metaheuristic algorithms that improve QoS in telemedicine services in Colombia. The Tabu Search and Simulated Annealing algorithms were chosen for their wide applicability to this type of problem; QoS is measured using the following metrics: delay, throughput, jitter, and latency. Routing tests were carried out on ten 40 MB images in Digital Imaging and Communications in Medicine (DICOM) format. The tests ran for ten minutes each under different traffic conditions, for a total of 25 tests, from a server at Universidad Militar Nueva Granada (UMNG) in Bogotá, Colombia, to a remote user at Universidad de Santiago de Chile (USACH) in Chile. The results show that Tabu Search delivers better QoS performance than Simulated Annealing, optimizing the routing of medical images, a basic requirement for offering diagnostic image services in telemedicine.
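Tabu Search itself can be sketched on a toy routing problem: repeatedly move to the best neighboring route, forbid recently used moves for a few iterations (the tabu tenure), and keep the best route seen. The latency matrix below is invented; the real study measured delay, throughput, jitter, and latency on live links:

```python
import itertools

# Invented symmetric latency matrix (ms) between 5 network nodes.
LAT = [
    [0, 10, 42, 31, 25],
    [10, 0, 18, 40, 33],
    [42, 18, 0, 12, 27],
    [31, 40, 12, 0, 9],
    [25, 33, 27, 9, 0],
]

def route_cost(route):
    """Total latency along a route visiting the nodes in order."""
    return sum(LAT[a][b] for a, b in zip(route, route[1:]))

def tabu_search(start, iterations=100, tenure=5):
    current = list(start)
    best, best_cost = list(current), route_cost(current)
    tabu = {}  # move -> iteration until which it is forbidden
    for it in range(iterations):
        # Neighborhood: every route obtained by swapping two positions.
        candidates = []
        for i, j in itertools.combinations(range(len(current)), 2):
            nbr = current[:]
            nbr[i], nbr[j] = nbr[j], nbr[i]
            candidates.append(((i, j), nbr, route_cost(nbr)))
        candidates.sort(key=lambda c: c[2])
        # Take the best non-tabu move (aspiration: tabu allowed if it beats the best).
        for move, nbr, cost in candidates:
            if tabu.get(move, -1) < it or cost < best_cost:
                current = nbr
                tabu[move] = it + tenure
                if cost < best_cost:
                    best, best_cost = nbr[:], cost
                break
    return best, best_cost

route, cost = tabu_search([0, 1, 2, 3, 4])
print(route, cost)
```

The tabu list is what lets the search escape local minima that would trap a plain greedy descent; Simulated Annealing achieves the same goal by accepting worse moves with a temperature-controlled probability instead.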

Keywords: medical image, QoS, simulated annealing, Tabu search, telemedicine

Procedia PDF Downloads 219
1286 Experimental Validation of a Mathematical Model for Sizing End-of-Production-Line Test Benches for Electric Motors of Electric Vehicle

Authors: Emiliano Lustrissimi, Bonifacio Bianco, Sebastiano Caravaggi, Antonio Rosato

Abstract:

A mathematical framework has been designed to enhance the configuration of end-of-production-line (EOL) test benches used to assess the performance of electric motors (EMs) or axles intended for electric vehicles. The model predicts the behaviour of EOL test benches and electric motors/axles under various boundary conditions, eliminating the need for extensive physical testing and reducing the corresponding power consumption. The suggested model is versatile: it can be utilized across various types of electric motors or axles and adapted to different power ratings. The maximum performance to be guaranteed by the EMs according to the car maker's specifications is taken as the model input. The required performance of each main EOL test bench component is then calculated, and the corresponding systems available on the market are selected from manufacturers' catalogues. In this study, an EOL test bench was designed according to the proposed model outputs for testing a low-power (about 22 kW) electric axle. The performance of the designed EOL test bench was measured and used to validate the proposed model, assessing both the consistency of the constraints and the accuracy of the predicted electric demands. The comparison between experimental and predicted data exhibited reasonable agreement, demonstrating that, despite some discrepancies, the model gives an accurate representation of EOL test bench performance.
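The sizing logic, from guaranteed motor peak specifications to required bench component ratings, can be pictured with elementary power arithmetic. All numbers below are illustrative assumptions (peak torque/speed, efficiency, and safety margin are not the paper's 22 kW axle data):

```python
import math

# Hypothetical peak specifications guaranteed by the car maker.
peak_torque_nm = 70.0       # N*m
peak_speed_rpm = 3000.0     # rpm
drive_efficiency = 0.92     # assumed combined inverter + motor efficiency
margin = 1.2                # assumed 20% sizing safety margin

# Mechanical power at the worst-case corner point: P = T * omega.
omega = peak_speed_rpm * 2 * math.pi / 60          # rad/s
p_mech_kw = peak_torque_nm * omega / 1000          # kW

# Ratings the main bench components must cover, with margin.
dyno_rating_kw = margin * p_mech_kw                       # load machine (dynamometer)
supply_rating_kw = margin * p_mech_kw / drive_efficiency  # electrical supply

print(f"mechanical peak: {p_mech_kw:.1f} kW")
print(f"dyno rating:     {dyno_rating_kw:.1f} kW")
print(f"supply rating:   {supply_rating_kw:.1f} kW")
```

The paper's model covers far more than this corner-point calculation (thermal limits, duty cycles, catalogue matching), but each component rating ultimately traces back to such worst-case demands.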

Keywords: electric motors, electric vehicles, end-of-production-line test bench, mathematical model, field tests

Procedia PDF Downloads 51
1285 Principal Component Analysis on Colon Cancer Detection

Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti

Abstract:

Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system; lymphoma and carcinoma are the types of cancer that attack the human colon. Colon cancer causes about half a million deaths every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as low fiber consumption, rare exercise, and a lack of awareness of early detection, are factors behind the high number of colon cancer cases. The aim of this project is to produce a system that can detect and classify images as lymphoma, carcinoma, or normal. The designed system used 198 colon tissue pathology images: 66 images of lymphoma, 66 of carcinoma, and 66 of normal/healthy colon tissue. The system classifies colon cancer starting from image preprocessing, followed by feature extraction using principal component analysis (PCA) and classification using the k-nearest neighbor (k-NN) method. The preprocessing stages are resizing, converting the RGB image to grayscale, edge detection, and finally histogram equalization. Tests were performed with several k-NN input parameter settings. The result of this project is an image processing system that detects and classifies the type of colon cancer with high accuracy and low computation time.
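The PCA-then-k-NN pipeline can be sketched in a few lines on toy feature vectors. The data below are random stand-ins for preprocessed tissue images, and the three class labels are only illustrative; the real system extracts features from pathology slides:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "image feature" dataset: 3 classes, 20 samples each, 50 raw features.
centers = rng.normal(scale=5.0, size=(3, 50))
X = np.vstack([c + rng.normal(size=(20, 50)) for c in centers])
y = np.repeat([0, 1, 2], 20)   # 0=lymphoma, 1=carcinoma, 2=normal (illustrative)

# PCA via SVD of the centered data: project onto the top-5 principal components.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:5].T

def knn_predict(query, Z, y, k=3):
    """Majority vote among the k nearest training points in PCA space."""
    q = (query - mean) @ Vt[:5].T
    idx = np.argsort(np.linalg.norm(Z - q, axis=1))[:k]
    return int(np.argmax(np.bincount(y[idx])))

# A new sample drawn near the carcinoma center should be classified as class 1.
sample = centers[1] + rng.normal(size=50)
print(knn_predict(sample, Z, y))
```

PCA here plays the same role as in the paper: it compresses the high-dimensional image features into a few components before the distance-based k-NN vote, which keeps computation time low.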

Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principal component analysis

Procedia PDF Downloads 205
1284 DeepNIC: A Method to Transform Each Tabular Variable into an Independent Image Analyzable by Basic CNNs

Authors: Nguyen J. M., Lucas G., Ruan S., Digonnet H., Antonioli D.

Abstract:

Introduction: Deep learning (DL) is a very powerful tool for analyzing image data, but for tabular data it cannot compete with machine learning methods like XGBoost. The research question becomes: can tabular data be transformed into images that can be analyzed by simple CNNs (convolutional neural networks)? Will DL become the universal tool for data classification? Current solutions reposition the variables in a 2D matrix using their correlation proximity, obtaining a single image whose pixels are the variables. We implement a technology, DeepNIC, that instead produces an image for each variable, which can be analyzed by simple CNNs. Material and method: The 'ROP' (Regression OPtimized) model is a binary, atypical decision tree whose nodes are managed by a new artificial neuron, the Neurop. By positioning an artificial neuron in each node of the decision tree, an adjustment can be made over a theoretically infinite number of variables at each node. From this new decision tree whose nodes are artificial neurons, we created the concept of a 'Random Forest of Perfect Trees' (RFPT), which departs from Breiman's concepts by assembling very large numbers of small trees with no classification errors. From the results of the RFPT, we developed a family of 10 statistical information criteria, the Nguyen Information Criteria (NICs), which evaluate the predictive quality of a variable in three dimensions: performance, complexity, and multiplicity of solutions. A NIC is a probability that can be transformed into a grey level. The value of a NIC depends essentially on two hyperparameters used in the Neurops; by varying these two hyperparameters, we obtain a matrix of probabilities for each NIC. The 10 NICs can be combined with the AND, OR, and XOR functions, giving more than 100,000 combinations in total. For each variable, we thus obtain an image of at least 1166x1167 pixels.
The intensity of each pixel is proportional to the probability of the associated NIC, and its color depends on that NIC. This image contains considerable information about the ability of the variable to predict Y, depending on the presence or absence of the other variables. A basic CNN model was trained for supervised classification. Results: The first results are impressive. Using the public GSE22513 data (an omic dataset of markers of taxane sensitivity in breast cancer), DeepNIC outperformed other statistical methods, including XGBoost. The comparison still needs to be generalized over several databases. Conclusion: The ability to transform any tabular variable into an image makes it possible to merge image and tabular information in the same format, which opens up great perspectives in the analysis of metadata.

Keywords: tabular data, CNNs, NICs, DeepNICs, random forest of perfect trees, classification

Procedia PDF Downloads 125
1283 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network

Authors: Sandesh Achar

Abstract:

Personalized advertising holds greater potential for higher conversion rates than generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes, which impede the swift large-scale adoption of personalized advertising. As a data-driven approach, personalized advertising also requires consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personalities using a BiLSTM network. The framework was developed using the Myers-Briggs Type Indicator (MBTI) dataset. The BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert's innovative and straightforward approach has the potential to transform personalized advertising and foster its widespread adoption in marketing.
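The personality classifier described above can be sketched, under assumptions, as a minimal bidirectional LSTM in PyTorch. The vocabulary size, embedding dimension, and hidden dimension below are illustrative placeholders, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal bidirectional LSTM text classifier: embeds token ids,
    runs a BiLSTM, and maps the final forward/backward hidden states
    to 16 MBTI classes."""
    def __init__(self, vocab_size=10000, embed_dim=128,
                 hidden_dim=64, num_classes=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq, embed_dim)
        _, (h, _) = self.lstm(x)         # h: (2, batch, hidden_dim)
        h_cat = torch.cat([h[0], h[1]], dim=1)
        return self.fc(h_cat)            # (batch, 16) class logits

model = BiLSTMClassifier()
logits = model(torch.randint(0, 10000, (4, 50)))  # 4 posts, 50 tokens each
print(logits.shape)
```

In practice, the logits would be passed through a softmax and the argmax taken as the predicted MBTI category.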

Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP

Procedia PDF Downloads 44
1282 Association of Body Composition Parameters with Lower Limb Strength and Upper Limb Functional Capacity in Quilombola Remnants

Authors: Leonardo Costa Pereira, Frederico Santos Santana, Mauro Karnikowski, Luís Sinésio Silva Neto, Aline Oliveira Gomes, Marisete Peralta Safons, Margô Gomes De Oliveira Karnikowski

Abstract:

In Brazil, projections of population aging follow world projections: the birth rate tends to be surpassed by the mortality rate around the year 2045. Historically, the Black population of Brazil suffered for several centuries from the oppression of dominant classes. One group of Black Brazilians stands out with respect to territorial, historical, and social aspects: for centuries they isolated themselves in small communities in order to maintain their freedom and culture. The isolation of the Quilombola communities had socioeconomic effects as well as effects on the health of their members. Thus, the objective of the present study is to verify the association of body composition parameters with lower limb strength and upper limb functional capacity in Quilombola remnants. The research was approved by an ethics committee (1,771,159). Anthropometric evaluations of hip and waist circumference, body mass, and height were performed. To assess body composition, body mass (BM) and stature were related to generate the body mass index (BMI), and a dual-energy X-ray absorptiometry (DEXA) test was performed. The Timed Up and Go (TUG) test was used to evaluate functional capacity, and a one-repetition maximum test (1RM) for knee extension and handgrip (HG) was applied for strength analysis. Statistical analysis was performed with the SPSS 22.0 package. The Shapiro-Wilk normality test was performed and, according to its result, Pearson or Spearman correlation tests were adopted. The sample (n = 18) was composed of 66.7% female individuals with a mean age of 66.07 ± 8.95 years. The sample's body fat percentage (%BF) (35.65 ± 10.73) exceeds the recommendations for the age group, as do the anthropometric parameters of hip (90.91 ± 8.44 cm) and waist circumference (80.37 ± 17.5 cm).
The relationship between height (1.55 ± 0.1 m) and body mass (63.44 ± 11.25 kg) gave a BMI of 24.16 ± 7.09 kg/m², considered normal. TUG performance was 10.71 ± 1.85 s. The 1RM test yielded 46.67 ± 13.06 kg and the HG test 23.93 ± 7.96 kgf. The correlation analyses were characterized by a high frequency of significant correlations for the variables height, dominant arm mass (DAM), %BF, 1RM, and HG. Correlations were observed between HG and BM (r = 0.67, p = 0.005), height (r = 0.51, p = 0.004), and DAM (r = 0.55, p = 0.026). Lower limb strength correlated with BM (r = 0.69, p = 0.003), height (r = 0.62, p = 0.01), and DAM (r = 0.772, p = 0.001). We conclude that the simple spatial relationship between mass and height is not the only influence on predictive parameters of strength or functionality: verifying the condition of body composition is also important. For this population, height seems to be a good predictor of strength and body composition.
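The statistical procedure described (a Shapiro-Wilk normality check followed by Pearson or Spearman correlation) can be sketched in Python with SciPy. The data below are simulated stand-ins, not the study's measurements.

```python
import numpy as np
from scipy import stats

def correlate(x, y, alpha=0.05):
    """Choose Pearson (if both samples pass Shapiro-Wilk normality)
    or Spearman (otherwise); return (method, r, p)."""
    normal = (stats.shapiro(x).pvalue > alpha) and \
             (stats.shapiro(y).pvalue > alpha)
    if normal:
        r, p = stats.pearsonr(x, y)
        return "pearson", r, p
    r, p = stats.spearmanr(x, y)
    return "spearman", r, p

rng = np.random.default_rng(1)
height = rng.normal(1.55, 0.10, 18)          # simulated, n = 18 as in the study
grip = 20 * height + rng.normal(0, 1, 18)    # grip strength correlated with height
method, r, p = correlate(height, grip)
print(method, round(r, 2), p < 0.05)
```

The same helper can be applied pairwise across the anthropometric and strength variables to build the correlation matrix the abstract reports.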

Keywords: African Continental Ancestry Group, body composition, functional capacity, strength

Procedia PDF Downloads 276
1281 The Map of Cassini: An Accurate View of Current Border Between Spain and France

Authors: Barbara Polo Martin

Abstract:

During the 18th century, the border between Spain and France underwent various changes, primarily due to territorial agreements, wars, and treaties between the two nations and other European powers. For studying these changes, the Cassini maps remain valuable historical documents, offering a glimpse into the landscape and geography of 18th-century France and its neighboring regions, including the border between Spain and France. However, it is essential to recognize that these maps may not reflect modern political boundaries or territorial changes that have occurred since their creation. The project was initiated by King Louis XV in 1744 and continued by his successor, Louis XVI. The primary objective was to produce accurate maps of France, which would serve various purposes, including military, administrative, and scientific. The Cassini maps were groundbreaking for their time, as they were among the earliest attempts to create topographic maps on a national scale. They covered the entirety of France and were based on meticulous surveying and cartographic techniques. The maps featured precise geographic details, including elevation contours, rivers, roads, forests, and settlements. This study aims to analyze this rich and little-known cartography of France, to study the wealth of place names it offers, and to assess the accuracy of the delimitations created over time between the two empires, both historically and through a Geographic Information System. This study will offer deeper knowledge of the cartography that marks the beginning of topography in Europe.

Keywords: cartography, engineering, borders, Spain, France, Cassini

Procedia PDF Downloads 60
1280 Estimation of Consolidating Settlement Based on a Time-Dependent Skin Friction Model Considering Column Surface Roughness

Authors: Jiang Zhenbo, Ishikura Ryohei, Yasufuku Noriyuki

Abstract:

Improvement of soft clay deposits by the combination of surface stabilization and floating-type cement-treated columns is one of the most popular techniques worldwide. On the basis of a one-dimensional consolidation model, a time-dependent skin friction model for the column-soil interaction is proposed. The model describes the nonlinear relationship between column shaft shear stresses and the effective vertical pressure of the surrounding soil. The influence of column-soil surface roughness is represented by a roughness coefficient R, which plays an important role in the design of column length. Based on the homogenization method, part of the floating-type improved ground is treated as an unimproved portion whose length αH1 is defined as a time-dependent equivalent skin friction length. The compression settlement of this unimproved portion can be predicted using the soft clay parameters alone. Besides calculating the settlement of this composite ground, the load transfer mechanism is discussed using model tests. The proposed model is validated by comparison with calculations and laboratory results from model and ring shear tests, which indicate the suitability and accuracy of the solutions in this paper.

Keywords: floating type improved foundation, time-dependent skin friction, roughness, consolidation

Procedia PDF Downloads 468
1279 Modeling Aeration of Sharp Crested Weirs by Using Support Vector Machines

Authors: Arun Goel

Abstract:

The present paper investigates the prediction of the air entrainment rate and aeration efficiency of free over-fall jets issuing from a triangular sharp-crested weir using regression-based modelling. Empirical equations, support vector machine models (polynomial and radial basis function kernels), and linear regression techniques were applied to triangular sharp-crested weirs, relating the air entrainment rate and the aeration efficiency to the input parameters, namely drop height, discharge, and vertex angle. Good agreement was observed between the measured values and the values obtained using the empirical equations, the SVM (polynomial and RBF) models, and the linear regression techniques. The test results demonstrated that the SVM-based (polynomial and RBF) models, alongside the empirical equations and linear regression techniques, provided acceptable prediction of the measured values with reasonable accuracy in modelling the air entrainment rate and aeration efficiency of free over-fall jets issuing from a triangular sharp-crested weir. A sensitivity analysis was also performed to study the impact of each input parameter on the outputs, the air entrainment rate and the aeration efficiency.
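The SVM regressions with polynomial and RBF kernels can be sketched with scikit-learn. The synthetic relationship below between the inputs (drop height, discharge, vertex angle) and the target is an illustrative placeholder, not the paper's empirical equations.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: aeration efficiency as a smooth function of
# drop height (m), discharge (m^3/s), and vertex angle (deg).
rng = np.random.default_rng(0)
X = rng.uniform([0.2, 0.001, 30], [1.0, 0.01, 90], size=(200, 3))
y = 0.5 * X[:, 0] + 20 * X[:, 1] + 0.002 * X[:, 2] \
    + rng.normal(0, 0.01, 200)

models = {
    "poly": make_pipeline(StandardScaler(),
                          SVR(kernel="poly", degree=2, coef0=1,
                              C=10, epsilon=0.01)),
    "rbf": make_pipeline(StandardScaler(),
                         SVR(kernel="rbf", C=10, epsilon=0.01)),
}
for name, model in models.items():
    model.fit(X[:150], y[:150])             # train on 150 samples
    print(name, round(model.score(X[150:], y[150:]), 3))  # held-out R²
```

Feature scaling before the SVR is important here because the three inputs span very different numeric ranges.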

Keywords: air entrainment rate, dissolved oxygen, weir, SVM, regression

Procedia PDF Downloads 436
1278 A New Family of Integration Methods for Nonlinear Dynamic Analysis

Authors: Shuenn-Yih Chang, Chiu-LI Huang, Ngoc-Cuong Tran

Abstract:

A new family of structure-dependent integration methods, whose coefficients of the difference equation for the displacement increment are functions of the initial structural properties and the time-integration step size, is proposed in this work. This family of methods can simultaneously provide controllable numerical dissipation, an explicit formulation, and unconditional stability. In general, its numerical dissipation can be continuously controlled by a parameter, and zero damping is achievable. In addition, it can apply high-frequency damping to suppress or even remove the spurious oscillations of high-frequency modes, whereas the low-frequency modes are integrated very accurately owing to the almost zero damping for these modes. It is shown herein that the proposed family of methods can have exactly the same numerical properties as the HHT-α method for linear elastic systems. In addition, it still preserves the most important property of a structure-dependent integration method, namely an explicit formulation for each time step. Consequently, it can save huge computational effort in solving inertial problems compared to the HHT-α method. In fact, numerical experiments reveal that the CPU time consumed by the proposed family of methods is only about 1.6% of that consumed by the HHT-α method for a 125-DOF system, and this reduces to 0.16% for a 1000-DOF system. Apparently, the saving in computational effort is very significant.

Keywords: structure-dependent integration method, nonlinear dynamic analysis, unconditional stability, numerical dissipation, accuracy

Procedia PDF Downloads 639
1277 Impacts on the Modification of a Two-Blade Mobile on the Agitation of Newtonian Fluids

Authors: Abderrahim Sidi Mohammed Nekrouf, Sarra Youcefi

Abstract:

Fluid mixing plays a crucial role in numerous industries as it has a significant impact on the final product quality and performance. In certain cases, the circulation of viscous fluids presents challenges, leading to the formation of stagnant zones. To overcome this issue, stirring devices are employed for fluid mixing. This study focuses on a numerical analysis aimed at understanding the behavior of Newtonian fluids when agitated by a two-blade agitator in a cylindrical vessel. We investigate the influence of the agitator shape on fluid motion. Bi-blade agitators of this type are commonly used in the food, cosmetic, and chemical industries to agitate both viscous and non-viscous liquids. Numerical simulations were conducted using Computational Fluid Dynamics (CFD) software to obtain velocity profiles, streamlines, velocity contours, and the associated power number. The obtained results were compared with experimental data available in the literature, validating the accuracy of our numerical approach. The results clearly demonstrate that modifying the agitator shape has a significant impact on fluid motion. This modification generates an axial flow that enhances the efficiency of the fluid flow. The various velocity results convincingly reveal that the fluid is more uniformly agitated with this modification, resulting in improved circulation and a substantial reduction in stagnant zones.

Keywords: Newtonian fluids, numerical modeling, two-blade, CFD

Procedia PDF Downloads 78
1276 A Numerical Investigation of Total Temperature Probes Measurement Performance

Authors: Erdem Meriç

Abstract:

Measuring the total temperature of an air flow accurately is a very important requirement in the development phases of many industrial products, including gas turbines and rockets. Thermocouples are very practical devices for measuring temperature in such cases, but in high-speed and high-temperature flows, the temperature of the thermocouple junction may deviate considerably from the real flow total temperature due to the heat transfer mechanisms of convection, conduction, and radiation. To avoid errors in total temperature measurement, special probe designs that are experimentally characterized are used. In this study, a validation case, an experimental characterization of a specific class of total temperature probes, is selected from the literature to develop a numerical conjugate heat transfer analysis methodology for studying the total temperature probe flow field and solid temperature distribution. The validated conjugate heat transfer methodology is used to investigate the flow structures inside and around the probe and the effects of probe design parameters, such as the ratio between inlet and outlet hole areas and the probe tip geometry, on measurement accuracy. Lastly, a thermal model is constructed to account for errors in total temperature measurement for a specific class of probes in different operating conditions. The outcomes of this work can guide experimentalists in designing a very accurate total temperature probe and in quantifying the possible error for their specific case.
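The standard recovery-factor relation that underlies such probe characterizations can be sketched as follows; the Mach number and recovery factor values below are illustrative assumptions, not the study's results.

```python
def total_temperature(t_static, mach, gamma=1.4):
    """Ideal-gas total temperature from static temperature (K) and
    Mach number: Tt = Ts * (1 + (gamma - 1)/2 * M^2)."""
    return t_static * (1 + 0.5 * (gamma - 1) * mach ** 2)

def corrected_total_temperature(t_probe, t_static, recovery_factor):
    """Recover the flow total temperature from a probe reading, given
    an experimentally characterized recovery factor r defined by
        T_probe = T_static + r * (T_total - T_static)."""
    return t_static + (t_probe - t_static) / recovery_factor

t_static = 288.15                                 # K, sea-level standard
tt = total_temperature(t_static, mach=0.8)        # true total temperature
t_probe = t_static + 0.96 * (tt - t_static)       # probe with assumed r = 0.96
print(round(corrected_total_temperature(t_probe, t_static, 0.96), 2))
```

In practice the recovery factor itself varies with Mach and Reynolds number, which is exactly what the conjugate heat transfer analysis quantifies.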

Keywords: conjugate heat transfer, recovery factor, thermocouples, total temperature probes

Procedia PDF Downloads 134
1275 Defect Identification in Partial Discharge Patterns of Gas Insulated Switchgear and Straight Cable Joint

Authors: Chien-Kuo Chang, Yu-Hsiang Lin, Yi-Yun Tang, Min-Chiu Wu

Abstract:

With the trend of technological advancement, the harm caused by power outages is substantial, mostly due to problems in the power grid. This highlights the necessity for further improvement in the reliability of the power system, in which gas-insulated switchgear (GIS) and power cables play a crucial role. Long-term operation under high voltage can cause insulation materials in the equipment to crack, potentially leading to partial discharges. If these partial discharges (PD) can be analyzed, preventive maintenance and equipment replacement can be carried out, thereby improving the reliability of the power grid. This research diagnoses defects by identifying three different defects in GIS and three different defects in straight cable joints, for a total of six defect types. The measured partial discharge data are converted through phase analysis diagrams and pulse sequence analysis. Discharge features are extracted using convolutional image processing, and three different deep learning models, CNN, ResNet18, and MobileNet, are used for training and evaluation. Class Activation Mapping is utilized to interpret the black-box nature of the deep learning models, with each model achieving an accuracy rate of over 95%. Lastly, the overall model performance is enhanced through an ensemble-learning voting method.
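The ensemble-learning voting step can be sketched as a hard majority vote over the per-sample class predictions of the three models; the predictions below are hypothetical.

```python
import numpy as np

def majority_vote(predictions):
    """Hard-voting ensemble: each row of `predictions` holds one
    model's predicted class per sample; return the per-sample
    majority class (ties go to the lowest class index)."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # Count votes per class for each sample (column-wise bincount).
    votes = np.apply_along_axis(np.bincount, 0, preds,
                                minlength=n_classes)
    return votes.argmax(axis=0)

# Three hypothetical classifiers (e.g. CNN, ResNet18, MobileNet)
# voting over five samples across six defect classes (0-5).
cnn       = [0, 1, 2, 3, 4]
resnet    = [0, 1, 2, 5, 4]
mobilenet = [0, 2, 2, 3, 4]
print(majority_vote([cnn, resnet, mobilenet]).tolist())  # [0, 1, 2, 3, 4]
```

Soft voting (averaging the models' class probabilities) is a common alternative when per-class confidences are available.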

Keywords: partial discharge, gas-insulated switches, straight cable joint, defect identification, deep learning, ensemble learning

Procedia PDF Downloads 78
1274 Development of Partial Discharge Defect Recognition and Status Diagnosis System with Adaptive Deep Learning

Authors: Chien-kuo Chang, Bo-wei Wu, Yi-yun Tang, Min-chiu Wu

Abstract:

This paper proposes a power equipment diagnosis system based on partial discharge (PD), characterized by improved readability of experimental data and convenience of operation. The system integrates a variety of analysis programs with different data formats and programming languages and establishes a set of interfaces with an extensible structure, which is also helpful for subsequent maintenance and innovation. This study presents a case of integrating the developed Convolutional Neural Network (CNN) with this system, using the designed model architecture to simplify the complex training process. It is expected that the simplified training process can be used to establish an adaptive deep learning experimental structure: by selecting different test data for repeated training, the accuracy of the identification system can be enhanced. On this platform, the measurement status and partial discharge pattern of each piece of equipment can be checked in real time, real-time identification can be enabled, and various trained models can be used to carry out real-time partial discharge insulation defect identification and insulation state diagnosis. When electric power equipment enters a dangerous period, it can be replaced early to avoid unexpected electrical accidents.

Keywords: partial discharge, convolutional neural network, partial discharge analysis platform, adaptive deep learning

Procedia PDF Downloads 78
1273 Experimental Modeling and Simulation of Zero-Surface Temperature of Controlled Water Jet Impingement Cooling System for Hot-Rolled Steel Plates

Authors: Thomas Okechukwu Onah, Onyekachi Marcel Egwuagu

Abstract:

Zero-surface temperature, which controls the cooling profile, was modeled and used to investigate the effect of process parameters on hot-rolled steel plates. The parameters include impingement gaps of 40 mm to 70 mm; pipe diameters of 20 mm to 45 mm feeding a jet nozzle with 30 holes of 8 mm diameter each; and flow rates between 2.896×10⁻⁶ m³/s and 3.13×10⁻⁵ m³/s. The developed simulation model of the zero-surface temperature, upon validation, showed 99% prediction accuracy, with dimensional homogeneity established. The evaluated zero-surface temperature of controlled water jet impingement on steel plates showed a high cooling rate of 36.31 °C/s at an optimal cooling nozzle diameter of 20 mm, an impingement gap of 70 mm, and a flow rate of 1.77×10⁻⁵ m³/s, corresponding to a Reynolds number of 2758.586 in the turbulent regime. It was also deduced that as the nozzle diameter increased, the impingement gap decreased. This achieved a faster cooling rate to an optimum temperature of 300 °C irrespective of the starting surface cooling temperature. The results additionally showed that, with a tested-plate initial temperature of 550 °C, a controlled cooling temperature of about 160 °C produced film and nucleate boiling heat extraction that was particularly beneficial at the end of controlled cooling and influenced the microstructural properties of the test plates.
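For a circular nozzle fed at a volumetric flow rate Q, the Reynolds number follows from the mean velocity. The sketch below uses an assumed kinematic viscosity for warm water (the abstract does not state the value used), so the result is illustrative, of the same order as the reported 2758.586, rather than a reproduction of it.

```python
import math

def reynolds_number(flow_rate, diameter, kinematic_viscosity):
    """Pipe/nozzle-flow Reynolds number from volumetric flow rate Q:
    Re = v * D / nu, with mean velocity v = Q / (pi * D^2 / 4)."""
    velocity = flow_rate / (math.pi * diameter ** 2 / 4)
    return velocity * diameter / kinematic_viscosity

# Flow rate and nozzle diameter from the abstract; nu is an assumed
# value for warm water, so this is an order-of-magnitude check only.
re = reynolds_number(1.77e-5, 0.020, 5.0e-7)
print(round(re))
```

Because Re = 4Q/(πDν), the result is quite sensitive to the water temperature through ν, which roughly halves between 20 °C and 60 °C.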

Keywords: temperature, mechanistic-model, plates, impingements, dimensionless-numbers

Procedia PDF Downloads 47
1272 Iterative Estimator-Based Nonlinear Backstepping Control of a Robotic Exoskeleton

Authors: Brahmi Brahim, Mohammad Habibur Rahman, Maarouf Saad, Cristóbal Ochoa Luna

Abstract:

Repetitive training movements are an efficient method to improve the ability and movement performance of stroke survivors, helping them recover lost motor function and acquire new skills. The ETS-MARSE is a seven-degrees-of-freedom (DOF) exoskeleton robot developed to be worn on the lateral side of the right upper extremity to assist and rehabilitate patients with upper-extremity dysfunction resulting from stroke. In practice, rehabilitation activities are repetitive tasks, which makes assistive/robotic systems suffer from repetitive/periodic uncertainties and external perturbations, induced by the high-order dynamic model (seven DOF) and the interaction with human muscle, that impact the tracking performance and even the stability of the exoskeleton. To ensure the robustness and stability of the robot, a new nonlinear backstepping control was implemented, with designed tests performed by healthy subjects. In order to limit and reject the periodic/repetitive disturbances, an iterative estimator was integrated into the control of the system. The estimator does not need the precise dynamic model of the exoskeleton. Experimental results confirm the robustness and accuracy of the controller in dealing with external perturbations, and the effectiveness of the iterative estimator in rejecting repetitive/periodic disturbances.
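The idea of an iterative estimator that rejects a repetitive disturbance without a precise dynamic model can be illustrated on a toy first-order plant. This is a simplified stand-in, not the ETS-MARSE backstepping controller: across repetitions of the same periodic task, the disturbance estimate is refined from observed state increments.

```python
import numpy as np

def run_trial(d_hat, k=5.0, dt=0.01, steps=500):
    """One repetition: first-order plant x' = u + d(t) under feedback
    u = -k*x - d_hat(t); returns the state and input histories."""
    t = np.arange(steps) * dt
    d = np.sin(2 * np.pi * t)            # periodic (repetitive) disturbance
    x = np.zeros(steps + 1)
    u = np.zeros(steps)
    for i in range(steps):
        u[i] = -k * x[i] - d_hat[i]
        x[i + 1] = x[i] + dt * (u[i] + d[i])
    return x, u

# Iterative estimator: after each repetition, update the disturbance
# estimate from the observed state increments (no plant model needed
# beyond the input channel).
dt, steps, gamma = 0.01, 500, 0.8
d_hat = np.zeros(steps)
for trial in range(20):
    x, u = run_trial(d_hat, dt=dt, steps=steps)
    residual = np.diff(x) / dt - u       # equals d(t) minus estimation error
    d_hat = d_hat + gamma * (residual - d_hat)
x_final, _ = run_trial(d_hat, dt=dt, steps=steps)
print(round(float(np.abs(x_final).max()), 4))
```

With the estimate converged, the feedback loop only has to handle the non-repetitive part of the disturbance, which is the motivation for combining the estimator with the backstepping law.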

Keywords: backstepping control, iterative control, rehabilitation, ETS-MARSE

Procedia PDF Downloads 286
1271 DEKA-1 a Dose-Finding Phase 1 Trial: Observing Safety and Biomarkers using DK210 (EGFR) for Inoperable Locally Advanced and/or Metastatic EGFR+ Tumors with Progressive Disease Failing Systemic Therapy

Authors: Spira A., Marabelle A., Kientop D., Moser E., Mumm J.

Abstract:

Background: Both interleukin-2 (IL-2) and interleukin-10 (IL-10) have been extensively studied for their stimulatory function on T cells and their potential to obtain sustainable tumor control in RCC, melanoma, lung, and pancreatic cancer as monotherapy, as well as in combination with PD-1 blockers, radiation, and chemotherapy. While approved, IL-2 retains significant toxicity, preventing its widespread use. The significant efforts undertaken to uncouple IL-2 toxicity from its anti-tumor function have been unsuccessful, and the early-phase clinical safety observed with PEGylated IL-10 was not confirmed in a blinded Phase 3 trial. Deka Biosciences has engineered a novel molecule coupling wild-type IL-2 to a high-affinity variant of Epstein-Barr virus (EBV) IL-10 via a scaffold (scFv) that binds to epidermal growth factor receptors (EGFR). This patented molecule, termed DK210 (EGFR), is retained at high levels within the tumor microenvironment for days after dosing. In addition to its overlapping and non-redundant anti-tumor function, IL-10 reduces the risk of IL-2-mediated cytokine release syndrome and inhibits IL-2-mediated T regulatory cell proliferation. Methods: DK210 (EGFR) is being evaluated in an open-label, dose-escalation (Phase 1) study with 5 monotherapy dose levels (0.025-0.3 mg/kg) and expansion cohorts in combination with PD-1 blockers, radiation, or chemotherapy in patients with advanced solid tumors overexpressing EGFR. Key eligibility criteria include: 1) confirmed progressive disease on at least one line of systemic treatment; 2) EGFR overexpression or amplification documented in histology reports; 3) a window of at least 4 weeks or 5 half-lives since the last treatment; and 4) exclusion of subjects with long QT syndrome, multiple myeloma, multiple sclerosis, myasthenia gravis, or uncontrolled infectious, psychiatric, neurologic, or cancer disease. Plasma and tissue samples will be investigated for pharmacodynamic and predictive biomarkers and genetic signatures associated with IFN-gamma secretion, aiming to select subjects for treatment in Phase 2. Conclusion: Through the successful coupling of wild-type IL-2 with a high-affinity IL-10 targeted directly to the tumor microenvironment, DK210 (EGFR) has the potential to harness the known anti-cancer promise of IL-2 and IL-10 while reducing immunogenicity and toxicity risks, enabling safe concomitant cytokine treatment with other anti-cancer modalities.

Keywords: cytokine, EGFR overexpression, interleukin-2, interleukin-10, clinical trial

Procedia PDF Downloads 86
1270 Effects of Pore-Water Pressure on the Motion of Debris Flow

Authors: Meng-Yu Lin, Wan-Ju Lee

Abstract:

Pore-water pressure, which mediates effective stress and shear strength at grain contacts, has a great influence on the motion of debris flows. The factors that control the diffusion of excess pore-water pressure therefore play very important roles in debris-flow motion. This research investigates these effects by solving numerically for the distribution of pore-water pressure in the unsteady, surging motion of a debris flow. The governing equations are the depth-averaged equations for the motion of debris-flow surges coupled with the one-dimensional diffusion equation for excess pore-water pressure. The pore-pressure diffusion equation is solved using a Fourier series, which may improve the accuracy of the solution. The motion of the debris-flow surge is modelled using a Lagrangian particle method. From the computational results, the effects of the pore-pressure diffusivity and the initial excess pore pressure on the formation of debris-flow surges are investigated. The results show that the presence of pore water can increase surge velocities and thereby change the depth-distribution profiles. Due to the linear distribution of the vertical component of the pore-water velocity, pore pressure dissipates rapidly near the bottom and forms a parabolic distribution in the vertical direction. Increases in the pore-pressure diffusivity cause the pore pressures to decay more rapidly and thus decrease the mobility of the surge.
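As a minimal sketch of a Fourier-series solution of the 1D pore-pressure diffusion equation, consider a layer drained at both ends with a uniform initial excess pressure; the boundary conditions and parameter values below are illustrative, not the paper's.

```python
import numpy as np

def excess_pore_pressure(z, t, p0, H, D, n_terms=100):
    """Fourier-series solution of dp/dt = D * d2p/dz2 on 0 <= z <= H
    with drained ends p(0) = p(H) = 0 and uniform initial excess
    pressure p0 (only odd sine modes contribute)."""
    z = np.asarray(z, dtype=float)
    p = np.zeros_like(z)
    for m in range(1, 2 * n_terms, 2):          # odd modes only
        p += (4 * p0 / (m * np.pi)) * np.sin(m * np.pi * z / H) \
             * np.exp(-(m * np.pi / H) ** 2 * D * t)
    return p

H, D, p0 = 1.0, 1e-3, 10.0                      # m, m^2/s, kPa (toy values)
z = np.linspace(0, H, 51)
early = excess_pore_pressure(z, t=1.0, p0=p0, H=H, D=D)
late = excess_pore_pressure(z, t=200.0, p0=p0, H=H, D=D)
print(round(float(early.max()), 2), round(float(late.max()), 2))
```

The exponential factor shows directly why a larger diffusivity D makes the excess pressure, and hence the surge mobility, decay faster.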

Keywords: debris flow, diffusion, Lagrangian particle method, pore-pressure diffusivity, pore-water pressure

Procedia PDF Downloads 143