Search results for: elemental graph data model (EGDM)
35139 COVID_ICU_BERT: A Fine-Tuned Language Model for COVID-19 Intensive Care Unit Clinical Notes
Authors: Shahad Nagoor, Lucy Hederman, Kevin Koidl, Annalina Caputo
Abstract:
Doctors’ notes reflect their impressions, attitudes, clinical sense, and opinions about patients’ conditions and progress, as well as other information that is essential for doctors’ daily clinical decisions. Despite their value, clinical notes are insufficiently researched within the language processing community. Automatically extracting information from unstructured text data is known to be a more difficult task than dealing with structured information such as vital physiological signs, images, and laboratory results. The aim of this research is to investigate how Natural Language Processing (NLP) and machine learning techniques applied to clinician notes can assist doctors’ decision-making in the Intensive Care Unit (ICU) for coronavirus disease 2019 (COVID-19) patients. The hypothesis is that clinical outcomes such as survival or mortality can be useful in informing the judgement of clinical sentiment in ICU clinical notes. This paper makes two contributions: first, we introduce COVID_ICU_BERT, a fine-tuned version of clinical transformer models that can reliably predict clinical sentiment for notes of COVID-19 patients in the ICU. We train the model on clinical notes for COVID-19 patients, a type of note not previously seen by clinicalBERT or Bio_Discharge_Summary_BERT. The model based on clinicalBERT achieves higher predictive accuracy (Acc 93.33%, AUC 0.98, and precision 0.96). Second, we perform data augmentation using clinical contextual word embeddings based on a pre-trained clinical model to balance the samples in each class of the data (survived vs. deceased patients). Data augmentation improves the prediction accuracy slightly (Acc 96.67%, AUC 0.98, and precision 0.92).
Keywords: BERT fine-tuning, clinical sentiment, COVID-19, data augmentation
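The augmentation step can be sketched with off-the-shelf tooling. Below is a minimal, illustrative sketch (not the authors' code) of contextual augmentation via masked-token replacement, assuming the HuggingFace transformers library and the publicly available Bio_ClinicalBERT checkpoint; the model name and sampling scheme are assumptions.

```python
# Minimal sketch of contextual word-embedding augmentation for the minority
# class. The Bio_ClinicalBERT checkpoint is an assumption, not the authors'
# exact setup.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="emilyalsentzer/Bio_ClinicalBERT")

def augment(note: str, n_variants: int = 3) -> list[str]:
    """Mask one random token and let the clinical LM propose replacements."""
    tokens = note.split()
    if len(tokens) < 2:
        return []
    i = random.randrange(len(tokens))
    masked = " ".join(tokens[:i] + [fill_mask.tokenizer.mask_token] + tokens[i + 1:])
    return [pred["sequence"] for pred in fill_mask(masked)[:n_variants]]

print(augment("patient remains intubated and sedated overnight"))
```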
Procedia PDF Downloads 204
35138 Facility Anomaly Detection with Gaussian Mixture Model
Authors: Sunghoon Park, Hank Kim, Jinwon An, Sungzoon Cho
Abstract:
Internet of Things allows one to collect data from facilities, which are then used to monitor them and even predict malfunctions in advance. Conventional quality control methods focus on setting a normal range on a sensor value, defined between a lower control limit and an upper control limit, and declaring as an anomaly anything falling outside it. However, interactions among sensor values are ignored, leading to suboptimal performance. We propose a multivariate approach which takes many sensor values into account at the same time. In particular, a Gaussian Mixture Model is used, trained to maximize the likelihood value using the Expectation-Maximization algorithm. The number of Gaussian component distributions is determined by the Bayesian Information Criterion. The negative log-likelihood value is used as an anomaly score. The actual usage scenario is as follows: for each instance of sensor values from a facility, an anomaly score is computed; if it is larger than a threshold, an alarm goes off and a human expert intervenes and checks the system. Real-world data from a building energy system were used to test the model.
Keywords: facility anomaly detection, gaussian mixture model, anomaly score, expectation maximization algorithm
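A minimal sketch of this pipeline, assuming scikit-learn: the component count is chosen by BIC, the negative log-likelihood of each multivariate reading is its anomaly score, and the alarm threshold (here the 99th percentile of training scores) is an illustrative assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))          # stand-in for normal sensor data

models = [GaussianMixture(n_components=k, random_state=0).fit(X_train)
          for k in range(1, 8)]
gmm = min(models, key=lambda m: m.bic(X_train))  # BIC picks the component count

scores_train = -gmm.score_samples(X_train)       # negative log-likelihood
threshold = np.percentile(scores_train, 99)

x_new = rng.normal(size=(1, 5)) + 4.0            # a shifted, anomalous reading
if -gmm.score_samples(x_new)[0] > threshold:
    print("alarm: anomaly score above threshold")
```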
Procedia PDF Downloads 270
35137 Experimental and Numerical Analyses of Tehran Research Reactor
Authors: A. Lashkari, H. Khalafi, H. Khazeminejad, S. Khakshourniya
Abstract:
In this paper, a numerical model is presented. The model is used to analyze steady-state thermo-hydraulics and a reactivity insertion transient in TRR reference cores. The model predictions are compared with experiments and PARET code results. The model uses the piecewise constant and lumped parameter methods for the coupled point kinetics and thermal-hydraulics modules, respectively. The advantages of the piecewise constant method are simplicity, efficiency, and accuracy. A main criterion for the applicability range of this model is that the exit coolant temperature remains below the saturation temperature, i.e., no bulk boiling occurs in the core. The calculated values of power and coolant temperature, in the steady state and in the positive reactivity insertion scenario, are in good agreement with the experimental values. Hence, the model is a useful tool for the transient analysis of most research reactors encountered in practice. The main objective of this work is to use simple calculation methods and benchmark them against experimental data. This model can be used for training purposes.
Keywords: thermal-hydraulic, research reactor, reactivity insertion, numerical modeling
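For illustration, the point kinetics half of such a coupled model can be sketched in a few lines. The following solves the one-delayed-group point kinetics equations for a step reactivity insertion with SciPy; all numerical values (beta, Lambda, lambda, rho) are generic textbook assumptions, not TRR data.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, Lambda, lam = 0.0065, 1e-4, 0.08   # delayed fraction, generation time, decay const
rho = 0.001                              # step reactivity insertion (delta-k/k)

def point_kinetics(t, y):
    P, C = y                             # power and delayed-neutron precursor
    dP = (rho - beta) / Lambda * P + lam * C
    dC = beta / Lambda * P - lam * C
    return [dP, dC]

# start at equilibrium: C0 = beta * P0 / (Lambda * lam)
P0 = 1.0
sol = solve_ivp(point_kinetics, (0, 10), [P0, beta * P0 / (Lambda * lam)],
                dense_output=True, rtol=1e-8)
print("relative power after 10 s:", sol.y[0, -1])
```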
Procedia PDF Downloads 400
35136 The Development of Research Based Model to Enhance Critical Thinking, Cognitive Skills and Culture and Local Wisdom Knowledge of Undergraduate Students
Authors: Nithipattara Balsiri
Abstract:
The purpose of this research was to develop an instructional model using research-based learning to enhance the critical thinking, cognitive skills, and culture and local wisdom knowledge of undergraduate students. The sample consisted of 307 undergraduate students. Critical thinking and cognitive skills tests were employed for data collection. Second-order confirmatory factor analysis, t-tests, and one-way analysis of variance were employed for data analysis using the SPSS and LISREL programs. The major research results were as follows: 1) the instructional model using research-based learning to enhance critical thinking, cognitive skills, and culture and local wisdom knowledge should consist of 6 sequential steps, namely (1) setting the research problem, (2) setting the research hypothesis, (3) data collection, (4) data analysis, (5) drawing the research conclusion, and (6) applying the results to problem solving; and 2) after the treatment, undergraduate students achieved higher scores in critical thinking and cognitive skills than before the treatment at the 0.05 level of significance.
Keywords: critical thinking, cognitive skills, culture and local wisdom knowledge
Procedia PDF Downloads 364
35135 Comparing Performance of Neural Network and Decision Tree in Prediction of Myocardial Infarction
Authors: Reza Safdari, Goli Arji, Robab Abdolkhani, Maryam Zahmatkeshan
Abstract:
Background and purpose: Cardiovascular diseases are among the most common diseases in all societies. The most important step in minimizing myocardial infarction and its complications is to minimize its risk factors. The amount of medical data is growing rapidly, and medical data mining has great potential for transforming these data into information. Using data mining techniques to generate predictive models for identifying those at risk is very helpful for reducing the effects of the disease. The present study aimed to collect data related to risk factors of myocardial infarction from patients’ medical records and to develop predictive models using data mining algorithms. Methods: The present work was an analytical study conducted on a database containing 350 records. Data were related to patients admitted to Shahid Rajaei specialized cardiovascular hospital, Iran, in 2011. Data were collected using a four-section data collection form. Data analysis was performed using SPSS and Clementine version 12. Seven predictive algorithms and one algorithm-based model for predicting association rules were applied to the data. Accuracy, precision, sensitivity, specificity, as well as positive and negative predictive values were determined, and the final model was obtained. Results: Five parameters, including hypertension, DLP, tobacco smoking, diabetes, and A+ blood group, were the most critical risk factors of myocardial infarction. Among the models, the neural network model was found to have the highest sensitivity, indicating its ability to successfully diagnose the disease. Conclusion: Risk prediction models have great potential in facilitating the management of a patient with a specific disease. Therefore, health interventions or changes in lifestyle can be recommended based on these models to improve the health of individuals at risk.
Keywords: decision trees, neural network, myocardial infarction, data mining
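The evaluation criteria named above can be computed directly from a binary confusion matrix. A minimal sketch, assuming scikit-learn and synthetic placeholder labels (the study itself used SPSS and Clementine):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))          # recall for the disease class
print("specificity:", tn / (tn + fp))
print("PPV        :", tp / (tp + fp))          # positive predictive value
print("NPV        :", tn / (tn + fn))          # negative predictive value
```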
Procedia PDF Downloads 429
35134 Modeling Activity Pattern Using XGBoost for Mining Smart Card Data
Authors: Eui-Jin Kim, Hasik Lee, Su-Jin Park, Dong-Kyu Kim
Abstract:
Smart-card data are expected to provide information on activity patterns as an alternative to conventional person-trip surveys. The focus of this study is to propose a method that uses person-trip surveys to supplement smart-card data, which do not contain the purpose of each trip. We selected only the features available in smart-card data, such as spatiotemporal information on the trip and geographic information system (GIS) data near the stations, to train on the survey data. XGBoost, a state-of-the-art tree-based ensemble classifier, was used to train on data from multiple sources. This classifier uses a more regularized model formalization to control over-fitting and shows very fast execution times with good performance. The validation results showed that the proposed method efficiently estimated the trip purpose. GIS data of the station and duration of stay at the destination were significant features in modeling trip purpose.
Keywords: activity pattern, data fusion, smart-card, XGBoost
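A minimal sketch of the classification step, assuming the xgboost package; the feature set and the synthetic data below are illustrative assumptions standing in for the smart-card and GIS features.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 24, n),        # boarding hour
    rng.uniform(0, 60, n),         # in-vehicle time (min)
    rng.uniform(0, 600, n),        # duration of stay at destination (min)
    rng.integers(0, 5, n),         # GIS land-use class near the station
])
y = rng.integers(0, 4, n)          # trip purpose label from the survey

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    reg_lambda=1.0)   # L2 regularization against over-fitting
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```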
Procedia PDF Downloads 244
35133 Improving the Performance of Requisition Document Online System for Royal Thai Army by Using Time Series Model
Authors: D. Prangchumpol
Abstract:
This research presents a method for forecasting requisition document demands of military units, using exponential smoothing methods to analyze the data. The data used in the forecast are actual requisition document data of The Adjutant General Department. The forecasting results showed that Holt–Winters’ trend and seasonality method with α=0.1, β=0, γ=0 is appropriate and matches the requisition of documents. In addition, the researcher has developed a requisition online system to improve the performance of requisition document handling at The Adjutant General Department, while also ensuring that the operation can be checked.
Keywords: requisition, holt–winters, time series, royal thai army
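A minimal sketch of Holt–Winters forecasting with the reported smoothing parameters (α=0.1, β=0, γ=0), assuming statsmodels; the monthly demand series is a synthetic placeholder for the department's requisition data.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
t = np.arange(48)
demand = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, 48)

model = ExponentialSmoothing(demand, trend="add", seasonal="add",
                             seasonal_periods=12)
fit = model.fit(smoothing_level=0.1, smoothing_trend=0.0,
                smoothing_seasonal=0.0)
print(fit.forecast(6))             # next six periods of requisition demand
```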
Procedia PDF Downloads 307
35132 Ontology Mapping with R-GNN for IT Infrastructure: Enhancing Ontology Construction and Knowledge Graph Expansion
Authors: Andrey Khalov
Abstract:
The rapid growth of unstructured data necessitates advanced methods for transforming raw information into structured knowledge, particularly in domain-specific contexts such as IT service management and outsourcing. This paper presents a methodology for automatically constructing domain ontologies using the DOLCE framework as the base ontology. The research focuses on expanding ITIL-based ontologies by integrating concepts from ITSMO, followed by the extraction of entities and relationships from domain-specific texts through transformers and statistical methods like formal concept analysis (FCA). In particular, this work introduces an R-GNN-based approach for ontology mapping, enabling more efficient entity extraction and ontology alignment with existing knowledge bases. Additionally, the research explores transfer learning techniques using pre-trained transformer models (e.g., DeBERTa-v3-large) fine-tuned on synthetic datasets generated via large language models such as LLaMA. The resulting ontology, termed IT Ontology (ITO), is evaluated against existing methodologies, highlighting significant improvements in precision and recall. This study advances the field of ontology engineering by automating the extraction, expansion, and refinement of ontologies tailored to the IT domain, thus bridging the gap between unstructured data and actionable knowledge.
Keywords: ontology mapping, knowledge graphs, R-GNN, ITIL, NER
Procedia PDF Downloads 13
35131 Exploring Disruptive Innovation Capacity Effects on Firm Performance: An Investigation in Industries 4.0
Authors: Selma R. Oliveira, E. W. Cazarini
Abstract:
Recently, studies have referenced innovation as a key factor affecting the performance of firms. Companies make use of their innovative capacities to achieve sustainable competitive advantage. In this perspective, the objective of this paper is to contribute to innovation planning policies in Industry 4.0. Thus, this paper examines the effect of disruptive innovation capacity on firm performance in Europe. The procedure was organized in the following phases: Phase 1: determination of the conceptual model; and Phase 2: verification of the conceptual model. The research was initially conducted based on the specialized literature, from which the data regarding the constructs/structure and content were extracted in order to build the model. The research involved the intervention of experts knowledgeable about the object studied, selected by technical-scientific criteria. The data were extracted using an assessment matrix. To reduce subjectivity in the results achieved, the following methods were used complementarily and in combination: multicriteria analysis, multivariate analysis, psychometric scaling, and neurofuzzy technology. The results were satisfactory, validating the modeling approach.
Keywords: disruptive innovation, capacity, performance, Industry 4.0
Procedia PDF Downloads 164
35130 PM Air Quality of Windsor Regional Scale Transport’s Impact and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
This paper maps an air quality model to engineering of the industrial system that is ultimately utilized in an extensive range of energy systems, distribution resources, and end-user technologies. The model determines the contribution of long-range transport patterns: an area source can either be traced from a 48-hour backward trajectory model or remotely described from background measurement data on those days. The trajectory model is run within stable conditions and quite constant parameters of atmospheric pressure at most times of the year. Air parcel trajectories are necessary for estimating the long-range transport of pollutants and other chemical species, and they provide a better understanding of airflow patterns. Since a large amount of meteorological data and a great number of calculations are required to derive a trajectory, it is very useful to apply the HYSPLIT model to locate the areas and boundaries that influence air quality at the regional location of Windsor. Two-day backward trajectories are modeled at high and low concentration measurements below and above the benchmark that delimits the areas influencing air quality measurement levels. The benchmark level is taken as 30 μg/m³, the moderate level for the Ontario region. Thereby, the air quality model incorporates a midpoint concept between biotic and abiotic components to broaden the scope of impact quantification. The resulting theories of environmental obligation suggest either a recommendation or a decision on what legislation should achieve in mitigation measures for the impact of air emissions.
Keywords: air quality, management systems, environmental impact assessment, industrial ecology, climate change
Procedia PDF Downloads 246
35129 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site which is to be used in future planning and development (P&D) or can be used for further examination, exploration, research, and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; they are also time-consuming and labor-intensive and offer less precision with limited data. In comparison, the advanced technique saves manpower and provides more precise output with a wide variety of multiple data sets. In this experimentation, the aerial photogrammetry technique is used: a UAV flies over an area, captures geocoded images, and a three-dimensional model (3-D model) is produced. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. Furthermore, the raw data collected by the UAV and DGPS are processed in various digital image processing programs and computer-aided design software. As output, we obtain a dense point cloud, a digital elevation model (DEM), and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we get a digital map of the area to be surveyed. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
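The ground sampling distance mentioned among the flight parameters follows a standard relation between sensor geometry and flight altitude. A minimal sketch of that calculation; the camera parameters are illustrative assumptions, not the equipment used in this study:

```python
def gsd_cm_per_px(sensor_width_mm: float, focal_length_mm: float,
                  flight_altitude_m: float, image_width_px: int) -> float:
    """GSD = (sensor width * altitude) / (focal length * image width)."""
    return (sensor_width_mm * flight_altitude_m * 100.0) / (
        focal_length_mm * image_width_px)

# e.g. a 13.2 mm sensor, 8.8 mm lens, 5472 px wide images, flown at 100 m
print(f"{gsd_cm_per_px(13.2, 8.8, 100.0, 5472):.2f} cm/px")  # about 2.74 cm/px
```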
Procedia PDF Downloads 29
35128 A Mutually Exclusive Task Generation Method Based on Data Augmentation
Authors: Haojie Wang, Xun Li, Rui Yin
Abstract:
In order to solve memorization overfitting in the model-agnostic meta-learning (MAML) algorithm, a method of generating mutually exclusive tasks based on data augmentation is proposed. This method generates a mutually exclusive task by mapping one feature of the data to multiple labels, so that the generated task is inconsistent with the data distribution in the initial dataset. Because generating mutually exclusive tasks for all data would produce a large amount of invalid data and, in the worst case, lead to exponential growth of computation, this paper also proposes a key-data extraction method that extracts only part of the data to generate the mutually exclusive tasks. The experiments show that the method of generating mutually exclusive tasks can effectively solve memorization overfitting in the meta-learning MAML algorithm.
Keywords: mutex task generation, data augmentation, meta-learning, text classification
Procedia PDF Downloads 141
35127 A Posteriori Trading-Inspired Model-Free Time Series Segmentation
Authors: Plessen Mogens Graf
Abstract:
Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free in that no model prescriptions for segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and it is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.
Keywords: time series segmentation, model-free, trading-inspired, multivariate data
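A minimal single-channel sketch of the idea (not the paper's exact algorithm): after normalization, segment boundaries are placed at the peaks and valleys a trader acting a posteriori would trade at, with the linear transaction cost eps acting as a noise filter that ignores swings smaller than eps.

```python
import numpy as np

def segment_points(x: np.ndarray, eps: float) -> list[int]:
    """Return indices of confirmed peaks/valleys; eps filters small swings."""
    x = (x - x.mean()) / (x.std() + 1e-12)   # channelwise normalization
    pivots, direction = [0], 0               # +1 hunting a peak, -1 a valley
    i_max, i_min = 0, 0                      # running extremes since last pivot
    for t in range(1, len(x)):
        if x[t] > x[i_max]:
            i_max = t
        if x[t] < x[i_min]:
            i_min = t
        if direction >= 0 and x[i_max] - x[t] > eps:     # fell off a peak
            if pivots[-1] != i_max:
                pivots.append(i_max)
            direction, i_min = -1, t
        elif direction <= 0 and x[t] - x[i_min] > eps:   # rose off a valley
            if pivots[-1] != i_min:
                pivots.append(i_min)
            direction, i_max = 1, t
    return pivots

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(size=300))     # surrogate "stock price" channel
print(segment_points(prices, eps=0.5))
```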
Procedia PDF Downloads 133
35126 Using Historical Data for Stock Prediction
Authors: Sofia Stoica
Abstract:
In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-nearest neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed best, with a testing error of 0.18%. Interestingly, the linear model performed second best, with a testing error of 0.73%. These results show that using historical data is enough to obtain high accuracy, and a simple algorithm like linear regression achieves performance similar to more sophisticated models while taking less time and fewer resources to implement.
Keywords: finance, machine learning, opening price, stock market
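A minimal sketch of the linear baseline, assuming scikit-learn: the next closing price is predicted from the previous five closes, with a chronological train/test split and a percentage testing error. The synthetic price series and the five-lag window are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
close = 150 + np.cumsum(rng.normal(0, 1, 1260))   # ~5 years of trading days

lags = 5
X = np.column_stack([close[i:len(close) - lags + i] for i in range(lags)])
y = close[lags:]

split = int(0.8 * len(y))                         # chronological split
model = LinearRegression().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
print(f"testing error: {mape:.2f}%")
```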
Procedia PDF Downloads 187
35125 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
In quality-of-life research, health service utilization is an important determinant of medical resource expenditure on colorectal cancer (CRC) care; a better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models can be used, which account for overdispersion or extra zero counts. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the Approximate Maximum Likelihood Estimate (AMLE) can be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, whether or not measurement error in covariates is considered, existing tests and our proposed test all imply that H0 should be rejected with a p-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without considering measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will give statistically more reliable and precise information.
Keywords: count data, measurement error, score test, zero inflation
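For context, the classical score test for zero inflation in an intercept-only Poisson model (attributed to van den Broek, 1995), ignoring measurement error, can be sketched as follows; the paper's contribution extends this style of test to error-prone covariates, and the statistic's exact form here is our assumption of the standard version.

```python
import numpy as np
from scipy.stats import chi2

def score_test_zero_inflation(y: np.ndarray) -> tuple[float, float]:
    n, ybar = len(y), y.mean()
    p0 = np.exp(-ybar)                    # fitted P(Y=0) under Poisson MLE
    n0 = np.sum(y == 0)                   # observed zeros
    s = (n0 - n * p0) ** 2 / (n * p0 * (1 - p0) - n * ybar * p0 ** 2)
    return s, chi2.sf(s, df=1)            # statistic and p-value, chi2(1)

rng = np.random.default_rng(0)
y = np.where(rng.uniform(size=500) < 0.3, 0, rng.poisson(1.33, 500))  # ZI data
stat, pval = score_test_zero_inflation(y)
print(f"score statistic = {stat:.1f}, p-value = {pval:.2g}")
```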
Procedia PDF Downloads 286
35124 Exploring the Activity Fabric of an Intelligent Environment with Hierarchical Hidden Markov Theory
Authors: Chiung-Hui Chen
Abstract:
The Internet of Things (IoT) was designed for widespread convenience. With smart tags and the sensing network, a large quantity of dynamic information is immediately presented in the IoT. Through internal communication and interaction, meaningful objects provide real-time services for users. Therefore, a service with appropriate decision-making has become an essential issue. Based on the science of human behavior, this study employed an environment model to record the time sequences and locations of different behaviors and adopted the probability module of the hierarchical Hidden Markov Model for inference. Statistical analysis was conducted to achieve the following objectives: first, define user behaviors and predict user behavior routes with the environment model to analyze user purposes; second, construct the hierarchical Hidden Markov Model according to the logic framework and establish the sequential intensity among behaviors to understand the use and activity fabric of the intelligent environment; third, establish the intensity of the relation between the probability of objects being used and the objects, an indicator that can describe the possible limitations of the mechanism. As the process is recorded in the information system created in this study, these data can be reused to adjust the procedure of intelligent design services.
Keywords: behavior, big data, hierarchical hidden Markov model, intelligent object
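To make the probability machinery concrete, here is a minimal flat-HMM sketch of the inference step: the forward algorithm scores an observed sequence of sensed events under a behavior model. The two hidden behaviors and three observable events are illustrative assumptions; the paper's hierarchical model layers additional structure on top of exactly this computation.

```python
import numpy as np

pi = np.array([0.6, 0.4])                  # initial behavior probabilities
A = np.array([[0.7, 0.3],                  # behavior-to-behavior transitions
              [0.2, 0.8]])
B = np.array([[0.5, 0.4, 0.1],             # P(observed event | behavior)
              [0.1, 0.3, 0.6]])

def forward_loglik(obs: list[int]) -> float:
    alpha = pi * B[:, obs[0]]
    loglik = 0.0
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        norm = alpha.sum()                 # rescale to avoid underflow
        loglik += np.log(norm)
        alpha /= norm
    return loglik + np.log(alpha.sum())

print(forward_loglik([0, 1, 1, 2, 2, 2]))  # log-likelihood of a sensed sequence
```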
Procedia PDF Downloads 233
35123 Application of the Tripartite Model to the Link between Non-Suicidal Self-Injury and Suicidal Risk
Authors: Ashley Wei-Ting Wang, Wen-Yau Hsu
Abstract:
Objectives: The current study applies and expands the Tripartite Model to elaborate the link between non-suicidal self-injury (NSSI) and suicidal behavior. We propose a structural model of NSSI and suicidal risk in which negative affect (NA) predicts both anxiety and depression, positive affect (PA) predicts depression only, anxiety is linked to NSSI, and depression is linked to suicidal risk. Method: Four hundred and eighty-seven undergraduates participated. Data were collected by administering self-report questionnaires. We performed hierarchical regression and structural equation modeling to test the proposed structural model. Results: The results largely support the proposed structural model, with one exception: anxiety was strongly associated with NSSI and, to a lesser extent, with suicidal risk. Conclusions: We conclude that the co-occurrence of NSSI and suicidal risk is due to NA and anxiety, and that suicidal risk can be differentiated by depression. Further theoretical and practical implications are discussed.
Keywords: non-suicidal self-injury, suicidal risk, anxiety, depression, the tripartite model, hierarchical relationship
Procedia PDF Downloads 470
35122 Prediction Model for Leukemia Diseases Based on Data Mining Classification Algorithms with Best Accuracy
Authors: Fahd Sabry Esmail, M. Badr Senousy, Mohamed Ragaie
Abstract:
In recent years, there has been an explosion in the use of technology that helps discover diseases. For example, DNA microarrays allow us for the first time to obtain a "global" view of the cell. They have great potential to provide accurate medical diagnoses and to help find the right treatment and cure for many diseases. Various classification algorithms can be applied to such microarray datasets to devise methods that can predict the occurrence of leukemia. In this study, we compared the classification accuracy and response time among eleven decision tree methods and six rule classifier methods using five performance criteria. The experimental results show that Random Tree produces better results and takes the lowest time to build a model among the tree classifiers. Among the classification rule algorithms, the nearest-neighbor-like algorithm (NNge) is the best due to its high accuracy and the lowest model-building time.
Keywords: data mining, classification techniques, decision tree, classification rule, leukemia diseases, microarray data
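The comparison methodology (accuracy plus model-building time across classifier families) can be sketched as follows, assuming scikit-learn rather than the Weka-style learners named above; the microarray-like data are synthetic, and KNeighborsClassifier stands in loosely for the nearest-neighbor-like NNge.

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=72, n_features=500, n_informative=20,
                           random_state=0)     # leukemia-style dimensions
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in [DecisionTreeClassifier(random_state=0),
            ExtraTreesClassifier(random_state=0),
            KNeighborsClassifier(n_neighbors=3)]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    build = time.perf_counter() - t0
    print(f"{type(clf).__name__:24s} acc={clf.score(X_te, y_te):.2f} "
          f"build={build * 1000:.1f} ms")
```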
Procedia PDF Downloads 319
35121 A Fuzzy TOPSIS Based Model for Safety Risk Assessment of Operational Flight Data
Authors: N. Borjalilu, P. Rabiei, A. Enjoo
Abstract:
A Flight Data Monitoring (FDM) program assists an operator in the aviation industry to identify, quantify, assess, and address operational safety risks, in order to improve the safety of flight operations. FDM is a powerful tool for an aircraft operator when integrated into the operator’s Safety Management System (SMS), allowing it to detect, confirm, and assess safety issues and to check the effectiveness of corrective actions associated with human errors. This article proposes a model for assessing the safety risk level of flight data across different aspects of event focus, based on fuzzy set values. It permits evaluation of the operational safety level from the point of view of flight activities. The main advantage of this method is that it provides a qualitative safety analysis of flight data. This research gathers the opinions of aviation experts through a number of questionnaires related to flight data in four categories of occurrence that can take place during an accident or an incident: Runway Excursion (RE), Controlled Flight Into Terrain (CFIT), Mid-Air Collision (MAC), and Loss of Control in Flight (LOC-I). By weighting each one (by F-TOPSIS) and applying it to the number of risks of the event, the safety risk of each related event can be obtained.
Keywords: F-TOPSIS, fuzzy set, flight data monitoring (FDM), flight safety
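A minimal fuzzy-TOPSIS sketch with triangular fuzzy numbers (TFNs), using the common vertex-distance metric and (1,1,1)/(0,0,0) fuzzy ideal solutions; the four alternatives stand in for the occurrence categories, and all ratings and weights are illustrative assumptions rather than the study's expert data.

```python
import numpy as np

# ratings[i, j] = TFN rating of alternative i on criterion j (benefit criteria)
ratings = np.array([[(3, 5, 7), (5, 7, 9)],
                    [(1, 3, 5), (7, 9, 9)],
                    [(5, 7, 9), (3, 5, 7)],
                    [(7, 9, 9), (1, 3, 5)]], dtype=float)
weights = np.array([(0.3, 0.5, 0.7), (0.5, 0.7, 0.9)], dtype=float)

norm = ratings / ratings[:, :, 2].max(axis=0)[None, :, None]  # divide by max c
weighted = norm * weights[None, :, :]                         # TFN x TFN weight

def dist(x, y):
    """Vertex distance between two TFNs: sqrt(mean of squared differences)."""
    return np.sqrt(((x - y) ** 2).mean(axis=-1))

fpis, fnis = np.ones(3), np.zeros(3)          # fuzzy positive/negative ideals
d_plus = dist(weighted, fpis).sum(axis=1)     # distance to ideal, per alternative
d_minus = dist(weighted, fnis).sum(axis=1)
cc = d_minus / (d_plus + d_minus)             # closeness coefficient = risk rank
for name, score in zip(["RE", "CFIT", "MAC", "LOC-I"], cc):
    print(f"{name:6s} closeness = {score:.3f}")
```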
Procedia PDF Downloads 166
35120 Unseen Classes: The Paradigm Shift in Machine Learning
Authors: Vani Singhal, Jitendra Parmar, Satyendra Singh Chouhan
Abstract:
Unseen class discovery has now become an important part of machine learning, allowing a model to judge new classes. Unseen classes are classes on which the machine learning model has not been trained. With the advancement of technology and AI replacing humans, the amount of data has increased to the next level. So, while implementing a model on real-world examples, we come across unseen new classes. Our aim is to find the number of unseen classes by using a hierarchy-based active learning algorithm. The algorithm is based on hierarchical clustering as well as active sampling. The number of clusters that we get in the end gives the number of unseen classes. The total clusters will also contain some clusters that have unseen classes. Instead of first discovering unseen classes and then finding their number, we directly calculate the number by applying the algorithm. The dataset used is for intent classification, where the target is the intent of the corresponding query. We conclude that when the machine learning model encounters real-world data, it will automatically find the number of unseen classes. In future work, we aim to label these unseen classes correctly.
Keywords: active sampling, hierarchical clustering, open world learning, unseen class discovery
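A minimal sketch of the clustering half of the idea, assuming scikit-learn: agglomerative (hierarchical) clustering with a distance threshold, so that the number of clusters found on unlabeled query embeddings estimates the number of unseen classes. The embeddings and threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
# simulate sentence embeddings from 4 unseen intents, 30 queries each
centers = rng.normal(scale=10.0, size=(4, 16))
X = np.vstack([c + rng.normal(size=(30, 16)) for c in centers])

clusterer = AgglomerativeClustering(n_clusters=None, distance_threshold=25.0,
                                    linkage="ward")
labels = clusterer.fit_predict(X)
print("estimated number of unseen classes:", clusterer.n_clusters_)
```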
Procedia PDF Downloads 170
35119 A Human Centered Design of an Exoskeleton Using Multibody Simulation
Authors: Sebastian Kölbl, Thomas Reitmaier, Mathias Hartmann
Abstract:
Trial-and-error approaches to adapting wearable support structures to human physiology are time-consuming and elaborate. However, during preliminary design, the focus lies on understanding the interaction between the exoskeleton and the human body in terms of forces and moments, namely body mechanics. For the study at hand, a multi-body simulation approach has been enhanced to evaluate actual forces and moments in a human dummy model with and without a digital mock-up of an active exoskeleton. Therefore, different motion data have been gathered and processed to perform a musculoskeletal analysis. The motion data are ground reaction forces, electromyography (EMG) data, and human motion data recorded with a marker-based motion capture system. Based on the experimental data, the response of the human dummy model has been calibrated. Subsequently, the scalable human dummy model, in conjunction with the motion data, is connected with the exoskeleton structure. The results of the human-machine interaction (HMI) simulation platform are, in particular, the resulting contact forces and human joint forces, to be compared with admissible values with regard to human physiology. Furthermore, it provides feedback for sizing the exoskeleton structure in terms of resulting interface forces (stress justification) and the effect of its compliance. A stepwise approach for the setup and validation of the modeling strategy is presented, and the potential for more time- and cost-effective development of wearable support structures is outlined.
Keywords: assistive devices, ergonomic design, inverse dynamics, inverse kinematics, multibody simulation
Procedia PDF Downloads 160
35118 A Bayesian Multivariate Microeconometric Model for Estimation of Price Elasticity of Demand
Authors: Jefferson Hernandez, Juan Padilla
Abstract:
Estimation of the price elasticity of demand is a valuable tool for the task of price setting. Given its relevance, it is an active field of microeconomic and statistical research. Price elasticity in the oil and gas industry, in particular for fuels sold in gas stations, has shown to be a challenging topic given market and state restrictions and the underlying correlation structures between the types of fuels sold by the same gas station. This paper explores the Lotka-Volterra model for the problem of price elasticity estimation in the context of fuels; in addition, multivariate random effects are introduced with the purpose of dealing with errors, e.g., measurement or missing data errors. In order to model the underlying correlation structures, the Inverse-Wishart, Hierarchical Half-t, and LKJ distributions are studied. Here, the Bayesian paradigm, through Markov Chain Monte Carlo (MCMC) algorithms, is considered for model estimation. Simulation studies covering a wide range of situations were performed in order to evaluate parameter recovery for the proposed models and algorithms. The results revealed that the proposed algorithms recovered all model parameters quite well. Also, a real data set analysis was performed in order to illustrate the proposed approach.
Keywords: price elasticity, volume, correlation structures, Bayesian models
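For intuition about the estimand, a minimal non-Bayesian sketch, assuming statsmodels: in a log-log demand regression, the coefficient on log price is the price elasticity. The data are synthetic with a true elasticity of -1.5; the paper's Bayesian multivariate machinery addresses the correlation structures and measurement error that this baseline ignores.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
log_price = np.log(rng.uniform(1.0, 2.0, 500))
log_volume = 3.0 - 1.5 * log_price + rng.normal(0, 0.1, 500)

X = sm.add_constant(log_price)
fit = sm.OLS(log_volume, X).fit()
print(f"estimated elasticity: {fit.params[1]:.2f}")   # close to -1.5
```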
Procedia PDF Downloads 164
35117 Artificial Neural Network-Based Short-Term Load Forecasting for Mymensingh Area of Bangladesh
Authors: S. M. Anowarul Haque, Md. Asiful Islam
Abstract:
Electrical load forecasting is considered one of the most indispensable parts of a modern-day electrical power system. To ensure a reliable and efficient supply of electric energy, special emphasis should be put on the predictive capability of the electricity supply. Artificial Neural Network-based approaches have emerged as a significant area of interest in electric load forecasting research. This paper proposes an Artificial Neural Network model based on the particle swarm optimization algorithm for improved electric load forecasting for Mymensingh, Bangladesh. The forecasting model is developed and simulated in the MATLAB environment with a large number of training datasets. The model is trained on eight input parameters, including historical load and weather data. The predicted load data are then compared with an available dataset for validation. The proposed neural network model proves to be more reliable in terms of day-wise load forecasting for Mymensingh, Bangladesh.
Keywords: load forecasting, artificial neural network, particle swarm optimization
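A minimal NumPy sketch of the PSO-trained network idea (the paper works in MATLAB, so this rendering is our assumption): a one-hidden-layer network maps eight inputs to the load target, and a particle swarm searches the flattened weight vector instead of backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # 8 input parameters
y = X @ rng.normal(size=8) + 0.5 * np.tanh(X[:, 0])  # synthetic load target

H = 6                                                # hidden units
dim = 8 * H + H                                      # W1 (8xH) + w2 (H)

def mse(w):
    W1, w2 = w[:8 * H].reshape(8, H), w[8 * H:]
    pred = np.tanh(X @ W1) @ w2
    return np.mean((pred - y) ** 2)

# particle swarm: positions, velocities, personal/global bests
pos = rng.normal(size=(30, dim)); vel = np.zeros((30, dim))
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.uniform(size=(2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("training MSE after PSO:", mse(gbest))
```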
Procedia PDF Downloads 169
35116 Optimizing Telehealth Internet of Things Integration: A Sustainable Approach through Fog and Cloud Computing Platforms for Energy Efficiency
Authors: Yunyong Guo, Sudhakar Ganti, Bryan Guo
Abstract:
The swift proliferation of telehealth Internet of Things (IoT) devices has sparked concerns regarding energy consumption and the need for streamlined data processing. This paper presents an energy-efficient model that integrates telehealth IoT devices into a platform based on fog and cloud computing. This integrated system provides a sustainable and robust solution to address these challenges. Our model strategically utilizes fog computing as a localized data processing layer and leverages cloud computing for resource-intensive tasks, resulting in a significant reduction in overall energy consumption. The incorporation of adaptive energy-saving strategies further enhances the efficiency of our approach. Simulation analysis validates the effectiveness of our model in improving energy efficiency for telehealth IoT systems, particularly when integrated with localized fog nodes and both private and public cloud infrastructures. Subsequent research endeavors will concentrate on refining the energy-saving model, exploring additional functional enhancements, and assessing its broader applicability across various healthcare and industry sectors.
Keywords: energy-efficient, fog computing, IoT, telehealth
Procedia PDF Downloads 75
35115 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest
Authors: Bharatendra Rai
Abstract:
Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. The presence of too many features in machine learning is known not only to slow algorithms down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data-partitioning ratios, and their impact on model accuracy is captured using the coefficient of determination (R-squared) and root mean square error (RMSE).
Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error
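A minimal sketch of wrapper-style feature selection with Boruta around a random forest, assuming the boruta Python package (BorutaPy); the housing data are replaced here by a synthetic regression problem mixing informative and noise features.

```python
import numpy as np
from boruta import BorutaPy
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=400, n_features=20, n_informative=6,
                       noise=5.0, random_state=0)

rf = RandomForestRegressor(n_estimators=200, max_depth=5, random_state=0)
selector = BorutaPy(rf, n_estimators="auto", random_state=0)
selector.fit(X, y)                       # Boruta expects NumPy arrays

confirmed = np.where(selector.support_)[0]
print("confirmed features:", confirmed)  # then refit the forest on these only
```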
Procedia PDF Downloads 321
35114 Kinetic Modeling Study and Scale-Up of Biogas Generation Using Garden Grass and Cattle Dung as Feedstock
Authors: Tumisang Seodigeng, Hilary Rutto
Abstract:
In this study, we investigate the use of a laboratory batch digester to derive kinetic parameters for the anaerobic digestion of garden grass and cattle dung. Laboratory experimental data from a 5-liter batch digester operating at a mesophilic temperature of 32 °C are used to derive parameters for the Michaelis-Menten kinetic model. These fitted kinetics are further used to predict the scale-up parameters of a batch digester using the DynoChem modeling and scale-up software. The scale-up model results are compared with performance data from 20-liter, 50-liter, and 200-liter batch digesters. The Michaelis-Menten kinetic model proves to be a very good and easy-to-use model for kinetic parameter fitting in DynoChem and can accurately predict the scale-up performance of 20-liter and 50-liter batch reactors based on parameters fitted on a 5-liter batch reactor.
Keywords: biogas, kinetics, DynoChem scale-up, Michaelis-Menten
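A minimal sketch of the kinetic-fitting step with SciPy: Michaelis-Menten kinetics, v = Vmax·S/(Km + S), fitted to rate-versus-substrate data by nonlinear least squares. The measurements below are synthetic stand-ins for the 5-liter digester data.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])     # substrate (g/L)
v = np.array([0.9, 1.6, 2.5, 3.4, 4.1, 4.6, 4.8])       # gas production rate

(Vmax, Km), _ = curve_fit(michaelis_menten, S, v, p0=(5.0, 2.0))
print(f"Vmax = {Vmax:.2f}, Km = {Km:.2f}")
```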
Procedia PDF Downloads 496
35113 Implementation of a Non-Poissonian Model in a Low-Seismicity Area
Authors: Ludivine Saint-Mard, Masato Nakajima, Gloria Senfaute
Abstract:
In areas of low to moderate seismicity, probabilistic seismic hazard analysis frequently uses a Poisson approach, which assumes independence of events in time and space, to determine the annual probability of earthquake occurrence. Nevertheless, in countries with high seismic rates, such as Japan, non-Poissonian models, which assume that the occurrence of the next earthquake depends on the date of the previous one, are frequently used. The objective of this paper is to apply a non-Poissonian model in a region of low to moderate seismicity to get feedback on the following questions: can we overcome the lack of data to determine some key parameters, and can we deal with the uncertainties so as to apply this methodology broadly in an industrial context? The Brownian Passage Time model was applied to a fault located in France, and the study concludes that even if the lack of data can be overcome with some calculations, the amount of uncertainty and the number of scenarios lead to numerous branches in the PSHA, making this method difficult to apply on a large scale across low-to-moderate-seismicity areas and in an industrial context.
Keywords: probabilistic seismic hazard, non-Poissonian model, earthquake occurrence, low seismicity
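A minimal sketch of the renewal calculation behind a BPT hazard estimate, assuming SciPy: the Brownian Passage Time distribution is an inverse Gaussian with mean recurrence m and aperiodicity a, and the quantity of interest is the probability of an event in the next dt years given that t years have already elapsed. All parameter values are illustrative assumptions, not those of the fault studied.

```python
from scipy.stats import invgauss

m, a = 1200.0, 0.5            # mean recurrence interval (yr), aperiodicity
t, dt = 800.0, 50.0           # elapsed time and forecast window (yr)

# scipy parameterization: invgauss(mu=a**2, scale=m/a**2) has mean m and
# coefficient of variation a
dist = invgauss(a**2, scale=m / a**2)
p_cond = (dist.cdf(t + dt) - dist.cdf(t)) / dist.sf(t)
print(f"conditional probability over next {dt:.0f} yr: {p_cond:.3f}")
```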
Procedia PDF Downloads 61
35112 Multi-Objective Production Planning Problem: A Case Study of Certain and Uncertain Environment
Authors: Ahteshamul Haq, Srikant Gupta, Murshid Kamal, Irfan Ali
Abstract:
This case study designs and builds a multi-objective production planning model for a hardware firm with certain and uncertain data. During interactions with the manager of the firm, it was indicated that some of the parameters may be vague. This vagueness in the formulated model is handled by the concept of fuzzy set theory. Triangular and trapezoidal fuzzy numbers are used to represent the uncertainty in the collected data. The fuzzy values are defuzzified into crisp form using a well-known defuzzification method, the graded mean integration representation method. The proposed model attempts to maximize the firm's production and the profit related to the manufactured items and to minimize the inventory carrying costs in both certain and uncertain environments. The recommended optimal plan is determined via a fuzzy programming approach, and the formulated models are solved using the optimization software LINGO 16.0 to obtain the optimal production plan. The proposed model yields an efficient compromise solution with the overall satisfaction of the decision maker.
Keywords: production planning problem, multi-objective optimization, fuzzy programming, fuzzy sets
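The defuzzification step named above has a closed form. A minimal sketch of the graded mean integration representation (GMIR): a triangular fuzzy number (a, b, c) defuzzifies to (a + 4b + c)/6 and a trapezoidal one (a, b, c, d) to (a + 2b + 2c + d)/6.

```python
def gmir(fuzzy_number: tuple[float, ...]) -> float:
    if len(fuzzy_number) == 3:                  # triangular
        a, b, c = fuzzy_number
        return (a + 4 * b + c) / 6
    if len(fuzzy_number) == 4:                  # trapezoidal
        a, b, c, d = fuzzy_number
        return (a + 2 * b + 2 * c + d) / 6
    raise ValueError("expected a triangular or trapezoidal fuzzy number")

print(gmir((10, 12, 15)))        # crisp demand from a triangular estimate
print(gmir((8, 10, 12, 16)))     # crisp cost from a trapezoidal estimate
```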
Procedia PDF Downloads 211
35111 Applying the Crystal Model Approach on Light Nuclei for Calculating Radii and Density Distribution
Authors: A. Amar
Abstract:
A new model, namely the crystal model, has been developed to calculate the radius and density distribution of light nuclei up to ⁸Be. The crystal model is adapted from solid-state physics, using the analogy between the distribution of nucleons and the distribution of atoms in a crystal. The model admits an analytical treatment for calculating the radius, where the density distribution of light nuclei is obtained from the analogy with a crystal lattice. The distribution of nucleons over the crystal is discussed in a general form. The equation used to calculate the binding energy is taken from the solid-state model of repulsive and attractive forces. The number of protons was taken to control the repulsive force, while the atomic number was responsible for the attractive force. The parameter calculated from the crystal model was found to be proportional to the radius of the nucleus. The density distribution of light nuclei was taken as a summation of two cluster distributions, as in the ⁶Li = alpha + deuteron configuration. A test was performed on the obtained radius and density distribution data using double folding for d+⁶,⁷Li with the M3Y nucleon-nucleon interaction. Good agreement was obtained for both the radius and the density distribution of light nuclei. The model failed to calculate the radius of ⁹Be, so modifications should be made to overcome this discrepancy.
Keywords: nuclear physics, nuclear lattice, study nucleus as crystal, light nuclei up to ⁸Be
Procedia PDF Downloads 174
35110 Quantitative Analysis of the Trade Potential of the United States with Members of the European Union: A Gravity Model Approach
Authors: Zahid Ahmad, Nauman Ali
Abstract:
This study has estimated the trade between the USA and individual members of the European Union using the Gravity Model of Trade. The USA has a complex trade relationship with the European countries, which comprise a large number of consumers; this makes the USA dependent on the EU for a major share of its total world trade. However, among the members of the EU, the trade potential of the USA with individual members is not known. Panel data techniques, e.g., Random Effects, Fixed Effects, and Pooled Panel, have been applied to secondary quantitative data to analyze the trade between the USA and the EU. The trade potential of the USA with individual members of the EU has been obtained using the ratio of the actual trade of the USA with EU members to the trade predicted by the gravity model. The study concludes that the USA has greater trade potential with 16 members of the EU, with Croatia, Portugal, and the United Kingdom on top. On the other hand, Finland, Ireland, and France are the top countries with which the USA has exhausted its trade potential.
Keywords: analytical technique, economic, gravity, international trade, significant
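A minimal sketch of the gravity-model step, assuming statsmodels: regress log bilateral trade on log GDPs and log distance (here as a pooled-panel baseline), then take the ratio of actual to model-predicted trade as the trade-potential indicator. All data below are synthetic placeholders, not the study's panel.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 28 * 10                                     # partners x years
log_gdp_us = rng.normal(9.8, 0.05, n)
log_gdp_eu = rng.normal(7.5, 1.0, n)
log_dist = rng.normal(8.8, 0.3, n)
log_trade = (1.0 + 0.8 * log_gdp_us + 0.9 * log_gdp_eu - 1.1 * log_dist
             + rng.normal(0, 0.3, n))

X = sm.add_constant(np.column_stack([log_gdp_us, log_gdp_eu, log_dist]))
fit = sm.OLS(log_trade, X).fit()                # pooled-panel baseline

potential = np.exp(log_trade) / np.exp(fit.predict(X))
print("actual/predicted > 1 means potential exhausted; < 1 means untapped")
print(potential[:5].round(2))
```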
Procedia PDF Downloads 304