Search results for: mixed effects models
16106 Refitting Equations for Peak Ground Acceleration in Light of the PF-L Database
Authors: Matevž Breška, Iztok Peruš, Vlado Stankovski
Abstract:
A systematic overview of existing Ground Motion Prediction Equations (GMPEs) has been published by Douglas. The number of earthquake recordings used for fitting these equations has increased in the past decades. The current PF-L database contains 3550 recordings. Since GMPEs frequently model the peak ground acceleration (PGA), the goal of the present study was to refit a selection of 44 of the existing equation models for PGA in light of the latest data. The Levenberg-Marquardt algorithm was used for fitting the coefficients of the equations, and the results are evaluated both quantitatively, by presenting the root mean squared error (RMSE), and qualitatively, by drawing graphs of the five best-fitted equations. The RMSE was found to be as low as 0.08 for the best equation models. The newly estimated coefficients vary from the values published in the original works.
Keywords: Ground Motion Prediction Equations, Levenberg-Marquardt algorithm, refitting, PF-L database, peak ground acceleration
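A minimal sketch of this kind of coefficient refit, using SciPy's Levenberg-Marquardt routine; the attenuation form, coefficient values, and synthetic data below are illustrative assumptions, not the PF-L records or any of the 44 equation models.

# Sketch: refit a simple, hypothetical PGA attenuation form
# ln(PGA) = c1 + c2*M + c3*ln(R) with Levenberg-Marquardt (method="lm").
import numpy as np
from scipy.optimize import curve_fit

def gmpe(X, c1, c2, c3):
    M, R = X  # magnitude, distance (km)
    return c1 + c2 * M + c3 * np.log(R)

rng = np.random.default_rng(0)
M = rng.uniform(4.0, 7.5, 200)            # hypothetical magnitudes
R = rng.uniform(5.0, 150.0, 200)          # hypothetical distances
ln_pga = gmpe((M, R), -1.2, 0.8, -1.1) + rng.normal(0, 0.3, 200)

coefs, _ = curve_fit(gmpe, (M, R), ln_pga, p0=[0.0, 1.0, -1.0], method="lm")
rmse = np.sqrt(np.mean((gmpe((M, R), *coefs) - ln_pga) ** 2))
print(coefs, rmse)  # refitted coefficients and their RMSE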
Procedia PDF Downloads 462
16105 Domestic Violence and Wives’ Depressive Symptoms in China: The Moderating Role of Gender Ideology
Authors: Xiangmei Li
Abstract:
Domestic violence (DV) victims are at a greater risk of suffering mental health problems; however, not all victims experience the same degree of depression. Women respond differently to gender inequalities based on their gender ideologies. This study explored the moderating role of gender ideology in the relationship between exposure to DV and depression. Data were drawn from a sub-sample of women aged 18-60 from the Third Wave Survey on the Social Status of Women in China (N = 10,701). The survey adopted a stratified three-stage sampling design to select a representative sample of respondents from the country. Regression models were used to examine the moderating effects of gender ideology on the relationship between DV and depression. Women who reported DV experience had more severe depressive symptoms after controlling for confounding socio-demographic factors (β = 0.592, 95% CI: 0.489 – 0.695). Women's gender ideology moderated the association between DV severity and depression (β = -0.049, 95% CI: -0.085 – -0.013): at the same levels of victimization, women's depressive symptoms differed according to their gender ideology. The experience of domestic violence is a useful indicator for routine screening for depression in clinic and community settings. Interventions that aim to decrease depression caused by DV are more likely to be effective if they promote a more egalitarian gender ideology to counter the mindset that a woman's role is confined to the home and that a family suffers if the wife participates in the labor force.
Keywords: domestic violence against wives, depression, gender ideology, moderation
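A hedged sketch of a moderation analysis of the kind described above, with an interaction term between DV severity and gender ideology; the column names, simulated data, and statsmodels choice are placeholders, not the survey data or the authors' exact models.

# Sketch: moderation via an interaction term in an OLS regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "dv_severity": rng.normal(size=n),       # hypothetical DV exposure score
    "gender_ideology": rng.normal(size=n),   # higher = more egalitarian
    "age": rng.integers(18, 61, n),
})
df["depression"] = (0.59 * df.dv_severity
                    - 0.05 * df.dv_severity * df.gender_ideology
                    + rng.normal(size=n))

# The interaction coefficient estimates the moderating effect.
model = smf.ols("depression ~ dv_severity * gender_ideology + age", data=df).fit()
print(model.params[["dv_severity", "dv_severity:gender_ideology"]])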
Procedia PDF Downloads 129
16104 Hospital Acquired Bloodstream Infections Among Patients With Hematological and Solid Malignancies: Epidemiology, Causative Pathogens and Mortality
Authors: Marah El-Beeli, Abdullah Balkhair, Zakaryia Al Muharmi, Samir Al Adawi, Mansoor Al-Jabri, Abdullah Al Rawahi, Hazaa Al Yahyae, Eman Al Balushi, Yahya M. Al-Farsi
Abstract:
Advances in health care and anticancer chemotherapeutics have changed the natural history of cancer into a manageable chronic disease, improved cancer patients' lifestyles, and increased survival time. Despite that, infection remains the major dilemma facing cancer patients, whether because of the clinical presentation of the cancer type and an impaired immune system or as a consequence of anticancer therapy. This study was conducted to 1) track changes in the epidemiology of hospital-acquired bloodstream infections (HA-BSIs) among patients with malignancies in the last five years, 2) explore the causative pathogens, and 3) describe the outcome of HA-BSIs in patients with different types of malignancies. An ambidirectional study (retrospective and prospective follow-up) of patients with malignancies admitted at Sultan Qaboos University Hospital (a 570-bed tertiary hospital) during the study period (from January 2015 to December 2019) was performed. The cumulative frequency and prevalence rates of HA-BSIs by patients and isolates were calculated. In addition, the cumulative frequency of participants with single versus mixed infections and the types of causative micro-organisms of HA-BSIs were obtained. A total of 1,246 HA-BSI events occurred during the study period. Nearly a third (30.25%) of the HA-BSI events were identified among 288 patients with malignancies. About 20% of cases were mixed infections (more than one isolate). Staphylococcus spp. were the predominant isolated pathogens (24.7%), followed by Klebsiella spp. (15.8%), Escherichia spp. (13%), and Pseudomonas spp. (9.3%). About half (51%) of cases died in the same year, and 64% of the deaths occurred within two weeks after the infection. According to the observations, there were no changes in the trends of epidemiology, causative pathogens, morbidity, or mortality rates in the last five years.
Keywords: epidemiology, haematological malignancies, hospital acquired bloodstream infections, solid malignancies
Procedia PDF Downloads 150
16103 Voltage Sag Characteristics during Symmetrical and Asymmetrical Faults
Authors: Ioannis Binas, Marios Moschakis
Abstract:
Electrical faults in transmission and distribution networks can have a great impact on the electrical equipment used. Fault effects depend on the characteristics of the fault as well as of the network itself. It is important to anticipate the network's behavior during faults when planning a new equipment installation, as well as when troubleshooting. Moreover, working backwards, we could estimate the characteristics of the fault from the perceived effects. Different transformer winding connections dominantly used in the Greek power transmission and distribution networks, and the effects of 1-phase-to-neutral, phase-to-phase, 2-phases-to-neutral, and 3-phase faults on different locations of the network, were simulated in order to present voltage sag characteristics. The study was performed on a generic network with three step-down transformers on two voltage-level buses (one 150 kV/20 kV transformer and two 20 kV/0.4 kV transformers). We found that during faults, there are significant changes both in voltage magnitudes and in phase angles. The simulations and short-circuit analysis were performed using the PSCAD simulation package. This paper presents voltage characteristics calculated for the simulated network, with different approaches on the transformer winding connections, during symmetrical and asymmetrical faults on various locations.
Keywords: phase angle shift, power quality, transformer winding connections, voltage sag propagation
Procedia PDF Downloads 139
16102 Finite Element Modeling Techniques of Concrete in Steel and Concrete Composite Members
Authors: J. Bartus, J. Odrobinak
Abstract:
The paper presents a nonlinear analysis of a 3D model of composite steel and concrete beams with web openings using the Finite Element Method (FEM). The core of the study is the introduction of basic modeling techniques comprising the description of material behavior, appropriate element selection, and recommendations for overcoming problems with convergence. Results from various finite element models are compared in the study. The main objective is to observe the concrete failure mechanism and its influence on the structural performance of numerical models of the beams at particular load stages. The bearing capacity of the beams and the corresponding deformations, stresses, strains, and fracture patterns were determined. The results show how load-bearing elements consisting of concrete parts can be analyzed using FEM software with various options to create the most suitable numerical model. The paper demonstrates the versatility of Ansys software usage for structural simulations.
Keywords: Ansys, concrete, modeling, steel
Procedia PDF Downloads 121
16101 Generalization of Zhou Fixed Point Theorem
Authors: Yu Lu
Abstract:
Fixed point theory is a basic tool for the study of the existence of Nash equilibria in game theory. This paper presents a significant generalization of the Veinott-Zhou fixed point theorem for increasing correspondences, which serves as an essential framework for investigating the existence of Nash equilibria in supermodular and quasisupermodular games. To establish our proofs, we explore different conceptions of multivalued increasingness and provide comprehensive results concerning the existence of the largest/least fixed point. We provide two distinct approaches to the proof, each offering unique insights and advantages. These advancements not only extend the applicability of the Veinott-Zhou theorem to a broader range of economic scenarios but also enhance the theoretical framework for analyzing equilibrium behavior in complex game-theoretic models. Our findings pave the way for future research in the development of more sophisticated models of economic behavior and strategic interaction.
Keywords: fixed-point, Tarski's fixed-point theorem, Nash equilibrium, supermodular game
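For reference, the classical Tarski fixed-point theorem underlying this line of results can be stated as follows: let $(L, \le)$ be a complete lattice and let $f : L \to L$ be order-preserving. Then the set of fixed points $\mathrm{Fix}(f) = \{x \in L : f(x) = x\}$ is itself a nonempty complete lattice; in particular, $f$ has a least fixed point $\bigwedge \{x \in L : f(x) \le x\}$ and a greatest fixed point $\bigvee \{x \in L : x \le f(x)\}$. The Veinott-Zhou result extends this order-theoretic structure from single-valued maps to increasing correspondences (multivalued maps), which is the setting generalized in this paper.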
Procedia PDF Downloads 55
16100 Estimating the Volatility of Stock Markets in Case of Financial Crisis
Authors: Gultekin Gurcay
Abstract:
In this paper, the effects on and responses of stock markets were analyzed. This analysis was done periodically. The dimensions of the financial crisis impact on the stock market were investigated using a GARCH model. In this context, the S&P 500 stock market is modeled together with the DAX, NIKKEI, and BIST100. In this way, the effects of changes in the S&P 500 stock market on European and Asian stock markets were examined. The conditional variance coefficients are calculated through the GARCH model, and, for the crisis period, the conditional covariance coefficients are analyzed comparatively.
Keywords: conditional variance coefficient, financial crisis, GARCH model, stock market
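A minimal sketch of fitting a GARCH(1,1) conditional-variance model with the Python arch package; the simulated return series stands in for the S&P 500/DAX/NIKKEI/BIST100 data, and the package choice is an assumption, not the authors' toolchain.

# Sketch: GARCH(1,1) fit on a simulated return series.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
returns = rng.normal(0, 1, 1500)  # placeholder for index returns (%)

am = arch_model(returns, vol="Garch", p=1, q=1, mean="Constant")
res = am.fit(disp="off")
print(res.params)                       # omega, alpha[1], beta[1]
print(res.conditional_volatility[-5:])  # fitted conditional sigma_t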
Procedia PDF Downloads 294
16099 The Effectiveness of Laser In situ Keratomileusis for Correcting Various Types of Refractive Anomalies
Authors: Yuliya Markava
Abstract:
Laser in situ keratomileusis (LASIK) is a widely used surgical procedure, which has become an alternative for patients who are not satisfied with the performance of other correction methods. A high level of patient satisfaction with functional outcomes after refractive surgery confirms the high reliability and safety of LASIK and provides a significant improvement in quality of life and social adaptation. Purpose: To perform a clinical analysis of the results of correction made with the excimer laser system SCHWIND AMARIS 500E in patients with different types of refractive anomalies. Materials and Methods: This was a retrospective analysis of 1581 operations (812 patients): 413 males (50.86%) and 399 females (49.14%) at the age of 18 to 47 years with different types of ametropia. All operations were performed on the excimer laser SCHWIND AMARIS 500E in the LASIK procedure. Formation of the corneal flap was performed with a mechanical microkeratome SCHWIND. Results: Analyzing the structure of refractive anomalies, the largest number of interventions was for myopia: 1505 eyes (95.2%), of which low myopia accounted for 706 eyes (44.7%), moderate myopia for 562 eyes (35.5%), high myopia for 217 eyes (13.7%), and supermyopia for 20 eyes (1.3%). Hyperopia was 0.7% (11 eyes) and mixed astigmatism 4.1% (65 eyes). The efficiency ranged from 80% (in patients with supermyopia) to 91.6% and 95.4% (in the groups with low and moderate myopia, respectively). Average uncorrected visual acuity values before and after the laser operation were, by group: low myopia 0.18 (from 0.05 to 0.31) and 0.80 (from 0.60 to 1.0); moderate myopia 0.08 (from 0.03 to 0.13) and 0.87 (from 0.74 to 1.0); high myopia 0.05 (from 0.02 to 0.08) and 0.83 (from 0.66 to 1.0); supermyopia 0.03 (from 0.02 to 0.04) and 0.59 (from 0.34 to 0.84); hyperopia 0.27 (from 0.16 to 0.38) and 0.57 (from 0.27 to 0.87); mixed astigmatism 0.35 (from 0.19 to 0.51) and 0.69 (from 0.44 to 0.94). In all cases, uncorrected visual acuity significantly increased after LASIK. The reoperation rate was 4.43%. Significance: Clinical results of refractive surgery with the excimer laser system SCHWIND AMARIS 500E in the correction of different ametropias are characterized by high efficiency.
Keywords: effectiveness of laser correction, LASIK, refractive anomalies, surgical treatment
Procedia PDF Downloads 252
16098 Effects of Cannabis and Cocaine on Driving Related Tasks of Perception, Cognition, and Action
Authors: Michelle V. Tomczak, Reyhaneh Bakhtiari, Aaron Granley, Anthony Singhal
Abstract:
Objective: Cannabis and cocaine are associated with a range of mental and physical effects that can impair aspects of human behavior. Driving is a complex cognitive behavior that is an essential part of everyday life and can be broken down into many subcomponents, each of which can uniquely impact road safety. With the growing movement of jurisdictions to legalize cannabis, there is an increased focus on impairment and driving. The purpose of this study was to identify driving-related cognitive-performance deficits that are impacted by recreational drug use. Design and Methods: With the assistance of law enforcement agencies, we recruited over 300 participants under the influence of various drugs including cannabis and cocaine. These individuals performed a battery of computer-based tasks scientifically proven to be related to on-road driving performance and designed to test response speed, memory processes, perceptual-motor skills, and decision making. Data from a control group of healthy non-drug-using adults were collected as well. Results: Compared to controls, the drug group showed deficits in all tasks. The data also showed clear differences between the cannabis and cocaine groups, where cannabis users were faster and performed better on some aspects of the decision-making and perceptual-motor tasks. Memory performance was better in the cocaine group for simple tasks but not more complex tasks. Finally, the participants who consumed both drugs performed most similarly to the cannabis group. Conclusions: Our results show distinct and combined effects of cannabis and cocaine on human performance relating to driving. These differential effects are likely related to the unique effects of each drug on the human brain and how they distinctly contribute to mental states. Our results have important implications for road safety associated with driver impairment.
Keywords: driving, cognitive impairment, recreational drug use, cannabis and cocaine
Procedia PDF Downloads 126
16097 Motion Effects of Arabic Typography on Screen-Based Media
Authors: Ibrahim Hassan
Abstract:
Motion typography is one of the most important types of display-based visual communication. Through digital display media, we can control text properties (size, direction, thickness, color, etc.). The use of motion typography in visual communication has taken several forms. We need to adjust the terminology and clarify the differences between them, so relying on the word motion typography, considered a general term, is not enough to separate the different communicative functions of moving text. In this paper, we discuss the different effects of motion typography on Arabic writing and how we can achieve harmony between the movement and the letterform, and we will, during our experiments, present a new type of text movement.
Keywords: Arabic typography, motion typography, kinetic typography, fluid typography, temporal typography
Procedia PDF Downloads 160
16096 Self-Disclosure of Location: Influences of Personality Traits, Intrinsic Motivations and Extrinsic Motivations
Authors: Chechen Liao, Sheng Yi Lin
Abstract:
With the popularity of smartphone usage and the flourishing of social networks, many people have begun to use 'check-in' functions to share their location information and daily lives through self-disclosure. In order to increase exposure and awareness, some stores provide discounts and other benefits to attract consumers to 'check in' at their stores. The purpose of this study was to investigate whether personality traits, intrinsic motivations, extrinsic motivations, and privacy concerns affect consumers' self-disclosure of location. Research data were collected from 407 individuals who have used Facebook check-in in Taiwan. This study used SmartPLS 2.0 structural equation modeling to validate the model. The results show that information sharing, information storage, enjoyment, self-presentation, getting feedback, economic reward, and keeping up with trends had significant positive effects on self-disclosure. While extroversion and openness to experience have significant positive effects on self-disclosure, conscientiousness and privacy concerns have significant negative effects on self-disclosure. The results of the study provide academic and practical implications for the future growth of location-based self-disclosure.
Keywords: check-in, extrinsic motivation, intrinsic motivation, personality trait, self-disclosure
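A hedged sketch of a structural model of this shape. Note the study itself used PLS-SEM in SmartPLS 2.0; the covariance-based SEM stand-in below (semopy) and all latent/indicator names are hypothetical placeholders, not the study's questionnaire items.

# Sketch: a covariance-based SEM stand-in for the structural model.
import pandas as pd
from semopy import Model

desc = """
# measurement model (hypothetical indicators)
IntrinsicMotivation =~ enjoy1 + enjoy2 + selfpresent1
ExtrinsicMotivation =~ reward1 + reward2 + trend1
SelfDisclosure      =~ disclose1 + disclose2 + disclose3
# structural model
SelfDisclosure ~ IntrinsicMotivation + ExtrinsicMotivation + privacy_concern
"""

df = pd.read_csv("survey.csv")  # hypothetical 407-respondent dataset
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates and p-values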
Procedia PDF Downloads 170
16095 Statistical Modeling of Mobile Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad S. Daba, J. P. Dubois
Abstract:
Understanding the statistics of non-isotropic scattering multipath channels that fade randomly with respect to time, frequency, and space in a mobile environment is crucial for the accurate detection of received signals in wireless and cellular communication systems. In this paper, we derive stochastic models for the probability density function (PDF) of the shift in the carrier frequency caused by the Doppler effect on the received illuminating signal in the presence of a dominant line of sight. Our derivation is based on a generalized Clarke's model and a two-wave partially developed scattering model, where the statistical distribution of the frequency shift is shown to be consistent with the power spectral density of the Doppler-shifted signal.
Keywords: Doppler shift, filtered Poisson process, generalized Clarke's model, non-isotropic scattering, partially developed scattering, Rician distribution
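For orientation, in the classical isotropic-scattering baseline of Clarke's model (the case the paper generalizes), a wave arriving at a uniformly random angle produces a Doppler shift $f$ with the arcsine density

$$p(f) = \frac{1}{\pi f_d \sqrt{1 - (f/f_d)^2}}, \qquad |f| < f_d,$$

where $f_d$ is the maximum Doppler frequency; the corresponding Doppler power spectral density (the Jakes spectrum) has the same U-shape. The non-isotropic and line-of-sight (Rician) cases derived in the paper modify this baseline.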
Procedia PDF Downloads 372
16094 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models were applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidity, clinical procedures, and laboratory tests was analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) was used. The Model for End-stage Liver Disease (MELD) prediction of mortality was used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement of the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with a machine learning technique called an ensemble further improves the model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk for higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting one-year mortality of cirrhotic patients and to build a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
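A hedged sketch of the ensemble-versus-single-score AUC comparison described above; the synthetic features, the scikit-learn models, and the "MELD-like" single-feature baseline are illustrative assumptions, not the authors' EHR pipeline.

# Sketch: ensemble of classifiers vs. a single-score baseline, compared by AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2322, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    voting="soft")
ensemble.fit(X_tr, y_tr)

meld_like = X_te[:, 0]  # pretend a single feature is the MELD-style score
print("baseline AUC:", roc_auc_score(y_te, meld_like))
print("ensemble AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))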
Procedia PDF Downloads 134
16093 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
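A minimal sketch of the model-averaging step described above (averaging the predicted probabilities of the three named model families); the synthetic data and scikit-learn choices are assumptions standing in for the timing, weather, and pollutant inputs of the real prototype.

# Sketch: average the predictions of logistic regression, random forest,
# and a neural network, then score the combined prediction.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(n_estimators=300),
          MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)]
probs = [m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for m in models]

combined = np.mean(probs, axis=0)  # simple average of the three models
print("accuracy:", accuracy_score(y_te, combined > 0.5))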
Procedia PDF Downloads 127
16092 Using Confirmatory Factor Analysis to Test the Dimensional Structure of Tourism Service Quality
Authors: Ibrahim A. Elshaer, Alaa M. Shaker
Abstract:
Several previous empirical studies have operationalized service quality as either a multidimensional or a unidimensional construct. While a few earlier studies investigated some aspects of the assumed dimensional structure of service quality, no study has been found to have tested the construct's dimensionality using confirmatory factor analysis (CFA). To gain better insight into the dimensional structure of the service quality construct, this paper tests its dimensionality using three CFA models (a higher-order factor model, an oblique factor model, and a one-factor model) on a set of data collected from 390 British tourists who visited Egypt. The results of the three tested models indicate that the service quality construct is multidimensional. This result helps resolve the problems that might arise from the lack of clarity concerning the dimensional structure of service quality, as without testing the dimensional structure of a measure, researchers cannot assume that a significant correlation is a result of factors measuring the same construct.
Keywords: service quality, dimensionality, confirmatory factor analysis, Egypt
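A hedged sketch of comparing a multidimensional (oblique) CFA against a one-factor CFA, using semopy; the factor names, SERVQUAL-style items, and data file are hypothetical placeholders, not the study's instrument.

# Sketch: fit competing CFA models and compare their fit statistics.
import pandas as pd
import semopy

oblique = """
Tangibles      =~ q1 + q2 + q3
Reliability    =~ q4 + q5 + q6
Responsiveness =~ q7 + q8 + q9
"""
one_factor = "ServiceQuality =~ q1 + q2 + q3 + q4 + q5 + q6 + q7 + q8 + q9"

df = pd.read_csv("tourists.csv")  # hypothetical 390-respondent dataset
for desc in (oblique, one_factor):
    m = semopy.Model(desc)
    m.fit(df)
    print(semopy.calc_stats(m))  # chi-square, CFI, RMSEA, etc.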
Procedia PDF Downloads 591
16091 Colored Image Classification Using Quantum Convolutional Neural Networks Approach
Authors: Farina Riaz, Shahab Abdulla, Srinjoy Ganguly, Hajime Suzuki, Ravinesh C. Deo, Susan Hopkins
Abstract:
Recently, quantum machine learning has received significant attention. For various types of data, including text and images, numerous quantum machine learning (QML) models have been created and are being tested. Images are exceedingly complex data components that demand more processing power. Despite being mature, classical machine learning still has difficulties with big data applications. Furthermore, quantum technology has revolutionized how machine learning is thought of, by employing quantum features to address optimization issues. Since quantum hardware is currently extremely noisy, it is not practicable to run machine learning algorithms on it without risking the production of inaccurate results. To discover the advantages of quantum versus classical approaches, this research has concentrated on colored image data. Deep learning classification models are currently being created on quantum platforms, but they are still at a very early stage. Black-and-white benchmark image datasets like MNIST and Fashion-MNIST have been used in recent research. MNIST and CIFAR-10 were compared for binary classification, but the comparison showed that MNIST performed more accurately than colored CIFAR-10. This research will evaluate the performance of the QML algorithm on the colored benchmark dataset CIFAR-10 to advance QML's real-time applicability. However, deep learning classification models such as the Quantum Convolutional Neural Network (QCNN) have not yet been developed for colored images to determine how much better they are than classical ones. Only a few models, such as quantum variational circuits, take colored images. The methodology adopted in this research is a hybrid approach using PennyLane as a simulator. To process the 10 classes of CIFAR-10, the image data were translated into grayscale, and 28 × 28-pixel images comprising 10,000 test and 50,000 training images were used. The objective of this work is to determine how much the quantum approach can outperform a classical approach for a comprehensive dataset of color images. After pre-processing 50,000 images on a classical computer, the QCNN model adopted a hybrid method and encoded the images into a quantum simulator for feature extraction using quantum gate rotations. The measurements were carried out on the classical computer after the rotations were applied. According to the results, we note that the QCNN approach is ~12% more effective than traditional classical CNN approaches, and it is possible that applying data augmentation may increase the accuracy. This study has demonstrated that quantum machine and deep learning models can be relatively superior to classical machine learning approaches in terms of their processing speed and accuracy when used to perform classification on colored image classes.
Keywords: CIFAR-10, quantum convolutional neural networks, quantum deep learning, quantum machine learning
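A minimal sketch of hybrid quantum feature extraction with PennyLane's "default.qubit" simulator, of the kind described above (rotation-gate encoding, then classical measurement). The circuit shape, qubit count, and pixel patch are illustrative assumptions, not the paper's exact architecture.

# Sketch: angle-encode a pixel patch, entangle, and measure expectations.
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quantum_features(pixel_patch):
    # Angle-encode normalized grayscale pixels via RY rotations
    for i in range(n_qubits):
        qml.RY(np.pi * pixel_patch[i], wires=i)
    # Entangle neighboring qubits, loosely mimicking a convolution window
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])
    # Measured expectations serve as the extracted features
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

patch = np.array([0.1, 0.5, 0.9, 0.3])  # hypothetical 2x2 pixel patch
print(quantum_features(patch))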
Procedia PDF Downloads 129
16090 Dynamical Models for Environmental Effect Depuration for Structural Health Monitoring of Bridges
Authors: Francesco Morgan Bono, Simone Cinquemani
Abstract:
This research aims to enhance bridge monitoring by employing innovative techniques that incorporate exogenous factors into the modeling of sensor signals, thereby improving long-term predictability beyond traditional static methods. Using real datasets from two different bridges equipped with Linear Variable Displacement Transducer (LVDT) sensors, the study investigates the fundamental principles governing sensor behavior for more precise long-term forecasts. Additionally, the research evaluates performance on noisy and synthetically damaged data, proposing a residual-based alarm system to detect anomalies in the bridge. In summary, this novel approach combines advanced modeling, exogenous factors, and anomaly detection to extend prediction horizons and improve preemptive damage recognition, significantly advancing structural health monitoring practices.
Keywords: structural health monitoring, dynamic models, SINDy, railway bridges
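A minimal sketch of a residual-based alarm of the kind proposed above: a model's prediction is compared against the LVDT measurement and residuals beyond a threshold raise an alarm. The 3-sigma threshold, the zero-prediction placeholder, and the synthetic drift are illustrative assumptions.

# Sketch: flag samples whose model-measurement residual exceeds 3 sigma.
import numpy as np

rng = np.random.default_rng(3)
measured = rng.normal(0, 0.1, 1000)
measured[700:] += 0.5            # synthetic damage-induced drift
predicted = np.zeros(1000)       # placeholder for the dynamic model output

residual = measured - predicted
sigma = residual[:500].std()     # spread during a known-healthy period
alarms = np.flatnonzero(np.abs(residual) > 3 * sigma)
print("first alarm at sample:", alarms[0] if alarms.size else None)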
Procedia PDF Downloads 38
16089 On the Existence of Homotopic Mapping Between Knowledge Graphs and Graph Embeddings
Authors: Jude K. Safo
Abstract:
Knowledge Graphs (KG) and their relation to Graph Embeddings (GE) represent a unique data structure in the landscape of machine learning (relative to image, text, and acoustic data). Unlike the latter, GEs are the only data structure sufficient for representing the hierarchically dense, semantic information needed for use-cases like supply chain data and protein folding, where the search space exceeds the limits of traditional search methods (e.g., page-rank, Dijkstra, etc.). While GEs are effective for compressing low-rank tensor data, at scale they begin to introduce a new problem of 'data retrieval', which we observe in Large Language Models. Notable attempts such as TransE, TransR, and other prominent industry standards have shown a peak performance just north of 57% on the WN18 and FB15K benchmarks, insufficient for practical industry applications. They are also limited, in scope, to next node/link predictions. Traditional linear methods like Tucker, CP, PARAFAC, and CANDECOMP quickly hit memory limits on tensors exceeding 6.4 million nodes. This paper outlines a topological framework for linear mapping between concepts in KG space and GE space that preserves cardinality. Most importantly, we introduce a traceable framework for composing dense linguistic structures. We demonstrate the performance this model achieves on the WN18 benchmark. This model does not rely on Large Language Models (LLM), though the applications are certainly relevant here as well.
Keywords: representation theory, large language models, graph embeddings, applied algebraic topology, applied knot theory, combinatorics
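For context, the TransE baseline mentioned above scores a triple (h, r, t) by how nearly the relation vector translates the head embedding onto the tail: score(h, r, t) = -||h + r - t||. A minimal sketch, with tiny random embeddings standing in for trained ones:

# Sketch: the TransE scoring function on placeholder embeddings.
import numpy as np

rng = np.random.default_rng(4)
dim = 50
entity = {name: rng.normal(size=dim) for name in ("paris", "france")}
relation = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    return -np.linalg.norm(entity[h] + relation[r] - entity[t])

# Higher (less negative) scores indicate more plausible links.
print(transe_score("paris", "capital_of", "france"))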
Procedia PDF Downloads 68
16088 Cedrela Toona Roxb.: An Exploratory Study Describing Its Antidiabetic Property
Authors: Kinjal H. Shah, Piyush M. Patel
Abstract:
Diabetes mellitus is considered to be a serious endocrine syndrome. Synthetic hypoglycemic agents can produce serious side effects, including hematological effects, coma, and disturbances of the liver and kidney. In addition, they are not suitable for use during pregnancy. In recent years, there have been relatively few reports of short-term side effects or toxicity due to sulphonylureas. Published figures on the frequency of side effects in large series of patients range from about 1 to 5%, with symptoms severe enough to lead to the withdrawal of the drug in less than 1 to 2%. Adverse effects, in general, have been of the following types: allergic skin reactions, gastrointestinal disturbances, blood dyscrasias, hepatic dysfunction, and hypoglycemia. The disadvantages associated with insulin and oral hypoglycemic agents have stimulated the search for natural resources showing antidiabetic activity and the exploration of the possibilities of using traditional medicines with proper chemical and pharmacological profiles. A literature survey reveals that the inhabitants of the Abbottabad district of Pakistan use the dried leaf powder along with table salt and water orally for treating diabetes, skin allergy, and wounds and as a blood purifier, where they pronounce the plant locally as 'Nem.' A detailed phytochemical investigation of Cedrela toona Roxb. leaves for antidiabetic activity has not been documented. Hence, there is a need for phytochemical investigation of the leaves for antidiabetic activity. The work comprised the collection of fresh leaves and authentication, followed by successive extraction, phytochemical screening, and testing of antidiabetic activity. The blood glucose level was reduced the most by the ethanol extract at the 5th and 7th hour after treatment. Blood glucose was depressed by 8.2% and 10.06% in alloxan-induced diabetic rats after treatment, which was comparable to the standard drug, glibenclamide. This may be due to the activation of the existing pancreatic cells in diabetic rats by the ethanolic extract.
Keywords: antidiabetic, Cedrela toona Roxb., phytochemical screening, blood glucose
Procedia PDF Downloads 260
16087 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems, resulting from accelerated erosion due to human activities, are a serious threat to the sustainable management of watersheds and the ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in catchments requires reliable provenance information. Sediment tracing, or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches, and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework to estimate sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central, and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%), and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique to quantify the sources of sediments in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
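A hedged sketch of a GLUE-style unmixing step of the kind described above: candidate source proportions are sampled, scored against the mixture's tracer signature, and only "behavioral" candidates are retained. The tracer values, the simple absolute-error GOF measure, and the 95% cutoff are illustrative assumptions, not the study's tracers or test sets.

# Sketch: GLUE-style Monte Carlo source unmixing with a behavioral cutoff.
import numpy as np

rng = np.random.default_rng(5)
sources = np.array([[12.0, 3.1, 0.8],    # west  (hypothetical tracer means)
                    [9.5, 4.2, 1.1],     # central
                    [15.2, 2.4, 0.5]])   # east
mixture = np.array([13.9, 2.7, 0.6])     # target sediment sample

# Sample proportions uniformly on the simplex via a flat Dirichlet prior.
props = rng.dirichlet(np.ones(3), size=100_000)
pred = props @ sources
gof = 1 - np.abs(pred - mixture).sum(axis=1) / np.abs(mixture).sum()

behavioral = props[gof >= 0.95]          # retain only well-fitting candidates
print("mean contributions (W, C, E):", behavioral.mean(axis=0))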
Procedia PDF Downloads 74
16086 Sunshine Hour as a Factor to Maintain the Circadian Rhythm of Heart Rate: Analysis of Ambulatory ECG and Weather Big Data
Authors: Emi Yuda, Yutaka Yoshida, Junichiro Hayano
Abstract:
A distinct circadian rhythm of activity, i.e., high activity during the day and deep rest at night, is a typical feature of a healthy lifestyle. Exposure to skylight is thought to be an important factor in increasing arousal level and maintaining a normal circadian rhythm. To examine whether sunshine hours influence the day-night contrast of activity, we analyzed the relationship between 24-hour heart rate (HR) and the weather data of the recording day. We analyzed data from 36,500 males and 49,854 females in the Allostatic State Mapping by Ambulatory ECG Repository (ALLSTAR) database in Japan. Median (IQR) sunshine duration was 5.3 (2.8-7.9) hr. While sunshine hours had only a modest effect of increasing 24-hour average HR in either gender (P = 0.0282 and 0.0248 for males and females) and no significant effects on nighttime HR in either gender, they increased daytime HR (P = 0.0007 and 0.0015) and the day-night HF difference in both genders (P < 0.0001 for both), even after adjusting for the effects of average temperature, atmospheric pressure, and humidity. Our observations support the hypothesis that longer sunshine hours enhance the circadian rhythm of activity.
Keywords: big data, circadian rhythm, heart rate, sunshine
Procedia PDF Downloads 165
16085 Microencapsulation for Enhancing the Survival of S. thermophilus and L. bulgaricus during Spray Drying of Sweetened Yoghurt
Authors: Dibyakanta Seth, Hari Niwas Mishra, Sankar Chandra Deka
Abstract:
Microencapsulation is an established method of protecting bacteria from adverse conditions. An improved extrusion spraying technique was used to encapsulate a mixed bacterial culture of S. thermophilus and L. bulgaricus using sodium alginate as the coating material. The effects of nozzle air pressure (200, 300, 400, and 500 kPa), sodium alginate concentration (1%, 1.5%, 2%, 2.5%, and 3% w/v), different concentrations of calcium chloride (0.1, 0.2, 1 M), and initial cell loads (10⁷, 10⁸, 10⁹ cfu/ml) on the viability of the encapsulated bacteria were investigated. With increasing air pressure, the size of the microcapsules decreased; however, the effect was non-significant. There was no significant difference (p > 0.05) in the viability of encapsulated cells when the concentration of calcium chloride was increased. Increased levels of sodium alginate significantly increased the survival ratio of encapsulated bacteria (P < 0.01). Encapsulation with 3% alginate was treated as optimum, since a higher concentration of alginate increased the gel strength of the solution and thus made it difficult to spray. Under optimal conditions (3% alginate, 10⁹ cfu/ml cell load, 20 min hardening time in 0.1 M CaCl2, and 400 kPa nozzle air pressure), the viability of the bacterial cells was maximal compared to the free cells. When microcapsules made under the optimal conditions were mixed with yoghurt and subjected to spray drying at 148°C, the survival ratio was 2.48×10⁻¹ for S. thermophilus and 7.26×10⁻¹ for L. bulgaricus. In contrast, the survival ratios of free cells of S. thermophilus and L. bulgaricus were 2.36×10⁻³ and 8.27×10⁻³, respectively. This study showed a decline in viable cell count of about 0.5 log over a period of 7 weeks, while there was a decline of about 1 log in cultures that were incorporated as free cells in yoghurt. Microencapsulation provided better protection at higher acidity compared to free cells. This study demonstrated that microencapsulation of yoghurt culture in sodium alginate is an effective technique of protection against extreme drying conditions.
Keywords: extrusion, microencapsulation, spray drying, sweetened yoghurt
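As a quick check on the reported figures, the survival ratios (N_after/N_before) above translate into log reductions as follows; the computation below uses the paper's own numbers.

# Log reduction implied by each reported survival ratio.
import math

for label, ratio in [("encapsulated S. thermophilus", 2.48e-1),
                     ("encapsulated L. bulgaricus", 7.26e-1),
                     ("free S. thermophilus", 2.36e-3),
                     ("free L. bulgaricus", 8.27e-3)]:
    print(f"{label}: {-math.log10(ratio):.2f} log reduction")
# Encapsulation gives roughly a 2-log smaller loss than free cells.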
Procedia PDF Downloads 253
16084 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis
Authors: Meng Su
Abstract:
High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis
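A minimal sketch of the diffusion-map half of this pipeline: eigenvectors of the row-normalized Gaussian affinity matrix give the low-dimensional coordinates. The kernel bandwidth (median distance) and synthetic data are illustrative assumptions, not the article's example implementation.

# Sketch: basic diffusion-map embedding from a Gaussian affinity matrix.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 10))                       # high-dimensional points

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
eps = np.median(d2)                                  # heuristic bandwidth
K = np.exp(-d2 / eps)                                # Gaussian kernel
P = K / K.sum(axis=1, keepdims=True)                 # Markov (diffusion) matrix

vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
# Skip the trivial first eigenvector; keep the next two as coordinates,
# scaled by their eigenvalues (one diffusion step).
embedding = vecs.real[:, order[1:3]] * vals.real[order[1:3]]
print(embedding.shape)  # (300, 2)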
Procedia PDF Downloads 108
16083 DUSP16 Inhibition Rescues Neurogenic and Cognitive Deficits in Alzheimer's Disease Mouse Models
Authors: Huimin Zhao, Xiaoquan Liu, Haochen Liu
Abstract:
The major challenge facing Alzheimer's Disease (AD) drug development is how to effectively improve cognitive function in clinical practice. Growing evidence indicates that stimulating hippocampal neurogenesis is a strategy for restoring cognition in animal models of AD. The mitogen-activated protein kinase (MAPK) pathway is a crucial factor in neurogenesis, and it is negatively regulated by dual-specificity phosphatase 16 (DUSP16). Transcriptome analysis of post-mortem brain tissue revealed up-regulation of DUSP16 expression in AD patients. Additionally, DUSP16 was involved in regulating the proliferation and neural differentiation of neural progenitor cells (NPCs). Nevertheless, whether DUSP16 ameliorates cognitive disorders by influencing NPC differentiation in AD mice remains unclear. Our study demonstrates an association between DUSP16 SNPs and clinical progression in individuals with mild cognitive impairment (MCI). Besides, we found that increased DUSP16 expression in both the 3×Tg and SAMP8 models of AD led to NPC differentiation impairments. By silencing DUSP16, cognitive benefits, the induction of adult hippocampal neurogenesis (AHN), and synaptic plasticity were observed in AD mice. Furthermore, we found that DUSP16 is involved in the process of NPC differentiation by regulating c-Jun N-terminal kinase (JNK) phosphorylation. Moreover, the increased DUSP16 may be regulated by the ETS transcription factor ELK1, which binds to the promoter region of DUSP16. Loss of ELK1 resulted in decreased DUSP16 mRNA and protein levels. Our data uncover a potential regulatory role for DUSP16 in adult hippocampal neurogenesis and provide a possible target for AD intervention.
Keywords: Alzheimer's disease, cognitive function, DUSP16, hippocampal neurogenesis
Procedia PDF Downloads 72
16082 The Link Between Collaboration Interactions and Team Creativity Among Nursing Student Teams in Taiwan: A Moderated Mediation Model
Authors: Hsing Yuan Liu
Abstract:
Background: Considerable theoretical and empirical work has identified a relationship between collaboration interactions and creativity in organizational contexts. The mechanisms underlying this link, however, are not well understood in healthcare education. Objectives: The aims of this study were to explore the impact of collaboration interactions on team creativity and its underlying mechanism and to verify a moderated mediation model. Design, setting, and participants: This study utilized a cross-sectional, quantitative, descriptive design. The survey data were collected from 177 nursing students who enrolled in 18-week capstone courses of small interdisciplinary groups collaborating to design healthcare products in Taiwan during 2018 and 2019. Methods: Questionnaires assessed the nursing students' perceptions of their teams' swift trust (cognition- and affect-based), conflicts (task, process, and relationship), interaction behaviors (constructive controversy, helping behaviors, and spontaneous communication), and creativity. This study used descriptive statistics to compare demographics, swift trust scores, conflict scores, interaction behavior scores, and creativity scores for the interdisciplinary teams. Data were analyzed using Pearson's correlation coefficients and simple and hierarchical multiple regression models. Results: Pearson's correlation analysis showed that cognition-based team swift trust was positively correlated with team creativity. The mediation model indicated that constructive controversy fully mediated the effect of cognition-based team swift trust on student teams' creativity. The moderated mediation model indicated that task conflict negatively moderates the mediating effect of constructive controversy on the link between cognition-based team swift trust and team creativity. Conclusion: Our findings suggest that nursing student teams' interaction behaviors and task conflict are crucial mediating and moderated mediation variables, respectively, on the relationship between collaboration interactions and team creativity. The empirical data confirm the validity of our proposed moderated mediation models of team creativity. Therefore, this study's validated moderated mediation model could provide guidance for nursing educators to improve collaboration interaction outcomes and creativity in nursing student teams.
Keywords: team swift trust, team conflict, team interaction behavior, moderated mediating effects, interdisciplinary education, nursing students
Procedia PDF Downloads 187
16081 In Vivo Maltase and Sucrase Inhibitory Activities of Five Underutilized Nigerian Edible Fruits
Authors: Mohammed Auwal Ibrahim, Isa Yunusa, Nafisa Kabir, Shazali Ali Baba, Amina Muhammad Yushau, Suraj Suraj Ibrahim, Zaharaddeen Idris Bello, Suleiman Haruna Suleiman, Murtala Bindawa Isah
Abstract:
Background: Inhibition of intestinal maltase and sucrase prevents postprandial blood glucose excursions, which is beneficial in ameliorating diabetes-associated complications. Objective: In this study, the inhibitory effects of fruit extracts of Parinari macrophylla, Detarium microcarpum, Ziziphus spina-christi, Z. mairei, and Parkia biglobosa were investigated against intestinal maltase and sucrase. Methods: Rats were co-administered the fruit extracts with maltose or sucrose, and blood glucose levels were measured at 0, 30, 90, and 120 min. Results: The glucose-time curves indicated that all the fruits had their most potent inhibitory effects on both maltase and sucrase within the first 30 min. The computed areas under the curve (AUC0-120) for all the fruits indicated more potent inhibitory effects against intestinal maltase than sucrase. The ED50 ranges for the fruit extracts against maltase and sucrase were 647.15-1118.35 and 942.44-1851.94 mg/kg bw, respectively. Conclusion: The data suggest that the fruits could prevent postprandial hyperglycemia via inhibition of intestinal maltase and sucrase.
Keywords: diabetes mellitus, fruits, α-glucosidases, maltase, sucrase
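A minimal sketch of the AUC0-120 computation used above, via the trapezoidal rule over the four measurement times; the glucose readings are hypothetical placeholders, not the study's data.

# Sketch: blood-glucose AUC(0-120 min) by the trapezoidal rule.
time = [0, 30, 90, 120]          # measurement times (min)
glucose = [5.2, 9.1, 7.4, 6.0]   # hypothetical readings (mmol/L)

auc = sum((glucose[i] + glucose[i + 1]) / 2 * (time[i + 1] - time[i])
          for i in range(len(time) - 1))
print("AUC(0-120):", auc)        # units: mmol/L * min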
Procedia PDF Downloads 385
16080 Stabilization of γ-Sterilized Food-Packaging Materials by Synergistic Mixtures of Food-Contact-Approved Stabilizers
Authors: Sameh A. S. Thabit Alariqi
Abstract:
Food is widely packaged with plastic materials to prevent microbial contamination and spoilage. Ionizing radiation is widely used to sterilize food-packaging materials. Sterilization by γ-radiation causes degradation of plastic packaging materials, such as embrittlement, stiffening, softening, discoloration, odour generation, and a decrease in molecular weight. Many antioxidants can prevent γ-degradation, but most of them are toxic. The migration of antioxidants into the surrounding environment gives rise to major concerns in the case of food-packaging plastics. In this work, we have aimed to utilize synergistic mixtures of stabilizers that are approved for food-contact applications. Ethylene-propylene-diene terpolymer (EPDM) was melt-mixed with hindered amine stabilizers (HAS), phenolic antioxidants, and organo-phosphites (hydroperoxide decomposers). Results were discussed by comparing the stabilizing efficiency of mixtures with and without the phenol system. Among the phenol-containing systems, where we mostly observed discoloration due to the oxidation of the hindered phenol, the combination of secondary HAS, tertiary HAS, organo-phosphite, and hindered phenol exhibited improved stabilization efficiency over single- or binary-additive systems. The mixture of secondary HAS and tertiary HAS showed an antagonistic effect on stabilization. However, the combination of organo-phosphite with secondary HAS, tertiary HAS, and phenol antioxidants was found to be synergistic even at higher doses of γ-sterilization. The effects have been explained through the interaction between the stabilizers. After γ-irradiation, the consumption of the oligomeric stabilizer significantly depends on the components of the stabilization mixture. The effect of the organo-phosphite antioxidant on the overall stability has been discussed.
Keywords: ethylene-propylene-diene terpolymer, synergistic mixtures, gamma sterilization, gamma stabilization
Procedia PDF Downloads 440
16079 Static and Dynamic Behaviors of Sandwich Structures With Metallic Connections
Authors: Shidokht Rashiddadash, Mojtaba Sadighi, Soheil Dariushi
Abstract:
Since sandwich structures are used in many areas, ranging from ships, trains, automobiles, and aircraft to bridges and buildings, connecting sandwich structures is necessary in almost all industries. Accordingly, the application of metallic joints between sandwich panels is increasing. Various joining methods are available, such as mechanically fastened joints (riveting or bolting) or adhesively bonded joints, and choosing one of them depends on the application. In this research, sandwich specimens were fabricated with two different types of metallic connections with dissimilar geometries. These specimens included beams and plates and were manufactured using glass-epoxy skins and an aluminum honeycomb core. After construction of the specimens, bending and low-velocity impact tests were executed on them, and the behaviors of the specimens were discussed. Numerical models were developed using LS-DYNA software and validated with test results. Finally, parametric studies were performed on the thicknesses and lengths of the two connections by employing the numerical models.
Keywords: connection, honeycomb, low velocity impact, sandwich panel, static test
Procedia PDF Downloads 56
16078 Calculating All Dark Energy and Dark Matter Effects through Dynamic Gravity Theory
Authors: Sean Michael Kinney
Abstract:
In 1666, Newton created the Law of Universal Gravitation. And in 1915, Einstein improved it to incorporate factors such as time dilation and gravitational lensing. But currently, there is a problem with this "universal" law: the math doesn't work outside the confines of our solar system. And something is missing: any evidence of what gravity actually is and how it manifests. This paper explores the notion that gravity must obey the law of conservation of energy, as all other forces in this universe have been shown to do. It explains exactly what gravity is and how it manifests itself, and it examines many different implications that would follow. Finally, the math of Dynamic Gravity is used to calculate Dark Energy and Dark Matter effects, explaining all observations without the need for exotic measures.
Keywords: dynamic gravity, gravity, dark matter, dark energy
Procedia PDF Downloads 78
16077 Grain Refinement of Al-7Si-0.4Mg Alloy by Combination of Al-Ti-B and Mg-Al2Ca Master Alloys and Their Effects on Tensile Property
Authors: Young-Ok Yoon, Su-Yeon Lee, Seong-Ho Ha, Gil-Yong Yeom, Bong-Hwan Kim, Hyun-Kyu Lim, Shae K. Kim
Abstract:
Al-7Si-0.4Mg alloy (designated A356) is widely used in the automotive and aerospace industries for structural components due to its excellent combination of castability and mechanical properties. Grain refinement has a significant effect on the mechanical properties of castings, mainly because the distribution of the secondary phase is changed. As grain refiners, Al-Ti-B master alloys containing TiAl3 and TiB2 particles have been widely used in Al foundries. Mg loss and Mg-based inclusion formation, caused by the strong affinity of Mg to oxygen during the melting of Mg-containing alloys, have been an issue. This can be significantly improved simply by using an Mg+Al2Ca master alloy as the alloying element instead of pure Mg. Moreover, eutectic Si modification and grain refinement are simultaneously obtained, because Al2Ca behaves as Ca, a typical Si modifier. The present study is focused on the combined effects of the Mg+Al2Ca and Al-Ti-B master alloys on the grain refinement of Al-7Si-0.4Mg alloy and their proper ratio for the optimum effect. The aim of this study, therefore, is to investigate the change in the microstructure of Al-7Si-0.4Mg alloy with different ratios of Ti and Al2Ca (detected Ca content) and their effects on the tensile properties. The distribution and morphology of the secondary phases resulting from the grain refinement will be discussed.
Keywords: Al-7Si-0.4Mg alloy, Al2Ca, Al-Ti-B alloy, grain refinement
Procedia PDF Downloads 435