Search results for: mixture regression model
19178 Using Historical Data for Stock Prediction
Authors: Sofia Stoica
Abstract:
In this paper, we use historical data to predict the stock price of a tech company. To this end, we use a dataset consisting of the stock prices over the past five years of ten major tech companies: Adobe, Amazon, Apple, Facebook, Google, Microsoft, Netflix, Oracle, Salesforce, and Tesla. We experimented with a variety of models (a linear regression model, K-Nearest Neighbors (KNN), and a sequential neural network) and algorithms (Multiplicative Weight Update and AdaBoost). We found that the sequential neural network performed the best, with a testing error of 0.18%. Interestingly, the linear model performed the second best, with a testing error of 0.73%. These results show that using historical data is enough to obtain high accuracies, and that a simple algorithm like linear regression achieves performance similar to more sophisticated models while taking less time and fewer resources to implement.
Keywords: finance, machine learning, opening price, stock market
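As a hedged illustration of the simplest approach in this comparison (not the authors' code), the sketch below fits a linear regressor on five lagged closing prices; the synthetic price series and the five-day lag window are illustrative assumptions.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.normal(0, 1, 1260).cumsum())   # ~5 years of daily closes (synthetic)

X = pd.concat([prices.shift(k) for k in range(1, 6)], axis=1).dropna()
X.columns = [f"lag_{k}" for k in range(1, 6)]               # previous five closes as features
y = prices.loc[X.index]                                     # target: same-day close

split = int(0.8 * len(X))                                   # chronological train/test split
model = LinearRegression().fit(X.iloc[:split], y.iloc[:split])
err = mean_absolute_percentage_error(y.iloc[split:], model.predict(X.iloc[split:]))
print(f"testing error: {err:.2%}")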
Procedia PDF Downloads 187
19177 Representativity Based Wasserstein Active Regression
Authors: Benjamin Bobbia, Matthias Picard
Abstract:
In recent years, active learning methodologies based on the representativity of the data have seemed more promising for limiting overfitting. The presented query methodology for regression uses the Wasserstein distance to measure the representativity of the labelled dataset compared to the global distribution. In this work, crucial use is made of GroupSort neural networks, which offer a double advantage: the Wasserstein distance can be exactly expressed in terms of such networks, and one can provide explicit bounds for their size and depth together with rates of convergence. Moreover, heterogeneity of the dataset is taken into account by weighting the Wasserstein distance with the approximation error from the previous step of active learning. Such an approach leads to a reduction of overfitting and high prediction performance after a few query steps. After detailing the methodology and algorithm, an empirical study is presented in order to investigate the range of our hyperparameters. The performance of this method is compared, in terms of the number of queries needed, with other classical and recent query methods on several UCI datasets.
Keywords: active learning, Lipschitz regularization, neural networks, optimal transport, regression
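A minimal sketch of representativity-based querying, assuming a one-dimensional pool so that SciPy's empirical Wasserstein distance can stand in for the GroupSort-network estimator used in the paper; all variable names are illustrative.

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
pool = rng.normal(size=500)            # unlabelled pool, proxy for the global distribution
labelled = list(pool[:5])              # small initial labelled set

def next_query(labelled, pool):
    # pick the candidate whose addition makes the labelled set most representative
    scores = [wasserstein_distance(labelled + [x], pool) for x in pool]
    return int(np.argmin(scores))

print("next query index:", next_query(labelled, pool))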
Procedia PDF Downloads 80
19176 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis
Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin
Abstract:
Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, including naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; generally, most of the methods performed better after applying the filter. We have shown that using fuzzy c-means clustering as a classifier performs well according to the ROC curves, and its performance is comparable to other classification methods like logistic regression.
Keywords: classification, fuzzy c-means, logistic regression, naive Bayesian, neural network, ROC curve
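A compact sketch of the fuzzy c-means-as-classifier idea on synthetic unbalanced data, with cluster memberships used as scores for a ROC comparison; this is a generic FCM implementation, not the study's code.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score

def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
    # standard FCM alternating updates of memberships U and centroids V
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))             # n x c memberships
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]             # c x d centroids
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        U = 1.0 / (d ** p * (1.0 / d ** p).sum(axis=1, keepdims=True))
    return U, V

X, y = make_classification(n_samples=300, weights=[0.8], random_state=0)  # unbalanced toy data
U, _ = fuzzy_cmeans(X)
score = U[:, 0] if roc_auc_score(y, U[:, 0]) >= 0.5 else U[:, 1]          # align cluster with class
print("ROC AUC:", roc_auc_score(y, score))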
Procedia PDF Downloads 335
19175 Linear Regression Estimation of Tactile Comfort for Denim Fabrics Based on In-Plane Shear Behavior
Authors: Nazli Uren, Ayse Okur
Abstract:
Tactile comfort of a textile product is an essential property and a major concern when it comes to customer perceptions and preferences. The subjective nature of comfort and the difficulties regarding the simulation of human hand sensory feelings make it hard to establish a well-accepted link between tactile comfort and objective evaluations. On the other hand, the shear behavior of a fabric is a mechanical parameter which can be measured by various objective test methods. The principal aim of this study is to determine the tactile comfort of commercially available denim fabrics by subjective measurements, create a tactile score database for denim fabrics, and investigate the relations between tactile comfort and shear behavior. In-plane shear behaviors of 17 different commercially available denim fabrics with a variety of raw materials and weave structures were measured by a custom-designed shear frame and the conventional bias extension method in the two corresponding diagonal directions. Tactile comfort of the denim fabrics was determined via subjective customer evaluations as well. The aforesaid relations were statistically investigated and introduced as regression equations. The analyses of the relations between tactile comfort and shear behavior showed considerably high correlation coefficients. The suggested regression equations were likewise found to be statistically significant. Accordingly, it was concluded that the tactile comfort of denim fabrics can be estimated with high precision, based on the results of in-plane shear behavior measurements.
Keywords: denim fabrics, in-plane shear behavior, linear regression estimation, tactile comfort
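A hedged sketch of the kind of linear regression reported above, using invented shear and comfort values rather than the study's 17-fabric dataset.

import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
shear = rng.uniform(0.5, 3.0, 17)                        # shear stiffness of 17 fabrics (arbitrary units)
comfort = 8.0 - 1.8 * shear + rng.normal(0, 0.4, 17)     # synthetic subjective comfort scores

fit = linregress(shear, comfort)                         # slope, intercept, r, p in one call
print(f"comfort = {fit.intercept:.2f} + {fit.slope:.2f} * shear, "
      f"r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")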
Procedia PDF Downloads 300
19174 Impact of Aging on Fatigue Performance of Novel Hybrid HMA
Authors: Faizan Asghar, Mohammad Jamal Khattak
Abstract:
Aging, in general, refers to changes in the rheological characteristics of an asphalt mixture due to changes in chemical composition over the course of construction and the service life of the pavement. The main goal of this study was to investigate the impact of oxidation on the fatigue characteristics of a novel HMA composite fabricated with a combination of crumb rubber (CRM) and polyvinyl alcohol (PVA) fiber, subject to aging of 7 and 14 days. A flexural beam fatigue test was performed to evaluate several characteristics of control, CRM-modified, PVA-reinforced, and novel rubber-fiber HMA composites. Experimental results revealed that aging had a significant impact on the fatigue performance of the novel HMA composite. It was found that a suitable proportion of CRM and PVA radically affected the performance of the novel rubber-fiber HMA in resistance to fracture and fatigue cracking when subjected to long-term aging. The developed novel HMA composite containing 2% CRM and 0.2% PVA presented around 29 times higher resistance to fatigue cracking after 7 days of aging. To develop a cumulative plastic deformation level of 250 microstrains, such a mixture required over 50 times more cycles than the control HMA. Moreover, the crack propagation rate was reduced by over 90%, with over 12 times more energy required to propagate a unit crack length in such a mixture compared to conventional HMA. Further, digital image correlation analyses revealed a more twisted and convoluted fracture path and higher strain distribution in the rubber-fiber HMA composite. The fatigue performance after long-term aging of such a novel HMA composite explicitly validates its ability to withstand load repetition, which could lead to an extension of the service life of pavement infrastructure and reduce taxpayers’ dollars spent.
Keywords: crumb rubber, PVA fibers, dry process, aging, performance testing, fatigue life
Procedia PDF Downloads 64
19173 Comparison of Prognostic Models in Different Scenarios of Shoreline Position on Ponta Negra Beach in Northeastern Brazil
Authors: Débora V. Busman, Venerando E. Amaro, Mattheus da C. Prudêncio
Abstract:
Prognostic studies of the shoreline are of utmost importance for Ponta Negra Beach, located in Natal, Northeastern Brazil, where the infrastructure recently built along the shoreline is severely affected by flooding and erosion. This study compares shoreline predictions using three linear regression methods (LMS, LRR, and WLR) and tries to discern the best method for different shoreline position scenarios. The methods have shown erosion on the beach in each of the scenarios tested, even under less intense dynamic conditions. The WLR model with a confidence interval of 95% was the best-adjusted model and calculated a retreat of -1.25 m/yr to -2.0 m/yr in hot spot areas. The change of the shoreline on Ponta Negra Beach can be described by a negative exponential curve. Analysis of these methods has shown a correlation with the morphodynamic stage of the beach.
Keywords: coastal erosion, prognostic model, DSAS, environmental safety
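A brief sketch of the weighted linear regression (WLR) rate computation used in DSAS-style shoreline analyses, with invented transect data; each shoreline position is weighted by the inverse square of its positional uncertainty.

import numpy as np
import statsmodels.api as sm

years = np.array([1988, 1994, 2001, 2009, 2016])
position = np.array([52.0, 48.5, 44.1, 40.3, 35.9])   # cross-shore distance (m), synthetic
uncert = np.array([7.5, 5.0, 3.0, 1.5, 1.0])          # digitizing uncertainty (m), synthetic

X = sm.add_constant(years)
wls = sm.WLS(position, X, weights=1.0 / uncert**2).fit()
print(f"shoreline change rate: {wls.params[1]:.2f} m/yr")  # negative slope indicates erosion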
Procedia PDF Downloads 333
19172 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as for the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal a similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to be able to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is being attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference on one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected.
Keywords: model selection inference, generalized information criteria, post model selection, asymptotic theory
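A numerical sketch of the quantile computation described above, done in SciPy rather than the cited R package mvtnorm: given toy asymptotic means and a covariance for the candidate GICs, the CDF of their minimum is an orthant probability, and the upper quantile follows by root finding. The means and covariance are invented placeholders.

import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import brentq

mu = np.array([1.0, 1.2, 1.5])                      # toy asymptotic GIC means
Sigma = 0.3 * np.eye(3) + 0.1                       # toy covariance (0.4 diag, 0.1 off-diag)

def cdf_min(t):
    # P(min_i GIC_i <= t) = 1 - P(GIC_i > t for all i), an orthant probability
    surv = multivariate_normal(mean=-mu, cov=Sigma).cdf(-t * np.ones(3))
    return 1.0 - surv

q95 = brentq(lambda t: cdf_min(t) - 0.95, -5, 5)    # 95% upper quantile of the minimum
print(f"95% quantile of min GIC: {q95:.3f}")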
Procedia PDF Downloads 86
19171 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities
Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto
Abstract:
The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify factors impacting the market share of a healthcare provider facility or a hospital (from here on termed a facility) is of key importance. This pilot study aims at developing a data-driven, machine-learning regression framework which aids strategists in formulating key decisions to improve the facility’s market share, which in turn improves the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and the data span 60 key facilities in Washington State and about 3 years of history. In the current analysis, market share is defined as the ratio of the facility’s encounters to the total encounters among the group of potential competitor facilities. The current study proposes a two-pronged approach: competitor identification, and a regression approach to evaluate and predict market share. A model-agnostic technique, SHAP, is leveraged to quantify the relative importance of features impacting the market share. Typical techniques in the literature for quantifying the degree of competitiveness among facilities use an empirical method to calculate a competitive factor that captures the severity of competition. The proposed method instead identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias from empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (e.g., quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among the facilities are identified. Leveraging the identified competitors, a Random Forest regression model is developed and fine-tuned to predict the market share. To identify key drivers of market share at an overall level, permutation feature importance of the attributes is calculated. For relative quantification of features at the facility level, SHAP (SHapley Additive exPlanations), a model-agnostic explainer, is incorporated. This helps to identify and rank the attributes at each facility which impact the market share. This approach proposes an amalgamation of two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce bias. With these, we helped to drive strategic business decisions.
Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP
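A minimal sketch of the regression half of this pipeline, assuming the shap package is available: a random forest predicts market share and SHAP ranks facility-level drivers. The feature names and data are invented placeholders, not the study's dataset.

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({"beds": rng.integers(50, 500, 200),
                  "avg_wait_days": rng.uniform(1, 30, 200),
                  "competitor_count": rng.integers(1, 12, 200)})
y = 0.4 - 0.01 * X["competitor_count"] + 0.0002 * X["beds"] + rng.normal(0, 0.02, 200)

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)                 # per-facility attribution of the prediction
shap_values = explainer.shap_values(X)
ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(ranking.sort_values(ascending=False))           # mean |SHAP| as a global driver ranking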
Procedia PDF Downloads 89
19170 Analyzing Preservice Teachers’ Attitudes toward Technology
Authors: Ahmet Oguz Akturk, Kemal Izci, Gurbuz Caliskan, Ismail Sahin
Abstract:
Rapid developments in technology necessitate that societies closely follow technological developments and change themselves to adopt those developments. It is obvious that one of the areas impacted by technological developments is education. Analyzing preservice teachers’ attitudes toward technology is crucial for both educational and professional purposes, since teacher candidates are essential for educating future individuals living in a technological age. In this study, we aim to analyze preservice teachers’ attitudes toward technology and some variables (e.g., gender, daily internet usage, and possessed technological devices) predicting those attitudes. A relational survey model was used as the research method, and 329 preservice teachers studying at a large university located in the central part of Turkey voluntarily participated. Results of the study showed that preservice teachers mostly displayed positive attitudes toward technology, while male preservice teachers’ attitudes toward technology were more positive than those of female preservice teachers. In order to analyze the factors predicting preservice teachers’ attitudes toward technology, stepwise multiple regression was utilized. The results of the stepwise multiple regression showed that daily internet use was the strongest factor predicting preservice teachers’ attitudes toward technology.
Keywords: attitudes toward technology, preservice teachers, gender, stepwise multiple regression analysis
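A compact forward-stepwise selection sketch in the spirit of the analysis above, with synthetic predictors standing in for the survey variables; at each step the predictor with the smallest p-value below 0.05 enters the model.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
df = pd.DataFrame({"daily_internet_use": rng.uniform(0, 8, 329),
                   "gender": rng.integers(0, 2, 329),
                   "n_devices": rng.integers(1, 6, 329)})
df["attitude"] = 2.0 + 0.35 * df["daily_internet_use"] + rng.normal(0, 1, 329)

selected, remaining = [], ["daily_internet_use", "gender", "n_devices"]
while remaining:
    pvals = {v: sm.OLS(df["attitude"],
                       sm.add_constant(df[selected + [v]])).fit().pvalues[v]
             for v in remaining}
    best = min(pvals, key=pvals.get)
    if pvals[best] >= 0.05:
        break
    selected.append(best)
    remaining.remove(best)
print("stepwise-selected predictors:", selected)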
Procedia PDF Downloads 290
19169 N-Heptane as Model Molecule for Cracking Catalyst Evaluation to Improve the Yield of Ethylene and Propylene
Authors: Tony K. Joseph, Balasubramanian Vathilingam, Stephane Morin
Abstract:
Currently, refiners around the world are focused on improving the yield of light olefins (propylene and ethylene), as both are very prominent raw materials for producing a wide spectrum of polymeric materials such as polyethylene and polypropylene. Hence, it is desirable to increase the yield of light olefins via selective cracking of heavy oil fractions. In this study, zeolite grown on SiC was used as the catalyst for the model cracking reaction of n-heptane. The catalytic cracking of n-heptane was performed in a fixed-bed reactor (12 mm i.d.) at three different temperatures (425, 450 and 475 °C) and at atmospheric pressure. A carrier gas (N₂) was mixed with n-heptane at a ratio of 90:10 (N₂:n-heptane), and the gaseous mixture was introduced into the fixed-bed reactor. Various flow rates of the reactants were tested to increase the yield of ethylene and propylene. For comparison purposes, a commercial zeolite was also tested in addition to the zeolite on SiC. The products were analyzed using an Agilent gas chromatograph (GC-9860) equipped with a flame ionization detector (FID). The GC is connected online with the reactor, and all the cracking tests were successfully reproduced. The entire set of catalytic evaluation results will be presented during the conference.
Keywords: cracking, catalyst, evaluation, ethylene, heptane, propylene
Procedia PDF Downloads 135
19168 Artificial Membrane Comparison for Skin Permeation in Skin PAMPA
Authors: Aurea C. L. Lacerda, Paulo R. H. Moreno, Bruna M. P. Vianna, Cristina H. R. Serra, Airton Martin, André R. Baby, Vladi O. Consiglieri, Telma M. Kaneko
Abstract:
The modified Franz cell is the most widely used model for in vitro permeation studies; however, it still presents some disadvantages. Thus, alternative methods have been developed, such as Skin PAMPA, a bio-artificial membrane that has been applied for estimating the skin penetration of xenobiotics, based on a high-throughput (HT) permeability model. Skin PAMPA's greatest advantage is that it allows more tests to be carried out, in a fast and inexpensive way. The membrane system mimics the characteristics of the stratum corneum, which is the primary skin barrier. The barrier properties are given by corneocytes embedded in a multilamellar lipid matrix. This layer is the main penetration route through the paracellular permeation pathway, and it consists of a mixture of cholesterol, ceramides, and fatty acids as the dominant components. However, there is no consensus on the membrane composition. The objective of this work was to compare the performance of different bio-artificial membranes for studying permeation in the Skin PAMPA system. Material and methods: In order to mimic the lipid composition present in the human stratum corneum, six membranes were developed. The membrane composition was an equimolar mixture of cholesterol, ceramides 1-O-C18:1, C22, and C20, plus fatty acids C20 and C24. The membrane integrity assay was based on the transport of Brilliant Cresyl Blue, which has a low permeability, and Lucifer Yellow, which has very poor permeability and should effectively be completely rejected. The membrane characterization was performed using confocal laser Raman spectroscopy, with a laser stabilized at 785 nm, a 10-second integration time, and 2 accumulations. The membrane behaviour results in the PAMPA system were statistically evaluated, and all of the compositions showed integrity and permeability. The confocal Raman spectra, obtained in the region of 800-1200 cm⁻¹, which is associated with the C-C stretches of the carbon scaffold of the stratum corneum lipids, showed a similar pattern for all the membranes. The ceramides, long-chain fatty acids, and cholesterol in equimolar ratio made it possible to obtain lipid mixtures with self-organization capability, similar to that occurring in the stratum corneum. Conclusion: The artificial biological membranes studied for Skin PAMPA were shown to be similar and to have properties comparable to the stratum corneum.
Keywords: bio-artificial membranes, comparison, confocal Raman, skin PAMPA
Procedia PDF Downloads 506
19167 Computer Simulation of Hydrogen Superfluidity through Binary Mixing
Authors: Sea Hoon Lim
Abstract:
A superfluid is a fluid of bosons that flows without resistance. In order to be a superfluid, a substance’s particles must behave like bosons yet remain mobile enough to flow. Bosons are particles that can occupy the same energy state at the same time. When bosons are cooled down, the particles all tend to occupy the lowest energy state, which is called Bose-Einstein condensation. Boson statistics start to matter once the temperature reaches the critical temperature. For example, when helium reaches its critical temperature of 2.17 K, the liquid density drops and it becomes a superfluid with zero viscosity. However, most materials solidify, and thus do not remain fluids, at temperatures well above the temperature at which they would otherwise become a superfluid. Only a few substances currently known are capable of at once remaining a fluid and manifesting boson statistics. The most well-known of these is helium and its isotopes. Because hydrogen is lighter than helium, and thus expected to manifest Bose statistics at higher temperatures than helium, one might expect hydrogen to also be a superfluid. As of today, however, no one has been able to produce a bulk hydrogen superfluid. The reason why hydrogen has not formed a superfluid is its intermolecular interactions. As a result, hydrogen molecules are much more likely to crystallize than their helium counterparts. The key to creating a hydrogen superfluid is therefore finding a way to reduce the effect of the interactions among hydrogen molecules, postponing the solidification to lower temperatures. In this work, we attempt via computer simulation to produce bulk superfluid hydrogen through binary mixing. Binary mixing is a technique of mixing two pure substances in order to avoid crystallization and enhance superfluidity. Our mixture here is KALJ H2. We then sample the partition function using Path Integral Monte Carlo (PIMC), which is well suited for the equilibrium properties of low-temperature bosons and captures not only the statistics but also the dynamics of hydrogen. Via this sampling, we then produce a time evolution of the substance and see if it exhibits superfluid properties.
Keywords: superfluidity, hydrogen, binary mixture, physics
Procedia PDF Downloads 316
19166 Principal Component Regression in Amylose Content on the Malaysian Market Rice Grains Using Near Infrared Reflectance Spectroscopy
Authors: Syahira Ibrahim, Herlina Abdul Rahim
Abstract:
The amylose content is an essential element in determining the texture and taste of rice grains. This paper evaluates the use of VIS-SWNIRS in estimating the amylose content of seven varieties of rice grains available in the Malaysian market. Each variety consists of 30 samples, and all the samples are scanned using the spectroscope over the wavelength range of 680-1000 nm. The Savitzky-Golay (SG) smoothing filter is applied to each sample’s data before the Principal Component Regression (PCR) technique is used to examine the data and produce a single value for each sample. This value is then compared with reference values obtained from the standard iodine colorimetric test in terms of its coefficient of determination, R². Results show that this technique produced low R² values of less than 0.50. In order to improve the result, the range should include wavelengths of 1100-2500 nm, and the number of samples processed should also be increased.
Keywords: amylose content, diffuse reflectance, Malaysian rice grain, principal component regression (PCR), visible and shortwave near-infrared spectroscopy (VIS-SWNIRS)
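A hedged sketch of the described pipeline on synthetic spectra: Savitzky-Golay smoothing followed by principal component regression, scored with cross-validated R². The matrix shape (7 varieties x 30 samples, one column per nanometre over 680-1000 nm) and the SG window are assumptions.

import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
spectra = rng.normal(size=(210, 321))            # 210 samples x 321 wavelengths (synthetic)
amylose = rng.uniform(10, 30, 210)               # iodine colorimetric reference values (%)

smoothed = savgol_filter(spectra, window_length=11, polyorder=2, axis=1)
pcr = make_pipeline(PCA(n_components=10), LinearRegression())
r2 = cross_val_score(pcr, smoothed, amylose, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")          # near or below 0 here, since the data are random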
Procedia PDF Downloads 380
19165 A First-Principles Molecular Dynamics Study on Li+ Solvation Structures in THF/MTHF-Containing Electrolytes for Lithium Metal Batteries
Authors: Chiu-Neng Su, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
In lithium-ion batteries (LIBs), the solid-electrolyte interphase (SEI) layer, which forms on the anode surface, plays a crucial role in stabilizing battery performance. Over the past two decades, efforts to enhance LIB electrolytes have primarily focused on refining the quality of SEI components. Despite these endeavors, several observed phenomena of the SEI layer remain inadequately understood. Consequently, there has been a significant surge in research interest regarding the behavior of electrolyte solvation structures to elucidate improvements in battery performance. Thus, in this study, we aimed to explore the solvation structures of LiPF₆ in a mixture of the organic solvents tetrahydrofuran (THF) and 2-methyl-tetrahydrofuran (MTHF) using ab-initio molecular dynamics (AIMD) simulations. Our work investigated the solvation structure of electrolytes with different salt concentrations: a low-concentration electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF) and a high-concentration electrolyte (2.0 M LiPF₆ in a 1:1 v/v mixture of THF and MTHF), and compared them with a conventional electrolyte (1.0 M LiPF₆ in a 1:1 v/v mixture of ethylene carbonate (EC) and dimethyl carbonate (DMC)). Furthermore, the reduction stability of the Li+ solvation structures in these electrolyte systems is investigated. It is found that the first solvation shell of Li+ primarily consists of THF. We also analyzed the molecular orbital energy levels to understand the reduction stability of these solvents. Compared with the solvation sheath of the commercial electrolyte, the THF/MTHF-containing electrolytes have a higher lowest unoccupied molecular orbital (LUMO) energy level, resulting in improved reduction and interface stability. It has been shown that a Li-Al alloy can significantly improve cycle life and promote the formation of a dense SEI layer. Therefore, this study also aims to construct the solvation structures obtained from calculations of the pure electrolyte system on the surface of the Al-Li alloy. Additionally, AIMD simulations will be conducted to investigate chemical reactions at the interface, in order to elucidate the composition of the SEI layer formed. Furthermore, Bader charges are used to determine the origin and flow of electrons, thereby revealing the sequence of the reduction reactions generating the SEI layers.
Keywords: lithium, aluminum, alloy, battery, solvation structure
Procedia PDF Downloads 21
19164 Evaluation of the Environmental Risk from the Co-Deposition of Waste Rock Material and Fly Ash
Authors: A. Mavrikos, N. Petsas, E. Kaltsi, D. Kaliampakos
Abstract:
The lignite-fired power plants in the Western Macedonia Lignite Center produce more than 8 × 10⁶ t of fly ash per year. Approximately 90% of this quantity is used for restoration-reclamation of exhausted open-cast lignite mines and slope stabilization of the overburden. The purpose of this work is to evaluate the environmental behavior of the mixture of waste rock and fly ash that is being used in the external deposition site of the South Field lignite mine. For this reason, a borehole was made within the site, and 86 samples were taken and subjected to chemical analyses and leaching tests. The results showed very limited leaching of trace elements and heavy metals from this mixture. Moreover, when compared to the limit values set for waste acceptable in inert waste landfills, only a few exceedances were observed, indicating only a minor risk of groundwater pollution. However, due to the complexity of both the leaching process and the contaminant pathway, more boreholes and analyses should be made at nearby locations, and a systematic groundwater monitoring program should be implemented both downstream of and within the external deposition site.
Keywords: co-deposition, fly ash, leaching tests, lignite, waste rock
Procedia PDF Downloads 237
19163 1D/3D Modeling of a Liquid-Liquid Two-Phase Flow in a Milli-Structured Heat Exchanger/Reactor
Authors: Antoinette Maarawi, Zoe Anxionnaz-Minvielle, Pierre Coste, Nathalie Di Miceli Raimondi, Michel Cabassud
Abstract:
Milli-structured heat exchanger/reactors have recently been widely used, especially in the chemical industry, due to their enhanced heat and mass transfer performance compared to conventional apparatuses. In our work, the ‘DeanHex’ heat exchanger/reactor with a 2D-meandering channel is investigated both experimentally and numerically. The square cross-sectioned channel has a hydraulic diameter of 2 mm. The aim of our study is to model local physico-chemical phenomena (heat and mass transfer, axial dispersion, etc.) for a liquid-liquid two-phase flow in our lab-scale meandering channel, which represents the central part of the heat exchanger/reactor design. The numerical approach to the reactor is based on a 1D model for the flow channel encapsulated in a 3D model for the surrounding solid, using COMSOL Multiphysics V5.5. The use of the 1D approach to model the milli-channel significantly reduces the calculation time compared to 3D approaches, which are generally focused on local effects. Our 1D/3D approach intends to bridge the gap between simulation at a small scale and simulation at the reactor scale at a reasonable CPU cost. The heat transfer process between the 1D milli-channel and its 3D surroundings is modeled. The feasibility of this 1D/3D coupling was verified by comparing simulation results to experimental ones originating from two previous works. Temperature profiles along the channel axis obtained by simulation fit the experimental profiles in both cases. The next step is to integrate the liquid-liquid mass transfer model and to validate it against our experimental results. The hydrodynamics of the liquid-liquid two-phase system is modeled using the ‘mixture model’ approach. The mass transfer behavior is represented by an overall volumetric mass transfer coefficient ‘kLa’ correlation obtained from our experimental results in the millimetric meandering channel. The present work is a first step towards the scale-up of our ‘DeanHex’, in anticipation of the future industrialization of such equipment. Therefore, a generalized scaled-up model of the reactor comprising all the transfer processes will be built in order to predict the performance of the reactor in terms of conversion rate and energy efficiency at an industrial scale.
Keywords: liquid-liquid mass transfer, milli-structured reactor, 1D/3D model, process intensification
Procedia PDF Downloads 130
19162 Quantification and Thermal Behavior of Rice Bran Oil, Sunflower Oil and Their Model Blends
Authors: Harish Kumar Sharma, Garima Sengar
Abstract:
Rice bran oil is considered nutritionally superior to other fats/oils. Therefore, model blends prepared from pure rice bran oil (RBO) and sunflower oil (SFO) were explored for changes in different physicochemical parameters. A repeated deep-fat frying process was carried out using dried potato in order to study the thermal behaviour of pure rice bran oil, sunflower oil, and their model blends. Pure rice bran oil and sunflower oil showed good thermal stability during the repeated deep-fat frying cycles. The model blend constituting 60% RBO + 40% SFO showed better suitability during repeated deep-fat frying than the remaining blended oils. The quantification of pure rice bran oil in the blended oils, physically refined rice bran oil (PRBO): SnF (sunflower oil), was carried out by different methods. The study revealed that regression equations based on the oryzanol content, palmitic acid composition, and iodine value can be used for the quantification. Rice bran oil can easily be quantified in the blended oils based on the oryzanol content by HPLC, even at the 1% level. The palmitic acid content in blended oils can also be used as an indicator to quantify rice bran oil at or above the 20% level in blended oils, whereas the method based on ultrasonic velocity, acoustic impedance, and relative association showed initial promise for the quantification.
Keywords: rice bran oil, sunflower oil, frying, quantification
Procedia PDF Downloads 306
19161 The Effect of Second Victim-Related Distress on Work-Related Outcomes in Tertiary Care, Kelantan, Malaysia
Authors: Ahmad Zulfahmi Mohd Kamaruzaman, Mohd Ismail Ibrahim, Ariffin Marzuki Mokhtar, Maizun Mohd Zain, Saiful Nazri Satiman, Mohd Najib Majdi Yaacob
Abstract:
Background: In the aftermath of patient safety incidents, the involved healthcare providers may sustain second victim-related distress (second victim distress and reduced professional efficacy), with subsequent negative work-related outcomes, or may conversely cultivate resilience. This study aimed to investigate the factors affecting negative work-related outcomes and resilience, with the triad of support (colleague, supervisor, and institutional support) as the hypothetical mediators. Methods: This was a cross-sectional study recruiting a total of 733 healthcare providers from three tertiary care centres in Kelantan, Malaysia. Three steps of hierarchical linear regression were developed for each outcome: negative work-related outcomes and resilience. Then, four multiple-mediator models of the support triad were analyzed. Results: Second victim distress, professional efficacy, and the support triad contributed significantly to each regression model. In the pathway from professional efficacy to both negative work-related outcomes and resilience, colleague support partially mediated the relationship. As for second victim distress, colleague and supervisor support were partial mediators of the effect on negative work-related outcomes, and all three sources of support produced a similar effect on resilience. Conclusion: Second victim distress, professional efficacy, and the support triad influenced negative work-related outcomes and resilience. The support triad, as mediators, ameliorated the effects and underlines the urgency of having good support for recovery after encountering patient safety incidents.
Keywords: second victims, patient safety incidents, hierarchical linear regression, mediation, support
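A minimal sketch of a three-step hierarchical linear regression, with invented variable names standing in for the study's measures; the increment in R² across blocks is what the three steps assess.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
df = pd.DataFrame({"distress": rng.normal(size=733),
                   "prof_efficacy": rng.normal(size=733),
                   "colleague_support": rng.normal(size=733)})
df["negative_outcomes"] = (0.5 * df["distress"] - 0.3 * df["colleague_support"]
                           + rng.normal(size=733))

blocks = [["distress"],
          ["distress", "prof_efficacy"],
          ["distress", "prof_efficacy", "colleague_support"]]
for i, cols in enumerate(blocks, start=1):
    fit = sm.OLS(df["negative_outcomes"], sm.add_constant(df[cols])).fit()
    print(f"step {i}: R^2 = {fit.rsquared:.3f}")   # R^2 gain per block = contribution of that block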
Procedia PDF Downloads 106
19160 Preparation of Nano-Scaled LiNbO3 by Polyol Method
Authors: Gabriella Dravecz, László Péter, Zsolt Kis
Abstract:
The growth of optical LiNbO3 single crystals and their physical and chemical properties are well known on the macroscopic scale. Nowadays, rare-earth doped single crystals have become important for coherent quantum optical experiments: electromagnetically induced transparency, slow-down of light pulses, and coherent quantum memory. The expansion of applications increasingly requires the production of nano-scaled LiNbO3 particles. For example, rare-earth doped nano-scaled particles of lithium niobate can act as single-photon sources, which can be the basis of a coding system for a quantum computer, providing complete inaccessibility to strangers. The polyol method is a chemical synthesis in which oxide formation occurs instead of hydroxide because of the high temperature. Moreover, the polyol medium limits the growth and agglomeration of the grains, producing particles with diameters of 30-200 nm. In this work, nano-scaled LiNbO3 was prepared by the polyol method. The starting materials (niobium oxalate and LiOH) were dissolved in H2O2. The mixture was then suspended in ethylene glycol and heated up to about the boiling point of the mixture with intensive stirring. After thermal equilibrium was reached, the mixture was kept at this temperature for 4 hours. The suspension was cooled overnight. The mixture was centrifuged and the particles were filtered. A Dynamic Light Scattering (DLS) measurement was carried out, and the size of the particles was found to be 80-100 nm. This was confirmed by Scanning Electron Microscope (SEM) investigations. The elemental analysis by SEM showed a large amount of Nb in the sample. The production of LiNbO3 nanoparticles by the polyol method was successful. The agglomeration of the particles was avoided, and a size of 80-100 nm could be reached.
Keywords: lithium niobate, nanoparticles, polyol, SEM
Procedia PDF Downloads 132
19159 Agent-Based Modeling to Simulate the Dynamics of Health Insurance Markets
Authors: Haripriya Chakraborty
Abstract:
The healthcare system in the United States is considered to be one of the most inefficient and expensive systems when compared to other developed countries. Consequently, there are persistent concerns regarding the overall functioning of this system. For instance, the large number of uninsured individuals and high premiums are pressing issues that are shown to have a negative effect on health outcomes, with possibly life-threatening consequences. The Affordable Care Act (ACA), which was signed into law in 2010, was aimed at improving some of these inefficiencies. This paper aims at providing a computational mechanism to examine some of these inefficiencies and the effects that policy proposals may have on reducing them. Agent-based modeling is an invaluable tool that provides a flexible framework to model complex systems. It can provide an important perspective on the nature of the interactions that occur and how the benefits of these interactions are allocated. In this paper, we propose a novel and versatile agent-based model with realistic assumptions to simulate the dynamics of a health insurance marketplace that contains a mixture of private and public insurers and individuals. We use this model to analyze the characteristics, motivations, payoffs, and strategies of these agents. In addition, we examine the effects of certain policies, including some of the provisions of the ACA, aimed at reducing the uninsured rate and the cost of premiums, to move closer to a system that is more equitable and improves health outcomes for the general population. Our test results confirm the usefulness of our agent-based model in studying this complicated issue and suggest some implications for public policies aimed at healthcare reform.
Keywords: agent-based modeling, healthcare reform, insurance markets, public policy
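A toy agent-based sketch in plain Python (not the paper's model): individuals enroll in the cheapest plan they can afford each period, premiums drift toward average claims, and the uninsured rate is tracked; every parameter here is an illustrative assumption.

import random

random.seed(0)
incomes = [random.lognormvariate(10.5, 0.6) for _ in range(1000)]   # individual agents
premiums = {"public": 3000.0, "private": 5000.0}                    # insurer agents

for period in range(10):
    enrolled = {plan: 0 for plan in premiums}
    uninsured = 0
    for income in incomes:
        # behavioral rule: buy the cheapest plan costing at most 10% of income
        affordable = [p for p in premiums if premiums[p] <= 0.1 * income]
        if affordable:
            enrolled[min(affordable, key=premiums.get)] += 1
        else:
            uninsured += 1
    for plan in premiums:
        avg_claims = 3500.0                       # insurers adjust toward claim costs
        premiums[plan] += 0.2 * (avg_claims - premiums[plan])

print(f"final uninsured rate: {uninsured / len(incomes):.1%}")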
Procedia PDF Downloads 138
19158 Adoption of Climate-Smart Agriculture Practices Among Farmers and Its Effect on Crop Revenue in Ethiopia
Authors: Fikiru Temesgen Gelata
Abstract:
Food security, adaptation, and climate change mitigation are all problems that can be addressed simultaneously with Climate-Smart Agriculture (CSA). This study examines the determinants of climate-smart agriculture (CSA) practices among smallholder farmers, aiming to understand the factors guiding adoption decisions and to evaluate the impact of CSA on smallholder farmer income in the study areas. For this study, three-stage sampling techniques were applied to select 230 smallholders randomly. The Mann-Kendall test and a multinomial endogenous switching regression model were used to analyze trends of decrease or increase within long-term temporal data and the impact of CSA on smallholder farmer income, respectively. Findings revealed that education level, household size, land ownership, off-farm income, climate information, and contact with extension agents were strongly associated with the adoption of CSA practices. On the contrary, erosion exerted a detrimental impact on all the agricultural practices examined within the study region. Various factors such as farming methods, the size of farms, proximity to irrigated farmlands, availability of extension services, distance to market hubs, and access to weather forecasts were recognized as key determinants influencing the adoption of CSA practices. The multinomial endogenous switching regression model (MESR) revealed that joint adoption of crop rotation and soil and water conservation practices significantly increased farm income by 1,107,245 ETB. The study recommends that counties and governments prioritize addressing climate change in their development agendas to increase the adoption of climate-smart farming techniques.
Keywords: climate-smart practices, food security, income, MESR, Ethiopia
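A short sketch of the Mann-Kendall trend test used in the trend-analysis step, implemented directly with the no-ties variance; the temperature series below is synthetic.

import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18           # variance assuming no tied values
    z = (s - np.sign(s)) / np.sqrt(var_s)            # continuity-corrected statistic
    return z, 2 * (1 - norm.cdf(abs(z)))             # z and two-sided p-value

temps = 24.5 + 0.03 * np.arange(30) + np.random.default_rng(5).normal(0, 0.2, 30)
z, p = mann_kendall(temps)
print(f"z = {z:.2f}, p = {p:.4f}")                   # p < 0.05 suggests a monotonic trend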
Procedia PDF Downloads 32
19157 Quantitative Structure Activity Relationship and In Silico Docking of Substituted 1,3,4-Oxadiazole Derivatives as Potential Glucosamine-6-Phosphate Synthase Inhibitors
Authors: Suman Bala, Sunil Kamboj, Vipin Saini
Abstract:
A Quantitative Structure Activity Relationship (QSAR) analysis has been developed to relate the antifungal activity of novel substituted 1,3,4-oxadiazoles against Candida albicans and Aspergillus niger using computer-assisted multiple regression analysis. The study has shown a good relationship between the antifungal activities and various descriptors, established by multiple regression analysis. The analysis has shown a statistically significant correlation, with R² values of 0.932 and 0.782 against Candida albicans and Aspergillus niger, respectively. These derivatives were further subjected to molecular docking studies to investigate the interactions between the target compounds and the amino acid residues present in the active site of glucosamine-6-phosphate synthase. All the synthesized compounds have better docking scores than the standard, fluconazole. Our results could be used for the further design and development of optimal and potential antifungal agents.
Keywords: 1,3,4-oxadiazole, QSAR, multiple linear regression, docking, glucosamine-6-phosphate synthase
Procedia PDF Downloads 339
19156 Adoption and Diffusion of E-Government Services in India: The Impact of User Demographics and Service Quality
Authors: Sayantan Khanra, Rojers P. Joseph
Abstract:
This study attempts to analyze the impact of demography and service quality on the adoption and diffusion of e-Government services in the context of India. The objective of this paper is to study users’ perceptions of e-Government services and investigate the key variables that are most salient to the Indian populace. At the completion of this study, a research model is expected to be developed that would help to understand the relationships among the demographic variables, the service quality dimensions, and the willingness to adopt e-Government services. Dedicated authorities, particularly those in developing economies, may use that model or its augmented versions to design and update e-Government services and promote their use among citizens. After all, enhanced public participation is required to improve efficiency, engagement, and transparency in the implementation of the aforementioned services.
Keywords: adoption and diffusion of e-government services, demographic variables, hierarchical regression analysis, service quality dimensions
Procedia PDF Downloads 290
19155 Low Frequency Ultrasonic Degassing to Reduce Void Formation in Epoxy Resin and Its Effect on the Thermo-Mechanical Properties of the Cured Polymer
Authors: A. J. Cobley, L. Krishnan
Abstract:
The demand for multi-functional lightweight materials in sectors such as automotive, aerospace, and electronics is growing, and for this reason fibre-reinforced epoxy polymer composites are being widely utilized. The fibre reinforcing material is mainly responsible for the strength and stiffness of the composites, whilst the main role of the epoxy polymer matrix is to improve the distribution of the loads applied to the fibres as well as to protect the fibres from the effect of harmful environmental conditions. The superior properties of fibre-reinforced composites are achieved by combining the best properties of both constituents. Although factors such as the chemical nature of the epoxy and how it is cured will have a strong influence on the properties of the epoxy matrix, the method of mixing and degassing the resin can also have a significant impact. The production of a fibre-reinforced epoxy polymer composite will usually begin with the mixing of the epoxy pre-polymer with a hardener and accelerator. Mechanical methods of mixing are often employed for this stage, but such processes naturally introduce air into the mixture, which, if it becomes entrapped, will lead to voids in the subsequently cured polymer. Therefore, degassing is normally utilised after mixing, and this is often achieved by placing the epoxy resin mixture in a vacuum chamber. Although this is reasonably effective, it is another process stage, and if a method of mixing could be found that, at the same time, degassed the resin mixture, this would lead to shorter production times, more effective degassing, and fewer voids in the final polymer. In this study, the effects of four different methods for mixing and degassing the pre-polymer with hardener and accelerator were investigated. The first two methods were manual stirring and magnetic stirring, both followed by vacuum degassing. The other two techniques were ultrasonic mixing/degassing using a 40 kHz ultrasonic bath and a 20 kHz ultrasonic probe. The cured cast resin samples were examined under a scanning electron microscope (SEM) and an optical microscope, and with the ImageJ analysis software, to study morphological changes, void content, and void distribution. A three-point bending test and differential scanning calorimetry (DSC) were also performed to determine the thermal and mechanical properties of the cured resin. It was found that the use of the 20 kHz ultrasonic probe for mixing/degassing gave the lowest percentage of voids of all the mixing methods in the study. In addition, the percentage of voids found when employing a 40 kHz ultrasonic bath to mix/degas the epoxy polymer mixture was only slightly higher than when magnetic stirrer mixing followed by vacuum degassing was utilized. The effect of ultrasonic mixing/degassing on the thermal and mechanical properties of the cured resin will also be reported. The results suggest that low-frequency ultrasound is an effective means of mixing/degassing a pre-polymer mixture and could enable a significant reduction in production times.
Keywords: degassing, low frequency ultrasound, polymer composites, voids
Procedia PDF Downloads 295
19154 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization
Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın
Abstract:
There are numerous analytical methods for estimating the crack growth life of a component. Soft computing methods are an increasing trend in fatigue life prediction. Their ability to build complex relationships and their capability to handle huge amounts of data are motivating researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry; regarding the crack type, a corner crack is used. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. Once the finite element model results are validated with analytical calculations, the findings of the finite element model are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then utilized as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between the analytical and predicted crack growth lives.
Keywords: aircraft, fatigue, joint, life, optimization, prediction
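Illustrative only: a grid-searched random forest mapping spectrum-level load features to crack growth life, with synthetic stand-ins for the FALSTAFF-derived spectra and the AFGROW lives.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(6)
X = rng.uniform(0.5, 1.5, size=(90, 4))                 # e.g. scaled load-level features, one row per spectrum
y = 1e5 * X[:, 0] ** -2.5 + rng.normal(0, 500, 90)      # pseudo crack growth lives (cycles)

grid = GridSearchCV(RandomForestRegressor(random_state=0),
                    param_grid={"n_estimators": [100, 300],
                                "max_depth": [None, 5, 10]},
                    cv=5, scoring="neg_mean_absolute_error")
grid.fit(X, y)
print(grid.best_params_, f"MAE = {-grid.best_score_:.0f} cycles")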
Procedia PDF Downloads 175
19153 The Use of Boosted Multivariate Trees in Medical Decision-Making for Repeated Measurements
Authors: Ebru Turgal, Beyza Doganay Erdogan
Abstract:
Machine learning aims to model the relationship between the response and the features. Medical decision-making researchers would like to make decisions about patients’ course and treatment by examining repeated measurements over time. The boosting approach is now being used in the machine learning area as an influential tool for these aims. The aim of this study is to show the usage of multivariate tree boosting in this field. The main reason for utilizing this approach in the field of decision-making is that it eases the solution of complex relationships. To show how the multivariate tree boosting method can be used to identify important features and feature-time interactions, we used data collected retrospectively from Ankara University Chest Diseases Department records. The dataset includes repeated PF ratio measurements. The follow-up time is planned for 120 hours. A set of different models is tested. In conclusion, classification with a weighted combination of classifiers is a reliable method, as has been shown several times in simulations. Furthermore, time-varying variables will be taken into consideration within this concept, and it could be possible to make accurate decisions about regression and survival problems.
Keywords: boosted multivariate trees, longitudinal data, multivariate regression tree, panel data
Procedia PDF Downloads 201
19152 Prediction of Gully Erosion with Stochastic Modeling by Using Geographic Information System and Remote Sensing Data in North of Iran
Authors: Reza Zakerinejad
Abstract:
Gully erosion is a serious problem that threatens the sustainability of agricultural areas, rangelands, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression in the loess deposits of a case study area in the Golestan Province. In this study, the DEM with 25-meter resolution derived from ASTER data has been used. The Landsat ETM data have been used for mapping land use. The TreeNet model, as a stochastic modeling approach, was applied to predict the areas susceptible to gully erosion. In this model, 20% of the data were set aside as learning data and 20% as test data. GIS and satellite image analysis techniques were applied to derive the input information for these stochastic models. The result of this study is a highly accurate map of the potential for gully erosion.
Keywords: TreeNet model, terrain analysis, Golestan Province, Iran
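A sketch of GIS-based logistic regression for gully susceptibility, assuming terrain attributes have already been extracted from the DEM; the per-cell data here are synthetic, and the fitted probabilities would form the susceptibility map.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
slope = rng.uniform(0, 30, 1000)                     # terrain attributes per grid cell
flow_acc = rng.lognormal(3, 1, 1000)
X = np.column_stack([slope, np.log(flow_acc)])
y = (0.1 * slope + 0.8 * np.log(flow_acc) + rng.normal(0, 1, 1000) > 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))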
Procedia PDF Downloads 535
19151 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions
Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier
Abstract:
Ice slurries are considered promising phase-changing secondary fluids for air-conditioning, packaging, and cooling industrial processes. An experimental study has been carried out here to measure the rheological characteristics of ice slurries. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase. The latter is composed of a mixture of liquid water and an additive, being here either (1) Propylene-Glycol (PG) or (2) Ethylene-Glycol (EG), used to lower the freezing point of water. Concentrations of 5%, 14%, and 24% of both additives are investigated, with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 vane-concentric cylinder with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behaviors depending on the experimental conditions. In order to determine the consistency and the flow index, the Herschel-Bulkley model is used to describe the behavior of the ice slurries. The present results are finally validated against an experimental database found in the literature and the predictions of an Artificial Neural Network model.
Keywords: ice slurry, propylene-glycol, ethylene-glycol, rheology
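A sketch of fitting the Herschel-Bulkley model tau = tau0 + K * gamma_dot**n to a flow curve to recover the consistency K and flow index n; the data points below are invented, not the measured slurry values.

import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    return tau0 + K * gamma_dot ** n

shear_rate = np.array([1, 5, 10, 50, 100, 300])              # 1/s
shear_stress = np.array([4.1, 6.0, 7.4, 13.9, 18.6, 31.0])   # Pa (synthetic)

(tau0, K, n), _ = curve_fit(herschel_bulkley, shear_rate, shear_stress,
                            p0=[1.0, 1.0, 0.5], bounds=(0, np.inf))
print(f"tau0 = {tau0:.2f} Pa, K = {K:.2f} Pa.s^n, n = {n:.2f}")
# n < 1 indicates shear-thinning, n > 1 shear-thickening behavior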
Procedia PDF Downloads 259
19150 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is a loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image analysis and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and the datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded using the CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE.
Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
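A minimal PyTorch sketch of the described setup: a ResNeXt backbone with its classifier replaced by a single regression output, trained with an L1 loss so MAE is optimized directly. The dataloaders, weight initialization, and the dummy batch below are assumptions, not the study's code.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnext50_32x4d(weights=None)        # 224x224 VR images as input (torchvision >= 0.13 API)
model.fc = nn.Linear(model.fc.in_features, 1)       # regress age instead of classifying

criterion = nn.L1Loss()                             # L1 loss equals MAE in years
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)                # stand-in for a batch of CT VR images
ages = torch.rand(8, 1) * 50 + 20                   # ages in the 20-70 range

optimizer.zero_grad()
loss = criterion(model(images), ages)
loss.backward()
optimizer.step()
print(f"batch MAE: {loss.item():.2f} years")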
Procedia PDF Downloads 72
19149 Knowledge Sharing Model Based on Individual and Organizational Factors Related to Faculty Members of University
Authors: Mitra Sadoughi
Abstract:
This study presents a knowledge-sharing model based on individual and organizational factors related to faculty members. To achieve this goal, individual and organizational factors were identified through qualitative research in the form of open, axial, and selective coding; then, the final model was obtained using structural equation modeling. Participants included 1,719 faculty members of the Azad Universities, Mazandaran Province, Region 3. The sample for the qualitative survey included 25 faculty members experienced in teaching, and the sample for the quantitative survey included 326 faculty members selected by multistage cluster sampling. A 72-item questionnaire was used to measure the quantitative variables. The reliability of the questionnaire was 0.93. Its content and face validity were determined with the help of faculty members, consultants, and other experts. For the analysis of the quantitative data obtained from the structural model and regression, SPSS and LISREL were used. The results showed that the status of knowledge sharing is moderate in the universities. Individual factors influencing knowledge sharing included the sharing of educational materials, perception, confidence, and knowledge self-efficacy, and organizational factors influencing knowledge sharing included structural social capital, cognitive social capital, relational social capital, organizational communication, organizational structure, organizational culture, IT infrastructure, and reward systems. Finally, it was found that the contribution of individual factors to knowledge sharing was greater than that of organizational factors; therefore, a model was presented in which the contributions of individual and organizational factors were determined.
Keywords: knowledge sharing, social capital, organizational communication, knowledge self-efficacy, perception, trust, organizational culture
Procedia PDF Downloads 391