Search results for: Binomial model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16829


7649 The Influence of Operational Changes on Efficiency and Sustainability of Manufacturing Firms

Authors: Dimitrios Kafetzopoulos

Abstract:

Nowadays, companies are increasingly concerned with adopting their own strategies for greater efficiency and sustainability. Dynamic environments are fertile fields for developing operational changes. For this purpose, organizations need to implement an advanced management philosophy that boosts changes to companies’ operations. Changes refer to new applications of knowledge, ideas, methods, and skills that can generate unique capabilities and leverage an organization’s competitiveness. So, in order to survive and compete in global and niche markets, companies should incorporate the adoption of operational changes into their strategy with regard to both their products and their processes. Creating the appropriate culture for changes in terms of products and processes helps companies to gain a sustainable competitive advantage in the market. Thus, the purpose of this study is to investigate the role of both incremental and radical changes in the operations of a company, taking into consideration not only product changes but also process changes, and then to measure the impact of these two types of changes on the business efficiency and sustainability of Greek manufacturing companies. The above discussion leads to the following hypotheses: H1: Radical operational changes have a positive impact on firm efficiency. H2: Incremental operational changes have a positive impact on firm efficiency. H3: Radical operational changes have a positive impact on firm sustainability. H4: Incremental operational changes have a positive impact on firm sustainability. In order to achieve the objectives of the present study, a research study was carried out in Greek manufacturing firms. A total of 380 valid questionnaires were received, and a seven-point Likert scale was used to measure all the questionnaire items of the constructs (radical changes, incremental changes, efficiency and sustainability). 
The constructs of radical and incremental operational changes, each treated as one variable, have been subdivided into product and process changes. Non-response bias, common method variance, multicollinearity, multivariate normal distribution and outliers have been checked. Moreover, the unidimensionality, reliability and validity of the latent factors were assessed. Exploratory Factor Analysis and Confirmatory Factor Analysis were applied to check the factorial structure of the constructs and the factor loadings of the items. In order to test the research hypotheses, the SEM technique was applied (maximum likelihood method). The goodness of fit of the basic structural model indicates an acceptable fit of the proposed model. According to the present study's findings, radical operational changes and incremental operational changes significantly influence both the efficiency and sustainability of Greek manufacturing firms. However, it is in the dimension of radical operational changes, meaning those in process and product, that the most significant contributors to firm efficiency are to be found, while their influence on sustainability is low albeit statistically significant. On the contrary, incremental operational changes influence sustainability more than firms’ efficiency. From the above, it is apparent that embodying the concept of change in a firm’s product and process operational practices has direct and positive consequences for what it achieves from an efficiency and sustainability perspective.
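The exploratory-factor-analysis step described above can be loosely illustrated as follows. This is a hedged sketch on synthetic data, not the authors' analysis: the item counts, loadings, sample size, and the use of scikit-learn's `FactorAnalysis` (standing in for the full EFA/CFA/SEM workflow) are all assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic continuous stand-ins for eight questionnaire items:
# four loading on "radical changes", four on "incremental changes".
radical = rng.normal(0, 1, (200, 1))
incremental = rng.normal(0, 1, (200, 1))
items = np.hstack([
    radical * [0.8, 0.7, 0.9, 0.6] + rng.normal(0, 0.4, (200, 4)),
    incremental * [0.7, 0.8, 0.6, 0.9] + rng.normal(0, 0.4, (200, 4)),
])

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = fa.components_.T    # one row per item, one column per factor
print(np.round(loadings, 2))   # each item should load on its own construct
```

In a clean factorial structure, each item's dominant loading falls on the factor corresponding to its construct, which is what the unidimensionality check looks for.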

Keywords: incremental operational changes, radical operational changes, efficiency, sustainability

Procedia PDF Downloads 136
7648 Choosing between the Regression Correlation, the Rank Correlation, and the Correlation Curve

Authors: Roger L. Goodwin

Abstract:

This paper presents a rank correlation curve. The traditional correlation coefficient is valid for both continuous variables and for integer variables using rank statistics. Since the correlation coefficient has already been established in rank statistics by Spearman, such a calculation can be extended to the correlation curve. This paper presents two survey questions. The survey collected non-continuous variables. We will show weak to moderate correlation. Obviously, one question has a negative effect on the other. A review of the qualitative literature can answer which question and why. The rank correlation curve shows which collection of responses has a positive slope and which collection of responses has a negative slope. Such information is unavailable from the flat, "first-glance" correlation statistics.
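The rank-correlation starting point described above can be sketched with Spearman's coefficient on two hypothetical ordinal survey items (the responses below are invented; the correlation-curve extension itself is not reproduced here).

```python
from scipy import stats

# Two hypothetical 5-point ordinal survey items; responses to q2 tend
# to fall as responses to q1 rise, as in the negative-effect case above.
q1 = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]
q2 = [4, 5, 4, 3, 4, 2, 3, 1, 2, 2]

rho, p = stats.spearmanr(q1, q2)
print(f"Spearman rho = {rho:.2f} (p = {p:.4f})")   # strong negative association
```

A single coefficient like this is the "flat, first-glance" statistic the paper contrasts with the slope information carried by the correlation curve.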

Keywords: Bayesian estimation, regression model, rank statistics, correlation, correlation curve

Procedia PDF Downloads 475
7647 Genetic Analysis of Iron, Phosphorus, Potassium and Zinc Concentration in Peanut

Authors: Ajay B. C., Meena H. N., Dagla M. C., Narendra Kumar, Makwana A. D., Bera S. K., Kalariya K. A., Singh A. L.

Abstract:

The high energy value, protein content and minerals make peanut a rich source of nutrition at comparatively low cost. Basic information on the genetics and inheritance of these mineral elements is very scarce. Hence, in the present study the inheritance (using an additive-dominance model) and association of mineral elements were studied in two peanut crosses. Dominance variance (H) played an important role in the inheritance of P, K, Fe and Zn in peanut pods. The average degree of dominance for most of the traits was greater than unity, indicating over-dominance for these traits. Significant associations were also observed among mineral elements in both the F2 and F3 generations, but pod yield had no association with mineral elements (with few exceptions). Diallel/bi-parental mating could be followed to identify high-yielding and mineral-dense segregants.
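The degree-of-dominance criterion mentioned above can be shown with a one-line calculation. The variance components below are hypothetical; the ratio follows the usual Comstock-Robinson form, where a value above one indicates over-dominance.

```python
import math

# Hypothetical additive (D) and dominance (H) variance components
# for a pod mineral trait, e.g. Fe concentration.
D, H = 2.1, 5.3
degree_of_dominance = math.sqrt(H / D)
print(round(degree_of_dominance, 2))   # > 1, i.e. over-dominance
```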

Keywords: correlation, dominance variance, mineral elements, peanut

Procedia PDF Downloads 413
7646 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria

Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale

Abstract:

The empirical study examined the allocative efficiency of smallholder tuber crop farmers in North-Central Nigeria. Data used for the study were obtained from a primary source using a multi-stage sampling technique, with structured questionnaires administered to 300 randomly selected tuber crop farmers from the study area. Descriptive statistics, data envelopment analysis and a Tobit regression model were used to analyze the data. The DEA classification of the farmers into efficient and inefficient groups showed that 17.67% of the sampled tuber crop farmers in the study area were operating at the frontier and optimum level of production, with a mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency in the study area showed that as years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased, the allocative inefficiency of the farmers decreased. The results on the effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as the farmer acquired more farming experience. The allocative efficiency index of farmers who belonged to a cooperative society was 0.23, while their counterparts without cooperative society membership had an index value of 0.21. The results also showed that the allocative efficiency index was 0.43 for farmers with higher formal education and decreased to 0.16 for farmers with non-formal education. 
The efficiency level in the allocation of resources increased with more contact with extension services, as the allocative efficiency index rose from 0.16 to 0.31 when the frequency of extension contact increased from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, but this reduces to an average of 15% as the farmer grows old. It is therefore recommended that enhanced research, extension delivery and farm advisory services be put in place for farmers who did not attain the optimum frontier level, so that they can learn how to attain the remaining 74.39% of allocative efficiency through better production practices from the robustly efficient farms. This will go a long way toward increasing the efficiency level of the farmers in the study area.
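The DEA efficiency scores discussed above can be sketched with a minimal input-oriented, constant-returns (CCR) envelopment linear program. This is a generic two-farm toy example with invented input/output values, not the study's data or its full model.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` via the envelopment LP.
    X: (n, m) inputs, Y: (n, s) outputs. Returns theta in (0, 1]."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    A_in = np.hstack([-X[o][:, None], X.T])        # X'lambda <= theta * x_o
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # Y'lambda >= y_o
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two hypothetical farms: same output, farm B uses twice the input.
X = np.array([[2.0], [4.0]])   # e.g. farm size
Y = np.array([[2.0], [2.0]])   # e.g. tuber output
eff_a = ccr_input_efficiency(X, Y, 0)
eff_b = ccr_input_efficiency(X, Y, 1)
print(eff_a, eff_b)   # farm A on the frontier (1.0), farm B at 0.5
```

A score of 1.0 corresponds to the frontier farmers in the abstract; scores below 1.0 indicate how far input use could be contracted at the same output.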

Keywords: allocative efficiency, DEA, Tobit regression, tuber crop

Procedia PDF Downloads 289
7645 Empirical Acceleration Functions and Fuzzy Information

Authors: Muhammad Shafiq

Abstract:

In accelerated life testing approaches, lifetime data are obtained under conditions considered more severe than the usual condition. Classical techniques are based on precise measurements and are used to model variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that do not consider fuzziness and are based only on precise lifetime observations lead to pseudo results. This study aimed to examine the behavior of empirical acceleration functions using fuzzy lifetime data. The results showed increased fuzziness in the transformed lifetimes as compared to the input data.
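A toy illustration of how fuzziness propagates through an acceleration transformation: the sketch below scales a triangular fuzzy lifetime observed under stress to use conditions with a constant acceleration factor. Both the observation and the factor are hypothetical, and a constant linear factor is a simplification of an empirical acceleration function.

```python
# A triangular fuzzy lifetime (left, peak, right), in hours, observed
# under stress, mapped to use conditions by a hypothetical factor AF.
AF = 3.0
stress_life = (90.0, 100.0, 115.0)          # non-precise observation
use_life = tuple(AF * t for t in stress_life)

spread = lambda tri: tri[2] - tri[0]        # width of the fuzzy number
print(use_life, spread(stress_life), spread(use_life))
```

Under this linear map the spread of the transformed lifetime grows by the factor AF, mirroring the increased fuzziness reported above.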

Keywords: acceleration function, accelerated life testing, fuzzy number, non-precise data

Procedia PDF Downloads 298
7644 Manufacturing Process and Cost Estimation through Process Detection by Applying Image Processing Technique

Authors: Chalakorn Chitsaart, Suchada Rianmora, Noppawat Vongpiyasatit

Abstract:

In order to reduce the transportation time and cost of the direct interface between customer and manufacturer, an image processing technique has been introduced in this research, whereby designing a part and defining its manufacturing process can be performed quickly. A 3D virtual model is directly generated from a series of multi-view images of an object; it can then be modified and analyzed, and its structure or function improved, for further implementations such as computer-aided manufacturing (CAM). To estimate and quote the production cost, a user-friendly platform has been developed in this research in which the appropriate manufacturing parameters and process detections are identified and planned by CAM simulation.

Keywords: image processing technique, feature detections, surface registrations, capturing multi-view images, production costs, manufacturing processes

Procedia PDF Downloads 250
7643 An Adaptive Oversampling Technique for Imbalanced Datasets

Authors: Shaukat Ali Shahee, Usha Ananthakumar

Abstract:

A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, imbalance within classes, where classes are composed of different numbers of sub-clusters each containing a different number of examples, also deteriorates the performance of a classifier. Many methods have previously been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithmic methods, cost-based methods and ensembles of classifiers. Data preprocessing techniques have shown great potential, as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class has an absolute rarity, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for the binary classification problem. Removing both imbalances simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters. 
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Löwner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as this is one classifier in which the total error is minimized, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of class. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains, such as credit scoring, customer churn prediction, and financial distress, that typically involve imbalanced data sets.
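The model-based-clustering-plus-oversampling idea can be sketched as follows. This is a crude stand-in, not the authors' algorithm: a Gaussian mixture finds the minority sub-clusters, each sub-cluster is topped up by jittered resampling, and the equal-share allocation and jitter scale are assumptions (the paper allocates by sub-cluster complexity and uses the Löwner-John ellipsoid, neither of which is reproduced here).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical minority class made of two sub-clusters of unequal size.
minority = np.vstack([rng.normal(0, 0.3, (40, 2)),
                      rng.normal(3, 0.3, (10, 2))])
n_majority = 200

# Model-based clustering to find the sub-clusters.
labels = GaussianMixture(n_components=2, random_state=1).fit_predict(minority)

# Give each sub-cluster an equal share of the majority count
# (a crude stand-in for complexity-based allocation).
target = n_majority // 2
synthetic = []
for k in range(2):
    cluster = minority[labels == k]
    need = target - len(cluster)
    idx = rng.integers(0, len(cluster), need)
    # jittered resampling as a simplified synthetic-example generator
    synthetic.append(cluster[idx] + rng.normal(0, 0.05, (need, 2)))

balanced = np.vstack([minority] + synthetic)
print(len(balanced))   # minority now matches the majority count
```

The smaller sub-cluster receives proportionally more synthetic examples, which is the mechanism that stops bigger sub-clusters from dominating the total error.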

Keywords: classification, imbalanced dataset, Löwner-John ellipsoid, model based clustering, oversampling

Procedia PDF Downloads 418
7642 Construction and Analysis of Partially Balanced Sudoku Design of Prime Order

Authors: Abubakar Danbaba

Abstract:

Sudoku squares have been widely used to design experiments in which each treatment occurs exactly once in each row, column, or sub-block. For some experiments, the size of a row (or column or sub-block) may be larger than the number of treatments. Since each treatment appears only once in each row (column or sub-block), with an additional empty cell, such designs are partially balanced Sudoku designs (PBSD) with NP-complete structures. This paper proposes methods for constructing PBSD of prime order of treatments by a modified Kronecker product and swaps of matrix rows (or columns) in cyclic order. In addition, a linear model and a procedure for the analysis of data from PBSD are proposed.
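The cyclic construction underlying such designs can be sketched for a prime order p: filling cell (i, j) with (i + j) mod p yields a Latin square, and cyclic row (or column) swaps generate further squares. This is only the classical building block, not the paper's full PBSD construction with empty cells.

```python
def cyclic_latin_square(p):
    """Latin square of prime order p with cell (i, j) = (i + j) mod p.
    A building block for Sudoku-type designs, not the full PBSD method."""
    return [[(i + j) % p for j in range(p)] for i in range(p)]

L = cyclic_latin_square(5)
for row in L:
    print(row)
# every symbol appears exactly once in each row and in each column
```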

Keywords: sudoku design, partial sudoku, NP-complete, Kronecker product, row and column swap

Procedia PDF Downloads 272
7641 Cavitating Flow through a Venturi Using Computational Fluid Dynamics

Authors: Imane Benghalia, Mohammed Zamoum, Rachid Boucetta

Abstract:

Hydrodynamic cavitation is a complex physical phenomenon that appears in hydraulic systems (pumps, turbines, valves, Venturi tubes, etc.) when the fluid pressure decreases below the saturated vapor pressure. The work carried out in this study aimed to provide a better understanding of cavitating flow phenomena. For this, we have numerically studied a cavitating bubbly flow through a Venturi nozzle. The cavitation model is selected and solved using a commercial computational fluid dynamics (CFD) code. The obtained results show the effect of the Venturi inlet pressure (10, 7, 5, and 2 bars) on the pressure, the velocity of the fluid flow, and the vapor fraction. We found that the inlet pressure of the Venturi strongly affects the evolution of pressure, velocity, and vapor fraction formation in the cavitating flow.
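The onset condition described above (pressure falling below the saturated vapor pressure) can be sketched with a simple Bernoulli estimate of the throat pressure. The velocities below are invented for illustration; the actual study solves the full CFD problem rather than this one-dimensional approximation.

```python
# Hedged sketch: Bernoulli estimate of venturi throat pressure versus
# the saturated vapour pressure of water (cavitation onset check).
rho = 998.0          # water density, kg/m^3
p_vap = 2340.0       # saturated vapour pressure of water at 20 C, Pa
p_in = 2e5           # inlet pressure, Pa (the 2 bar case above)
v_in, v_throat = 1.0, 20.0   # hypothetical velocities, m/s

p_throat = p_in + 0.5 * rho * (v_in**2 - v_throat**2)
print(p_throat, p_throat < p_vap)   # below vapour pressure -> cavitation
```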

Keywords: cavitating flow, CFD, phase change, venturi

Procedia PDF Downloads 84
7640 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, efforts that are often compromised by the scarcity of more agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions by area size and perimeter of the crystals. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. 
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap in the SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the method developed is a substantial time saver, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
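The crystal-measurement step that follows segmentation can be sketched with connected-component labeling on a binary mask. The tiny mask below is synthetic and stands in for a U-net output; the paper's delimitation algorithm and perimeter/lateral measures are not reproduced.

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary segmentation mask (1 = crystal pixel).
mask = np.zeros((8, 8), dtype=int)
mask[1:4, 1:4] = 1        # crystal A: 3x3 block
mask[5:7, 4:8] = 1        # crystal B: 2x4 block

labels, n = ndimage.label(mask)                       # one label per crystal
areas = ndimage.sum_labels(mask, labels, index=range(1, n + 1))
print(n, areas)   # 2 crystals with pixel areas [9. 8.]
```

Collecting such per-crystal measurements over many images is what builds the size-distribution database described above.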

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 160
7639 Experimental and Computational Fluid Dynamic Modeling of a Progressing Cavity Pump Handling Newtonian Fluids

Authors: Deisy Becerra, Edwar Perez, Nicolas Rios, Miguel Asuaje

Abstract:

The Progressing Cavity Pump (PCP) is a type of positive displacement pump that is gaining importance as artificial lift equipment in heavy oil fields. The most commonly used PCP is the driven single-lobe pump, which consists of a single external helical rotor turning eccentrically inside a double internal helical stator. This type of pump was analyzed by experimental and Computational Fluid Dynamic (CFD) approaches using the DCAB031 model located in a closed-loop arrangement. Experimental measurements were taken to determine the pressure rise and flow rate with a flow control valve installed at the outlet of the pump. The flow rate handled was measured by a FLOMEC-OM025 oval gear flowmeter. For each flow rate considered, the pump’s rotational speed and power input were controlled using an Invertek Optidrive E3 frequency drive. Once steady-state operation was attained, pressure rise measurements were taken with a Sper Scientific wide-range digital pressure meter. In this study, water and three Newtonian oils of different viscosities were tested at different rotational speeds. The CFD model was implemented in Star-CCM+ using an overset mesh that includes the relative motion between rotor and stator, which is one of the main contributions of the present work. The simulations are capable of providing detailed information about the pressure and velocity fields inside the device in laminar and unsteady regimes. The simulations show good agreement with the experimental data, with a Mean Squared Error (MSE) under 21%, and the Grid Convergence Index (GCI) was calculated for the validation of the mesh, obtaining a value of 2.5%. In this case, three different rotational speeds were evaluated (200, 300, 400 rpm), and it is possible to show a directly proportional relationship between the rotational speed of the rotor and the calculated flow rate. 
The maximum production rates at the different speeds were 3.8 GPM, 4.3 GPM, and 6.1 GPM for water and 1.8 GPM, 2.5 GPM, and 3.8 GPM for the oils tested, respectively. Likewise, an inversely proportional relationship between the viscosity of the fluid and pump performance was observed, since the viscous oils showed the lowest pressure increase and the lowest volumetric flow pumped, with a degradation of around 30% of the pressure rise between performance curves. Finally, the Productivity Index (PI) remained approximately constant for the different speeds evaluated; however, between fluids there is a diminution due to viscosity.
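The Grid Convergence Index used to validate the mesh can be sketched in Roache's usual form, from solutions on three successively refined grids. The pressure-rise values below are invented for illustration and are not the study's results.

```python
import math

# Hedged sketch of the Grid Convergence Index (Roache) with hypothetical
# pressure-rise values from coarse, medium, and fine meshes.
Fs, r = 1.25, 2.0                            # safety factor, refinement ratio
f_fine, f_med, f_coarse = 10.2, 10.5, 11.4   # hypothetical solutions

p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
eps = abs((f_med - f_fine) / f_fine)         # relative fine-grid error
gci_fine = 100 * Fs * eps / (r**p - 1)       # in percent
print(round(p, 2), round(gci_fine, 2))       # observed order, fine-grid GCI
```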

Keywords: computational fluid dynamic, CFD, Newtonian fluids, overset mesh, PCP pressure rise

Procedia PDF Downloads 128
7638 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties

Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier

Abstract:

The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions where such data are scarce, this is especially challenging. Integrating faults in PSHA requires conversion of the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, with a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, with a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture. 
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (the southeast of France), where the fault is assumed to have a low strain rate.
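The slip-rate-to-activity-rate conversion discussed above can be sketched, in its simplest moment-balance form, for a single characteristic earthquake. All parameter values below are hypothetical, and this single-magnitude balance is a simplification of what tools like SHERIFS distribute over many rupture scenarios.

```python
# Hedged sketch: converting a fault slip rate into a characteristic
# earthquake rate by moment balance (all values hypothetical).
mu = 3.0e10            # shear modulus, Pa
area = 20e3 * 10e3     # fault area, m^2 (20 km x 10 km)
slip_rate = 1.0e-3     # 1 mm/yr expressed in m/yr
Mw = 6.5               # characteristic magnitude

M0 = 10 ** (1.5 * Mw + 9.05)          # seismic moment, N*m (Hanks-Kanamori)
moment_rate = mu * area * slip_rate   # accumulated moment per year
rate = moment_rate / M0               # events per year
print(f"recurrence ~ {1 / rate:.0f} years")
```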

Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA

Procedia PDF Downloads 66
7637 Influencing Factors and Mechanism of Patient Engagement in Healthcare: A Survey in China

Authors: Qing Wu, Xuchun Ye, Kirsten Corazzini

Abstract:

Objective: It is increasingly recognized that patients’ rational and meaningful engagement in healthcare can make important contributions to their health care and safety management. However, recent evidence indicates that patients' actual roles in healthcare do not match their desired roles, and many patients report a less active role than desired, suggesting that patient engagement in healthcare may be influenced by various factors. This study aimed to analyze influencing factors on patient engagement and explore the influence mechanism, which is expected to contribute to the development of strategies for patient engagement in healthcare. Methods: On the basis of the literature and theory, a research framework was developed. According to the research framework, a cross-sectional survey was employed using the behavior and willingness of patient engagement in healthcare questionnaire, the Chinese version of the All Aspects of Health Literacy Scale, the Facilitation of Patient Involvement Scale, the Wake Forest Physician Trust Scale, and other scales related to influencing factors. A convenience sample of 580 patients was recruited from 8 general hospitals in Shanghai, Jiangsu Province, and Zhejiang Province. Results: The results of the cross-sectional survey indicated that the mean score for patient engagement behavior was 4.146 ± 0.496, and the mean score for willingness was 4.387 ± 0.459. The level of patient engagement behavior was inferior to the willingness to be involved in healthcare (t = 14.928, P < 0.01). The influencing mechanism model of patient engagement in healthcare was constructed by path analysis. The path analysis revealed that patient attitude toward engagement, patients’ perception of facilitation of patient engagement, and health literacy directly predicted patients’ willingness to engage, with standardized path coefficients of 0.341, 0.199, and 0.291, respectively. 
Patients’ trust in their physician and willingness to engage directly predicted patient engagement, with standardized path coefficients of 0.211 and 0.641, respectively. Patient attitude toward engagement, patients’ perception of facilitation, and health literacy indirectly predicted patient engagement, with standardized path coefficients of 0.219, 0.128, and 0.187, respectively. Conclusions: Patients' engagement behavior did not match their willingness to be involved in healthcare. The influencing mechanism model of patient engagement in healthcare was constructed. Patient attitude toward engagement, patients’ perception of facilitation of engagement, and health literacy had an indirect positive influence on patient engagement through the willingness to engage. Patients’ trust in their physician and the willingness to engage had a direct positive influence on patient engagement. Patient attitude toward engagement, patients’ perception of physician facilitation of engagement, and health literacy were the factors influencing patients’ willingness to engage. The results of this study provide valuable evidence for guiding the development of strategies to promote patients' rational and meaningful engagement in healthcare.
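The reported indirect effects follow directly from the path coefficients above: each indirect effect is the product of a predictor's direct effect on willingness and the willingness-to-engagement coefficient (0.641). The short check below uses only the coefficients stated in the abstract.

```python
# Reproducing the reported indirect effects as products of path coefficients.
to_willingness = {"attitude": 0.341, "facilitation": 0.199, "literacy": 0.291}
willingness_to_engagement = 0.641

indirect = {k: round(v * willingness_to_engagement, 3)
            for k, v in to_willingness.items()}
print(indirect)   # matches the reported 0.219, 0.128, 0.187
```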

Keywords: healthcare, patient engagement, influencing factor, the mechanism

Procedia PDF Downloads 156
7636 Urban Rail Transit CBTC Computer Interlocking Subsystem Relying on Multi-Template Pen Point Tracking Algorithm

Authors: Xinli Chen, Xue Su

Abstract:

In the urban rail transit CBTC system, interlocking is considered one of the most basic subsystems, characterized by logical complexity and high safety requirements. The development and verification of traditional interlocking subsystems are entirely manual processes and rely too much on the designer, which often hides many uncertain factors. In order to solve this problem, this article constructs and verifies the model based on the multi-template pen point tracking algorithm, achieving the main safety attributes, and uses SCADE for formal verification. Experimental results show that this method helps to improve the quality and efficiency of interlocking software.

Keywords: computer interlocking subsystem, pen point tracking, communication-based train control system, multi-template pen point tracking

Procedia PDF Downloads 160
7635 Investigating Universals of Rhetoric

Authors: Nasreddin Ahmed

Abstract:

Despite the ostensible differences among world languages’ structures, which have culminated in divergent orthographic, phonological, morphological, and syntactic systems, research in cognitive linguistics strives to establish the claim that such differences are merely surface manifestations of a totalized universal system of signification. Linguists, since Chomsky, have never given up on the attempt to establish a linguistic descriptive model that espouses a perspective in which every human language has a slot. Concurring with the claim that so-called rhetorical devices are pervasive phenomena and not literary-specific, the present paper aspires to voice the claim that rhetorical devices are not only ubiquitous at all levels of a particular language but also a universal linguistic phenomenon. Using illustrations from Arabic and English, the paper intends to provide data-supported evidence that human beings universally use similar rhetorical devices, albeit under different appellations.

Keywords: language, rhetoric, syntax, stylistics

Procedia PDF Downloads 96
7634 Screening of Ionic Liquids for Hydrogen Sulfide Removal Using COSMO-RS

Authors: Zulaika Mohd Khasiran

Abstract:

The capability of ionic liquids (ILs) in various applications has attracted many researchers. They have the potential to be developed as 'green' solvents for gas separation, especially of H2S gas. In this work, we attempt to predict the solubility of hydrogen sulfide (H2S) in ILs by the COSMO-RS method. Since H2S is a toxic pollutant, it is difficult to work with in the laboratory; therefore an appropriate predictive model is necessary before experimental work. The COSMO-RS method is implemented to predict the Henry’s law constants and activity coefficients of H2S in 140 ILs with various combinations of cations and anions. The screening shows that more H2S can be absorbed in ILs with the [Cl] and [Ac] anions. The solubility of H2S is not much affected by the alkyl chain length on the cation, while different types of cations slightly influence H2S capture capacities. Even though the cations do not affect the solubility of H2S much, their effectiveness still needs to be considered in other respects. The prediction results only show the physical absorption ability of the ILs; chemical absorption of H2S must also be considered to achieve high absorption capacity.
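The Henry's-law screening step can be sketched as follows: given a Henry's law constant H for H2S in each IL, the dissolved mole fraction at a given partial pressure is x = p / H, so a lower H means higher solubility. The ILs and constants below are hypothetical placeholders for the COSMO-RS predictions.

```python
# Hedged sketch: ranking ILs by H2S solubility from Henry's law constants
# (hypothetical values; COSMO-RS would supply the real ones).
henry_kPa = {"[bmim][Ac]": 950.0, "[bmim][Cl]": 1200.0, "[bmim][PF6]": 4800.0}
p_h2s = 101.325   # H2S partial pressure, kPa (1 atm)

# Henry's law: x = p / H, so a lower H means a higher dissolved fraction.
solubility = {il: round(p_h2s / H, 3) for il, H in henry_kPa.items()}
best = min(henry_kPa, key=henry_kPa.get)
print(solubility, best)   # the [Ac]-based IL absorbs the most
```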

Keywords: H2S, hydrogen sulfide, ionic liquids, COSMO-RS

Procedia PDF Downloads 139
7633 A Numerical Simulation of Arterial Mass Transport in Presence of Magnetic Field-Links to Atherosclerosis

Authors: H. Aminfar, M. Mohammadpourfard, K. Khajeh

Abstract:

This paper focuses on the most important parameters governing LDL surface concentration (LSC) uptake, the inlet Reynolds number and the Schmidt number, in the presence of a non-uniform magnetic field. The magnetic field arises from a thin current-carrying wire placed perpendicular to the arterial blood vessel. According to the results of this study, applying a magnetic field can serve as a treatment for atherosclerosis by reducing LSC along the vessel wall. The arterial wall is modeled as a homogeneous porous layer. Blood flow is considered laminar and incompressible, containing a ferrofluid (blood and 4% vol. Fe₃O₄), under steady-state conditions. The governing equations were solved numerically using a single-phase model and the control volume technique for the flow field.

Keywords: LDL surface concentration (LSC), magnetic field, computational fluid dynamics, porous wall

Procedia PDF Downloads 408
7632 Relationship between Different Heart Rate Control Levels and Risk of Heart Failure Rehospitalization in Patients with Persistent Atrial Fibrillation: A Retrospective Cohort Study

Authors: Yongrong Liu, Xin Tang

Abstract:

Background: Persistent atrial fibrillation is a common arrhythmia closely related to heart failure. Heart rate control is an essential strategy for treating persistent atrial fibrillation. Still, the understanding of the relationship between different heart rate control levels and the risk of heart failure rehospitalization is limited. Objective: The objective of the study is to determine the relationship between different levels of heart rate control in patients with persistent atrial fibrillation and the risk of readmission for heart failure. Methods: We conducted a retrospective dual-centre cohort study, collecting data from patients with persistent atrial fibrillation who received outpatient treatment at two tertiary hospitals in central and western China from March 2019 to March 2020. The collected data included age, gender, body mass index (BMI), medical history, and hospitalization frequency due to heart failure. Patients were divided into three groups based on their heart rate control levels: Group I with a resting heart rate of less than 80 beats per minute, Group II with a resting heart rate between 80 and 100 beats per minute, and Group III with a resting heart rate greater than 100 beats per minute. The readmission rates due to heart failure within one year after discharge were statistically analyzed using propensity score matching in a 1:1 ratio. Differences in readmission rates among the different groups were compared using one-way ANOVA. The impact of varying levels of heart rate control on the risk of readmission for heart failure was assessed using the Cox proportional hazards model. Binary logistic regression analysis was employed to control for potential confounding factors. Results: We enrolled a total of 1136 patients with persistent atrial fibrillation. The results of the one-way ANOVA showed that there were differences in readmission rates among groups exposed to different levels of heart rate control. 
The readmission rates due to heart failure for each group were as follows: Group I (n=432): 31 (7.17%); Group II (n=387): 43 (11.11%); Group III (n=317): 90 (28.50%) (F=54.3, P<0.001). After 1:1 propensity score matching across the groups, 223 pairs were obtained. Analysis using the Cox proportional hazards model showed that, compared to Group I, the risk of readmission was 1.372 (95% CI: 1.125-1.682, P<0.001) for Group II and 2.053 (95% CI: 1.006-5.437, P<0.001) for Group III. Furthermore, binary logistic regression analysis, including digoxin use, hypertension, smoking, coronary heart disease, and chronic obstructive pulmonary disease (COPD) as independent variables, revealed that coronary heart disease and COPD also had a significant impact on readmission due to heart failure (P<0.001). Conclusion: The level of heart rate control in patients with persistent atrial fibrillation is positively correlated with the risk of heart failure rehospitalization. Reasonable heart rate control may significantly reduce this risk.
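The 1:1 propensity score matching step used above can be sketched as a greedy nearest-neighbour pairing on precomputed propensity scores. The scores and caliper below are illustrative assumptions, not values from the study's data.

```python
# Sketch of greedy 1:1 nearest-neighbour matching on precomputed
# propensity scores, as used to pair patients from two heart-rate
# groups before comparing readmission rates. Scores and caliper
# are illustrative, not taken from the study.

def match_one_to_one(treated, control, caliper=0.05):
    """Pair each treated score with the closest unused control score
    within the caliper; returns a list of (treated_idx, control_idx)."""
    pairs = []
    used = set()
    for i, t in enumerate(treated):
        best_j, best_d = None, caliper
        for j, c in enumerate(control):
            if j in used:
                continue
            d = abs(t - c)
            if d <= best_d:
                best_j, best_d = j, d
        if best_j is not None:
            pairs.append((i, best_j))
            used.add(best_j)
    return pairs

treated_scores = [0.31, 0.47, 0.62, 0.80]
control_scores = [0.30, 0.49, 0.61, 0.95, 0.33]
print(match_one_to_one(treated_scores, control_scores))
```

Patients whose score has no control within the caliper remain unmatched, which is why matching 1136 patients yielded only 223 pairs per comparison.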

Keywords: heart rate control levels, heart failure rehospitalization, persistent atrial fibrillation, retrospective cohort study

Procedia PDF Downloads 74
7631 Literature Review: Application of Artificial Intelligence in EOR

Authors: Masoumeh Mofarrah, Amir NahanMoghadam

Abstract:

Higher oil prices and increasing oil demand are the main reasons for the great attention paid to Enhanced Oil Recovery (EOR). Comprehensive research has been carried out to develop, appraise, and improve EOR methods and their application. Recently, Artificial Intelligence (AI) has gained popularity in the petroleum industry, as it can help petroleum engineers solve fundamental petroleum engineering problems such as reservoir simulation, EOR project risk analysis, well log interpretation, and well test model selection. This study presents a historical overview of the most popular AI tools in the petroleum industry, including neural networks, genetic algorithms, fuzzy logic, and expert systems, and discusses two case studies representing the application of two of these AI methods for selecting an appropriate EOR method, based on reservoir characterization, in a feasible and effective way.

Keywords: artificial intelligence, EOR, neural networks, expert systems

Procedia PDF Downloads 408
7630 Transport Infrastructure and Economic Growth in South Africa

Authors: Abigail Mosetsanagape Mooketsi, Itumeleng Pleasure Mongale, Joel Hinaunye Eita

Abstract:

The aim of this study is to analyse the impact of transport infrastructure on economic growth in South Africa through the Engle-Granger two-step approach, using data from 1970 to 2013. GDP is used as a proxy for economic growth, whilst rail transport (rail lines, rail goods transported) and air transport (air passengers carried, air freight) are used as proxies for transport infrastructure. The results showed a positive long-run relationship between transport infrastructure and economic growth, indicating that South Africa's economic growth can be boosted by providing transport infrastructure. The estimated models were simulated, and the simulation results show that the model is a good fit. The findings of this research will be beneficial to policy makers and academics, and will also enhance investors' ability to make informed decisions about investing in South Africa.
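The Engle-Granger two-step procedure can be sketched on synthetic series (the study's GDP and transport data are not reproduced here): step 1 estimates the long-run OLS relationship; step 2 runs a Dickey-Fuller-style regression on the residuals, where a strongly negative coefficient suggests cointegration. The critical values used for inference are not computed here.

```python
import numpy as np

# Engle-Granger two-step sketch on synthetic cointegrated series.
rng = np.random.default_rng(0)
n = 200
x = np.cumsum(rng.normal(size=n))                   # I(1) regressor
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=n)   # cointegrated with x

# Step 1: long-run OLS regression y = b0 + b1*x + e
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Step 2: Dickey-Fuller regression on residuals: d(e_t) = rho * e_{t-1} + u_t
de = np.diff(resid)
lag = resid[:-1]
rho = (lag @ de) / (lag @ lag)
sigma2 = np.sum((de - rho * lag) ** 2) / (len(de) - 1)
t_stat = rho / np.sqrt(sigma2 / (lag @ lag))
print(f"b1 = {beta[1]:.3f}, rho = {rho:.3f}, t = {t_stat:.2f}")
```

A t-statistic well below the Engle-Granger critical value (roughly -3.4 at the 5% level for this setup) supports a cointegrating, i.e. long-run, relationship.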

Keywords: transport, infrastructure, economic growth, South Africa

Procedia PDF Downloads 482
7629 Discursive Psychology of Emotions in Mediation

Authors: Katarzyna Oberda

Abstract:

The aim of this paper is to conceptualize emotions in the process of mediation. Although human emotions have been approached from various disciplines and perspectives, e.g., philosophy, linguistics, psychology, and neurology, this complex phenomenon still needs further investigation into its discursive character, with an open mind and heart. To attain this aim, theoretical and practical considerations are taken into account, both to contextualize the discursive psychology of emotions in mediation and to show how cognitive and linguistic activity expressed in language may lead to an emotional turn in the process of mediation. The double direction of this research into the discursive psychology of emotions has been partially inspired by the evaluative components of mediation forms. In the conducted research, we apply the methodology of discursive psychology with discourse analysis as a tool. The practical data come from recorded online mediations. The major finding of the conducted research is the reconstruction of a model of emotional transformation in mediation.

Keywords: discourse analysis, discursive psychology, emotions, mediation

Procedia PDF Downloads 156
7628 Evaluation of Formability of AZ61 Magnesium Alloy at Elevated Temperatures

Authors: Ramezani M., Neitzert T.

Abstract:

This paper investigates mechanical properties and formability of the AZ61 magnesium alloy at high temperatures. Tensile tests were performed at elevated temperatures of up to 400ºC. The results showed that as temperature increases, yield strength and ultimate tensile strength decrease significantly, while the material experiences an increase in ductility (maximum elongation before break). A finite element model has been developed to further investigate the formability of the AZ61 alloy by deep drawing a square cup. Effects of different process parameters such as punch and die geometry, forming speed and temperature as well as blank-holder force on deep drawability of the AZ61 alloy were studied and optimum values for these parameters are achieved which can be used as a design guide for deep drawing of this alloy.

Keywords: AZ61, formability, magnesium, mechanical properties

Procedia PDF Downloads 579
7627 Analytical Investigation of Ductility of Reinforced Concrete Beams Strengthening with Polypropylene Fibers

Authors: Rifat Sezer, Abdulhamid Aryan

Abstract:

The purpose of this study is to compare the ductility of reinforced concrete beams without fiber to that of reinforced concrete beams with fiber. For this purpose, analytical load-displacement curves of the beams were generated, and the areas under these curves were compared. This comparison shows that the reinforced concrete beams with polypropylene fiber are more ductile. The beam samples used for the analytical model in this study have a 20x30 cm cross-section, a length of 200 cm, and a scale of 1/2. One reinforced concrete reference beam and one reinforced concrete beam with 0.60 kg/m3 of polypropylene fiber were produced. The reinforced concrete beams were modeled with the Abaqus software.

Keywords: polypropylene, fiber-reinforced beams, strengthening of the beams, abaqus program

Procedia PDF Downloads 496
7626 Vibration of Nonhomogeneous Timoshenko Nanobeam Resting on Winkler-Pasternak Foundation

Authors: Somnath Karmakar, S. Chakraverty

Abstract:

This work investigates the vibration of nonhomogeneous Timoshenko nanobeam resting on the Winkler-Pasternak foundation. Eringen’s nonlocal theory has been used to investigate small-scale effects. The Differential Quadrature method is used to obtain the frequency parameters with various classical boundary conditions. The nonhomogeneous beam model has been considered, where Young’s modulus and density of the beam material vary linearly and quadratically. Convergence of frequency parameters is also discussed. The influence of mechanical properties and scaling parameters on vibration frequencies are investigated for different boundary conditions.

Keywords: Timoshenko beam, Eringen's nonlocal theory, differential quadrature method, nonhomogeneous nanobeam

Procedia PDF Downloads 115
7625 Effect of Aggregate Size on Mechanical Behavior of Passively Confined Concrete Subjected to 3D Loading

Authors: Ibrahim Ajani Tijani, C. W. Lim

Abstract:

Limited studies have examined the effect of size on the mechanical behavior of confined concrete subjected to 3-dimensional (3D) test. With the novel 3D testing system to produce passive confinement, concrete cubes were tested to examine the effect of size on stress-strain behavior of the specimens. The effect of size on 3D stress-strain relationship was scrutinized and compared to the stress-strain relationship available in the literature. It was observed that the ultimate stress and the corresponding strain was related to the confining rigidity and size. The size shows a significant effect on the intersection stress and a new model was proposed for the intersection stress based on the conceptual design of the confining plates.

Keywords: concrete, aggregate size, size effect, 3D compression, passive confinement

Procedia PDF Downloads 208
7624 Methyl Red Dye Adsorption On PMMA/GO and PMMA/GO-Fe3O4 Nanocomposites: Equilibrium Isotherm Studies

Authors: Mostafa Rajabi, Kazem Mahanpoor

Abstract:

The performance of methyl red (MR) dye adsorption on poly(methyl methacrylate)/graphene oxide (PMMA/GO) and poly(methyl methacrylate)/graphene oxide-Fe3O4 (PMMA/GO-Fe3O4) nanocomposites as adsorbents was investigated. Our results showed that, for the adsorption of MR dye on both PMMA/GO-Fe3O4 and PMMA/GO nanocomposites, the optimal contact time, temperature, and pH were 80 minutes, 298 K, and pH 2, respectively, since the maximum adsorption of MR dye with both nanocomposite adsorbents was observed at these parameter values. The equilibrium study showed that PMMA/GO-Fe3O4 and PMMA/GO were suitable adsorbents for MR dye removal and that the data were in best agreement with the Langmuir isotherm model.
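Fitting the Langmuir isotherm mentioned above is commonly done via its linearized form C/q = C/q_max + 1/(K_L·q_max). The sketch below uses synthetic data generated from known parameters, not the MR dye measurements of the study.

```python
import numpy as np

# Sketch: fitting the Langmuir isotherm q = q_max*K_L*C / (1 + K_L*C)
# via its linearized form C/q = C/q_max + 1/(K_L*q_max).
# Synthetic data from known (illustrative) parameters:
q_max_true, K_true = 40.0, 0.8                     # mg/g, L/mg
C = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])    # equilibrium conc., mg/L
q = q_max_true * K_true * C / (1.0 + K_true * C)   # adsorbed amount, mg/g

# Linear fit: slope = 1/q_max, intercept = 1/(K_L*q_max)
slope, intercept = np.polyfit(C, C / q, 1)
q_max_fit = 1.0 / slope
K_fit = slope / intercept
print(f"q_max = {q_max_fit:.1f} mg/g, K_L = {K_fit:.2f} L/mg")
```

With real equilibrium data, the quality of this linear fit (its R²) is what supports the claim that the Langmuir model describes the adsorption best.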

Keywords: adsorption, isotherm, methyl methacrylate, methyl red, nanocomposite, nano magnetic Fe3O4

Procedia PDF Downloads 187
7623 Innovation Management Strategy towards the Detroit of Asia

Authors: Jarunee Wonglimpiyarat

Abstract:

This paper explores the innovation management strategy of Thailand in moving towards the Detroit of Asia. The study analyses Thailand’s automotive cluster based on Porter’s Diamond Model and national innovation system (NIS) framework. A qualitative methodology was carried out, using semi-structured interviews with the players in the Thai automotive industry. Thailand took a different NIS approach by pursuing an Original Equipment Manufacture (OEM) strategy to attract foreign investments in building its automotive cluster, a different path from other Asian countries that competed with Own Brand Manufacture (OBM) strategies. The findings provide useful lessons for other newly industrialized countries (NICs) in adopting the cluster policies to move up the technological ladders.

Keywords: innovation management strategy, national innovation system (NIS), Detroit of Asia, original equipment manufacturer (OEM)

Procedia PDF Downloads 346
7622 The Negative Impact of Mindfulness on Creativity: An Experimental Test

Authors: Marine Agogue, Beatrice Parguel, Emilie Canet

Abstract:

Defined as receptive attention to and awareness of present events and experience, mindfulness has grown in popularity over the past 30 years to become a trendy buzzword in business media, which regularly reports on its organizational benefits. Mindfulness may enhance or impede creative thinking depending on the type of meditation. Specifically, focused-attention meditation (focusing attention on one object instead of being open to perceiving and observing any sensation or thought) would be uncorrelated or negatively correlated with creativity. This research explores whether mood, in its two dimensions (i.e., hedonic tone and activation level), could mediate this potentially negative effect. The rationale is that focused-attention meditation is likely to improve hedonic tone but, in the meantime, damage activation level, resulting in opposite effects on creativity through the mediating effect of creative self-efficacy, i.e., the belief that one can perform successfully in an ideation setting. To test this conceptual model, a survey was administered to 97 subjects (53% women, mean age: 25 years), randomly assigned to three conditions (a 10-minute focused-attention meditation session vs. a 10-minute psychometric tests session vs. a control condition) and asked to participate in the egg creative task. Creativity was measured in terms of fluency, expansivity, and originality; the other variables were measured using existing scales: hedonic tone (e.g., joyful, happy), activation level (e.g., passive, sluggish), creative self-efficacy (e.g., 'I felt confident in my ability to do the task effectively'), and self-perceived creativity (e.g., 'I have lots of original ideas'). The chains of mediation were tested using the PROCESS macro (model 6) and controlled for subjects' gender, age, and self-perceived creativity. Comparing the mindfulness and control conditions, no difference appeared in terms of creativity, nor any mediation chain through hedonic tone.
However, subjects who participated in the meditation session felt less active than those in the control condition, which decreased their creative self-efficacy and, in turn, their creativity (whatever the indicator considered). Comparing the mindfulness and psychometric tests conditions, analyses showed that creativity was higher in the psychometric tests condition. As previously, no mediation chain appeared through hedonic tone. However, subjects who participated in the meditation session felt less active than those in the psychometric tests condition, which decreased their creative self-efficacy and creativity. These findings confirm that focused-attention meditation does not enhance creativity. They demonstrate an underlying emotional mechanism based on activation level and suggest that both positive and active mood states have the potential to enhance creativity through creative self-efficacy. In the end, they should discourage organizations from trying to nudge creativity using ad hoc mindfulness devices.
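The serial mediation chain tested here (condition, then activation level, then creative self-efficacy, then creativity, as in PROCESS model 6) can be sketched as three OLS regressions whose path coefficients multiply into an indirect effect. The data are simulated with assumed path values, not the study's 97 subjects, and no bootstrap confidence interval is computed.

```python
import numpy as np

# Serial-mediation sketch: x -> m1 -> m2 -> y, indirect effect = a*d*b.
# Paths (-0.6, 0.5, 0.7) are illustrative assumptions.
rng = np.random.default_rng(1)
n = 500
x = rng.integers(0, 2, n).astype(float)    # meditation (1) vs control (0)
m1 = -0.6 * x + rng.normal(size=n)         # activation level (path a)
m2 = 0.5 * m1 + rng.normal(size=n)         # creative self-efficacy (path d)
y = 0.7 * m2 + rng.normal(size=n)          # creativity (path b)

def ols(columns, target):
    """OLS with intercept; returns coefficient vector [b0, b1, ...]."""
    X = np.column_stack([np.ones(len(target))] + list(columns))
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols([x], m1)[1]            # effect of condition on activation
d = ols([x, m1], m2)[2]        # effect of activation on self-efficacy
b = ols([x, m1, m2], y)[3]     # effect of self-efficacy on creativity
print(f"indirect effect a*d*b = {a * d * b:.3f}")
```

A negative product, as here, mirrors the paper's finding: meditation lowers activation, which lowers creative self-efficacy, which lowers creativity.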

Keywords: creativity, mindfulness, creative self-efficacy, experiment

Procedia PDF Downloads 132
7621 Game of Funds: Efficiency and Policy Implications of the United Kingdom Research Excellence Framework

Authors: Boon Lee

Abstract:

Research publication is an essential output of universities because it not only promotes university recognition but also attracts government funding. The history of university research culture has been one of 'publish or perish', and universities have consistently encouraged their academics and researchers to produce research articles in reputable journals in order to maintain a level of competitiveness. In turn, United Kingdom (UK) government funding is determined by the number and quality of research publications. This paper aims to investigate whether more government funding leads to more quality papers. To that end, the paper employs a Network DEA model to evaluate UK higher education performance over a period of time. Sources of efficiency are also determined via second-stage regression analysis.
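The basic building block of a DEA analysis like this one can be sketched as a linear program. The snippet below solves a single-stage, input-oriented CCR envelopment model (the paper's Network DEA is more elaborate) on invented data: one input (funding) and one output (publications) for four hypothetical universities.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data, not from the UK REF: rows are inputs/outputs,
# columns are decision-making units (universities).
inputs = np.array([[10.0, 20.0, 30.0, 50.0]])    # funding
outputs = np.array([[8.0, 20.0, 24.0, 30.0]])    # publications
n = inputs.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o."""
    # Decision vector z = [theta, lam_1, ..., lam_n], all nonnegative.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub = np.block([
        [-inputs[:, [o]], inputs],                       # X@lam - theta*x_o <= 0
        [np.zeros((outputs.shape[0], 1)), -outputs],     # -Y@lam <= -y_o
    ])
    b_ub = np.concatenate([np.zeros(inputs.shape[0]), -outputs[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

An efficiency of 1 marks a unit on the frontier; values below 1 indicate how much the input could be proportionally reduced while keeping the output, which is the quantity the second-stage regression then tries to explain.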

Keywords: efficiency, higher education, network data envelopment analysis, universities

Procedia PDF Downloads 114
7620 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and heat is removed from the refrigerator cabinets by one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be among the most important factors affecting the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. Flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. To study the heat transfer inside the refrigerator, forced convection in a closed rectangular cavity representing the refrigerating compartment is modeled. The cavity volume is represented with finite volume elements and solved computationally with the appropriate momentum and energy (Navier-Stokes) equations. The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in good agreement with the experimental airflow measurements obtained using the SPIV technique.
After the Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and shelf type, glass or wire) on energy consumption, pull-down time, and temperature distribution in the cabinet are studied. For each case, energy consumption is calculated based on experimental results. After the analysis, the main parameters affecting the temperature distribution inside the cabinet and the energy consumption are determined from the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration, with minimum energy consumption and minimum temperature difference between the shelves inside the cabinet, is determined.
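The DOE selection step above can be sketched as a full-factorial sweep over the three factors. The factor levels and the surrogate energy function below are made-up stand-ins for the paper's CFD-based model, included only to show the enumeration and selection logic.

```python
from itertools import product

# Full-factorial DOE sketch over the three factors varied in the study.
# Levels and the response function are illustrative assumptions.
compressor_capacity = [60, 80, 100]   # W (hypothetical levels)
fan_speed = [1500, 2000, 2500]        # rpm (hypothetical levels)
shelf = ["glass", "wire"]

def predicted_energy(cap, rpm, sh):
    """Hypothetical surrogate for annual energy use, not from the paper."""
    return 0.9 * cap + 0.02 * rpm + (5.0 if sh == "glass" else 0.0)

designs = list(product(compressor_capacity, fan_speed, shelf))
best = min(designs, key=lambda d: predicted_energy(*d))
print("best configuration:", best)
```

In the actual study the response comes from CFD runs and experiments rather than a closed-form function, and the objective also includes the shelf-to-shelf temperature difference.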

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 109