Search results for: Porter's diamond model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16927


12277 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation

Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski

Abstract:

In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return, relative to the risk-free investment, per unit of risk. In the classical Markowitz model the risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization; in particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. The reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints for real-life financial decisions based on several thousand scenarios, decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on minimization of the inverse risk-reward ratio and by taking advantage of LP duality. In the introduced LP model the number of structural constraints is proportional to the number of instruments, so the efficiency of the simplex method is not seriously affected by the number of scenarios, guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with second-order stochastic dominance rules.
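As a rough illustration of the primal MAD formulation discussed above (not the reduced-size dual model the paper introduces), the sketch below solves the MAD portfolio LP with synthetic scenario data; all figures, the target-return rule, and the scenario generator are illustrative assumptions:

```python
# Sketch of the primal MAD portfolio LP (synthetic data, not the paper's
# dual formulation): minimize mean absolute deviation subject to a
# target mean return and full investment, via scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
T, n = 200, 4                       # scenarios, instruments
R = rng.normal(0.01, 0.05, (T, n))  # scenario return matrix (synthetic)
mu = R.mean(axis=0)                 # mean return per instrument
target = 0.8 * float(mu.max())      # required mean portfolio return (assumed rule)

# Decision vector x = [w_1..w_n, d_1..d_T]; minimize (1/T) * sum(d_t)
c = np.concatenate([np.zeros(n), np.ones(T) / T])

D = R - mu                          # centered scenario deviations
# d_t >= |D_t . w| encoded as two inequality blocks A_ub x <= 0
A_ub = np.block([[ D, -np.eye(T)],
                 [-D, -np.eye(T)]])
b_ub = np.zeros(2 * T)
# target-return constraint: -mu . w <= -target
A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
b_ub = np.append(b_ub, -target)

A_eq = np.concatenate([np.ones(n), np.zeros(T)]).reshape(1, -1)
b_eq = [1.0]                        # fully invested portfolio

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + T))
w = res.x[:n]
print("weights:", np.round(w, 3), " MAD:", round(res.fun, 5))
```

The constraint count here grows with the number of scenarios T, which is exactly the scalability issue the paper's dual reformulation avoids.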

Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming

Procedia PDF Downloads 407
12276 Coarse-Graining in Micromagnetic Simulations of Magnetic Hyperthermia

Authors: Razyeh Behbahani, Martin L. Plumer, Ivan Saika-Voivod

Abstract:

Micromagnetic simulations based on the stochastic Landau-Lifshitz-Gilbert equation are used to calculate dynamic magnetic hysteresis loops relevant to magnetic hyperthermia applications. With the goal of effectively simulating room-temperature loops for large iron-oxide based systems at relatively slow sweep rates, on the order of 1 Oe/ns or less, a coarse-graining scheme is proposed and tested. The scheme is derived from a previously developed renormalization-group approach. Loops associated with nanorods, used as building blocks for the larger nanoparticles that were employed in preclinical trials (Dennis et al., 2009 Nanotechnology 20 395103), serve as the model test system. The scaling algorithm is shown to produce nearly identical loops over several decades in the model grain sizes. Sweep-rate scaling involving the damping constant alpha is also demonstrated.

Keywords: coarse-graining, hyperthermia, hysteresis loops, micromagnetic simulations

Procedia PDF Downloads 149
12275 Experimental Chip/Tool Temperature FEM Model Calibration by Infrared Thermography: A Case Study

Authors: Riccardo Angiuli, Michele Giannuzzi, Rodolfo Franchi, Gabriele Papadia

Abstract:

Knowledge of temperature in machining is fundamental to improving the numerical and FEM models used to study critical process aspects, such as the behavior of the worked material and the tool. The extreme conditions in which they operate make it impossible to use traditional measuring instruments; infrared thermography, however, is a valid instrument for temperature measurement during metal cutting. In this study, a large experimental program on the cutting of superduplex steel (ASTM A995 gr. 5A) was carried out, and the relevant cutting temperatures were measured by infrared thermography as the cutting parameters were varied from traditional to extreme values. The values identified were used to calibrate a FEM model for predicting the residual life of the tools. The problems related to detecting cutting temperatures by infrared thermography were analyzed, and a dedicated procedure was developed that can be used in similar processes.

Keywords: machining, infrared thermography, FEM, temperature measurement

Procedia PDF Downloads 184
12274 Assessment of Training, Job Attitudes and Motivation: A Mediation Model in Banking Sector of Pakistan

Authors: Abdul Rauf, Xiaoxing Liu, Rizwan Qaisar Danish, Waqas Amin

Abstract:

The core intention of this study is to analyze the linkage of training, job attitudes and motivation through a mediation model in the banking sector of Pakistan. Moreover, the study answers a range of queries regarding employees' perceptions of training, job satisfaction, motivation and organizational commitment. Hence, the associations of training with job satisfaction, job satisfaction with motivation, organizational commitment with job satisfaction, organizational commitment with motivation, and training directly with motivation are determined in the course of this study. A questionnaire was crafted to measure the four variables of interest: training, job satisfaction, motivation and organizational commitment. A sample of 450 employees from seventeen (17) private banks and two (2) public banks was taken on the basis of convenience sampling in Pakistan, of which 357 completely filled questionnaires were received back. AMOS was used for confirmatory factor analysis (CFA), and the collected data were examined with descriptive statistics, regression analysis and correlation analysis. The empirical findings revealed that training and organizational commitment have a significant and positive impact on job satisfaction and motivation, both directly and through the mediator (job satisfaction), for employees in the banks of Pakistan. Since only the banking sector is under discussion, the findings cannot be generalized to other sectors such as manufacturing, textiles, telecom, and medicine; the modest sample size is a further limitation of this study. 
On the foundation of these results, management is encouraged to revise its training strategies, as training enhances employees' motivation and job satisfaction on a regular basis.
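The mediation logic described above (training affecting motivation partly through job satisfaction) can be sketched on synthetic data; the variable names, effect sizes, and sample construction below are assumptions for illustration, not the study's actual estimates:

```python
# Simple-mediation sketch (training -> job satisfaction -> motivation)
# on synthetic data, using ordinary least squares. Effect sizes are
# assumed; only the sample size (357 usable responses) is from the text.
import numpy as np

rng = np.random.default_rng(42)
n = 357
training = rng.normal(0, 1, n)
job_sat = 0.5 * training + rng.normal(0, 1, n)                      # path a
motivation = 0.4 * job_sat + 0.2 * training + rng.normal(0, 1, n)   # paths b, c'

def ols(y, X):
    """Least-squares slopes, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]                      # drop the intercept

c_total = ols(motivation, training)[0]                      # total effect
a = ols(job_sat, training)[0]                               # X -> M
b, c_direct = ols(motivation, np.column_stack([job_sat, training]))
indirect = a * b                                            # mediated effect
print(f"total={c_total:.3f} direct={c_direct:.3f} indirect={indirect:.3f}")
```

For linear OLS with the same sample, the total effect decomposes exactly into direct plus indirect, which is a convenient sanity check on the regressions.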

Keywords: job satisfaction, motivation, organizational commitment, Pakistan, training

Procedia PDF Downloads 254
12273 Numerical Modelling of Skin Tumor Diagnostics through Dynamic Thermography

Authors: Luiz Carlos Wrobel, Matjaz Hribersek, Jure Marn, Jurij Iljaz

Abstract:

Dynamic thermography has been clinically proven to be a valuable diagnostic technique for skin tumor detection as well as for other medical applications such as breast cancer diagnostics, diagnostics of vascular diseases, fever screening, dermatological and other applications. Thermography for medical screening can be done in two different ways, observing the temperature response under steady-state conditions (passive or static thermography), and by inducing thermal stresses by cooling or heating the observed tissue and measuring the thermal response during the recovery phase (active or dynamic thermography). The numerical modelling of heat transfer phenomena in biological tissue during dynamic thermography can aid the technique by improving process parameters or by estimating unknown tissue parameters based on measured data. This paper presents a nonlinear numerical model of multilayer skin tissue containing a skin tumor, together with the thermoregulation response of the tissue during the cooling-rewarming processes of dynamic thermography. The model is based on the Pennes bioheat equation and solved numerically by using a subdomain boundary element method which treats the problem as axisymmetric. The paper includes computational tests and numerical results for Clark II and Clark IV tumors, comparing the models using constant and temperature-dependent thermophysical properties, which showed noticeable differences and highlighted the importance of using a local thermoregulation model.
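The tissue model described above rests on the Pennes bioheat equation, which in its standard form (symbols as conventionally used, not taken from the paper) reads:

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho_b c_b \omega_b \left( T_a - T \right)
  + q_m
```

Here ρ, c and k are the density, specific heat and thermal conductivity of the tissue, ρ_b, c_b and ω_b the density, specific heat and perfusion rate of blood, T_a the arterial temperature, and q_m the metabolic heat generation. In the nonlinear model of the paper, the thermophysical properties and the perfusion (thermoregulation) term become temperature-dependent.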

Keywords: boundary element method, dynamic thermography, static thermography, skin tumor diagnostic

Procedia PDF Downloads 107
12272 The Relationship between Central Bank Independence and Inflation: Evidence from Africa

Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit

Abstract:

The past decades have witnessed a considerable institutional shift towards Central Bank Independence (CBI) across the economies of the world. The motivation behind such a change is the acceptance that increased central bank autonomy can alleviate inflation bias. Hence, it is pertinent to study whether Central Bank Independence acts as a significant factor behind price stability in African economies, or whether this macroeconomic outcome results from other economic, political or social factors. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. We measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether decisions made by African central banks are affected by external forces. The study investigates empirically the association between Central Bank Independence and inflation for 10 African economies over a period of 17 years, from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much of the existing empirical research, we do not use the usual static panel model, as it is associated with potential misspecification arising from the absence of dynamics; instead, a dynamic panel data model integrating several control variables is used. Firstly, the analysis includes dynamic terms to capture the persistence of inflation: given the inflation inertia that is very likely in African countries, lagged inflation must be included in the empirical model. Secondly, due to the known reverse causality between Central Bank Independence and inflation, the system generalized method of moments (GMM) is employed; 
with GMM estimators, unknown forms of heteroskedasticity as well as autocorrelation in the error term are admissible. Thirdly, control variables are used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation, even after including control variables.

Keywords: central bank independence, inflation, macroeconomic variables, price stability

Procedia PDF Downloads 364
12271 Numerical Model to Study Calcium and Inositol 1,4,5-Trisphosphate Dynamics in a Myocyte Cell

Authors: Nisha Singh, Neeru Adlakha

Abstract:

Calcium signalling is one of the most important intracellular signalling mechanisms, and many investigations have been made over recent decades to understand its mechanisms in various cells. However, most existing studies have focused on calcium signalling alone, without paying attention to its dependence on other chemical species such as inositol 1,4,5-trisphosphate (IP3). Models for the independent study of calcium signalling and IP3 signalling in various cells exist, but very little attention has been paid to the interdependence of these two signalling processes in a cell. In this paper, we propose a coupled mathematical model to understand the interdependence of IP3 dynamics and calcium dynamics in a myocyte cell. Such studies will provide a deeper understanding of the various factors involved in calcium signalling in myocytes, which may be of great use to biomedical scientists for various medical applications.
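A coupled calcium/IP3 model of the kind described is typically discretized with finite differences; the one-dimensional sketch below uses explicit Euler time stepping with simplified linear reaction terms as placeholders for the paper's (unstated) kinetics, and all coefficients are assumed values:

```python
# Explicit finite-difference sketch of coupled calcium (C) and IP3 (P)
# dynamics in 1D. The linear coupling terms are toy placeholders, not
# the paper's kinetic model; diffusion coefficients are assumed.
import numpy as np

L, nx, nt = 1.0, 51, 2000
dx = L / (nx - 1)
dt = 1e-4
Dc, Dp = 0.016, 0.283          # diffusion coefficients (assumed)

C = np.zeros(nx); C[0] = 1.0   # calcium: clamped source at left boundary
P = np.full(nx, 0.1)           # IP3: uniform initial concentration

def lap(u):
    """Second-order central-difference Laplacian on interior points."""
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return out

for _ in range(nt):
    # toy coupling: IP3 promotes Ca release, Ca stimulates IP3 production
    dC = Dc * lap(C) + 0.2 * P - 0.1 * C
    dP = Dp * lap(P) + 0.05 * C - 0.08 * P
    C += dt * dC
    P += dt * dP
    C[0] = 1.0                 # re-impose the boundary source
print(f"C mid={C[nx//2]:.4f}  P mid={P[nx//2]:.4f}")
```

The explicit scheme is stable here because dt * D / dx^2 stays well below 1/2 for both species; a real myocyte model would replace the linear terms with IP3-receptor kinetics.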

Keywords: calcium signalling, coupling, finite difference method, inositol 1,4,5-trisphosphate

Procedia PDF Downloads 292
12270 Utility Analysis of API Economy Based on Multi-Sided Platform Markets Model

Authors: Mami Sugiura, Shinichi Arakawa, Masayuki Murata, Satoshi Imai, Toru Katagiri, Motoyoshi Sekiya

Abstract:

The API (Application Programming Interface) economy, in which many participants join, interact, and form an economy, is expected to increase collaboration between information services through APIs and thereby increase the market value created by service collaborations. In this paper, we introduce API evaluators, who activate the API economy by reviewing and/or evaluating APIs, and develop a multi-sided API economy model that formulates interactions among the platform provider, API developers, consumers, and API evaluators. By obtaining the equilibrium that maximizes the utility of all participants, the impact of API evaluators on the utility of participants in the API economy is revealed. Numerical results show that, with the existence of API evaluators, the numbers of developers and consumers increase by 1.5% and the utility of the platform provider increases by 2.3%. We also discuss strategies by which the platform provider can maximize its utility in the presence of API evaluators.

Keywords: API economy, multi-sided markets, API evaluator, platform, platform provider

Procedia PDF Downloads 186
12269 Transport Mode Selection under Lead Time Variability and Emissions Constraint

Authors: Chiranjit Das, Sanjay Jharkharia

Abstract:

This study is focused on transport mode selection under lead time variability and an emissions constraint. In seeking to reduce the carbon emissions generated by transportation, organizations often face a dilemma in transport mode selection, since logistics cost and emissions reduction trade off against each other. Another important aspect of the transportation decision is lead-time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive analytical model for transport mode selection under an emissions constraint, and we extend the work by analysing the effect of lead time variability on mode selection through a sensitivity analysis. To account for lead time variability, two identically, normally distributed random variables are incorporated: unit lead time variability and lead-time demand variability. We therefore address the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed, comprising unit purchasing cost, unit transportation cost, emissions cost, holding cost during the lead time, and a penalty cost for stockouts due to lead time variability. A set of modes is available between nodes; in this paper we consider four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are treated as deterministic and static, with each mode having a different emissions level depending on the distance and product characteristics. 
Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions one in order to reduce penalty cost. We provide a numerical analysis to study the effectiveness of the mathematical model, and find that the chance of a stockout during the lead time rises with the variability of lead time and lead-time demand. Numerical results show that the penalty cost of the air transport mode is negative, meaning the chance of a stockout is essentially zero, but air carries higher holding and emissions costs. Therefore, air transport is selected only for emergency orders to reduce penalty cost; otherwise, rail and road are the most preferred modes of transportation. This paper thus contributes to the literature a novel approach to selecting a transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, the full truckload strategy, and demand consolidation.
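The mode-selection trade-off can be made concrete with a small numerical sketch; the cost components mirror those listed above, but every parameter value (freight rates, lead times, emissions, carbon price and cap, service level) is an illustrative assumption, not data from the paper:

```python
# Hedged sketch of total cost per shipment for four transport modes
# under a carbon cap, with an expected-stockout penalty driven by
# lead-time variability. All parameter values are illustrative.
from math import sqrt
from statistics import NormalDist

D = 1000        # monthly demand (units)
h = 2.0         # holding cost per unit per month
p = 20.0        # penalty per unit short
tau = 25.0      # carbon price per tonne CO2
cap = 60.0      # emissions cap (tonnes)
sigma_d = 5.0   # daily demand standard deviation
z = NormalDist().inv_cdf(0.95)   # 95% service level

# mode: (freight cost/unit, mean lead time [days], lead-time std, tCO2)
modes = {"air":   (4.0,  2,  0.5, 90.0),
         "road":  (1.5,  6,  1.5, 40.0),
         "rail":  (1.0, 10,  2.0, 20.0),
         "water": (0.6, 20,  4.0, 10.0)}

def total_cost(freight, lt, lt_sd, co2):
    if co2 > cap:
        return float("inf")          # mode infeasible under the cap
    nd = NormalDist()
    # std of lead-time demand: demand noise over lt days + lead-time noise
    sigma_ltd = sqrt(lt * sigma_d**2 + (D / 30.0) ** 2 * lt_sd**2)
    loss = nd.pdf(z) - z * (1.0 - nd.cdf(z))   # standard normal loss at z
    return (freight * D               # transportation
            + tau * co2               # emissions cost
            + h * D * lt / 30.0       # holding during lead time
            + p * sigma_ltd * loss)   # expected stockout penalty

costs = {m: total_cost(*v) for m, v in modes.items()}
best = min(costs, key=costs.get)
print({m: round(c, 1) for m, c in costs.items()}, "->", best)
```

With these assumed numbers, air is ruled out by the cap and the slower, cheaper modes win despite larger safety-stock penalties, which is consistent with the qualitative conclusion of the abstract.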

Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection

Procedia PDF Downloads 434
12268 A Novel Geometrical Approach toward the Mechanical Properties of Particle Reinforced Composites

Authors: Hamed Khezrzadeh

Abstract:

Many investigations of the micromechanical structure of materials indicate that fractal patterns exist at the micro scale in some of the main construction and industrial materials. A recently presented micro-fractal theory brings together well-known periodic homogenization and fractal geometry to construct an appropriate model for determining the mechanical properties of particle-reinforced composite materials. The proposed multi-step homogenization scheme considers the mechanical properties of the different constituent phases in the composite, together with the interaction between these phases, throughout a step-by-step homogenization technique. In the proposed model the interaction of different phases is also investigated, and the effect of grading of the reinforcement on the mechanical properties can be studied. The theoretical outcomes are compared to experimental data for different types of particle-reinforced composites, and very good agreement with the experimental data is observed.
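The flavor of a multi-step homogenization can be sketched with the classical Voigt and Reuss estimates, used here as stand-ins for the paper's micro-fractal scheme; the phase moduli and volume fractions are assumed values:

```python
# Two-step homogenization sketch using simple Voigt (parallel) and
# Reuss (series) bounds, NOT the paper's micro-fractal scheme.
# All moduli (GPa) and volume fractions are assumed for illustration.
def voigt(E, v):
    """Upper-bound modulus for phases with moduli E, volume fractions v."""
    return sum(Ei * vi for Ei, vi in zip(E, v))

def reuss(E, v):
    """Lower-bound modulus: inverse rule of mixtures."""
    return 1.0 / sum(vi / Ei for Ei, vi in zip(E, v))

# step 1: homogenize particle + interphase into an effective inclusion
E_particle, E_interphase = 70.0, 10.0
incl = voigt([E_particle, E_interphase], [0.8, 0.2])

# step 2: embed the effective inclusion in the polymer matrix
E_matrix, f_incl = 3.0, 0.3
upper = voigt([incl, E_matrix], [f_incl, 1 - f_incl])
lower = reuss([incl, E_matrix], [f_incl, 1 - f_incl])
print(f"effective modulus bounds: {lower:.2f} - {upper:.2f} GPa")
```

Any consistent homogenization estimate for the composite must fall between these two bounds, which is a useful sanity check on more elaborate multi-step schemes.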

Keywords: fractal geometry, homogenization, micromechanics, particulate composites

Procedia PDF Downloads 293
12267 Modeling and Control Design of a Centralized Adaptive Cruise Control System

Authors: Markus Mazzola, Gunther Schaaf

Abstract:

A vehicle driving with an Adaptive Cruise Control System (ACC) is usually controlled decentrally, based on the information of radar systems and in some publications based on C2X-Communication (CACC) to guarantee stable platoons. In this paper, we present a Model Predictive Control (MPC) design of a centralized, server-based ACC-System, whereby the vehicular platoon is modeled and controlled as a whole. It is then proven that the proposed MPC design guarantees asymptotic stability and hence string stability of the platoon. The Networked MPC design is chosen to be able to integrate system constraints optimally as well as to reduce the effects of communication delay and packet loss. The performance of the proposed controller is then simulated and analyzed in an LTE communication scenario using the LTE/EPC Network Simulator LENA, which is based on the ns-3 network simulator.

Keywords: adaptive cruise control, centralized server, networked model predictive control, string stability

Procedia PDF Downloads 515
12266 Modeling Palm Oil Quality During the Ripening Process of Fresh Fruits

Authors: Afshin Keshvadi, Johari Endan, Haniff Harun, Desa Ahmad, Farah Saleena

Abstract:

Experiments were conducted to develop a model for analyzing the ripening process of oil palm fresh fruits in relation to the oil yield and quality of the palm oil produced. This research was carried out on 8-year-old Tenera (Dura × Pisifera) palms planted in 2003 at the Malaysian Palm Oil Board Research Station. Fresh fruit bunches were harvested from designated palms from January to May of 2010. The bunches were divided into three regions (top, middle and bottom), and fruits from the outer and inner layers were randomly sampled for analysis at 8, 12, 16 and 20 weeks after anthesis to establish relationships between maturity and oil development in the mesocarp and kernel. Computations on data related to ripening time, oil content and oil quality were performed using several computer software programs (MSTAT-C, SAS and Microsoft Excel), and nine nonlinear mathematical models were fitted to the collected data using MATLAB. The results showed mean mesocarp oil content increased from 1.24% at 8 weeks after anthesis to 29.6% at 20 weeks after anthesis. Fruits from the top part of the bunch had the highest mesocarp oil content, at 10.09%. The lowest kernel oil content, 0.03%, was recorded at 12 weeks after anthesis. Palmitic acid and oleic acid comprised more than 73% of total mesocarp fatty acids at 8 weeks after anthesis, increasing to more than 80% at fruit maturity at 20 weeks. The logistic model, with the highest R2 and the lowest root mean square error, was found to be the best-fit model.
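A logistic fit of the kind selected as best can be sketched as follows; only the 8- and 20-week oil contents (1.24% and 29.6%) come from the abstract, while the two intermediate points and the parameterization are assumptions for illustration:

```python
# Sketch of fitting a logistic growth curve to mesocarp oil content.
# Only the 8- and 20-week values are from the abstract; the intermediate
# points are assumed, and the fit is done with scipy's curve_fit.
import numpy as np
from scipy.optimize import curve_fit

weeks = np.array([8.0, 12.0, 16.0, 20.0])   # weeks after anthesis
oil = np.array([1.24, 8.0, 22.0, 29.6])     # mesocarp oil content (%)

def logistic(t, K, r, t0):
    """Logistic curve: asymptote K, growth rate r, inflection time t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

popt, _ = curve_fit(logistic, weeks, oil, p0=[30.0, 0.5, 14.0])
pred = logistic(weeks, *popt)
rmse = float(np.sqrt(np.mean((oil - pred) ** 2)))
print(f"K={popt[0]:.1f}  r={popt[1]:.2f}  t0={popt[2]:.1f}  RMSE={rmse:.2f}")
```

Comparing the RMSE across the nine candidate models, as the study does, simply repeats this fit with each functional form and ranks the results.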

Keywords: oil palm, oil yield, ripening process, anthesis, fatty acids, modeling

Procedia PDF Downloads 313
12265 Dynamic Process Model for Designing Smart Spaces Based on Context-Awareness and Computational Methods Principles

Authors: Heba M. Jahin, Ali F. Bakr, Zeyad T. Elsayad

Abstract:

A smart space can be defined as any working environment that integrates embedded computers, information appliances and multi-modal sensors to remain focused on the interaction between the users, their activity, and their behavior in the space. A smart space must therefore be aware of its context and automatically adapt to context changes by interacting with the physical environment through natural and multimodal interfaces, and by serving information proactively. This paper suggests a dynamic framework for the architectural design process of such a space, based on the principles of computational methods and context-awareness, to help create a field of changes and modifications and to generate possibilities and concerns about the physical, structural and user contexts. The framework comprises five main processes: gathering and analyzing data to generate smart design scenarios, parameters, and attributes; transforming these by coding into four types of models; connecting those models into an interaction model that represents the context-awareness system; transforming that model into a virtual, ambient environment representing the physical and real environments, to act as a linkage between the users and the activities taking place in the smart space; and, finally, a feedback phase from the users of that environment, to ensure that the design of the smart space fulfills their needs. The generated design process will therefore help in designing smart spaces that can be adapted and controlled to answer users' defined goals, needs, and activities.

Keywords: computational methods, context-awareness, design process, smart spaces

Procedia PDF Downloads 331
12264 A Feasibility and Implementation Model of Small-Scale Hydropower Development for Rural Electrification in South Africa: Design Chart Development

Authors: Gideon J. Bonthuys, Marco van Dijk, Jay N. Bhagwan

Abstract:

Small-scale hydropower used to play a very important role in the provision of energy to urban and rural areas of South Africa. The national electricity grid, however, expanded and offered cheap, coal-generated electricity, and a large number of hydropower systems were decommissioned. Unfortunately, large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future, owing to the high cost of transmission and distribution systems to remote communities, the relatively low electricity demand within rural communities, and the allocation of current expenditure to upgrading and constructing new coal-fired power stations. This necessitates the development of feasible alternative power generation technologies. A feasibility and implementation model was developed to assist in designing and financially evaluating small-scale hydropower (SSHP) plants. Several sites were identified using the model; SSHP plants were designed for the selected sites, and the designs were priced using pricing models covering the civil, mechanical and electrical aspects. Following feasibility studies on the designed and priced SSHP plants, a feasibility analysis was done and a design chart developed for future similar SSHP plant projects. The methodology for the feasibility analysis of other potential sites consisted of developing cost and income/saving formulae, net present value (NPV) formulae, a Capital Cost Comparison Ratio (CCCR) and levelised cost formulae for SSHP projects for the different types of plant installation. It included setting up a model for the development of a design chart for an SSHP plant; calculating the NPV, CCCR and levelised cost for the different scenarios within the model by varying parameters in the developed formulae; setting up the design chart for the different scenarios; and analyzing and interpreting the results. 
Interpretation of the developed design charts shows that turbine and distribution line costs are the major influences on the cost and feasibility of SSHP, that high-head, short-transmission-line and islanded mini-grid SSHP installations are the most feasible, and that the levelised cost of SSHP is high for low-power generation sites. The main conclusion of the study is that the levelised cost of SSHP for low energy generation is high compared to the levelised cost of grid-connected electricity supply; however, the remoteness of rural sites and the cost of the infrastructure needed to connect remote rural communities to the local or national electricity grid yield a low CCCR and render SSHP for rural electrification feasible on this basis.
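The feasibility arithmetic behind the NPV and levelised cost formulae can be sketched as follows; every figure (capital cost, O&M fraction, annual generation, tariff, discount rate, plant life) is an assumed illustrative value, not from the study:

```python
# Hedged sketch of NPV and levelised cost of energy (LCOE) for a
# small-scale hydropower plant. All input figures are assumptions.
capital = 1_200_000.0       # installed cost (currency units)
om_frac = 0.03              # annual O&M as a fraction of capital
energy = 450_000.0          # annual generation (kWh)
tariff = 1.8                # value of delivered energy (currency/kWh)
rate, years = 0.08, 20      # discount rate and plant life

def pv(amount, r, t):
    """Present value of a cash flow received in year t."""
    return amount / (1.0 + r) ** t

annual_net = energy * tariff - om_frac * capital
npv = -capital + sum(pv(annual_net, rate, t) for t in range(1, years + 1))

disc_cost = capital + sum(pv(om_frac * capital, rate, t)
                          for t in range(1, years + 1))
disc_energy = sum(pv(energy, rate, t) for t in range(1, years + 1))
lcoe = disc_cost / disc_energy   # discounted cost per discounted kWh
print(f"NPV={npv:,.0f}  LCOE={lcoe:.3f} per kWh")
```

Sweeping these inputs over head, flow and line-length scenarios, and recording where NPV turns positive, is essentially how a design chart of the kind described is populated.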

Keywords: cost, feasibility, rural electrification, small-scale hydropower

Procedia PDF Downloads 224
12263 Analyzing Changes in Runoff Patterns Due to Urbanization Using SWAT Models

Authors: Asawari Ajay Avhad

Abstract:

The Soil and Water Assessment Tool (SWAT) is a hydrological model designed to predict the complex interactions within natural and human-altered watersheds. This research applies the SWAT model to the Ulhas River basin, a small watershed undergoing urbanization and characterized by bowl-like topography. Three simulation scenarios (LC17, LC22, and LC27) are investigated, each representing a different land use and land cover (LULC) configuration, to assess the impact of urbanization on runoff. The LULC for the year 2027 is generated using the MOLUSCE plugin of QGIS, incorporating spatial factors such as the DEM, distance from roads, distance from the river, slope, and distance from settlements. Future climate data are simulated within the SWAT model using 30 years of historical data. A runoff susceptibility map for the basin is created, classifying runoff into five levels ranging from very low to very high; sub-basins corresponding to major urban settlements are identified as highly susceptible to runoff, and under future climate projections a slight increase in runoff is forecast. The reliability of the methodology was validated by checking it against sub-basins with a track record of severe flood events, which the susceptibility map correctly identified as highly susceptible to runoff, reinforcing the credibility of the assessment. This study suggests that the methodology employed could serve as a valuable tool in flood management planning.

Keywords: future land use impact, flood management, runoff prediction, ArcSWAT

Procedia PDF Downloads 46
12262 Soil Parameters Identification around PMT Test by Inverse Analysis

Authors: I. Toumi, Y. Abed, A. Bouafia

Abstract:

This paper presents a methodology for identifying cohesive soil parameters that takes into account different constitutive equations. The procedure, applied to identify the parameters of the generalized Prager model associated with the Drucker-Prager failure criterion from a pressuremeter expansion curve, is based on an inverse analysis approach that minimizes a function representing the difference between the experimental curve and the simulated curve using a simplex algorithm. The model response on the pressuremeter path and its identification from experimental data lead to the determination of the friction angle, the cohesion and the Young's modulus. The effects of some parameters on the simulated curves and on the stress paths around the pressuremeter probe are presented. Comparisons between the parameters determined with the proposed method and those obtained by other means are also presented.
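The inverse-analysis loop itself is independent of the constitutive law; the sketch below runs a Nelder-Mead simplex search to recover parameters from a synthetic expansion curve, using a simple saturating pressure model as a stand-in for the Prager/Drucker-Prager simulation:

```python
# Sketch of the inverse-analysis loop: a Nelder-Mead simplex search
# minimizes the misfit between a "measured" expansion curve and a model
# curve. The two-parameter model is a stand-in, not the paper's
# constitutive law; true parameter values and noise level are assumed.
import numpy as np
from scipy.optimize import minimize

def model(strain, p_lim, eps_ref):
    """Toy cavity-pressure curve saturating at the limit pressure p_lim."""
    return p_lim * (1.0 - np.exp(-strain / eps_ref))

strain = np.linspace(0.0, 0.2, 25)
true = (800.0, 0.05)                       # "unknown" soil parameters
rng = np.random.default_rng(1)
measured = model(strain, *true) + rng.normal(0.0, 5.0, strain.size)

def misfit(theta):
    """Sum of squared differences between measured and simulated curves."""
    return float(np.sum((measured - model(strain, *theta)) ** 2))

res = minimize(misfit, x0=[500.0, 0.1], method="Nelder-Mead",
               options={"maxiter": 1000})
p_lim, eps_ref = res.x
print(f"identified p_lim={p_lim:.1f}  eps_ref={eps_ref:.4f}")
```

In the paper's setting, `model` would be replaced by a finite element simulation of cavity expansion, which is why a derivative-free method such as the simplex algorithm is attractive.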

Keywords: cohesive soils, cavity expansion, pressuremeter test, finite element method, optimization procedure, simplex algorithm

Procedia PDF Downloads 294
12261 On Virtual Coordination Protocol towards 5G Interference Mitigation: Modelling and Performance Analysis

Authors: Bohli Afef

Abstract:

Fifth-generation (5G) wireless systems are characterized by extreme densities of cell stations, deployed to meet higher future demand. Hence, interference management is a crucial challenge in 5G ultra-dense cellular networks. In contrast to the classical inter-cell interference coordination approach, which is no longer suited to such a high density of cell tiers, this paper proposes a novel virtual coordination scheme based on a dynamic common cognitive monitor channel protocol to deal with the inter-cell interference issue. A tractable and flexible model for the coverage probability of a typical user is developed using stochastic geometry. The performance of the suggested protocol is analyzed both analytically and numerically in terms of coverage probability.
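The coverage probability of a typical user in such stochastic-geometry models is often checked by Monte Carlo simulation; the sketch below does this for a baseline Poisson network with Rayleigh fading and nearest-station association, with density, path-loss exponent, and SIR threshold chosen as illustrative values rather than taken from the paper:

```python
# Monte Carlo sketch of the coverage probability of a typical user at
# the origin of a Poisson network with Rayleigh fading and
# nearest-station association. Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
lam = 1.0            # base-station density (per unit area)
alpha = 4.0          # path-loss exponent
theta = 1.0          # SIR threshold (0 dB)
R, trials = 15.0, 2000

covered = 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)   # stations in a disc of radius R
    if n == 0:
        continue
    r = R * np.sqrt(rng.random(n))        # uniform distances in the disc
    h = rng.exponential(1.0, n)           # Rayleigh fading powers
    rx = h * r ** (-alpha)                # received powers
    k = np.argmin(r)                      # serve from the nearest station
    sir = rx[k] / (rx.sum() - rx[k])      # all others interfere
    covered += sir > theta
pc = covered / trials
print(f"coverage probability ~ {pc:.3f}")
```

For this interference-limited baseline with alpha = 4 and theta = 1, the known closed-form result is about 0.56, so the simulation doubles as a check on the analytical model before any coordination protocol is added.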

Keywords: ultra-dense heterogeneous networks, dynamic common channel protocol, cognitive radio, stochastic geometry, coverage probability

Procedia PDF Downloads 325
12260 Establishment of a Classifier Model for Early Prediction of Acute Delirium in Adult Intensive Care Unit Using Machine Learning

Authors: Pei Yi Lin

Abstract:

Objective: The objective of this study is to use machine learning methods to build an early prediction classifier model for acute delirium to improve the quality of medical care for intensive care patients. Background: Delirium is a common acute and sudden disturbance of consciousness in critically ill patients. After the occurrence, it is easy to prolong the length of hospital stay and increase medical costs and mortality. In 2021, the incidence of delirium in the intensive care unit of internal medicine was as high as 59.78%, which indirectly prolonged the average length of hospital stay by 8.28 days, and the mortality rate is about 2.22% in the past three years. Therefore, it is expected to build a delirium prediction classifier through big data analysis and machine learning methods to detect delirium early. Method: This study is a retrospective study, using the artificial intelligence big data database to extract the characteristic factors related to delirium in intensive care unit patients and let the machine learn. The study included patients aged over 20 years old who were admitted to the intensive care unit between May 1, 2022, and December 31, 2022, excluding GCS assessment <4 points, admission to ICU for less than 24 hours, and CAM-ICU evaluation. The CAMICU delirium assessment results every 8 hours within 30 days of hospitalization are regarded as an event, and the cumulative data from ICU admission to the prediction time point are extracted to predict the possibility of delirium occurring in the next 8 hours, and collect a total of 63,754 research case data, extract 12 feature selections to train the model, including age, sex, average ICU stay hours, visual and auditory abnormalities, RASS assessment score, APACHE-II Score score, number of invasive catheters indwelling, restraint and sedative and hypnotic drugs. 
After feature data cleaning and processing, with missing values supplemented by KNN interpolation, a total of 54,595 case events were available for machine learning analysis. Events from May 1 to November 30, 2022, were used as the model training data, of which 80% formed the training set and 20% the internal validation set; events from December 1 to December 31, 2022, formed the external validation set. Finally, model inference and performance evaluation were performed, and the models were retrained with adjusted parameters. Results: Four machine learning models were analyzed and compared: XGBoost, Random Forest, Logistic Regression, and Decision Tree. Random Forest achieved the highest average internal validation accuracy (AUC = 0.86); Random Forest and XGBoost achieved the highest average external validation accuracy (AUC = 0.86); and Random Forest achieved the highest average cross-validation accuracy (ACC = 0.77). Conclusion: Clinically, medical staff usually conduct CAM-ICU assessments at the bedside of critically ill patients, but there is a lack of machine learning classification methods to assist with real-time assessment of ICU patients, so clinical staff cannot draw on objective, continuous monitoring data to identify and predict the occurrence of delirium more accurately. It is hoped that predictive models built through machine learning can predict delirium early and immediately, support clinical decisions at the best time, and, combined with PADIS delirium care measures, provide individualized non-pharmacological interventions that maintain patient safety and improve the quality of care.
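The KNN interpolation step used to fill missing feature values can be sketched as follows. This is an illustrative sketch only: the feature names, the value of k, and the distance metric are assumptions, not the study's actual configuration.

```python
import math

def knn_impute(rows, k=3):
    """Fill missing values (None) with the mean of that feature among the
    k nearest complete rows, measuring Euclidean distance over the
    features the incomplete row does have."""
    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(list(r))
            continue
        # distance computed only over the dimensions present in this row
        dims = [i for i, v in enumerate(r) if v is not None]
        nearest = sorted(
            complete,
            key=lambda c: math.dist([r[i] for i in dims], [c[i] for i in dims]),
        )[:k]
        filled.append([
            v if v is not None else sum(n[i] for n in nearest) / len(nearest)
            for i, v in enumerate(r)
        ])
    return filled

# toy feature rows: [age, ICU stay hours, RASS score] (hypothetical values)
data = [[65, 48, -2], [70, 50, -1], [66, 47, -2], [68, None, -1]]
imputed = knn_impute(data, k=2)
```

With k=2, the missing ICU-stay value is replaced by the mean of the two nearest complete rows.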

Keywords: critically ill patients, machine learning methods, delirium prediction, classifier model

Procedia PDF Downloads 75
12259 Modeling and Tracking of Deformable Structures in Medical Images

Authors: Said Ettaieb, Kamel Hamrouni, Su Ruan

Abstract:

This paper presents a new method for tracking deformable structures in medical imaging, based on both the Active Shape Model (ASM) and a priori knowledge about spatio-temporal shape variation. The main idea is to exploit the a priori knowledge of shape that exists in ASM and to introduce new knowledge about shape variation over time. The aim is to define a new, more stable method allowing the reliable detection of structures whose shape changes considerably over time. The method can also be used for three-dimensional segmentation by replacing the temporal component with the third spatial axis (z). The proposed method is applied to the functional and morphological study of the heart pump: the functional aspect was studied through temporal sequences of scintigraphic images, and morphology was studied through MRI volumes. The obtained results are encouraging and show the performance of the proposed method.

Keywords: active shape model, a priori knowledge, spatiotemporal shape variation, deformable structures, medical images

Procedia PDF Downloads 342
12258 Breast Cancer Prediction Using Score-Level Fusion of Machine Learning and Deep Learning Models

Authors: Sam Khozama, Ali M. Mayya

Abstract:

Breast cancer is one of the most common cancer types in women. Early prediction of breast cancer helps physicians detect cancer in its early stages. Big cancer data needs very powerful tools to analyze and extract predictions. Machine learning and deep learning are two of the most efficient tools for predicting cancer from textual data. In this study, we developed a model that fuses a machine learning model and a deep learning model: Long Short-Term Memory (LSTM) and ensemble learning with hyperparameter optimization are used, and score-level fusion combines their outputs into the final prediction. Experiments were done on the Breast Cancer Surveillance Consortium (BCSC) dataset after balancing and grouping the class categories. Five different training scenarios were used, and the tests show that the designed fusion model improved performance by 3.3% compared to the individual models.
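Score-level fusion of this kind combines the per-sample probability scores of the two models before thresholding. A minimal sketch, assuming a simple convex weighting (the weight, threshold, and probability values below are illustrative, not the paper's):

```python
def fuse_scores(p_ml, p_dl, w=0.5):
    """Score-level fusion: convex combination of per-sample probability
    scores from two models, with weight w on the first model."""
    return [w * a + (1 - w) * b for a, b in zip(p_ml, p_dl)]

def predict(scores, threshold=0.5):
    """Turn fused probability scores into binary class labels."""
    return [int(s >= threshold) for s in scores]

# hypothetical probabilities from an ensemble model and an LSTM
p_ensemble = [0.9, 0.4, 0.2, 0.7]
p_lstm     = [0.8, 0.6, 0.1, 0.55]
fused = fuse_scores(p_ensemble, p_lstm, w=0.6)
labels = predict(fused)
```

Fusing at the score level, rather than at the decision level, lets a confident model outvote an uncertain one on borderline samples.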

Keywords: machine learning, deep learning, cancer prediction, breast cancer, LSTM, fusion

Procedia PDF Downloads 163
12257 Effort-Reward-Imbalance and Self-Rated Health Among Healthcare Professionals in the Gambia

Authors: Amadou Darboe, Kuo Hsien-Wen

Abstract:

Background/Objective: The Effort-Reward Imbalance (ERI) model by Siegrist et al. (1986) has been widely used to examine the relationship between psychosocial factors at work and health. It claims that failed reciprocity, in terms of high effort and low reward, elicits strong negative emotions in combination with sustained autonomic activation and is hazardous to health. The aim of this study is to identify the association between self-rated health and effort-reward imbalance among nurses and environmental health officers in the Gambia. Method: A cross-sectional study was conducted using multi-stage random sampling of 296 healthcare professionals (206 nurses and 90 environmental health officers) working in public health facilities. The 22-item effort-reward imbalance questionnaire (ERI-L version 22.11.2012) was used to collect data on the psychosocial factors defined by the model. In addition, self-rated health was assessed using structured questionnaires containing Likert-scale items. Results: We found that self-rated health among environmental health officers has a significant negative correlation with extrinsic effort and significant positive correlations with occupational reward and job satisfaction. Among the nurses, however, only job satisfaction was significantly (and positively) correlated with self-rated health. Overall, extrinsic effort has a significant negative correlation with reward and job satisfaction but a positive correlation with over-commitment. Conclusion: Because of low reward and high over-commitment in the nursing group, it is necessary to modify working conditions by improving psychosocial factors, such as a reasonable allocation of resources to increase pay or rewards from the government.
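The correlations reported above are of the standard Pearson kind. As an informal illustration (the scores below are hypothetical, not the study's data), the coefficient can be computed as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# toy scores: extrinsic effort vs. self-rated health (hypothetical)
effort = [10, 14, 18, 22, 26]
health = [5, 4, 4, 3, 2]
r = pearson_r(effort, health)
```

A negative r, as in this toy sample, is the pattern the abstract reports between extrinsic effort and self-rated health for environmental health officers.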

Keywords: effort-reward imbalance model, healthcare professionals, self-rated health

Procedia PDF Downloads 407
12256 A Timed and Colored Petri Nets for Modeling and Verify Cloud System Elasticity

Authors: Walid Louhichi, Mouhebeddine Berrima, Narjes Ben Rajed

Abstract:

Elasticity is an essential property of cloud computing. As the name suggests, it is the ability of a cloud system to adjust resource provisioning in response to a fluctuating workload. There are two types of elasticity operations, vertical and horizontal. In this work, we are interested in horizontal scaling, which is ensured by two mechanisms: scaling in and scaling out. Depending on the sizing of the system, scaling in is adopted in the event of over-provisioning and scaling out in the event of under-provisioning. In this paper, we propose a formal model, based on colored and timed Petri nets, for modeling the duplication and removal of a virtual machine on a server. The proposed models are edited, verified, and simulated with two examples implemented in CPN Tools, a modeling tool for colored and timed Petri nets.
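The paper models these scaling transitions formally in CPN Tools. As an informal illustration only (not the paper's Petri-net model), the underlying scale-in/scale-out decision can be sketched as a threshold controller; the capacity and threshold values are assumptions:

```python
def elasticity_controller(load, vms, vm_capacity=100, low=0.3, high=0.8):
    """Horizontal-scaling decision: scale out (add a VM) when utilisation
    exceeds the upper threshold, scale in (remove one) when it drops
    below the lower threshold, otherwise hold steady."""
    utilisation = load / (vms * vm_capacity)
    if utilisation > high:
        return "scale_out", vms + 1
    if utilisation < low and vms > 1:
        return "scale_in", vms - 1
    return "steady", vms

# under-provisioned: 350 units of load on 4 VMs of capacity 100
action, vms = elasticity_controller(load=350, vms=4)
```

In the Petri-net formulation, each branch of this decision corresponds to the firing of a timed transition that duplicates or removes a VM token.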

Keywords: cloud computing, elasticity, elasticity controller, petri nets, scaling in, scaling out

Procedia PDF Downloads 154
12255 Reducing Energy Consumption and GHG Emission by Integration of Flare Gas with Fuel Gas Network in Refinery

Authors: N. Tahouni, M. Gholami, M. H. Panjeshahi

Abstract:

Gas flaring is one of the largest GHG-emitting sources in the oil and gas industries. It also wastes energy that could be better utilized and even generate revenue. Minimizing flaring is an effective approach for reducing GHG emissions and conserving energy in flaring systems, and integrating waste and flared gases into the fuel gas networks (FGN) of refineries is an efficient tool for doing so. A fuel gas network collects fuel gases from various source streams, mixes them in an optimal manner, and supplies them to different fuel sinks such as furnaces, boilers, and turbines. In this article, we use the fuel gas network model proposed by Hasan et al. as a base model, modify some of its features, and add constraints on pollution from gas flaring to reduce GHG emissions as much as possible. Results for a refinery case study showed that integrating the flare gas stream with waste and natural gas streams to construct an optimal FGN can significantly reduce the total annualized cost and flaring emissions.
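The core of the mixing step is a blending balance: each sink must receive a mixture whose quality (e.g., heating value) meets its specification. As a much-reduced sketch of one such balance (not Hasan et al.'s full network model; the heating values are hypothetical):

```python
def blend_fraction(hv_flare, hv_ng, hv_required):
    """Fraction x of flare gas in a flare/natural-gas blend whose heating
    value exactly meets the sink requirement, from the linear balance:
        x * hv_flare + (1 - x) * hv_ng = hv_required
    """
    return (hv_required - hv_ng) / (hv_flare - hv_ng)

# hypothetical lower heating values (MJ/m^3)
x = blend_fraction(hv_flare=30.0, hv_ng=38.0, hv_required=36.0)
```

Every unit of flare gas absorbed this way displaces purchased natural gas and avoids the emissions of flaring it, which is why the optimal FGN reduces both cost and emissions.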

Keywords: flaring, fuel gas network, GHG emissions, stream

Procedia PDF Downloads 344
12254 Times Series Analysis of Depositing in Industrial Design in Brazil between 1996 and 2013

Authors: Jonas Pedro Fabris, Alberth Almeida Amorim Souza, Maria Emilia Camargo, Suzana Leitão Russo

Abstract:

With Law No. 9279 of May 14, 1996, the Brazilian government regulates rights and obligations relating to industrial property, considering the economic development of the country, through the granting of patents, trademark registration, registration of industrial designs, and other forms of copyright protection. In this study, we apply the Box-Jenkins methodology to the series of industrial design deposits at the National Institute of Industrial Property for the period from May 1996 to April 2013. First, a graphical analysis of the data was done by observing the behavior of the data and the autocorrelation function. Based on the analysis of the charts and the statistical tests suggested by the Box-Jenkins methodology, the best model found for industrial design deposits was SARIMA(2,1,0)(2,0,0), with a MAPE equal to 9.88%.
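The MAPE figure quoted above is the standard mean absolute percentage error of the fitted model's forecasts. A minimal sketch with hypothetical deposit counts (in practice the SARIMA model itself would be fitted with a library such as statsmodels):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(
        abs((a - f) / a) for a, f in zip(actual, forecast)
    ) / len(actual)

# hypothetical monthly deposit counts vs. model forecasts
actual   = [100, 120, 110, 130]
forecast = [ 90, 126, 110, 143]
err = mape(actual, forecast)
```

A MAPE under 10%, as reported for the SARIMA(2,1,0)(2,0,0) model, is conventionally read as a highly accurate forecast.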

Keywords: ARIMA models, autocorrelation, Box and Jenkins Models, industrial design, MAPE, time series

Procedia PDF Downloads 544
12253 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique

Authors: Dibakar Chakrabarty, Mebada Suiting

Abstract:

Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise, so least cost design assumes significance, particularly when the available funds are limited. Optimal design of a storm water system is a difficult task, as it involves the design of various components, such as open or closed conduits, storage units, and pumps. In this paper, a methodology for least cost design of storm water drainage systems is proposed, consisting of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's storm water management model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost effective design of open conduit based storm water networks.
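The simulation-optimization loop described above can be caricatured as a GA minimizing conduit cost under a penalty for hydraulic infeasibility. This is a toy sketch only: the real method calls SWMM for the hydraulics, whereas the capacity check, costs, and GA parameters below are invented for illustration.

```python
import random

def fitness(areas, flows, velocity=2.0, unit_cost=100.0, penalty=1e4):
    """Cost of conduit sectional areas plus a penalty for any conduit
    whose capacity (area * velocity) cannot convey its design flow.
    (A stand-in for an actual SWMM simulation of the network.)"""
    cost = unit_cost * sum(areas)
    violation = sum(max(0.0, q - a * velocity) for a, q in zip(areas, flows))
    return cost + penalty * violation

def genetic_search(flows, pop_size=30, generations=60, seed=1):
    """Elitist GA: tournament selection, uniform crossover, Gaussian mutation."""
    rng = random.Random(seed)
    n = len(flows)
    pop = [[rng.uniform(0.1, 2.0) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=lambda ind: fitness(ind, flows))
    for _ in range(generations):
        new_pop = [best]                       # elitism: keep the incumbent
        while len(new_pop) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=lambda i: fitness(i, flows))
                    for _ in range(2))
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [max(0.05, g + rng.gauss(0, 0.05)) for g in child]
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=lambda ind: fitness(ind, flows))
    return best

flows = [1.2, 0.8, 2.0]      # hypothetical design flows (m^3/s)
best = genetic_search(flows)
```

In the actual methodology, evaluating `fitness` means running SWMM on the candidate network and checking that runoff is conveyed to the outlet without surcharge.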

Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM

Procedia PDF Downloads 248
12252 The Automatic Transliteration Model of Images of the Book Hamong Tani Using Statistical Approach

Authors: Agustinus Rudatyo Himamunanto, Anastasia Rita Widiarti

Abstract:

Transliteration of Javanese manuscripts is one method of preserving and passing on the wealth of past literature to the present generation in Indonesia. The manual transliteration process commonly requires philologists and takes a relatively long time; automatic transliteration is expected to shorten this time and thus support the work of philologists. A preprocessing and segmentation stage is first applied to the document images, producing script-unit images that are free from noise and similar in thickness, size, and slope. The next stage, feature extraction, finds unique characteristics that distinguish each Javanese script image; one of the characteristics used in this research is the number of black pixels in each image unit. Each Javanese script image in the training data undergoes the same process as the input characters. The system was tested with the book Hamong Tani, selected for its content, age, and number of pages, which were considered sufficient for a model experiment. Based on random-page tests of the automatic transliteration process, the maximum correctness obtained was 81.53%, achieved with a 32x32-pixel input image size and a 5x5 image window. These results suggest that the proposed automatic transliteration model is relatively good.
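The black-pixel-count feature can be sketched as follows; the window size and the toy glyph below are illustrative assumptions, not the paper's actual data.

```python
def black_pixel_counts(image, window=4):
    """Tile a binary image (1 = black) with non-overlapping
    window x window cells and count black pixels per cell, yielding a
    feature vector for one script-image unit."""
    h, w = len(image), len(image[0])
    feats = []
    for top in range(0, h, window):
        for left in range(0, w, window):
            feats.append(sum(
                image[r][c]
                for r in range(top, min(top + window, h))
                for c in range(left, min(left + window, w))
            ))
    return feats

# a toy 4x8 binary "glyph": left half black, right half white
glyph = [[1, 1, 1, 1, 0, 0, 0, 0] for _ in range(4)]
features = black_pixel_counts(glyph, window=4)
```

Comparing such feature vectors between an input character and the training images is what lets the statistical classifier pick the most likely Javanese script unit.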

Keywords: Javanese script, character recognition, statistical, automatic transliteration

Procedia PDF Downloads 339
12251 Cold Model Experimental Research on Particle Velocity Distribution in Gas-Solid Circulating Fluidized Bed for Methanol-To-Olefins Process

Authors: Yongzheng Li, Hongfang Ma, Qiwen Sun, Haitao Zhang, Weiyong Ying

Abstract:

Radial profiles of particle velocities were investigated in a 6.1 m tall methanol-to-olefins cold model experimental device using a TSI laser Doppler velocimeter. Measurements at different axial levels were conducted in the fully developed region, where the effect of axial level on flow development was not obvious under the same operating conditions. Superficial gas velocity and solids circulating rate had a significant influence on particle velocity in the center region of the riser. In addition, upward, downward, and average particle velocities were compared: the average particle velocity was close to the upward velocity and higher than the downward velocity at all radial locations except the wall region of the riser.

Keywords: circulating fluidized bed, laser doppler velocimeter, particle velocity, radial profile

Procedia PDF Downloads 370
12250 Stock Market Developments, Income Inequality, Wealth Inequality

Authors: Quang Dong Dang

Abstract:

This paper examines the possible effects of stock market developments, by channel, on income and wealth inequality. We use a Bayesian multilevel model with explanatory variables for the market's channels, such as accessibility, efficiency, and market health, in six selected countries: the US, UK, Japan, Vietnam, Thailand, and Malaysia. We found that, in general, improvements in the stock market alleviate income inequality; however, stock market expansions in higher-income countries are likely to worsen it. We also found that while enhancing the quality of the stock market's channels works against equal wealth distribution, open accessibility helps reduce wealth inequality within the scope of the study. In addition, the inverted U-shaped hypothesis does not appear to hold in the six selected countries over the period from 2006 to 2020.

Keywords: Bayesian multilevel model, income inequality, inverted u-shaped hypothesis, stock market development, wealth inequality

Procedia PDF Downloads 108
12249 Garden City in the Age of ICT: A Case Study of Dali

Authors: Luojie Tang, Libin Ouyang, Yihang Gao

Abstract:

The natural landscape and urban-rural structure of the Dali area around Erhai Lake, and their attractiveness, exhibit striking similarities with Howard's Garden City. With the emergence in Dali of the first large-scale gathering of digital nomads in China, an analysis of Dali's natural, economic, and cultural representations and structures reveals that the Garden City model can no longer fully explain the current overall human living environment there. By interpreting the bottom-up local construction process in Dali in terms of landscape identity, the transformation of production and lifestyle under new technologies such as ICT (Information and Communication Technology), and the reshaping of values and lifestyles embodied in the "reverse urbanization" of the middle class in Dali, the paper argues that Dali has moved towards a "contemporary garden city influenced by new technology". The article summarizes the characteristics and connotations of this garden city and provides corresponding strategies for its continued healthy development.

Keywords: dali, ICT, rural-urban relationship, garden city model

Procedia PDF Downloads 70
12248 Photobiomodulation Activates WNT/β-catenin Signaling for Wound Healing in an in Vitro Diabetic Wound Model

Authors: Dimakatso B. Gumede, Nicolette N. Houreld

Abstract:

Diabetic foot ulcers (DFUs) are a complication of diabetes mellitus (DM), a metabolic disease caused by insulin resistance or insufficiency that results in hyperglycaemia and low-grade chronic inflammation. Current therapies for DFUs include wound debridement, glycaemic control, and wound dressing. However, these therapies are only moderately effective, as the ulcers recur and carry an increased risk of lower limb amputation. Photobiomodulation (PBM), the application of non-invasive low-level light in the 660-1000 nm spectrum for wound healing, has shown great promise in accelerating the healing of chronic wounds; however, its underlying mechanisms are not clearly defined. Studies have indicated that PBM induces wound healing via the activation of signaling pathways involved in tissue repair, such as transforming growth factor-β (TGF-β). Other signaling pathways that are also critical for wound repair, such as the WNT/β-catenin pathway, have not been investigated. This study aimed to elucidate whether PBM at 660 nm and a fluence of 5 J/cm² activates the WNT/β-catenin signaling pathway for wound healing in a diabetic cellular model. Human dermal fibroblasts (WS1) were continuously cultured in a high-glucose (26.5 mM D-glucose) environment to create a diabetic cellular model. A central scratch was created to 'wound' the cells, and the diabetic wounded (DW) cells were thereafter irradiated at 660 nm and a fluence of 5 J/cm². Cell migration, gene expression, and protein assays were conducted at 24 and 48 h post-PBM. The results showed that PBM at 660 nm and a fluence of 5 J/cm² significantly increased cell migration in diabetic wounded cells at 24 h post-PBM. The expression of the CTNNB1, ACTA2, COL1A1, and COL3A1 genes was also increased in DW cells post-PBM. Furthermore, there was increased cytoplasmic accumulation and nuclear localization of β-catenin at 24 h post-PBM.
The findings in this study demonstrate that PBM activates the WNT/β-catenin signaling pathway by inducing the accumulation of β-catenin in diabetic wounded cells, leading to increased cell migration and expression of wound repair markers. These results thus indicate that PBM has the potential to improve wound healing in diabetic ulcers via activation of the WNT/β-catenin signaling pathway.

Keywords: wound healing, diabetic ulcers, photobiomodulation, WNT/β-catenin, signalling pathway

Procedia PDF Downloads 40