Search results for: accelerated failure time model

29674 A Risk Assessment for the Small Hive Beetle Based on Meteorological Standard Measurements

Authors: J. Junk, M. Eickermann

Abstract:

The Small Hive Beetle, Aethina tumida (Coleoptera: Nitidulidae), is a parasite of honey bee (Apis mellifera) colonies and was recently accidentally introduced to the European continent. Based on the literature, a model was developed using regional meteorological variables (daily values of minimum, maximum and mean air temperature as well as mean soil temperature at 50 mm depth) to calculate the time point of hive invasion by A. tumida in springtime, the development duration of the pupae, and the number of generations of A. tumida per year. Luxembourg was used as a test region for our model for 2005 to 2013. The model output indicates successful survival of the Small Hive Beetle in Luxembourg, with two to three generations per year. Additionally, based on our meteorological data sets, a first migration of the SHB to apiaries can be expected from mid-March to April. Our approach can easily be transferred to other countries to estimate the risk potential for a successful introduction and spread of A. tumida in Western Europe.
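
The abstract does not give the model equations; as a minimal sketch of the kind of temperature-driven calculation described, the snippet below accumulates degree-days from daily mean soil temperature and counts completed generations. The lower development threshold and per-generation heat requirement are hypothetical placeholders, not values from the study.

```python
# Minimal degree-day sketch (illustrative only): accumulate daily mean soil
# temperature above a hypothetical lower development threshold and count how
# many generations the season's heat sum would allow.
import numpy as np

T_BASE = 10.0            # assumed lower development threshold (deg C), hypothetical
GEN_REQUIREMENT = 350.0  # assumed degree-days per generation, hypothetical

def generations_per_year(daily_mean_soil_temp):
    """Count completed generations from one year of daily mean soil temperatures."""
    dd = np.clip(np.asarray(daily_mean_soil_temp) - T_BASE, 0.0, None)
    return int(dd.sum() // GEN_REQUIREMENT)

# Example with synthetic temperatures for one year:
rng = np.random.default_rng(0)
days = np.arange(365)
temps = 10.0 + 8.0 * np.sin(2 * np.pi * (days - 80) / 365) + rng.normal(0, 1.5, 365)
print(generations_per_year(temps), "generations")
```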

Keywords: Aethina tumida, air temperature, larval development, soil temperature

Procedia PDF Downloads 103
29673 Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Secondary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

This study is an attempt to obtain reliable data on the natural history of breast cancer growth. We analyze the opportunities for using classical mathematical models (exponential and logistic tumor growth models, Gompertz and von Bertalanffy tumor growth models) to describe the growth of the primary tumor and the secondary distant metastases of human breast cancer. The research aim is to improve the accuracy of predicting breast cancer progression using an original mathematical model referred to as CoMPaS and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and the secondary distant metastases; 2) developing an adequate and precise CoMPaS that reflects the relations between the primary tumor and the secondary distant metastases; 3) analyzing the CoMPaS scope of application; 4) implementing the model as a software tool. The foundation of CoMPaS is the exponential tumor growth model, which is described by deterministic nonlinear and linear equations. CoMPaS corresponds to the TNM classification. It allows different growth periods of the primary tumor and the secondary distant metastases to be calculated: 1) the ‘non-visible period’ for the primary tumor; 2) the ‘non-visible period’ for the secondary distant metastases; 3) the ‘visible period’ for the secondary distant metastases. CoMPaS is validated on clinical data of 10-year and 15-year survival depending on the tumor stage and the diameter of the primary tumor. The new predictive tool: 1) is a solid foundation for future studies of breast cancer growth models; 2) does not require any expensive diagnostic tests; 3) is the first predictor that makes a forecast using only current patient data, whereas the others are based on additional statistical data. The CoMPaS model and predictive software: a) fit clinical trials data; b) detect different growth periods of the primary tumor and the secondary distant metastases; c) forecast the period of appearance of the secondary distant metastases; d) have higher average prediction accuracy than the other tools; e) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by CoMPaS: the number of doublings for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases, and the tumor volume doubling time (days) for the ‘non-visible’ and ‘visible’ growth periods of the secondary distant metastases. CoMPaS enables, for the first time, the ‘whole natural history’ of the primary tumor and the secondary distant metastases growth to be predicted at each stage (pT1, pT2, pT3, pT4) relying only on the primary tumor size. Summarizing: a) CoMPaS correctly describes the primary tumor growth of the IA, IIA, IIB, IIIB (T1-4N0M0) stages without metastases in lymph nodes (N0); b) it facilitates understanding of the appearance period and inception of the secondary distant metastases.
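
The abstract reports the number of doublings and the tumor volume doubling time as the core exponential-growth quantities; the sketch below shows that bookkeeping under a spherical-tumor assumption with an illustrative doubling time, neither of which is taken from the paper.

```python
# Illustrative exponential-growth bookkeeping: convert tumor diameters to
# volumes (sphere assumption) and count the volume doublings between a small,
# "non-visible" size and a detectable size.
import math

def volume_from_diameter(d_mm):
    return math.pi * d_mm ** 3 / 6.0

def doublings_between(d_start_mm, d_end_mm):
    """Number of volume doublings needed to grow from d_start to d_end."""
    return math.log2(volume_from_diameter(d_end_mm) / volume_from_diameter(d_start_mm))

def elapsed_days(d_start_mm, d_end_mm, doubling_time_days):
    return doublings_between(d_start_mm, d_end_mm) * doubling_time_days

# Example: growth from a 1 mm focus to a 10 mm (pT1) tumor with an assumed
# doubling time of 100 days.
print(round(doublings_between(1.0, 10.0), 1), "doublings,",
      round(elapsed_days(1.0, 10.0, 100.0)), "days")
```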

Keywords: breast cancer, exponential growth model, mathematical model, metastases in lymph nodes, primary tumor, survival

Procedia PDF Downloads 328
29672 Digital Transformation: Actionable Insights to Optimize the Building Performance

Authors: Jovian Cheung, Thomas Kwok, Victor Wong

Abstract:

Buildings are entwined with smart city developments. Building performance relies heavily on electrical and mechanical (E&M) systems and services, with buildings accounting for about 40 percent of global energy use. By integrating technological advances as well as energy- and operation-efficiency initiatives into buildings, people can raise building performance and enhance the sustainability of the built environment in their daily lives. Digital transformation in buildings is a profound development that allows the city to leverage the changes and opportunities of digital technologies. To optimize building performance, an intelligent power quality and energy management system is developed to transform data into actions. The system is formed by interfacing and integrating legacy metering and internet of things technologies in the building and applying big data techniques. It provides the operation and energy profile and actionable insights of a building, which make it possible to optimize building performance by raising people's awareness of E&M services and energy consumption, predicting the operation of E&M systems, benchmarking the building performance, and prioritizing assets and energy management opportunities. The intelligent power quality and energy management system comprises four elements, namely the Integrated Building Performance Map, Building Performance Dashboard, Power Quality Analysis, and Energy Performance Analysis. It provides a predictive operation sequence of E&M systems in response to the built environment and building activities. The system collects the live operating conditions of E&M systems over time to identify abnormal system performance, predict failure trends, and alert users before system failure occurs. The actionable insights collected can also be used for system design enhancement in the future. This paper illustrates how the intelligent power quality and energy management system provides an operation and energy profile to optimize building performance and actionable insights to revitalize an existing building into a smart building. The system is driving building performance optimization and supporting the development of Hong Kong into a smart city to be admired.

Keywords: intelligent buildings, internet of things technologies, big data analytics, predictive operation and maintenance, building performance

Procedia PDF Downloads 135
29671 Development of a Finite Element Model of the Upper Cervical Spine to Evaluate the Atlantoaxial Fixation Techniques

Authors: Iman Zafarparandeh, Muzammil Mumtaz, Paniz Taherzadeh, Deniz Erbulut

Abstract:

Instability of the atlantoaxial joint may occur due to cervical surgery, congenital anomalies, and trauma. Different types of fixation techniques have been proposed for restoring stability and preventing harmful neurological deterioration. The application of screw constructs has become a popular alternative to older techniques for stabilizing the joint. The main difference between the various screw constructs is the type of screw, which can be a lateral mass screw, pedicle screw, transarticular screw, or translaminar screw. The aim of this paper is to study the effect of three popular screw-construct fixation techniques on the biomechanics of the atlantoaxial joint using the finite element (FE) method. A three-dimensional FE model of the upper cervical spine, including the skull, the C1 and C2 vertebrae, and groups of the existing ligaments, was developed. The accurate geometry of the model was obtained from the CT data of a 35-year-old male. Three screw constructs were designed for comparison: the Magerl transarticular screw (TA-Screw), the Goel-Harms lateral mass screw and pedicle screw (LM-Screw and Pedicle-Screw), and the Wright lateral mass screw and translaminar screw (LM-Screw and TL-Screw). Pure moments were applied to the model in the three main planes: flexion (Flex), extension (Ext), axial rotation (AR), and lateral bending (LB). The range of motion (ROM) of the C0-C1 and C1-C2 segments for the implanted FE models is compared to the intact FE model and the in vitro study of Panjabi (1988). The Magerl technique showed less effect on the ROM of C0-C1 than the other two techniques in the sagittal plane. In lateral bending and axial rotation, the Goel-Harms and Wright techniques showed less effect on the ROM of C0-C1 than the Magerl technique. The Magerl technique has the highest fusion rate, 99%, in all loading directions for the C1-C2 segment. The Wright technique has the lowest fusion rate in LB, at 79%. The three techniques resulted in the same fusion rate of 99% in extension loading. The maximum stress for the Magerl technique is the lowest in all load directions compared to the other two techniques. The overall maximum stress was 234 MPa and occurred in flexion with the Wright technique. The maximum stress for the Goel-Harms and Wright techniques occurred in the lateral mass screw. The ROM obtained from the FE results supports the idea that the fusion rate of the Magerl technique is more than 99%. Moreover, the maximum stress occurring in each screw construct indicates a lower failure possibility for the Magerl technique. Another advantage of the Magerl technique is the smaller number of components compared to the other screw-construct techniques. Despite the benefits of the Magerl technique, there are drawbacks to using this method, such as the need for reduction of C1 and C2 before screw placement. Therefore, other fixation methods such as the Goel-Harms and Wright techniques address the drawbacks of the Magerl technique by adding screws separately to C1 and C2. The FE model implanted with the Wright technique showed the highest maximum stress in almost all load directions.

Keywords: cervical spine, finite element model, atlantoaxial, fixation technique

Procedia PDF Downloads 371
29670 Influence of a Company’s Dynamic Capabilities on Its Innovation Capabilities

Authors: Lovorka Galetic, Zeljko Vukelic

Abstract:

The advanced concepts of strategic and innovation management in the sphere of company dynamic and innovation capabilities, and achieving their mutual alignment and a synergy effect, are important elements in business today. This paper analyses the theory and empirically investigates the influence of a company’s dynamic capabilities on its innovation capabilities. A new multidimensional model of dynamic capabilities is presented, consisting of five factors appropriate to real-time requirements, while innovation capabilities are considered pursuant to the official OECD and Eurostat standards. After an examination of dynamic and innovation capabilities indicated their theoretical links, the empirical study testing the model and examining the influence of a company’s dynamic capabilities on its innovation capabilities showed significant results. In the study, a research model was proposed to relate company dynamic and innovation capabilities. One side of the model features the variables that are the determinants of dynamic capabilities defined through their factors, while the other side features the determinants of innovation capabilities pursuant to the official standards. With regard to the research model, five hypotheses were set. The study was performed in late 2014 on a representative sample of large and very large Croatian enterprises with a minimum of 250 employees. The research instrument was a questionnaire administered to company top management. For both variables, the position of the company was tested in comparison to industry competitors, on a five-point scale. In order to test the hypotheses, correlation tests were performed to determine whether there is a correlation between each individual factor of company dynamic capabilities and the existence of its innovation capabilities, in line with the research model. The results indicate a strong correlation between a company’s possession of dynamic capabilities, in terms of the factors of the new multidimensional model presented in this paper, and its possession of innovation capabilities. Based on the results, all five hypotheses were accepted. Ultimately, it was concluded that there is a strong association between the dynamic and innovation capabilities of a company.

Keywords: dynamic capabilities, innovation capabilities, competitive advantage, business results

Procedia PDF Downloads 290
29669 Multiscale Simulation of Ink Seepage into Fibrous Structures through a Mesoscopic Variational Model

Authors: Athmane Bakhta, Sebastien Leclaire, David Vidal, Francois Bertrand, Mohamed Cheriet

Abstract:

This work presents a new three-dimensional variational model proposed for the simulation of ink seepage into paper sheets at the fiber level. The model, inspired by the Ising model, takes into account a finite volume of ink and describes the system state through gravity, cohesion, and adhesion force interactions. At the mesoscopic scale, the paper substrate is modeled using a discretized fiber structure generated with a numerical deposition procedure. A modified Monte Carlo method is introduced for the simulation of the ink dynamics. In addition, a multiphase lattice Boltzmann method is suggested to fine-tune the mesoscopic variational model parameters, and it is shown that the ink seepage behaviors predicted by the proposed model can resemble those predicted by a method relying on first principles.
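
The abstract does not give the energy function or the modified Monte Carlo scheme; the sketch below is only a generic Metropolis step for an Ising-like occupancy lattice with toy cohesion, adhesion, and gravity terms, with all weights and the neighbour bookkeeping chosen for illustration.

```python
# Generic Metropolis step for an Ising-like ink-occupancy lattice (toy energy,
# not the authors' modified Monte Carlo scheme): an ink cell hops to an empty
# neighbouring site if the move lowers the energy, or with Boltzmann
# probability otherwise. Weights are hypothetical; bookkeeping is simplified.
import numpy as np

rng = np.random.default_rng(1)
N = 32
ink = np.zeros((N, N), dtype=int)        # 1 = ink, 0 = empty; row 0 is the top
fibre = rng.random((N, N)) < 0.25        # static fibre sites (toy substrate)
ink[0, :] = 1                            # finite volume of ink placed on top

W_COH, W_ADH, W_GRAV, BETA = 1.0, 1.5, 0.5, 2.0   # hypothetical weights

def local_energy(i, j):
    nb = ink[(i - 1) % N, j] + ink[(i + 1) % N, j] + ink[i, (j - 1) % N] + ink[i, (j + 1) % N]
    fb = fibre[(i - 1) % N, j] + fibre[(i + 1) % N, j] + fibre[i, (j - 1) % N] + fibre[i, (j + 1) % N]
    return -W_COH * nb - W_ADH * fb + W_GRAV * (N - 1 - i)   # larger i = deeper = lower energy

def metropolis_step():
    i, j = rng.integers(N, size=2)
    if ink[i, j] == 0:
        return
    di, dj = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
    ni, nj = (i + di) % N, (j + dj) % N
    if ink[ni, nj] == 1:
        return
    dE = local_energy(ni, nj) - local_energy(i, j)
    if dE <= 0 or rng.random() < np.exp(-BETA * dE):
        ink[i, j], ink[ni, nj] = 0, 1

for _ in range(20000):
    metropolis_step()
print("ink cells below the surface row:", int(ink[1:, :].sum()))
```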

Keywords: fibrous media, lattice Boltzmann, modelling and simulation, Monte Carlo, variational model

Procedia PDF Downloads 133
29668 Performance Evaluation of Vertical Handover on Silom Line BTS

Authors: Silumpa Suboonsan, Suwat Pattaramalai

Abstract:

In this paper, the performance of internet usage with Vertical Handover (VHO) between a cellular network and a wireless local area network (WLAN) on the Silom line of the Bangkok Mass Transit System (BTS) is evaluated. In the evaluation model, there is a WLAN at every BTS station and there are cellular base stations along the BTS path. The maximum data rates for the cellular network are 7.2, 14.4, 42, and 100 Mbps, and for the WLAN they are 54, 150, and 300 Mbps. The simulations are based on users using the internet, watching videos, and browsing web pages on the BTS train from the first station to the last station (full-time usage) and on the BTS train travelling some number of stations (random time). The results show that the VHO system provides much higher throughput than using only the cellular network when the data rate of the WLAN is higher than that of the cellular network. Lastly, the number of HD and Full HD videos watched is higher with the VHO system during both regular time and rush hour BTS travel.

Keywords: vertical handover, WLAN, cellular, silom line BTS

Procedia PDF Downloads 459
29667 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for modeling data. When the piecewise polynomial regression model is fitted to data, its parameters are generally not known. This paper studies the parameter estimation problem of the piecewise polynomial regression model. The method used to estimate the parameters of the piecewise polynomial regression model is the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the limit distribution of the posterior distribution of the piecewise polynomial regression model parameters. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.
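
The abstract does not spell out the sampler; as a much simplified illustration (a fixed-dimension random-walk Metropolis sampler over a single change-point location, rather than a true reversible-jump sampler over the number of segments), the sketch below recovers the break point of synthetic piecewise-linear data.

```python
# Simplified illustration, not the reversible jump sampler itself: a
# fixed-dimension Metropolis sampler for the location of one change point in a
# piecewise-linear regression with known Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
true_cp = 0.6
y = np.where(x < true_cp, 1.0 + 2.0 * x, 2.2 - 0.5 * (x - true_cp)) + rng.normal(0, 0.1, x.size)

def log_likelihood(cp, sigma=0.1):
    """Profile out each segment's line by least squares under Gaussian noise."""
    left, right = x < cp, x >= cp
    if left.sum() < 2 or right.sum() < 2:
        return -np.inf
    rss = 0.0
    for mask in (left, right):
        A = np.column_stack([np.ones(mask.sum()), x[mask]])
        coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        rss += np.sum((y[mask] - A @ coef) ** 2)
    return -rss / (2.0 * sigma ** 2)

cp, samples = 0.5, []
for _ in range(5000):
    prop = cp + rng.normal(0, 0.05)                 # symmetric random-walk proposal
    if 0.0 < prop < 1.0 and np.log(rng.random()) < log_likelihood(prop) - log_likelihood(cp):
        cp = prop
    samples.append(cp)
print("posterior mean change point ≈", round(float(np.mean(samples[1000:])), 3))
```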

Keywords: piecewise regression, bayesian, reversible jump MCMC, segmentation

Procedia PDF Downloads 354
29666 Analyzing the Effects of Supply and Demand Shocks in the Spanish Economy

Authors: José M Martín-Moreno, Rafaela Pérez, Jesús Ruiz

Abstract:

In this paper, we use a small open economy Dynamic Stochastic General Equilibrium (DSGE) model for the Spanish economy to search for a deeper characterization of the determinants of Spain’s macroeconomic fluctuations throughout the period 1970-2008. In order to do this, we distinguish between tradable and non-tradable goods to take into account the fact that the share of non-tradable goods in this economy is one of the largest in the world. We estimate a DSGE model with supply and demand shocks (sectorial productivity, public spending, the international real interest rate, and preferences) using Kalman filter techniques. We find the following results. First, our variance decomposition analysis suggests that 1) the preference shock basically accounts for private consumption volatility, 2) the idiosyncratic productivity shock accounts for non-tradable output volatility, and 3) the sectorial productivity shock together with the international interest rate largely accounts for tradable output. Second, the model closely replicates the time path observed in the data for the Spanish economy, and finally, the model captures the main qualitative cyclical features of this economy reasonably well.
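
The abstract mentions Kalman filter estimation but not the filter itself; the sketch below is the standard prediction/update recursion for a generic linear Gaussian state-space model (placeholder matrices, not the solved DSGE system), returning the log-likelihood an estimator would maximize over the structural parameters.

```python
# Standard Kalman filter recursion for a linear Gaussian state-space model,
#   x_t = A x_{t-1} + w_t,   y_t = C x_t + v_t,
# with placeholder matrices (not the paper's DSGE solution). The returned
# log-likelihood is what a maximum-likelihood or Bayesian estimator would use.
import numpy as np

def kalman_loglik(y, A, C, Q, R, x0, P0):
    x, P, loglik = x0, P0, 0.0
    for yt in y:
        # Prediction step
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step
        S = C @ P @ C.T + R                      # innovation covariance
        innov = yt - C @ x
        K = P @ C.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ innov
        P = P - K @ C @ P
        loglik += -0.5 * (np.log(np.linalg.det(2.0 * np.pi * S)) + innov @ np.linalg.solve(S, innov))
    return loglik

# Tiny example: one latent AR(1) state observed with noise.
rng = np.random.default_rng(0)
A, C, Q, R = np.array([[0.9]]), np.array([[1.0]]), np.array([[0.1]]), np.array([[0.05]])
state, obs = 0.0, np.zeros((100, 1))
for t in range(100):
    state = 0.9 * state + rng.normal(0, np.sqrt(0.1))
    obs[t] = state + rng.normal(0, np.sqrt(0.05))
print("log-likelihood:", round(kalman_loglik(obs, A, C, Q, R, np.zeros(1), np.eye(1)), 2))
```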

Keywords: business cycle, DSGE models, Kalman filter estimation, small open economy

Procedia PDF Downloads 400
29665 Comparison of Sourcing Process in Supply Chain Operation References Model and Business Information Systems

Authors: Batuhan Kocaoglu

Abstract:

Although they use powerful systems like ERP (Enterprise Resource Planning), companies still cannot easily benchmark their processes and measure their process performance based on predefined SCOR (Supply Chain Operation References) terms. The purpose of this research is to identify common and corresponding processes in order to present a conceptual model for modeling and measuring the purchasing process of an organization. The main steps of the research study are: a literature review of the 'procure to pay' process in ERP systems; a literature review of the 'sourcing' process in the SCOR model; and the development of a conceptual model integrating the 'sourcing' process of the SCOR model and the 'procure to pay' process of the ERP model. In this study, we examined the similarities and differences between these two models. The proposed framework is based on assumptions drawn from (1) the body of literature and (2) the authors’ experience of working in the field of enterprise and logistics information systems. The modeling framework provides a structured and systematic way to model and decompose the necessary information from conceptual representation to process element specification. This conceptual model will help organizations build quality purchasing-system measurement instruments and tools, and the adaptation issues identified for ERP systems and the SCOR model will support a more benchmarkable, worldwide-standard business process.

Keywords: SCOR, ERP, procure to pay, sourcing, reference model

Procedia PDF Downloads 346
29664 Removal of Nickel Ions from Industrial Effluents by Batch and Column Experiments: A Comparison of Activated Carbon with Pinus Roxburgii Saw Dust

Authors: Sardar Khana, Zar Ali Khana

Abstract:

Rapid industrial development and urbanization contribute substantially to wastewater discharge. Wastewater from industrial activities enters natural aquatic ecosystems and is considered one of the main sources of water pollution. The discharge of effluents loaded with heavy metals into the surrounding environment has become a key issue regarding human health risk, the environment, and food chain contamination. Nickel causes fatigue, cancer, headache, heart problems, skin diseases (nickel itch), and respiratory disorders. Nickel compounds such as nickel sulfide and nickel oxides in the industrial environment, if inhaled, are associated with an increased risk of lung cancer. Therefore, the removal of nickel from effluents before discharge is necessary, and removal of nickel by low-cost biosorbents is an efficient method. This study aimed to investigate the efficiency of activated carbon and Pinus roxburgii saw dust for the removal of nickel from industrial effluents, using commercial activated carbon and raw P. roxburgii saw dust. Batch and column adsorption experiments were conducted for the removal of nickel. The study indicates that removal of nickel is greatly dependent on pH, contact time, nickel concentration, and adsorbent dose. Maximum removal occurred at pH 9, a contact time of 600 min, and an adsorbent dose of 1 g/100 mL. The highest removal was 99.62% and 92.39% (pH based), 99.76% and 99.9% (dose based), 99.80% and 100% (agitation time based), and 92% and 72.40% (Ni concentration based) for P. roxburgii saw dust and activated carbon, respectively. Similarly, the Ni removal in column adsorption was 99.77% and 99.99% (bed height based), 99.80% and 99.99% (concentration based), and 99.98% and 99.81% (flow rate based) during column studies using P. roxburgii saw dust and activated carbon, respectively. Results were compared with the Freundlich isotherm model, which showed r² values of 0.9424 (activated carbon) and 0.979 (P. roxburgii saw dust), while the Langmuir isotherm model values were 0.9285 (activated carbon) and 0.9999 (P. roxburgii saw dust). The experimental results were fitted to both models but were in closer agreement with the Langmuir isotherm model.
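
The abstract reports Langmuir and Freundlich fits by their r² values; the sketch below shows a nonlinear least-squares fit of both isotherms and the r² comparison, using synthetic equilibrium data rather than the study's measurements.

```python
# Nonlinear Langmuir and Freundlich isotherm fits with an r² comparison
# (synthetic equilibrium data, not the study's measurements).
# q_e = adsorbed amount (mg/g), C_e = equilibrium concentration (mg/L).
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1.0 / n)

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Illustrative equilibrium data
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([30.0, 55.0, 78.0, 98.0, 110.0, 117.0])

pL, _ = curve_fit(langmuir, Ce, qe, p0=[120.0, 0.05])
pF, _ = curve_fit(freundlich, Ce, qe, p0=[20.0, 2.0])
print("Langmuir   qmax=%.1f KL=%.3f  r2=%.4f" % (*pL, r_squared(qe, langmuir(Ce, *pL))))
print("Freundlich KF=%.1f  n=%.2f   r2=%.4f" % (*pF, r_squared(qe, freundlich(Ce, *pF))))
```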

Keywords: nickel removal, batch and column, activated carbon, saw dust, plant uptake

Procedia PDF Downloads 114
29663 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course was designed to help students prepare for the math courses that are essential for Engineering degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the influence of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information from 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created for data that follow a binomial distribution using the R programming language. Model 1 is based on a variable's p-value being less than 0.05, and Model 2 uses the stepAIC function to remove variables and obtain the lowest AIC score. After Principal Component Analysis, the main component represented on the y-axis is approval of the Introductory Mathematical Course, and the x-axis represents approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students' engagement in school activities will continue for three years after approval of the Introductory Mathematical Course because they have successfully completed the Math1 and Math2 courses; passing the Math3 course does not have any effect on student activity. Concerning academic progress, the best fit is Model 1, with an AUC of 0.56 and an accuracy rate of 91%. The model indicates that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect students' activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. The model shows that if students pass the Introductory Mathematical Course, it helps them to pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Combining the three predictive models, if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue to follow the recommended engineering curriculum. Additionally, the Introductory Mathematical Course helps students to pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three math courses, but they can successfully assess courses in the university curriculum.
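
The study fits binomial GLMs in R (with stepAIC) and evaluates them by AUC and accuracy; as a minimal sketch of that kind of model, the snippet below fits a logistic regression in Python and reports AUC and accuracy. The feature names are hypothetical placeholders and the data are synthetic, not the study's dataset.

```python
# Minimal sketch of a binomial GLM evaluated by AUC and accuracy, analogous in
# spirit to the study's R models (synthetic data; hypothetical feature names).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 2, n),          # passed_intro_course (hypothetical feature)
    rng.integers(0, 2, n),          # passed_math1
    rng.integers(0, 2, n),          # passed_math2
    rng.normal(0, 1, n),            # activity_score
])
logit = -1.0 + 1.2 * X[:, 1] + 1.0 * X[:, 2] + 0.8 * X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))     # "stays active" outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]
print("AUC  =", round(roc_auc_score(y_te, proba), 2))
print("Acc. =", round(accuracy_score(y_te, proba > 0.5), 2))
```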

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 67
29662 Optimization of Passive Vibration Damping of Space Structures

Authors: Emad Askar, Eldesoky Elsoaly, Mohamed Kamel, Hisham Kamel

Abstract:

The objective of this article is to improve the passive vibration damping of a solar array (SA) used in space structures through the effective application of numerical optimization. A case study of an SA is used for demonstration. A finite element (FE) model was created and verified by experimental testing. Optimization was then conducted by coupling the FE model with a genetic algorithm to find the optimal placement of circular aluminum patches that suppress the first two bending mode shapes. The results were verified using experimental testing. Finally, a parametric study was conducted using the FE model in which patch locations, material type, and shape were varied one at a time, and the results were compared with the optimal ones. The results clearly show that, through the proper application of FE modeling and numerical optimization, passive vibration damping of space structures can be successfully achieved.
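
The abstract does not describe the genetic algorithm's encoding or objective; the sketch below is a toy GA that selects patch positions on a discretized panel against a hypothetical surrogate objective (standing in for the FE-derived modal response), only to illustrate the selection/crossover/mutation loop.

```python
# Toy genetic algorithm for choosing patch locations (surrogate objective, not
# the authors' FE model): each individual is a set of K patch positions on a
# discretized array; fitness rewards covering high-strain "hot spots".
import numpy as np

rng = np.random.default_rng(0)
N_SITES, K, POP, GENS = 40, 3, 30, 60
# Hypothetical modal-strain profile standing in for the FE-derived objective.
strain = np.sin(np.linspace(0, np.pi, N_SITES)) + 0.5 * np.sin(3 * np.linspace(0, np.pi, N_SITES))

def fitness(ind):
    return strain[ind].sum()                 # more strain covered -> better damping (toy)

def make_individual():
    return rng.choice(N_SITES, size=K, replace=False)

def crossover(a, b):
    pool = np.unique(np.concatenate([a, b]))
    return rng.choice(pool, size=K, replace=False)

def mutate(ind, p=0.2):
    ind = ind.copy()
    if rng.random() < p:
        ind[rng.integers(K)] = rng.integers(N_SITES)
    return ind if np.unique(ind).size == K else make_individual()

pop = [make_individual() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]
    children = [mutate(crossover(elite[rng.integers(len(elite))], elite[rng.integers(len(elite))]))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("best patch sites:", sorted(best.tolist()), "fitness:", round(float(fitness(best)), 2))
```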

Keywords: damping optimization, genetic algorithm optimization, passive vibration damping, solar array vibration damping

Procedia PDF Downloads 435
29661 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORB-SLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame to improve the speed and accuracy of feature matching. However, in actual situations, the constant-velocity motion model is often not satisfied, which may lead to a large deviation between the obtained initial pose and the true value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration, which can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decoupled the pose transformation matrix and calculated the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the changes in the rotational angular velocity and the translation vector remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model was analyzed theoretically. Finally, we applied our proposed approach to the ORB-SLAM3 system and evaluated two sets of sequences from the TUM dataset. The results showed that our proposed method provides a more accurate initial pose estimation, and the accuracy of the ORB-SLAM3 system is improved by 6.61% and 6.46%, respectively, on the two test sequences.
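
The abstract describes extrapolating the rotation (as a rotation vector) and the translation from recent frames; the sketch below is one plausible reading of that idea, a constant-acceleration extrapolation from the last three poses. The exact formulation is an assumption, since the abstract gives no equations.

```python
# Sketch of a constant-acceleration pose prediction (an assumed formulation):
# extrapolate the next camera pose from the last three poses by assuming the
# frame-to-frame increment of the rotation vector and of the translation keeps
# changing at a constant rate.
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_next_pose(R0, t0, R1, t1, R2, t2):
    """Poses 0, 1, 2 are the three most recent frames; predict frame 3."""
    # Frame-to-frame increments as rotation vectors / translation deltas.
    w1 = R.from_matrix(R1 @ R0.T).as_rotvec()
    w2 = R.from_matrix(R2 @ R1.T).as_rotvec()
    v1, v2 = t1 - t0, t2 - t1
    # Constant "acceleration": next increment = last increment + (last - previous).
    w3 = w2 + (w2 - w1)
    v3 = v2 + (v2 - v1)
    R3 = R.from_rotvec(w3).as_matrix() @ R2
    t3 = t2 + v3
    return R3, t3

# Example: a camera rotating about z with a slowly increasing angular rate.
poses = [(R.from_euler("z", a, degrees=True).as_matrix(), np.array([0.1 * a, 0.0, 0.0]))
         for a in (0.0, 1.0, 2.5)]                 # yaw angles 0, 1, 2.5 deg -> accelerating
R3, t3 = predict_next_pose(*poses[0], *poses[1], *poses[2])
print("predicted yaw (deg):", round(R.from_matrix(R3).as_euler("zyx", degrees=True)[0], 2))
```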

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 75
29660 Efficient Estimation for the Cox Proportional Hazards Cure Model

Authors: Khandoker Akib Mohammad

Abstract:

While analyzing time-to-event data, it is possible that a certain fraction of subjects will never experience the event of interest; they are said to be cured. When this feature of survival models is taken into account, the models are commonly referred to as cure models. In the presence of covariates, the conditional survival function of the population can be modelled using the cure model, which depends on the probability of being uncured (incidence) and the conditional survival function of the uncured subjects (latency); a combination of logistic regression and Cox proportional hazards (PH) regression is used to model the incidence and latency, respectively. In this paper, we have shown the asymptotic normality of the profile likelihood estimator via an asymptotic expansion of the profile likelihood and obtained the explicit form of the variance estimator with an implicit function in the profile likelihood. We have also shown that the efficient score function based on projection theory and the profile likelihood score function are equal. Our contribution in this paper is that we have expressed the efficient information matrix as the variance of the profile likelihood score function. A simulation study suggests that the estimated standard errors from bootstrap samples (SMCURE package) and from the profile likelihood score function (our approach) provide similar and comparable results. The numerical results of our proposed method are also shown using the melanoma data from the SMCURE R package, and we compare the results with the output obtained from the SMCURE package.
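
The population survival function of the mixture cure model combines the logistic incidence part with the Cox PH latency part, S_pop(t | x, z) = 1 − π(z) + π(z)·S_u(t | x); the sketch below evaluates it with an exponential baseline assumed purely for illustration (the actual model estimates the baseline from the data).

```python
# Sketch of the mixture cure model's population survival function,
#   S_pop(t | x, z) = 1 - pi(z) + pi(z) * S_u(t | x),
# with logistic incidence pi(z) and Cox PH latency S_u. An exponential baseline
# hazard is assumed here only for illustration.
import numpy as np

def incidence(z, gamma):
    """P(uncured | z) via logistic regression."""
    return 1.0 / (1.0 + np.exp(-(gamma[0] + np.dot(gamma[1:], z))))

def latency_survival(t, x, beta, lam0=0.1):
    """Cox PH survival of the uncured with an assumed exponential baseline."""
    return np.exp(-lam0 * t * np.exp(np.dot(beta, x)))

def population_survival(t, x, z, beta, gamma):
    pi = incidence(z, gamma)
    return 1.0 - pi + pi * latency_survival(t, x, beta)

# Example: one binary covariate used in both parts; the curve plateaus at the
# cure fraction 1 - pi(z) as t grows.
beta, gamma = np.array([0.7]), np.array([-0.5, 1.0])
for t in (0.0, 5.0, 20.0, 100.0):
    print(t, round(population_survival(t, x=np.array([1.0]), z=np.array([1.0]),
                                       beta=beta, gamma=gamma), 3))
```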

Keywords: Cox PH model, cure model, efficient score function, EM algorithm, implicit function, profile likelihood

Procedia PDF Downloads 123
29659 Impact of Reverse Technology Transfer on Innovation Capabilities: An Econometric Analysis for Mexican Transnational Corporations

Authors: Lissette Alejandra Lara, Mario Gomez, Jose Carlos Rodriguez

Abstract:

Transnational corporations (TNCs), as units in which technology and knowledge transfer across borders is possible, and their potential for generating innovation and contributing to economic development in both home and host countries, have been widely acknowledged in the foreign direct investment (FDI) literature. In particular, the accelerated expansion of emerging-country TNCs in recent decades has given rise to a research stream that measures the presence of reverse technology transfer, defined as the extent to which emerging countries’ TNCs use outward FDI in a host country, through certain mechanisms, to absorb and transfer knowledge, thus improving their technological capabilities in the home country. The objective of this paper is to test empirically the presence of reverse technology transfer and its impact on the innovation capabilities of Mexican transnational corporations (MXTNCs), as part of the emerging-country TNCs that have successfully entered industrialized markets. Using a panel dataset of 22 MXTNCs over the period 1994-2015, the results of the econometric model demonstrate that the amount of Mexican outward FDI and the research and development (R&D) expenditure in host developed countries had a positive impact on innovation capabilities at the firm and industry level. There is also evidence that management of acquired brands and the organizational structure of Mexican subsidiaries improved these capabilities. Implications for the internationalization strategies of emerging-country corporations and future research guidelines are discussed.

Keywords: emerging countries, foreign direct investment, innovation capabilities, Mexican transnational corporations, reverse technology transfer

Procedia PDF Downloads 210
29658 Impact of Financial System’s Development on Economic Development: An Empirical Investigation

Authors: Vilma Deltuvaitė

Abstract:

Comparisons of financial development across countries are central to answering many of the questions about the factors leading to economic development. For this reason, this study analyzes the implications of financial system development for a country’s economic development. The aim of the article is to analyze the impact of financial system development on economic development. The following research methods were used: systemic, logical, and comparative analysis of the scientific literature, analysis of statistical data, and a time series model (the Autoregressive Distributed Lag (ARDL) model). The empirical results suggest a positive short- and long-term effect of stock market development on GDP per capita.
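
An ARDL model regresses the dependent variable on its own lags and on current and lagged values of the regressors, which is how separate short-run and long-run effects can be read off; the sketch below estimates an illustrative ARDL(1,1) specification by ordinary least squares on synthetic data (not the study's series or lag order).

```python
# Minimal ARDL(1,1) sketch estimated by OLS on synthetic data (illustrative
# specification): y_t = c + a1*y_{t-1} + b0*x_t + b1*x_{t-1} + e_t.
import numpy as np

rng = np.random.default_rng(0)
T = 200
x = np.cumsum(rng.normal(0, 1, T))                   # stand-in for stock market development
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 + 0.6 * y[t - 1] + 0.3 * x[t] + 0.1 * x[t - 1] + rng.normal(0, 0.5)

# Build the lag matrix and estimate by least squares.
Y = y[1:]
X = np.column_stack([np.ones(T - 1), y[:-1], x[1:], x[:-1]])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
c, a1, b0, b1 = coef
long_run = (b0 + b1) / (1.0 - a1)                    # long-run multiplier of x on y
print("short-run effect b0 ≈", round(b0, 2), "| long-run effect ≈", round(long_run, 2))
```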

Keywords: banking sector, economic development, financial system’s development, stock market, private bond market

Procedia PDF Downloads 364
29657 Effect of Different Model Drugs on the Properties of Model Membranes from Fishes

Authors: M. Kumpugdee-Vollrath, T. G. D. Phu, M. Helmis

Abstract:

A suitable model membrane for studying the pharmacological effect of pharmaceutical products is the human stratum corneum, because this layer of human skin is the outermost layer and an important barrier to be passed. Other model membranes that have also been used include, for example, skins from pig, mouse, reptile, or fish. We are interested in fish skins in this project. The advantage of fish skins is that they can be obtained from the supermarket or fish shop. However, the fish skins should be freshly prepared and used directly, without storage. In order to understand the effect of different model drugs (e.g., lidocaine HCl, resveratrol, paracetamol, ibuprofen, acetyl salicylic acid) on the properties of model membranes from various types of fish (e.g., trout, salmon, cod, plaice), permeation tests were performed and differential scanning calorimetry was applied.

Keywords: fish skin, model membrane, permeation, DSC, lidocaine HCl, resveratrol, paracetamol, ibuprofen, acetyl salicylic acid

Procedia PDF Downloads 450
29656 Comparative Dielectric Properties of 1,2-Dichloroethane with n-Methylformamide and n,n-Dimethylformamide Using Time Domain Reflectometry Technique in Microwave Frequency

Authors: Shagufta Tabassum, V. P. Pawar, jr., G. N. Shinde

Abstract:

A study of the dielectric relaxation properties of polar liquids in a binary mixture has been carried out at temperatures of 10, 15, 20, and 25 ºC for 11 different concentrations using the time domain reflectometry technique. The dielectric properties of a solute-solvent mixture of polar liquids in the frequency range of 10 MHz to 30 GHz give information on the formation of monomers and multimers as well as the interaction between the molecules of the liquid mixture under study. The dielectric parameters have been obtained by the least squares fit method using the Debye equation characterized by a single relaxation time without a relaxation time distribution.
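
The single-relaxation-time Debye equation is ε*(ω) = ε∞ + (ε₀ − ε∞)/(1 + jωτ); the sketch below fits its three parameters to synthetic complex permittivity data by nonlinear least squares over the stated 10 MHz-30 GHz range (the spectra and starting values are illustrative, not the measured data).

```python
# Least-squares fit of the single-relaxation-time Debye equation,
#   eps*(w) = eps_inf + (eps_0 - eps_inf) / (1 + j*w*tau),
# to synthetic complex permittivity data (not the measured spectra).
import numpy as np
from scipy.optimize import least_squares

def debye(omega, eps0, eps_inf, tau):
    return eps_inf + (eps0 - eps_inf) / (1.0 + 1j * omega * tau)

# Synthetic "measurement": 10 MHz - 30 GHz, true values eps0=35, eps_inf=3, tau=20 ps.
f = np.logspace(7, np.log10(30e9), 60)
omega = 2.0 * np.pi * f
rng = np.random.default_rng(0)
data = debye(omega, 35.0, 3.0, 20e-12) + rng.normal(0, 0.2, f.size) + 1j * rng.normal(0, 0.2, f.size)

def residuals(p):
    diff = debye(omega, *p) - data
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residuals, x0=[30.0, 2.0, 10e-12], x_scale=[10.0, 1.0, 1e-11])
eps0, eps_inf, tau = fit.x
print(f"eps0={eps0:.1f}  eps_inf={eps_inf:.1f}  tau={tau * 1e12:.1f} ps")
```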

Keywords: excess properties, relaxation time, static dielectric constant, and time domain reflectometry technique

Procedia PDF Downloads 137
29655 Study of Acoustic Resonance of Model Liquid Rocket Combustion Chamber and Its Suppression

Authors: Vimal O. Kumar, C. K. Muthukumaran, P. Rakesh

Abstract:

A liquid rocket engine (LRE) combustion chamber is subjected to pressure oscillations during the combustion process. Combustion noise (acoustic noise) is a broadband, small-amplitude, high-frequency pressure oscillation, constituting only a minor fraction (< 1%) of the entire combustion process. However, this high-frequency oscillation is a major concern during the design phase of an LRE combustion chamber, as it can cause catastrophic failure of the chamber. Depending on the chamber geometry, certain frequencies form standing wave patterns; they resonate with high amplitude and are known as eigenmodes. These eigenmodes could cause failures unless they are suppressed to within safe limits. The modes are categorized into radial, tangential, and azimuthal modes, and their structure inside the combustion chamber is of interest to researchers. In the present proposal, experimental as well as numerical simulation will be performed to obtain the frequency-amplitude characteristics of the model combustion chamber for different baffle configurations. The main objective of this study is to find the baffle configuration that provides better suppression of the acoustic modes. The experimental study aims at measuring the frequency-amplitude characteristics at certain points on the chamber wall, and the experimental measurements will also be used for the scheme used in the numerical simulation. In addition to the experiments, the numerical simulation will provide the detailed structure of the eigenmodes exhibited and their level of suppression with the aid of the different baffle configurations.
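
For an idealized rigid cylindrical chamber, the transverse (tangential and radial) eigenfrequencies follow f_mn = c·α_mn/(2πR), where α_mn is a root of the derivative of the Bessel function J_m; the sketch below evaluates the first few modes with an assumed sound speed and chamber radius, both illustrative values rather than the test article's.

```python
# Transverse acoustic eigenfrequencies of an idealized cylindrical chamber,
#   f_mn = c * alpha_mn / (2 * pi * R),
# where alpha_mn is the n-th nonzero root of J_m'(x). The sound speed and
# radius below are assumed, illustrative values.
import numpy as np
from scipy.special import jnp_zeros

C_SOUND = 1000.0   # assumed speed of sound in the combustion gas (m/s)
RADIUS = 0.15      # assumed chamber radius (m)

modes = {"1T": (1, 1), "2T": (2, 1), "1R": (0, 1), "1T1R": (1, 2)}
for name, (m, n) in modes.items():
    alpha = jnp_zeros(m, n)[-1]                     # n-th nonzero root of J_m'
    freq = C_SOUND * alpha / (2.0 * np.pi * RADIUS)
    print(f"{name:>5}: {freq:8.0f} Hz")
```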

Keywords: baffle, instability, liquid rocket engine, pressure response of chamber

Procedia PDF Downloads 108
29654 A Review on Water Models of Surface Water Environment

Authors: Shahbaz G. Hassan

Abstract:

Water quality models are very important for predicting changes in surface water quality for environmental management. The aim of this paper is to give an overview of water quality models and to provide direction for selecting a model in a specific situation. Water quality models include one kind of model based on a mechanistic approach, while other models simulate water quality without considering a mechanism. Mechanistic models can be widely applied and have capabilities for long-time simulation, but with high complexity. Therefore, more space is devoted to explaining the principles and application experience of mechanistic models. Mechanistic models make certain assumptions about rivers, lakes, and estuaries, which limit the application range of a model; this paper introduces the principles and applications of water quality models for these three scenarios. On the other hand, empirical models are easier to compute and are not limited by geographical conditions, but they cannot be used with confidence to simulate long-term changes. This paper divides the empirical models into two broad categories according to their mathematical algorithms: models based on artificial intelligence and models based on statistical methods.

Keywords: empirical models, mathematical, statistical, water quality

Procedia PDF Downloads 244
29653 Lyapunov Functions for Extended Ross Model

Authors: Rahele Mosleh

Abstract:

This paper gives a survey of results on the global stability of the extended Ross model for malaria by constructing elegant Lyapunov functions for two epidemic cases: the disease-free and the endemic situations. The model is a nonlinear seven-dimensional system of ordinary differential equations that simulates this phenomenon in a more realistic fashion. We discuss the existence of positive disease-free and endemic equilibrium points of the model. It is stated that the extended Ross model possesses invariant solutions for the human and mosquito populations in a specific domain of the system.
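
The abstract does not state the Lyapunov functions themselves; for endemic equilibria of compartmental epidemic models, a commonly used construction is the Goh-Volterra (logarithmic) form sketched below, where x_i are the state variables, x_i* their endemic equilibrium values, and the constants c_i > 0 are chosen so that the derivative along trajectories is non-positive.

```latex
% Goh--Volterra (logarithmic) Lyapunov function commonly used for endemic
% equilibria of compartmental models (illustrative form, not the paper's
% specific construction):
V(x_1,\dots,x_7) \;=\; \sum_{i=1}^{7} c_i \left( x_i - x_i^{*}
    - x_i^{*}\,\ln\frac{x_i}{x_i^{*}} \right),
\qquad c_i > 0, \qquad
\frac{dV}{dt}\Big|_{\text{along trajectories}} \;\le\; 0 .
```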

Keywords: global stability, invariant solutions, Lyapunov function, stationary points

Procedia PDF Downloads 151
29652 Optimisation of Structural Design by Integrating Genetic Algorithms in the Building Information Modelling Environment

Authors: Tofigh Hamidavi, Sepehr Abrishami, Pasquale Ponterosso, David Begg

Abstract:

Structural design and analysis is an important and time-consuming process, particularly at the conceptual design stage. Decisions made at this stage can have an enormous effect on the entire project, as it becomes ever costlier and more difficult to alter choices made early in the construction process. Hence, optimisation of the early stages of structural design can provide important efficiencies in terms of cost and time. This paper suggests a structural design optimisation (SDO) framework in which Genetic Algorithms (GAs) may be used to semi-automate the production and optimisation of early structural design alternatives. This framework has the potential to leverage conceptual structural design innovation in Architecture, Engineering and Construction (AEC) projects. Moreover, the framework improves collaboration between the architectural stage and the structural stage. It will be shown that this SDO framework can make this achievable by generating the structural model based on data extracted from the architectural model. At the moment, the proposed SDO framework is undergoing validation, involving the distribution of an online questionnaire among structural engineers in the UK.

Keywords: building information, modelling, BIM, genetic algorithm, GA, architecture-engineering-construction, AEC, optimisation, structure, design, population, generation, selection, mutation, crossover, offspring

Procedia PDF Downloads 221
29651 Approach for Updating a Digital Factory Model by Photogrammetry

Authors: R. Hellmuth, F. Wehner

Abstract:

Factory planning has the task of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more important in order to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of product and production technology as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity) lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and becomes an indispensable tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements. The tight time schedules require up-to-date planning models. Due to the high adaptation rate of factories described above, a methodology for rescheduling factories on the basis of a modern digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The aim is to keep the planning basis (digital factory model) for conversions within a factory up to date. This requires the application of a methodology that reduces the deficits of existing approaches. The aim is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented. The focus is on developing a simple and cost-effective solution to track the many changes that occur in a factory building during operation. The method is preceded by a hardware and software comparison to identify the most economical and fastest variant. 

Keywords: digital factory model, photogrammetry, factory planning, restructuring

Procedia PDF Downloads 96
29650 A Dynamic Software Product Line Approach to Self-Adaptive Genetic Algorithms

Authors: Abdelghani Alidra, Mohamed Tahar Kimour

Abstract:

Genetic algorithms must adapt themselves at design time to cope with the search problem's specific requirements, and at runtime to balance exploration and convergence objectives. In a previous article, we showed that modeling and implementing Genetic Algorithms (GAs) using the software product line (SPL) paradigm is very appreciable because they constitute a product family sharing a common base of code. In the present article, we propose to extend the use of the feature model of the genetic algorithms family to model the potential states of the GA in what is called a Dynamic Software Product Line. The objective of this paper is the systematic generation of a reconfigurable architecture that supports the dynamics of the GA and is easily deduced from the feature model. The resulting GA is able to perform dynamic reconfiguration autonomously to speed up the convergence process while producing better solutions. Another important advantage of our approach is the exploitation of recent advances in the domain of dynamic SPLs to enhance the performance of the GAs.

Keywords: self-adaptive genetic algorithms, software engineering, dynamic software product lines, reconfigurable architecture

Procedia PDF Downloads 265
29649 Tracy: A Java Library to Render a 3D Graphical Human Model

Authors: Sina Saadati, Mohammadreza Razzazi

Abstract:

Since Java is an object-oriented language, it can be used to solve a wide range of problems. One considerable use of this language can be found in agent-based modeling and simulation. Despite the significant power of Java, there is no easy method to render a 3-dimensional human model. In this article, we develop a library that helps modelers present a 3D human model and control it with Java. The library runs two server programs. The first one is a web page server that can connect to any browser and present HTML code. The second server connects to the browser and controls the movement of the model. Thus, the modeler will be able to develop a simulation and display a good-looking human model without any knowledge of graphical tools.

Keywords: agent-based modeling and simulation, human model, graphics, Java, distributed systems

Procedia PDF Downloads 89
29648 The Role of Arousal in Time Perception: Implications for Emotional Driving

Authors: Ewa Siedlecka

Abstract:

Emotional stress is an important risk factor in the rate and severity of traffic accidents. Moreover, incorrect time perception is implicated in the increase of traffic violations, such as running red lights, and in collisions. While the role of emotional arousal in perceived time is well established, the role of physiological arousal in time perception remains unexamined. Specific emotions can, however, be associated with distinct physiological responses. In the current research, two studies examined the role of physiological arousal in time perception. In the first experiment, 41 participants engaged in a cold pressor task and had their time perception measured throughout the experiment. In the second study, 138 participants engaged in either isometric or deep breathing exercises. These activities were designed to stimulate the sympathetic and parasympathetic nervous systems, respectively. Participants completed a bisection task to measure time perception in both studies, and physiological responses were recorded via electrocardiography (ECG). Results showed that activation of the parasympathetic nervous system is associated with greater time perception. These findings are discussed with reference to models of time perception, as well as implications for emotional driving and misperceptions of speed. It is important to consider the role of physiology in the misperception of time, as these factors can lead to increases in driving accidents.

Keywords: emotions, nervous system, physiology, time perception

Procedia PDF Downloads 304
29647 A Novel Rapid Well Control Technique Modelled in Computational Fluid Dynamics Software

Authors: Michael Williams

Abstract:

The ability to control a flowing well is of the utmost importance. During the kill phase, heavy-weight kill mud is circulated around the well. While this increases bottom hole pressure, near-wellbore formation damage is also increased. The addition of high-density spherical objects has the potential to minimise this near-wellbore damage, increase bottom hole pressure, and reduce the operational time needed to kill the well. This operational time saving comes from the rapid deployment of high-density spherical objects instead of building high-density drilling fluid. The research aims to model the well kill process using Computational Fluid Dynamics software. A model has been created as a proof of concept to analyse the flow of micron-sized spherical objects in the drilling fluid. Initial results show that this new methodology of spherical objects in drilling fluid agrees with the traditional streamlines seen in non-particle flow. Additional models have been created to demonstrate that areas of higher flow rate around the bit can lead to an increased probability of washout of formations but do not affect the flow of micron-sized spherical objects. Interestingly, areas that experience dimensional changes, such as tool joints and various BHA components, do not appear at this initial stage to experience increased velocity or create areas of turbulent flow, which could lead to further borehole stability. In conclusion, the initial models of this novel well control methodology have not demonstrated any adverse flow patterns, which suggests that this methodology may be viable under field conditions.

Keywords: well control, fluid mechanics, safety, environment

Procedia PDF Downloads 159
29646 Probabilistic Modeling of Post-Liquefaction Ground Deformation

Authors: Javad Sadoghi Yazdi, Robb Eric S. Moss

Abstract:

This paper utilizes a probabilistic liquefaction triggering method for modeling post-liquefaction ground deformation. The cone penetration test (CPT)-based liquefaction triggering method is employed to estimate the factor of safety against liquefaction (FSL) and compute the maximum cyclic shear strain (γmax). The study identifies a maximum PL value of 90% across various relative densities, which challenges the decrease from 90% to 70% as relative density decreases. It reveals that PL ranges from 5% to 50% for volumetric strain (εvol) less than 1%, while for εvol values between 1% and 3.2%, PL spans from 50% to 90%. CPT-based simplified liquefaction triggering procedures have been employed in previous research to estimate liquefaction ground-failure indices such as the Liquefaction Potential Index (LPI) and the Liquefaction Severity Number (LSN). However, several studies have highlighted the variability in liquefaction probability calculations, suggesting that a more accurate depiction of liquefaction likelihood is needed; consequently, the use of these simplified methods may not offer practical efficiency. This paper further investigates the efficacy of various established liquefaction vulnerability parameters, including LPI and LSN, in explaining the observed liquefaction-induced damage within residential zones of Christchurch, New Zealand, using results from the CPT database.

Keywords: cone penetration test (CPT), liquefaction, postliquefaction, ground failure

Procedia PDF Downloads 46
29645 Magnetic Activated Carbon: Preparation, Characterization, and Application for Vanadium Removal

Authors: Hakimeh Sharififard, Mansooreh Soleimani

Abstract:

In this work, a magnetic activated carbon nanocomposite (Fe-CAC) was synthesized by anchoring iron hydr(oxide) nanoparticles onto the surface of commercial activated carbon (CAC) and characterized using BET, XRF, and SEM techniques. The influence of various removal parameters, such as pH, contact time, and initial vanadium concentration, on vanadium removal was evaluated using CAC and Fe-CAC in a batch method. The sorption isotherms were studied using the Langmuir, Freundlich, and Dubinin–Radushkevich (D–R) isotherm models. These equilibrium data were well described by the Freundlich model. Results showed that CAC had a vanadium adsorption capacity of 37.87 mg/g, while Fe-CAC was able to adsorb 119.01 mg/g of vanadium. The kinetic data were found to conform to the pseudo-second-order kinetic model for both adsorbents.

Keywords: magnetic activated carbon, removal, vanadium, nanocomposite, freundlich

Procedia PDF Downloads 442