Search results for: fluid model
12163 Proposal for a Framework for Teaching Entrepreneurship and Innovation Using the Methods and Current Methodologies
Authors: Marcelo T. Okano, Jaqueline C. Bueno, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi
Abstract:
Developing countries increasingly find that entrepreneurship and innovation are the way to speed up their development and to initiate or encourage technological development. Educational institutions such as universities, colleges and colleges of technology have two main roles in this process: to guide and train entrepreneurs, and to provide technological knowledge and encourage innovation. This completes the triple helix model of innovation, linking universities, government and industry. But entrepreneurship and innovation cannot be taught only through the traditional model of blackboard, chalk and classroom. New methods and methodologies such as the Canvas, elevator pitching and design thinking require students to get involved and to experience business simulations, expressing their ideas and discussing them. The objective of this research project is to identify the main methods and methodologies used for teaching entrepreneurship and innovation, to propose a framework, to test it and to conduct a case study. To achieve this objective, we first surveyed the literature on entrepreneurship and innovation, business modeling, business planning, the Canvas business model, design thinking and related subjects. Secondly, we developed the framework for teaching entrepreneurship and innovation based on this bibliographic research. Thirdly, we tested the framework in a higher-education IT management class for a semester. Finally, we detail the results in a case study of an IT management course. As important results, the framework improved the students' level of understanding of business administration, allowing them to manage their own affairs. Methods such as the Canvas and the business plan helped students to plan and shape their ideas and businesses. Pitching to entrepreneurs and investors from the market brought a dose of reality for the students. Prototyping allowed the company groups to develop their projects.
The proposed framework allows entrepreneurship and innovation education to leave the classroom, bringing the reality of business roundtables to the university with the support of real investors and entrepreneurs.
Keywords: entrepreneurship, innovation, Canvas, traditional model
Procedia PDF Downloads 577
12162 Unveiling the Domino Effect: Barriers and Strategies in the Adoption of Telecommuting as a Post-Pandemic Workspace
Authors: Divnesh Lingam, Devi Rengamani Seenivasagam, Prashant Chand, Caleb Yee, John Chief, Rajeshkannan Ananthanarayanan
Abstract:
Amidst the COVID-19 outbreak in 2020, remote work emerged as a vital business continuity measure. This study investigates telecommuting as a modern work model, exploring its benefits and obstacles. Interpretive Structural Modelling is used to uncover the barriers hindering telecommuting adoption. A validated set of thirteen barriers is examined through departmental surveys, revealing their interrelationships. The resulting model highlights interactions and dependencies, forming a foundational framework. By addressing the dominant barriers, a domino effect on subservient barriers is demonstrated. This research fosters further exploration, proposing management strategies for successful telecommuting adoption and reshaping the traditional workspace.
Keywords: barriers, interpretive structural modelling, post-pandemic, telecommuting
Procedia PDF Downloads 94
12161 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain
Authors: Bita Payami-Shabestari, Dariush Eslami
Abstract:
The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated as a bi-objective multi-product economic production quantity model. It is then solved with three multi-objective decision-making (MODM) methods, which are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and CPU time. A sensitivity analysis illustrates the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory
Procedia PDF Downloads 129
12160 High Pressure Multiphase Flow Experiments: The Impact of Pressure on Flow Patterns Using an X-Ray Tomography Visualisation System
Authors: Sandy Black, Calum McLaughlin, Alessandro Pranzitelli, Marc Laing
Abstract:
Multiphase flow structures of two-phase multicomponent fluids were experimentally investigated in a large-diameter high-pressure pipeline at up to 130 bar at TÜV SÜD’s National Engineering Laboratory Advanced Multiphase Facility. One of the main objectives of the experimental test campaign was to evaluate the impact of pressure on multiphase flow patterns, as much of the existing information is based on low-pressure measurements. The experiments were performed in horizontal and vertical orientations in both 4-inch and 6-inch pipework using nitrogen, ExxsolTM D140 oil, and a 6% aqueous solution of NaCl at incremental pressures from 10 bar to 130 bar. To visualise the detailed flow structure over the entire cross-section of the pipe, a fast-response X-ray tomography system was used. A wide range of superficial velocities, from 0.6 m/s to 24.0 m/s for gas and from 0.04 m/s to 6.48 m/s for liquid, was examined to evaluate different flow regimes. The results illustrated the suppression of instabilities between the gas and the liquid at the measurement location, and intermittent or slug flow was observed less frequently as the pressure was increased. CFD simulations at low and high pressure were able to successfully predict the likelihood of intermittent flow; however, further tuning is necessary to predict the slugging frequency. The dataset generated is unique, as limited datasets exist above 100 bar, and is of considerable value to multiphase flow specialists and numerical modellers.
Keywords: computational fluid dynamics, high pressure, multiphase, X-ray tomography
Procedia PDF Downloads 143
12159 The Principle of Methodological Rationality and Security of Organisations
Authors: Jan Franciszek Jacko
Abstract:
This investigation presents the principle of methodological rationality of decision making and discusses the impact of methodologically rational or irrational decisions by an organisation's members on its security. The study formulates and partially justifies some research hypotheses regarding this impact. A thought experiment is used according to Max Weber's ideal types method. Two idealised situations ("models") are compared: Model A, where all decision-makers follow methodologically rational decision-making procedures, and Model B, in which these agents follow methodologically irrational decision-making practices. Analysing and comparing the two models allows the formulation of some research hypotheses regarding the impact of methodologically rational and irrational attitudes of an organisation's members on its security. In addition to this method, phenomenological analyses of rationality and irrationality are applied.
Keywords: methodological rationality, rational decisions, security of organisations, philosophy of economics
Procedia PDF Downloads 139
12158 The Influence of Air Temperature Controls in Estimation of Air Temperature over Homogeneous Terrain
Authors: Fariza Yunus, Jasmee Jaafar, Zamalia Mahmud, Nurul Nisa’ Khairul Azmi, Nursalleh K. Chang
Abstract:
Variation of air temperature from one place to another is caused by air temperature controls. In general, the most important control of air temperature is elevation. Another significant independent variable in estimating air temperature is the location of meteorological stations. Distance to the coastline and land use type also contribute significant variations in air temperature. On the other hand, over homogeneous terrain, direct interpolation of discrete air temperature points works well for estimating air temperature in un-sampled areas; the estimation is then based solely on the discrete points of air temperature. However, this study shows that air temperature controls also play a significant role in estimating air temperature over the homogeneous terrain of Peninsular Malaysia. An Inverse Distance Weighting (IDW) interpolation technique was adopted to generate continuous air temperature data. This study compared two different datasets: observed mean monthly air temperature T, and the estimation error T–T', where T' is the value estimated from a multiple regression model. The multiple regression model considered eight independent variables (elevation, latitude, longitude, distance to coastline, and four land use types: water bodies, forest, agriculture and built-up areas) to represent the role of air temperature controls. Cross-validation analysis was conducted to assess the accuracy of the estimated values. The final results show that estimation based on T–T' produced lower errors for mean monthly air temperature over homogeneous terrain in Peninsular Malaysia.
Keywords: air temperature control, interpolation analysis, Peninsular Malaysia, regression model, air temperature
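The IDW step described above can be sketched briefly; the station coordinates and temperatures below are hypothetical, and the power parameter of 2 is a common default rather than the value used in the study:

```python
import numpy as np

def idw(known_xy, known_t, query_xy, power=2.0):
    """Inverse Distance Weighting: estimate air temperature at query
    points from discrete station observations."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_t = np.asarray(known_t, dtype=float)
    estimates = []
    for q in np.atleast_2d(query_xy):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d == 0):                    # query coincides with a station
            estimates.append(known_t[d == 0][0])
            continue
        w = 1.0 / d**power                    # closer stations weigh more
        estimates.append(np.sum(w * known_t) / np.sum(w))
    return np.array(estimates)

# Hypothetical stations (x, y in km) and mean monthly temperatures (°C)
stations = [(0, 0), (10, 0), (0, 10), (10, 10)]
temps = [27.0, 28.0, 26.5, 27.5]
print(idw(stations, temps, [(5, 5)]))  # estimate at the centre of the grid
```

In the study the interpolated field is either T itself or the regression residual T–T'; the same routine serves both, only the values passed in change.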
Procedia PDF Downloads 374
12157 Piezo-Extracted Model Based Chloride/Carbonation Induced Corrosion Assessment in Reinforced Concrete Structures
Authors: Ashok Gupta, V. Talakokula, S. Bhalla
Abstract:
Rebar corrosion is one of the main causes of damage and premature failure of reinforced concrete (RC) structures worldwide, causing enormous costs for inspection, maintenance, restoration and replacement. Therefore, early detection of corrosion and timely remedial action on the affected portion can facilitate optimum utilization of the structure, imparting longevity to it. The recent advent of the electro-mechanical impedance (EMI) technique using piezo (PZT) sensors for structural health monitoring (SHM) has provided maintenance engineers with a new paradigm for diagnosing the onset of damage at the incipient stage itself. This paper presents a model-based approach for corrosion assessment based on the equivalent parameters extracted from the impedance spectrum of the concrete-rebar system using the EMI technique via PZT sensors.
Keywords: impedance, electro-mechanical, stiffness, mass, damping, equivalent parameters
Procedia PDF Downloads 543
12156 Identification of Key Parameters for Benchmarking of Combined Cycle Power Plants Retrofit
Authors: S. Sabzchi Asl, N. Tahouni, M. H. Panjeshahi
Abstract:
Benchmarking a process with respect to energy consumption, without accomplishing a full retrofit study, can save both engineering time and money. The first step towards this goal is to develop a conceptual-mathematical model that can easily be applied to a group of similar processes. In this research, we have aimed to identify a set of key parameters for a model intended for benchmarking combined cycle power plants. For this purpose, three similar combined cycle power plants were studied. The results showed that ambient temperature, pressure and relative humidity, the number of HRSG evaporator pressure levels, and relative power in part-load operation are the main key parameters. The relationships between these parameters and the power produced (by the gas and steam turbines), gas turbine and plant efficiency, and the temperature and mass flow rate of the stack flue gas were also investigated.
Keywords: combined cycle power plant, energy benchmarking, modelling, retrofit
Procedia PDF Downloads 305
12155 Massive Intrapartum Hemorrhage Following Inner Myometrial Laceration during a Vaginal Delivery: A Rare Case Report
Authors: Bahareh Khakifirooz, Arian Shojaei, Amirhossein Hajialigol, Bahare Abdolahi
Abstract:
Laceration of the inner layer of the myometrium can cause massive bleeding during and after childbirth, which can lead to the death of the mother if it is not diagnosed in time. We report a rare case of massive intrapartum bleeding following myometrial laceration that was diagnosed correctly; the patient survived thanks to timely treatment. The patient was a 26-year-old woman under observation for a term pregnancy with complaints of rupture of membranes (ROM) and vaginal bleeding. During the spontaneous course of labor, without receiving oxytocin, she had an estimated total blood loss of 750 mL; despite a normal fetal heart rate, there was a maternal indication for cesarean section, so she was transferred to the operating room and underwent cesarean section. During the cesarean section the amniotic fluid was clear; after removal of the placenta, severe bleeding was clearly flowing from the posterior wall of the uterus, caused by a laceration of the inner layer of the myometrium in the posterior wall of the lower uterine segment. The myometrial laceration was repaired with absorbable continuous locked sutures and hemostasis was established. The patient then received uterotonic drugs and, after monitoring, was discharged from the hospital in good condition.
Keywords: intrapartum hemorrhage, inner myometrial laceration, labor, increased intrauterine pressure
Procedia PDF Downloads 26
12154 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data
Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin
Abstract:
Cities are complex systems of diverse and intertangled activities. These activities and their complex interrelationships create diverse urban phenomena, which in turn considerably influence the lives of citizens. This research aimed to develop a method to reveal the causes and effects among diverse urban elements, in order to enable a better understanding of urban activities and, from that, better urban planning strategies. Specifically, this study was conducted to solve a data-recommendation problem found on a Korean public data homepage. First, a correlation analysis was conducted to find the correlations among random urban data. Then, based on the results of that correlation analysis, a weighted data network of each urban dataset was provided to users. It is expected that the weights of urban data thereby obtained will provide insights into cities and show how diverse urban activities influence each other and induce feedback.
Keywords: big data, machine learning, ontology model, urban data model
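The correlation-to-weighted-network step can be illustrated with a minimal sketch; the three urban indicators and their relationships here are invented for illustration, not taken from the Korean public data portal:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 365
# Hypothetical daily urban indicators (names invented for illustration)
traffic = rng.normal(100, 10, n)
air_pollution = 0.6 * traffic + rng.normal(0, 5, n)   # partly driven by traffic
rainfall = rng.normal(5, 2, n)                        # unrelated to the others

data = np.vstack([traffic, air_pollution, rainfall])
labels = ["traffic", "air_pollution", "rainfall"]
corr = np.corrcoef(data)

# Edge weight = |Pearson r|; keep only strong links to form the weighted network
threshold = 0.5
edges = [(labels[i], labels[j], round(corr[i, j], 2))
         for i in range(len(labels)) for j in range(i + 1, len(labels))
         if abs(corr[i, j]) >= threshold]
print(edges)  # only the traffic <-> air_pollution link should survive
```

Each retained edge weight then serves as a recommendation score: datasets strongly correlated with the one a user is viewing are suggested first.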
Procedia PDF Downloads 418
12153 Investigation of Software Integration for Simulations of Buoyancy-Driven Heat Transfer in a Vehicle Underhood during Thermal Soak
Authors: R. Yuan, S. Sivasankaran, N. Dutta, K. Ebrahimi
Abstract:
This paper investigates the software capability and computer-aided engineering (CAE) method for modelling the transient heat transfer process occurring in the vehicle underhood region during the vehicle thermal soak phase. The heat retained from the soak period benefits the cold start, with reduced friction loss, for the second 14°C worldwide harmonized light-duty vehicle test procedure (WLTP) cycle, therefore providing benefits in both CO₂ emission reduction and fuel economy. When the vehicle undergoes the soak stage, the airflow and the associated convective heat transfer around and inside the engine bay are driven by the buoyancy effect. This effect, along with thermal radiation and conduction, is the key factor in the thermal simulation of the engine bay for obtaining accurate fluid and metal temperature cool-down trajectories and predicting the temperatures at the end of the soak period. Method development was investigated in this study on a light-duty passenger vehicle using a coupled aerodynamic-heat transfer transient modelling method for the full vehicle under 9 hours of thermal soak. The 3D underhood flow dynamics were solved inherently transiently by the Lattice-Boltzmann Method (LBM) using the PowerFlow software. This was further coupled with heat transfer modelling using the PowerTHERM software provided by Exa Corporation. The particle-based LBM is capable of accurately handling extremely complicated transient flow behavior on complex surface geometries. The detailed thermal modelling, including heat conduction, radiation, and buoyancy-driven heat convection, was solved in an integrated manner by PowerTHERM. The 9-hour cool-down period was simulated and compared with vehicle test data for the key fluid (coolant, oil) and metal temperatures.
The developed CAE method was able to predict the cool-down behaviour of the key fluids and components in agreement with the experimental data, and also visualised the air leakage paths and thermal retention around the engine bay. The cool-down trajectories of the key components obtained for the 9-hour thermal soak period provide vital information and a basis for the further development of reduced-order modelling studies in future work. This allows a fast-running model to be developed and embedded within a holistic study of vehicle energy modelling and thermal management. It was also found that the buoyancy effect plays an important part in the first stage of the 9-hour soak, and that the flow development during this stage is vital for accurately predicting the heat transfer coefficients used in heat retention modelling. The developed method has demonstrated the software integration for simulating buoyancy-driven heat transfer in the vehicle underhood region during thermal soak with satisfactory accuracy and efficient computing time. The CAE method developed will allow the design of engine encapsulations to be integrated in a timely and robust manner, improving fuel consumption, reducing CO₂ emissions, and aiding the development of low-carbon transport technologies.
Keywords: ATCT/WLTC driving cycle, buoyancy-driven heat transfer, CAE method, heat retention, underhood modeling, vehicle thermal soak
Procedia PDF Downloads 154
12152 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links; RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple approaches such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer-vision-based navigation have been proposed, each with its own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN.
The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model with respect to the other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
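A minimal sketch of the hop-distance coordinate scheme, assuming a toy topology and simple greedy movement (the paper's actual deployment and magnetic-field component are not modelled here):

```python
from collections import deque

def hop_distances(adj, landmark):
    """BFS hop count from every node to one surface (landmark) node."""
    dist = {landmark: 0}
    q = deque([landmark])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def hop_coordinates(adj, landmarks):
    """Each node's coordinate = its hop distances to every landmark."""
    tables = [hop_distances(adj, l) for l in landmarks]
    return {u: tuple(t[u] for t in tables) for u in adj}

def step_towards(adj, coords, current, target_coord):
    """Greedy hop-by-hop move: go to the neighbour whose hop-coordinate
    vector is closest (L1 distance) to the target's."""
    def l1(c):
        return sum(abs(a - b) for a, b in zip(coords[c], target_coord))
    return min(adj[current], key=l1)

# Toy network: two surface landmarks (0, 1) above a small chain of nodes
adj = {0: [2], 1: [4], 2: [0, 3], 3: [2, 4], 4: [3, 1]}
coords = hop_coordinates(adj, landmarks=[0, 1])
print(coords[3])                            # (2, 2)
print(step_towards(adj, coords, 2, coords[4]))  # 3
```

An AUV repeating `step_towards` converges toward the node whose coordinate vector it was given, without any absolute positioning.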
Procedia PDF Downloads 98
12151 A Conceptual Model of Preparing School Counseling Students as Related Service Providers in the Transition Process
Authors: LaRon A. Scott, Donna M. Gibson
Abstract:
Data indicate that counselor education programs in the United States do not prepare their students adequately to serve students with disabilities or to provide counseling as a related service. There is a need to train more school counselors to provide related services to students with disabilities for many reasons; in particular, school counselors participate in Individualized Education Program (IEP) and transition planning meetings for students with disabilities, where important academic, mental health and post-secondary education decisions are made. While school counselors' input is perceived as very important to the process, they may not have the knowledge or training in this area to feel confident offering the required input in these meetings. Using a conceptual research design, a model that can be used to prepare school counseling students as related service providers and as effective supports for addressing transition for students with disabilities was developed as a component of this research. The authors developed the Collaborative Model of Preparing School Counseling Students as Related Service Providers to Students with Disabilities, based on a conceptual framework that integrates Social Cognitive Career Theory (SCCT) and evidence-based practices grounded in Self-Determination Theory (SDT) to provide related and transition services and planning with students with disabilities. The authors conclude that with five overarching competencies, (1) knowledge and understanding of disabilities, (2) knowledge and expertise in group counseling for students with disabilities, (3) knowledge and experience in specific related service components, (4) knowledge and experience in evidence-based counseling interventions, and (5) knowledge and experience in evidence-based transition and career planning services, school counselors can enter the field with the necessary expertise to adequately serve all students.
Other examples and strategies are suggested, and recommendations are included for preparation programs seeking to integrate a model that prepares school counselors to implement evidence-based transition strategies in supporting students with disabilities.
Keywords: transition education, social cognitive career theory, self-determination, counseling
Procedia PDF Downloads 243
12150 Communication and Management of Incidental Pathology in a Cohort of 1,214 Consecutive Appendicectomies
Authors: Matheesha Herath, Ned Kinnear, Bridget Heijkoop, Eliza Bramwell, Alannah Frazetto, Amy Noll, Prajay Patel, Derek Hennessey, Greg Otto, Christopher Dobbins, Tarik Sammour, James Moore
Abstract:
Background: Important incidental pathology requiring further action is commonly found during appendicectomy, both macroscopically and microscopically. It is unknown whether the acute surgical unit (ASU) model affects the management and disclosure of these findings. Methods: An ASU model was introduced at our institution on 01/08/2012. In this retrospective cohort study, all patients undergoing appendicectomy 2.5 years before (traditional group) or after (ASU group) this date were compared. The primary outcomes were the rates of appropriate management of the incidental findings and of communication of the findings to the patient and to their general practitioner (GP). Results: 1,214 patients underwent emergency appendicectomy; 465 in the traditional group and 749 in the ASU group. 80 (6.6%) patients (25 and 55 in each respective period) had important incidental findings: 24 patients with benign polyps, 15 with neuro-endocrine tumour, 11 with endometriosis, 8 with pelvic inflammatory disease, 8 with Enterobius vermicularis infection, 7 with low-grade mucinous cystadenoma, 3 with inflammatory bowel disease, 2 with diverticulitis, 2 with tubo-ovarian mass, 1 with secondary appendiceal malignancy and none with primary appendiceal adenocarcinoma. One patient had dual pathologies. There was no difference between the traditional and ASU groups with regard to communication of the findings to the patient (p=0.44) or their GP (p=0.27), and there was no difference in the rates of appropriate management (p=0.21). Conclusions: The introduction of an ASU model did not change rates of surgeon-to-patient and surgeon-to-GP communication, nor did it affect rates of appropriate management of important incidental pathology found during appendicectomy.
Keywords: acute care surgery, appendicitis, appendicectomy, incidental
Procedia PDF Downloads 144
12149 Unsteady Flow Simulations for Microchannel Design and Its Fabrication for Nanoparticle Synthesis
Authors: Mrinalini Amritkar, Disha Patil, Swapna Kulkarni, Sukratu Barve, Suresh Gosavi
Abstract:
Micro-mixers play an important role in lab-on-a-chip applications and micro total analysis systems, which must achieve the correct level of mixing for any given process. The mixing process can be classified as active or passive according to the use of external energy. The microfluidics literature reports that most work has been done on models of steady laminar flow; the study of unsteady laminar flow, however, is currently an active area of research. Among its wide range of applications, we consider nanoparticle synthesis in micro-mixers. In this work, we have developed an unsteady flow model to study the mixing performance of a passive micro-mixer for the reactants used in such synthesis. The model is developed in the Finite Volume Method (FVM)-based software OpenFOAM and tested by carrying out simulations at Re = 0.5. Mixing performance of the micro-mixer is investigated using simulated concentration values of the mixed species across the width of the micro-mixer and calculating the variance along a line profile. Experimental validation is done by passing dyes through a Y-shaped micro-mixer fabricated from polydimethylsiloxane (PDMS) polymer and comparing the variances with the simulated ones. Gold nanoparticles were later synthesized in the micro-mixer and collected at two different times, yielding significantly different size distributions. These times match the time scales over which reactant concentrations vary, as obtained from the simulations. Our simulations could thus be used to create design aids for passive micro-mixers used in nanoparticle synthesis.
Keywords: Lab-on-chip, LOC, micro-mixer, OpenFOAM, PDMS
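The variance-based mixing measure mentioned above can be illustrated with a short sketch; the normalisation assumes two equal streams (mean concentration 0.5), which is an assumption for illustration rather than the study's exact definition:

```python
import numpy as np

def mixing_index(concentration):
    """Degree of mixing from a concentration line profile across the
    channel width: 1 = perfectly mixed, 0 = fully segregated.
    Assumes two streams entering in equal proportion (mean c = 0.5)."""
    c = np.asarray(concentration, dtype=float)
    sigma = np.sqrt(np.mean((c - c.mean())**2))  # std dev along the profile
    sigma_max = 0.5            # fully segregated 0/1 profile with mean 0.5
    return 1.0 - sigma / sigma_max

unmixed = [0, 0, 0, 0, 1, 1, 1, 1]   # dye still confined to one half
well_mixed = [0.5] * 8               # uniform across the width
print(mixing_index(unmixed), mixing_index(well_mixed))  # 0.0 1.0
```

The same index can be evaluated on a simulated profile sampled from the CFD solution and on an intensity profile extracted from a dye image, which is what makes the simulation-experiment comparison direct.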
Procedia PDF Downloads 161
12148 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism
Abstract:
Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory. However, in highly heterogeneous distributed clusters, traditional model partitioning methods cannot achieve load balancing, and overlapping communication with computation is also a major challenge. In this paper, we propose HePipe, an efficient pipeline parallel training method for highly heterogeneous clusters. Based on the characteristics of neural network pipeline training tasks and oriented to a 2-level heterogeneous cluster computing topology, a training method based on 2-level stage division of neural network modeling and partitioning is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing computation units in advance, maximizing the overlap between forward propagation communication and backward propagation computation. Finally, a dynamic recomputation strategy based on task memory requirement prediction is proposed to improve the fit between tasks and memory, which improves cluster throughput and solves the memory shortfall caused by memory differences in heterogeneous clusters. Empirical results show that HePipe improves training speed by 1.6×–2.2× over existing asynchronous pipeline baselines.
Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning
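The stage-partitioning problem the abstract addresses can be illustrated with a deliberately simplified single-level sketch (the layer costs and device speeds are hypothetical; HePipe's actual 2-level hierarchical method is not reproduced here):

```python
from itertools import combinations

def partition_layers(layer_costs, device_speeds):
    """Split a chain of layers into contiguous stages, one per device, so
    that the slowest stage (stage cost / device speed) is as fast as
    possible. Brute force over cut points; fine for small models."""
    n, k = len(layer_costs), len(device_speeds)
    best = (float("inf"), None)
    for cuts in combinations(range(1, n), k - 1):
        bounds = [0, *cuts, n]
        stage_times = [sum(layer_costs[bounds[i]:bounds[i + 1]]) / device_speeds[i]
                       for i in range(k)]
        best = min(best, (max(stage_times), bounds))
    return best

# Six equal-cost layers on two devices, the second twice as fast:
# the slower device should get the shorter front stage.
print(partition_layers([1, 1, 1, 1, 1, 1], [1.0, 2.0]))  # (2.0, [0, 2, 6])
```

In a heterogeneous cluster an even layer split would leave the faster device idle half the time; weighting stage sizes by device speed, as above, is the basic load-balancing idea the 2-level method refines.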
Procedia PDF Downloads 19
12147 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation
Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski
Abstract:
In problems of portfolio selection, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of the mean return in proportion to the increase in the risk measure, relative to risk-free investments. In the classical model, following Markowitz, the risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization; in particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. Reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total number of scenarios and instruments. This may lead to LP models with a huge number of variables and constraints in the case of real-life financial decisions based on several thousand scenarios, decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on inverse risk-reward ratio minimization and by taking advantage of LP duality. In the introduced LP model, the number of structural constraints is proportional to the number of instruments, so the simplex method's efficiency is not seriously affected by the number of scenarios, guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, MAD risk-reward ratio optimization is consistent with second-order stochastic dominance rules.
Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming
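As a toy illustration of the MAD reward-risk ratio (not the paper's LP formulation, which handles thousands of scenarios via the inverse-ratio transformation and duality), a brute-force two-asset search over hypothetical scenario returns might look like:

```python
import numpy as np

# Hypothetical scenario returns: rows = scenarios, columns = instruments
R = np.array([[0.10, 0.02],
              [0.05, 0.03],
              [-0.04, 0.01],
              [0.08, 0.00]])
r_free = 0.005                        # assumed risk-free rate

def mad_ratio(w):
    """Reward-risk ratio with Mean Absolute Deviation as the risk measure."""
    port = R @ w                      # portfolio return in each scenario
    mu = port.mean()
    mad = np.mean(np.abs(port - mu))  # MAD of the portfolio return
    return (mu - r_free) / mad

# Brute-force search over two-asset weights; the paper instead minimises
# the inverse ratio as an LP, which scales to thousands of scenarios.
grid = np.linspace(0.0, 1.0, 101)
best_w = max(grid, key=lambda a: mad_ratio(np.array([a, 1.0 - a])))
print(best_w)
```

The maximising mixture beats either pure asset, which is exactly the diversification effect the ratio criterion is meant to capture; the LP reformulation reaches the same optimum without enumeration.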
Procedia PDF Downloads 407
12146 Coarse-Graining in Micromagnetic Simulations of Magnetic Hyperthermia
Authors: Razyeh Behbahani, Martin L. Plumer, Ivan Saika-Voivod
Abstract:
Micromagnetic simulations based on the stochastic Landau-Lifshitz-Gilbert equation are used to calculate dynamic magnetic hysteresis loops relevant to magnetic hyperthermia applications. With the goal of effectively simulating room-temperature loops for large iron-oxide based systems at relatively slow sweep rates, on the order of 1 Oe/ns or less, a coarse-graining scheme is proposed and tested. The scheme is derived from a previously developed renormalization-group approach. Loops associated with nanorods, used as building blocks for the larger nanoparticles that were employed in preclinical trials (Dennis et al., 2009 Nanotechnology 20 395103), serve as the model test system. The scaling algorithm is shown to produce nearly identical loops over several decades in the model grain sizes. Sweep-rate scaling involving the damping constant alpha is also demonstrated.
Keywords: coarse-graining, hyperthermia, hysteresis loops, micromagnetic simulations
Procedia PDF Downloads 149
12145 Experimental Chip/Tool Temperature FEM Model Calibration by Infrared Thermography: A Case Study
Authors: Riccardo Angiuli, Michele Giannuzzi, Rodolfo Franchi, Gabriele Papadia
Abstract:
Knowledge of temperature in machining is fundamental to improving the numerical and FEM models used to study critical process aspects, such as the behavior of the worked material and tool. The extreme conditions in which they operate make it impossible to use traditional measuring instruments; infrared thermography can instead serve as a valid instrument for temperature measurement during metal cutting. In this study, a large experimental program on superduplex steel (ASTM A995 gr. 5A) cutting was carried out, and the relevant cutting temperatures were measured by infrared thermography as cutting parameters were varied from traditional values to extreme ones. The values identified were used to calibrate an FEM model for predicting the residual life of the tools. During the study, the problems related to the detection of cutting temperatures by infrared thermography were analyzed, and a dedicated procedure was developed that can be used in similar processes.
Keywords: machining, infrared thermography, FEM, temperature measurement
Procedia PDF Downloads 184
12144 Numerical and Experimental Comparison of Surface Pressures around a Scaled Ship Wind-Assisted Propulsion System
Authors: James Cairns, Marco Vezza, Richard Green, Donald MacVicar
Abstract:
Significant legislative changes are set to revolutionise the commercial shipping industry. Upcoming emissions restrictions will force operators to look at technologies that can improve the efficiency of their vessels, reducing fuel consumption and emissions. A device which may help in this challenge is the Ship Wind-Assisted Propulsion system (SWAP), an actively controlled aerofoil mounted vertically on the deck of a ship. The device functions in a similar manner to a sail on a yacht, whereby the aerodynamic forces generated by the sail reach an equilibrium with the hydrodynamic forces on the hull and a forward velocity results. Numerical and experimental testing of the SWAP device is presented in this study. Circulation control takes the form of a co-flow jet aerofoil, utilising both blowing from the leading edge and suction from the trailing edge. The jet at the leading edge uses the Coanda effect to energise the boundary layer in order to delay flow separation and create high lift with low drag. The SWAP concept was originated by the research and development team at SMAR Azure Ltd. The device will be retrofitted to existing ships so that a component of the aerodynamic forces acts forward and partially reduces the reliance on existing propulsion systems. Wind tunnel tests have been carried out at the de Havilland wind tunnel at the University of Glasgow on a 1:20 scale model of this system. The tests aim to understand the airflow characteristics around the aerofoil and to investigate the approximate lift and drag coefficients that an early iteration of the SWAP device may produce. The data exhibit clear trends of increasing lift as injection momentum increases, with critical flow attachment points identified at specific combinations of jet momentum coefficient, Cµ, and angle of attack, AOA. Various combinations of flow conditions were tested, with the jet momentum coefficient ranging from 0 to 0.7 and the AOA ranging from 0° to 35°. The Reynolds number across the tested conditions ranged from 80,000 to 240,000. Comparisons between 2D computational fluid dynamics (CFD) simulations and the experimental data are presented for multiple Reynolds-Averaged Navier-Stokes (RANS) turbulence models in the form of normalised surface pressure comparisons. These show good agreement for most of the tested cases. However, certain simulation conditions exhibited a well-documented shortcoming of RANS-based turbulence models for circulation control flows, over-predicting surface pressures and lift coefficient for fully attached flow cases. Work must continue towards finding an all-encompassing modelling approach which predicts surface pressures well for all combinations of jet injection momentum and AOA.
Keywords: CFD, circulation control, Coanda, turbo wing sail, wind tunnel
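The normalised surface pressures mentioned above are conventionally the pressure coefficient Cp; a minimal sketch of that normalisation, together with one common definition of the jet momentum coefficient Cµ, follows. The exact reference quantities used in the study are not stated, so these formulas and the sample values are assumptions for illustration:

```python
import numpy as np

def pressure_coefficient(p, p_inf, rho, u_inf):
    """Normalise surface pressures: Cp = (p - p_inf) / (0.5 * rho * U_inf^2)."""
    q_inf = 0.5 * rho * u_inf**2              # freestream dynamic pressure
    return (p - p_inf) / q_inf

def jet_momentum_coefficient(m_dot, v_jet, rho, u_inf, s_ref):
    """One common definition: Cmu = (m_dot * v_jet) / (0.5 * rho * U_inf^2 * S_ref)."""
    return (m_dot * v_jet) / (0.5 * rho * u_inf**2 * s_ref)

# Example: hypothetical tap pressures (Pa) around the model at 20 m/s in air
taps = np.array([101325.0, 101100.0, 100900.0, 101500.0])
cp = pressure_coefficient(taps, p_inf=101325.0, rho=1.225, u_inf=20.0)
```

At a stagnation point, where p exceeds p_inf by exactly the dynamic pressure, Cp evaluates to 1, which is a quick sanity check on the normalisation.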
Procedia PDF Downloads 135
12143 Assessment of Training, Job Attitudes and Motivation: A Mediation Model in Banking Sector of Pakistan
Authors: Abdul Rauf, Xiaoxing Liu, Rizwan Qaisar Danish, Waqas Amin
Abstract:
The core intention of this study is to analyze the linkage of training, job attitudes and motivation through a mediation model in the banking sector of Pakistan. Moreover, this study addresses a range of questions regarding employees' perceptions of training, job satisfaction, motivation and organizational commitment. Hence, the study examines the association of training with job satisfaction, of job satisfaction with motivation, of organizational commitment with job satisfaction and (independently) with motivation, and the direct relationship of training with motivation. A questionnaire was crafted to measure four variables: training, job satisfaction, motivation and organizational commitment. A sample of 450 employees from seventeen (17) private banks and two (2) public banks was taken on the basis of convenience sampling in Pakistan. Of these, 357 completely filled questionnaires were received back. AMOS was used for assessing the confirmatory factor analysis (CFA) model, and statistical techniques, i.e., descriptive statistics, regression analysis and correlation analysis, were applied to the collected data. The empirical findings revealed that training and organizational commitment have a significant and positive impact on job satisfaction and motivation, both directly and through the mediator (job satisfaction), for employees in the banks of Pakistan. In this research study, the banking sector is under discussion, so the findings cannot be generalized to other sectors such as manufacturing, textiles, telecom, and medicine. The low sample size is also a limitation of this study. On the basis of these results, management is encouraged to revise its training programs for employees, as training enhances their motivation level and job satisfaction on a regular basis.
Keywords: job satisfaction, motivation, organizational commitment, Pakistan, training
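The mediation logic described above (training → job satisfaction → motivation) can be sketched with ordinary regression-based mediation. The study itself used AMOS/CFA, so the plain-OLS decomposition and the synthetic data below are only an illustrative stand-in:

```python
import numpy as np

def ols_slope(x, y):
    """Intercept and slope of a simple OLS regression y = a + b*x."""
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    a = y.mean() - b * x.mean()
    return a, b

def simple_mediation(x, m, y):
    """Baron-Kenny style decomposition into total, direct and indirect effects.

    x: predictor (e.g. training), m: mediator (job satisfaction),
    y: outcome (motivation). For linear OLS, total = direct + indirect exactly.
    """
    _, c_total = ols_slope(x, y)              # total effect of x on y
    _, a_path = ols_slope(x, m)               # x -> m path
    X = np.column_stack([np.ones_like(x), x, m])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]  # y on x and m jointly
    direct = beta[1]                          # c' path
    b_path = beta[2]                          # m -> y controlling for x
    return c_total, direct, a_path * b_path   # indirect effect = a * b

rng = np.random.default_rng(1)
training = rng.normal(size=500)
satisfaction = 0.6 * training + rng.normal(scale=0.5, size=500)
motivation = 0.3 * training + 0.5 * satisfaction + rng.normal(scale=0.5, size=500)
total, direct, indirect = simple_mediation(training, satisfaction, motivation)
```

A positive indirect effect alongside a positive direct effect corresponds to the partial mediation pattern the abstract reports.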
Procedia PDF Downloads 254
12142 Numerical Modelling of Skin Tumor Diagnostics through Dynamic Thermography
Authors: Luiz Carlos Wrobel, Matjaz Hribersek, Jure Marn, Jurij Iljaz
Abstract:
Dynamic thermography has been clinically proven to be a valuable diagnostic technique for skin tumor detection as well as for other medical applications such as breast cancer diagnostics, diagnostics of vascular diseases, fever screening, and dermatological applications. Thermography for medical screening can be done in two different ways: observing the temperature response under steady-state conditions (passive or static thermography), or inducing thermal stresses by cooling or heating the observed tissue and measuring the thermal response during the recovery phase (active or dynamic thermography). The numerical modelling of heat transfer phenomena in biological tissue during dynamic thermography can aid the technique by improving process parameters or by estimating unknown tissue parameters based on measured data. This paper presents a nonlinear numerical model of multilayer skin tissue containing a skin tumor, together with the thermoregulation response of the tissue during the cooling-rewarming processes of dynamic thermography. The model is based on the Pennes bioheat equation and solved numerically by using a subdomain boundary element method which treats the problem as axisymmetric. The paper includes computational tests and numerical results for Clark II and Clark IV tumors, comparing the models using constant and temperature-dependent thermophysical properties, which showed noticeable differences and highlighted the importance of using a local thermoregulation model.
Keywords: boundary element method, dynamic thermography, static thermography, skin tumor diagnostics
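A minimal sketch of the kind of computation involved: an explicit finite-difference solution of the 1D Pennes bioheat equation during a surface-cooling phase. The paper itself uses an axisymmetric subdomain BEM with temperature-dependent properties; the constant soft-tissue property values and boundary conditions here are generic assumptions:

```python
import numpy as np

def pennes_1d(n=50, depth=0.01, t_end=5.0, dt=0.001):
    """Explicit finite differences for the 1D Pennes bioheat equation:

        rho*c*dT/dt = k*d2T/dx2 + w_b*rho_b*c_b*(T_a - T) + q_m

    The skin surface is held at 20 C (cooling phase of dynamic thermography)
    and the deep boundary at core temperature. Property values are generic
    soft-tissue numbers used only for illustration.
    """
    k, rho, c = 0.5, 1050.0, 3600.0           # tissue conductivity/density/heat cap.
    wb, rho_b, c_b = 0.001, 1060.0, 3770.0    # perfusion rate and blood properties
    qm, Ta = 420.0, 37.0                      # metabolic heat, arterial temperature
    dx = depth / (n - 1)
    T = np.full(n, 37.0)
    T[0] = 20.0                               # cooled skin surface (Dirichlet)
    for _ in range(int(t_end / dt)):
        lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T[1:-1] += dt / (rho * c) * (k * lap
                                     + wb * rho_b * c_b * (Ta - T[1:-1]) + qm)
        T[-1] = 37.0                          # core boundary
    return T

T = pennes_1d()
```

The explicit scheme is stable here because dt is well below dx²·rho·c/(2k); a tumor would enter through locally different k, w_b and q_m values.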
Procedia PDF Downloads 107
12141 The Relationship between Central Bank Independence and Inflation: Evidence from Africa
Authors: R. Bhattu Babajee, Marie Sandrine Estelle Benoit
Abstract:
The past decades have witnessed a considerable institutional shift towards central bank independence across the economies of the world. The motivation behind such a change is the acceptance that increased central bank autonomy has the power to alleviate inflation bias. Hence, studying whether central bank independence acts as a significant factor behind price stability in African economies, or whether this macroeconomic aim in these countries results from other economic, political or social factors, is a pertinent issue. The main research objective of this paper is to assess the relationship between central bank autonomy and inflation in African economies, where inflation has proved to be a serious problem. To this end, we measure the degree of CBI in Africa by computing the turnover rates of central bank governors, thereby studying whether decisions made by African central banks are affected by external forces. The purpose of this study is to investigate empirically the association between Central Bank Independence (CBI) and inflation for 10 African economies over a period of 17 years, from 1995 to 2012. The sample includes Botswana, Egypt, Ghana, Kenya, Madagascar, Mauritius, Mozambique, Nigeria, South Africa, and Uganda. In contrast to much of the empirical literature, we do not use the usual static panel model, as it is associated with potential misspecification arising from the absence of dynamics. Instead, a dynamic panel data model which integrates several control variables is used. Firstly, the analysis includes dynamic terms to capture the tenacity of inflation: given that inflation inertia is very likely in African countries, lagged inflation is included in the empirical model. Secondly, due to the known reverse causality between CBI and inflation, the system generalized method of moments (GMM) is employed. With GMM estimators, unknown forms of heteroskedasticity as well as autocorrelation in the error term are admissible. Thirdly, control variables are used to enhance the efficiency of the model. The main finding of this paper is that central bank independence is negatively associated with inflation even after including control variables.
Keywords: central bank independence, inflation, macroeconomic variables, price stability
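The turnover-rate proxy for CBI mentioned above is straightforward to compute; the sketch below uses hypothetical appointment years, not data for any of the sampled countries:

```python
def turnover_rate(appointment_years, start, end):
    """Central bank governor turnover rate: changes of governor per year.

    A common de-facto proxy for (lack of) central bank independence: higher
    turnover suggests more political interference in the bank's leadership.
    Only appointments falling inside the sample window are counted.
    """
    changes = [y for y in appointment_years if start <= y <= end]
    return len(changes) / (end - start + 1)

# Hypothetical example: four new governors appointed over the 1995-2012 window
tor = turnover_rate([1996, 2001, 2006, 2010], 1995, 2012)
```

A country with a legally mandated long governor term but frequent early replacements would score high on this measure despite high de-jure independence, which is precisely why the turnover rate is preferred in developing-country samples.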
Procedia PDF Downloads 365
12140 Numerical Model to Study Calcium and Inositol 1,4,5-Trisphosphate Dynamics in a Myocyte Cell
Authors: Nisha Singh, Neeru Adlakha
Abstract:
Calcium signalling is one of the most important intracellular signalling mechanisms. Many approaches and investigations have been devoted to the study of calcium signalling in various cells to understand its mechanisms over recent decades. However, most existing investigations have focused on calcium signalling in various cells without paying attention to its dependence on other chemical species such as inositol 1,4,5-trisphosphate (IP3). Some models for the independent study of calcium signalling and IP3 signalling in various cells exist, but very little attention has been paid to the interdependence of these two signalling processes in a cell. In this paper, we propose a coupled mathematical model to understand the interdependence of IP3 dynamics and calcium dynamics in a myocyte cell. Such studies will provide a deeper understanding of the various factors involved in calcium signalling in myocytes, which may be of great use to biomedical scientists for various medical applications.
Keywords: calcium signalling, coupling, finite difference method, inositol 1,4,5-trisphosphate
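As a toy illustration of the interdependence the abstract describes, the sketch below Euler-integrates a two-variable caricature in which IP3 gates calcium release and calcium stimulates IP3 production. All rate constants are invented for demonstration; the paper's actual model is a spatially resolved finite-difference system:

```python
import numpy as np

def coupled_ca_ip3(t_end=50.0, dt=0.01):
    """Euler integration of a toy coupled calcium-IP3 system.

    A deliberately simplified two-variable caricature: an IP3-gated release
    term (Hill kinetics) drives calcium, calcium feeds back on IP3
    production, and both decay linearly. Units are arbitrary.
    """
    ca, ip3 = 0.1, 0.1                        # initial concentrations (a.u.)
    ca_hist = []
    for _ in range(int(t_end / dt)):
        release = 2.0 * ip3**2 / (0.5**2 + ip3**2)   # IP3-gated Ca release
        dca = release - 1.0 * ca                     # minus linear pumping
        dip3 = 0.4 + 0.5 * ca - 1.0 * ip3            # Ca-stimulated production
        ca += dt * dca
        ip3 += dt * dip3
        ca_hist.append(ca)
    return ca, ip3, np.array(ca_hist)

ca, ip3, hist = coupled_ca_ip3()
```

Solving either equation with the other variable frozen gives a different steady state than the coupled system, which is the qualitative point of studying the two processes together.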
Procedia PDF Downloads 292
12139 Utility Analysis of API Economy Based on Multi-Sided Platform Markets Model
Authors: Mami Sugiura, Shinichi Arakawa, Masayuki Murata, Satoshi Imai, Toru Katagiri, Motoyoshi Sekiya
Abstract:
The API (Application Programming Interface) economy, in which many participants join, interact and form an economy, is expected to increase collaboration between information services through APIs and thereby to increase the market value created by these service collaborations. In this paper, we introduce API evaluators, who activate the API economy by reviewing and/or evaluating APIs, and develop a multi-sided API economy model that formulates interactions among the platform provider, API developers, consumers, and API evaluators. By obtaining the equilibrium that maximizes the utility of all participants, the impact of API evaluators on the utility of participants in the API economy is revealed. Numerical results show that, with the existence of API evaluators, the numbers of developers and consumers increase by 1.5% and the utility of the platform provider increases by 2.3%. We also discuss strategies for the platform provider to maximize its utility in the presence of API evaluators.
Keywords: API economy, multi-sided markets, API evaluator, platform, platform provider
Procedia PDF Downloads 186
12138 Transport Mode Selection under Lead Time Variability and Emissions Constraint
Authors: Chiranjit Das, Sanjay Jharkharia
Abstract:
This study is focused on transport mode selection under lead time variability and an emissions constraint. In order to reduce the carbon emissions generated by transportation, organizations often face a dilemmatic choice of transport mode, since logistics cost and emissions reduction are complementary to each other. Another important aspect of the transportation decision is lead time variability, which is rarely considered in the transport mode selection problem. Thus, in this study, we provide a comprehensive mathematics-based analytical model for transport mode selection under an emissions constraint. We also extend our work by analysing the effect of lead time variability on transport mode selection through a sensitivity analysis. To incorporate lead time variability into the model, two identically normally distributed random variables are included: unit lead time variability and lead time demand variability. Therefore, this study addresses the following questions: How will transport mode selection decisions be affected by lead time variability? How will lead time variability impact total supply chain cost under carbon emissions? To accomplish these objectives, a total transportation cost function is developed comprising unit purchasing cost, unit transportation cost, emissions cost, holding cost during lead time, and a penalty cost for stockout due to lead time variability. A set of modes is available at each node; in this paper, we consider only four transport modes: air, road, rail, and water. Transportation cost, distance, and emissions level for each transport mode are considered deterministic and static. Each mode has a different emissions level depending on the distance and product characteristics. Emissions cost is indirectly affected by lead time variability if there is any switching from a lower-emissions transport mode to a higher-emissions one in order to reduce the penalty cost. We provide a numerical analysis to study the effectiveness of the mathematical model. We found that the chance of stockout during lead time is higher under higher variability of lead time and lead time demand. Numerical results show that the penalty cost of the air transport mode is negative, meaning the chance of stockout is zero, but it carries higher holding and emissions costs. Therefore, the air transport mode is selected only when there is an emergency order requiring a reduced penalty cost; otherwise, rail and road transport are the most preferred modes. Thus, this paper contributes to the literature a novel approach to deciding transport mode under emissions cost and lead time variability. The model can be extended by studying the effect of lead time variability under other strategic transportation issues such as the modal split option, full truck load strategy, and demand consolidation strategy.
Keywords: carbon emissions, inventory theoretic model, lead time variability, transport mode selection
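The structure of the total cost function described above can be sketched as follows; every unit rate, emissions factor and lead time in this snippet is hypothetical, chosen only to show how the mode comparison works:

```python
import math

def total_cost(mode, demand, distance, lt_mean, lt_std, service_z=1.65):
    """Illustrative total transportation cost for one mode.

    cost = purchase + freight + emissions + lead-time holding + stockout penalty,
    with safety stock driven by lead-time variability (normal approximation).
    All rates below are hypothetical, used only to show the cost structure.
    """
    rates = {  # (freight $/unit-km, emissions kg CO2/unit-km, base lead time, days)
        "air":   (0.50, 1.20, 1),
        "road":  (0.10, 0.20, 3),
        "rail":  (0.05, 0.05, 6),
        "water": (0.02, 0.02, 15),
    }
    freight_rate, ef, base_lt = rates[mode]
    unit_cost, carbon_price, hold_rate, penalty = 10.0, 0.05, 0.002, 2.0
    lead_time = base_lt + lt_mean
    purchase = unit_cost * demand
    freight = freight_rate * demand * distance
    emissions = carbon_price * ef * demand * distance
    holding = hold_rate * unit_cost * demand * lead_time
    safety_stock = service_z * lt_std * math.sqrt(lead_time)
    stockout = penalty * max(0.0, demand * 0.01 - safety_stock)  # shortfall proxy
    return purchase + freight + emissions + holding + stockout

# Pick the cheapest mode for a hypothetical order
best = min(["air", "road", "rail", "water"],
           key=lambda m: total_cost(m, demand=100, distance=500, lt_mean=2, lt_std=5))
```

Raising `lt_std` inflates the safety stock and stockout terms, which is the channel through which lead time variability can push the decision toward a faster, higher-emissions mode.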
Procedia PDF Downloads 434
12137 A Novel Geometrical Approach toward the Mechanical Properties of Particle Reinforced Composites
Authors: Hamed Khezrzadeh
Abstract:
Many investigations of the micromechanical structure of materials indicate that fractal patterns exist at the micro scale in some of the main construction and industrial materials. A recently presented micro-fractal theory brings together well-known periodic homogenization and fractal geometry to construct an appropriate model for determining the mechanical properties of particle-reinforced composite materials. The proposed multi-step homogenization scheme considers the mechanical properties of the different constituent phases in the composite, together with the interaction between these phases, throughout a step-by-step homogenization technique. The effect of particle grading on the mechanical properties can also be studied with this method. The theory's outcomes are compared to experimental data for different types of particle-reinforced composites, and very good agreement with the experimental data is observed.
Keywords: fractal geometry, homogenization, micromechanics, particulate composites
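The step-by-step idea can be illustrated with a deliberately simple stand-in: iterated rule-of-mixtures (Voigt-Reuss) homogenization, mixing one grading class of particles into the current effective medium at each step. This is not the micro-fractal theory itself, only a sketch of the multi-step structure, and all moduli and fractions below are assumed values:

```python
def voigt(e1, e2, f2):
    """Voigt (parallel) bound on the effective modulus for fraction f2 of phase 2."""
    return (1 - f2) * e1 + f2 * e2

def reuss(e1, e2, f2):
    """Reuss (series) bound on the effective modulus."""
    return 1.0 / ((1 - f2) / e1 + f2 / e2)

def stepwise_homogenize(e_matrix, e_particle, fractions):
    """Multi-step homogenization sketch: at each step, mix the current
    effective medium with a further increment of particles (one grading
    class per step), taking the Voigt-Reuss average as the step estimate.
    """
    e_eff = e_matrix
    for f in fractions:
        e_eff = 0.5 * (voigt(e_eff, e_particle, f) + reuss(e_eff, e_particle, f))
    return e_eff

# A 3 GPa matrix filled in three grading steps with 70 GPa particles
e = stepwise_homogenize(3.0, 70.0, [0.1, 0.1, 0.1])
```

Because each step homogenizes the output of the previous one, different grading sequences give different effective moduli even at equal total particle content, mirroring the grading effect the abstract mentions.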
Procedia PDF Downloads 293
12136 Modeling and Control Design of a Centralized Adaptive Cruise Control System
Authors: Markus Mazzola, Gunther Schaaf
Abstract:
A vehicle driving with an Adaptive Cruise Control System (ACC) is usually controlled decentrally, based on information from radar systems and, in some publications, on C2X communication (CACC) to guarantee stable platoons. In this paper, we present a Model Predictive Control (MPC) design for a centralized, server-based ACC system, whereby the vehicular platoon is modeled and controlled as a whole. It is then proven that the proposed MPC design guarantees asymptotic stability and hence string stability of the platoon. The networked MPC design is chosen so as to integrate system constraints optimally as well as to reduce the effects of communication delay and packet loss. The performance of the proposed controller is then simulated and analyzed in an LTE communication scenario using the LTE/EPC network simulator LENA, which is based on the ns-3 network simulator.
Keywords: adaptive cruise control, centralized server, networked model predictive control, string stability
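The receding-horizon core of such a design can be sketched as a finite-horizon LQR gain obtained by backward Riccati recursion, i.e. the unconstrained special case of MPC. The constraint handling, delay and packet-loss compensation from the paper are omitted, and the simple spacing-error model below is an assumption:

```python
import numpy as np

def mpc_gain(A, B, Q, R, horizon):
    """First-step feedback gain of an unconstrained finite-horizon MPC,
    computed by backward Riccati recursion (finite-horizon LQR).
    """
    P = Q.copy()
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [spacing error, relative speed]
B = np.array([[0.0], [dt]])             # control: relative acceleration
K = mpc_gain(A, B, np.eye(2), np.array([[0.1]]), horizon=30)

# Closed-loop simulation: regulate an initial 5 m spacing error to zero
x = np.array([5.0, 0.0])
for _ in range(200):
    x = A @ x - (B @ (K @ x)).ravel()
```

In a true MPC the finite-horizon problem would be re-solved each step with input and spacing constraints; here the unconstrained solution reduces to the constant gain K applied in a receding-horizon fashion.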
Procedia PDF Downloads 515
12135 Modeling Palm Oil Quality During the Ripening Process of Fresh Fruits
Authors: Afshin Keshvadi, Johari Endan, Haniff Harun, Desa Ahmad, Farah Saleena
Abstract:
Experiments were conducted to develop a model for analyzing the ripening process of oil palm fresh fruits in relation to the yield and quality of the palm oil produced. This research was carried out on 8-year-old Tenera (Dura × Pisifera) palms planted in 2003 at the Malaysian Palm Oil Board Research Station. Fresh fruit bunches were harvested from designated palms from January to May 2010. The bunches were divided into three regions (top, middle and bottom), and fruits from the outer and inner layers were randomly sampled for analysis at 8, 12, 16 and 20 weeks after anthesis to establish relationships between maturity and oil development in the mesocarp and kernel. Computations on data related to ripening time, oil content and oil quality were performed using several computer software programs (MSTAT-C, SAS and Microsoft Excel). Nine nonlinear mathematical models were fitted to the collected data using MATLAB software. The results showed that mean mesocarp oil content increased from 1.24% at 8 weeks after anthesis to 29.6% at 20 weeks after anthesis. Fruits from the top part of the bunch had the highest mesocarp oil content, at 10.09%. The lowest kernel oil content, 0.03%, was recorded at 12 weeks after anthesis. Palmitic acid and oleic acid comprised more than 73% of total mesocarp fatty acids at 8 weeks after anthesis, increasing to more than 80% at fruit maturity at 20 weeks. The logistic model, with the highest R2 and the lowest root mean square error, was found to be the best-fit model.
Keywords: oil palm, oil yield, ripening process, anthesis, fatty acids, modeling
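Fitting a logistic model to ripening data of this kind can be sketched with `scipy.optimize.curve_fit`; the two endpoint oil-content values below come from the abstract, while the two interior points are hypothetical, added only so the three-parameter fit is determined:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t0, r):
    """Three-parameter logistic curve: K / (1 + exp(-r*(t - t0)))."""
    return k / (1.0 + np.exp(-r * (t - t0)))

# Mesocarp oil content (%) at 8, 12, 16 and 20 weeks after anthesis.
# The 1.24% and 29.6% endpoints are from the abstract; the interior
# values 6.0 and 20.0 are assumed for illustration.
weeks = np.array([8.0, 12.0, 16.0, 20.0])
oil = np.array([1.24, 6.0, 20.0, 29.6])

params, _ = curve_fit(logistic, weeks, oil, p0=[30.0, 14.0, 0.5], maxfev=10000)
k_fit, t0_fit, r_fit = params
rmse = np.sqrt(np.mean((logistic(weeks, *params) - oil) ** 2))
```

Comparing this RMSE (and R2) against the other eight candidate models is the selection criterion the abstract describes for declaring the logistic curve the best fit.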
Procedia PDF Downloads 313
12134 Dynamic Process Model for Designing Smart Spaces Based on Context-Awareness and Computational Methods Principles
Authors: Heba M. Jahin, Ali F. Bakr, Zeyad T. Elsayad
Abstract:
A smart space can be defined as any working environment that integrates embedded computers, information appliances and multi-modal sensors while remaining focused on the interaction between the users, their activity, and their behavior in the space. Hence, a smart space must be aware of its context and automatically adapt to changing contexts by interacting with its physical environment through natural and multimodal interfaces, and by serving information proactively. This paper suggests a dynamic framework for the architectural design process of the space, based on the principles of computational methods and context-awareness, to help create a field of changes and modifications and to generate possibilities and concerns about the physical, structural and user contexts. The framework is concerned with five main processes: gathering and analyzing data to generate smart design scenarios, parameters, and attributes; transforming these by coding into four types of models; connecting those models together in an interaction model which represents the context-awareness system; transforming that model into a virtual and ambient environment, representing the physical and real environments, to act as a linkage between the users and the activities taking place in the smart space; and finally, gathering feedback from the users of that environment to ensure that the design of the smart space fulfills their needs. The generated design process will thereby help in designing smart spaces that can be adapted and controlled to answer the users' defined goals, needs, and activities.
Keywords: computational methods, context-awareness, design process, smart spaces
Procedia PDF Downloads 331