Search results for: computational fluid dynamics.
3022 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Partitioned Solution Approach and an Exponential Model
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
The solution of the nonlinear dynamic equilibrium equations of base-isolated structures adopting a conventional monolithic solution approach, i.e., an implicit single-step time integration method employed with an iteration procedure, and the use of existing nonlinear analytical models, such as differential equation models, to simulate the dynamic behavior of seismic isolators can require a significant computational effort. In order to reduce the numerical computations, a partitioned solution method and a one-dimensional nonlinear analytical model are presented in this paper. A partitioned solution approach can be easily applied to base-isolated structures in which the base isolation system is much more flexible than the superstructure. Thus, in this work, the explicit, conditionally stable central difference method is used to evaluate the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method is adopted to predict the linear response of the superstructure, with the benefit of avoiding iterations within each time step of a nonlinear dynamic analysis. The proposed mathematical model is able to simulate the dynamic behavior of seismic isolators without requiring the solution of a nonlinear differential equation, as is the case for the widely used differential equation models. The proposed mixed explicit-implicit time integration method and nonlinear exponential model are adopted to analyze a three-dimensional seismically isolated structure with a lead rubber bearing system subjected to earthquake excitation. The numerical results show the good accuracy and the significant computational efficiency of the proposed solution approach and analytical model compared to the conventional solution method and mathematical model considered in this work. Furthermore, the low stiffness of the base isolation system with lead rubber bearings allows a critical time step considerably larger than the imposed ground acceleration time step, thus avoiding stability problems in the proposed mixed method.
Keywords: base-isolated structures, earthquake engineering, mixed time integration, nonlinear exponential model
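To make the implicit half of such a scheme concrete, the sketch below implements the Newmark constant-average-acceleration step (γ = 1/2, β = 1/4) for a generic linear system M·a + C·v + K·u = p(t); because the system is linear, each step reduces to one solve with a constant effective stiffness matrix, with no iteration. This is a generic illustration, not the paper's partitioned solver; the matrices and the step-load test case are placeholders.

```python
import numpy as np

def newmark_linear(M, C, K, p, h, u0, v0):
    """Newmark constant-average-acceleration (gamma=1/2, beta=1/4) for
    M a + C v + K u = p(t); linear system, so each step is a single solve."""
    beta, gamma = 0.25, 0.5
    a = np.linalg.solve(M, p[0] - C @ v0 - K @ u0)   # initial acceleration
    Keff = K + gamma / (beta * h) * C + M / (beta * h**2)
    u, v, out = u0.copy(), v0.copy(), [u0.copy()]
    for n in range(1, len(p)):
        rhs = (p[n]
               + M @ (u / (beta * h**2) + v / (beta * h) + (1 / (2 * beta) - 1) * a)
               + C @ (gamma / (beta * h) * u + (gamma / beta - 1) * v
                      + h * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(Keff, rhs)
        a_new = (u_new - u) / (beta * h**2) - v / (beta * h) - (1 / (2 * beta) - 1) * a
        v = v + h * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        out.append(u.copy())
    return np.array(out)

# Undamped SDOF under a step load: response oscillates about p0/k = 0.5.
M = np.array([[1.0]]); C = np.zeros((1, 1)); K = np.array([[4.0]])
p = np.tile([2.0], (400, 1))
u = newmark_linear(M, C, K, p, h=0.01, u0=np.zeros(1), v0=np.zeros(1))
print(u[-1])
```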
Procedia PDF Downloads 280
3021 Efficient Filtering of Graph Based Data Using Graph Partitioning
Authors: Nileshkumar Vaishnav, Aditya Tatu
Abstract:
An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein the filtered output and the filter coefficients are known, and the input graph signal needs to be recovered. Solving this problem with the direct approach requires O(N³) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method and use it to reduce the computational cost of the filtering problem. We use the example of denoising temperature data to illustrate the efficacy of the approach.
Keywords: graph signal processing, graph partitioning, inverse filtering on graphs, algebraic signal processing
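The O(N³) baseline being improved on fits in a few lines: with adjacency shift A and taps h_k, the filter is H = Σ_k h_k A^k, and recovering the input from y = Hx is a dense linear solve. The path graph and tap values below are illustrative assumptions.

```python
import numpy as np

N = 8
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0    # path-graph adjacency as the shift operator

h = [1.0, 0.5, 0.25]                    # known filter coefficients
H = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(h))

x_true = np.sin(np.arange(N))           # a graph signal (e.g., temperatures)
y = H @ x_true                          # known filtered output
x_rec = np.linalg.solve(H, y)           # O(N^3) direct inversion
print(np.allclose(x_rec, x_true))       # True
```

Partitioning the graph makes H nearly block-diagonal, so each block can be inverted independently at a much lower cost than one N x N solve.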
Procedia PDF Downloads 311
3020 Beyond Sexual Objectification: Moderation Analysis of Trauma and Overexcitability Dynamics in Women
Authors: Ritika Chaturvedi
Abstract:
Introduction: Sexual objectification, characterized by the reduction of an individual to a mere object of sexual desire, remains a pervasive societal issue with profound repercussions on individual well-being. Such experiences, often rooted in systemic and cultural norms, have long-lasting implications for mental and emotional health. This study aims to explore the intricate relationship between experiences of sexual objectification and insidious trauma, further investigating the potential moderating effects of overexcitability as proposed by Dabrowski's theory of positive disintegration. Methodology: The research involved a comprehensive cohort of 204 women, spanning ages from 18 to 65 years. Participants completed self-administered questionnaires designed to capture their experiences of sexual objectification. Additionally, the questionnaire assessed symptoms indicative of insidious trauma and explored overexcitability across five distinct domains: emotional, intellectual, psychomotor, sensual, and imaginational. Employing advanced statistical techniques, including multiple regression and moderation analysis, the study sought to decipher the intricate interplay among these variables. Findings: The study's results revealed a compelling positive correlation between experiences of sexual objectification and the onset of symptoms indicative of insidious trauma. This correlation underscores the profound and detrimental effects of sexual objectification on an individual's psychological well-being. Interestingly, the moderation analyses introduced a nuanced understanding, highlighting the differential roles of the various overexcitabilities. Specifically, emotional, intellectual, and sensual overexcitability were found to exacerbate trauma symptomatology. In contrast, psychomotor overexcitability emerged as a protective factor, demonstrating a mitigating influence on the relationship between sexual objectification and trauma. Implications: The study's findings hold significant implications for a diverse array of stakeholders, encompassing mental health practitioners, educators, policymakers, and advocacy groups. The identified moderating effects of overexcitability emphasize the need for tailored interventions that consider individual differences in coping and resilience mechanisms. By recognizing the pivotal role of overexcitability in modulating the traumatic consequences of sexual objectification, this research advocates for the development of more nuanced and targeted support frameworks. Moreover, the study underscores the importance of continued research endeavors to unravel the intricate mechanisms and dynamics underpinning these relationships. Such endeavors are crucial for fostering the evolution of informed, evidence-based interventions and strategies aimed at mitigating the adverse effects of sexual objectification and promoting holistic well-being.
Keywords: sexual objectification, insidious trauma, emotional overexcitability, intellectual overexcitability, sensual overexcitability, psychomotor overexcitability, imaginational overexcitability
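In this design, moderation is tested as an interaction term in a regression; the sketch below shows the standard setup with statsmodels on synthetic stand-in data (the study's actual instruments and scores are not reproduced here). A negative objectification x psychomotor coefficient would correspond to the buffering effect reported.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for illustration only.
rng = np.random.default_rng(0)
n = 204
df = pd.DataFrame({
    "objectification": rng.normal(0, 1, n),
    "psychomotor_oe": rng.normal(0, 1, n),
})
df["trauma"] = (0.5 * df.objectification
                - 0.3 * df.objectification * df.psychomotor_oe   # buffering effect
                + rng.normal(0, 1, n))

# Moderation = the interaction term; '*' expands to main effects + interaction.
model = smf.ols("trauma ~ objectification * psychomotor_oe", data=df).fit()
print(model.params)
```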
Procedia PDF Downloads 53
3019 Solving Extended Linear Complementarity Problems (XLCP) - Wood and Environment
Authors: Liberto Pombal, Christian Dieter Jaekel
Abstract:
The objective of this work is to establish theoretical and numerical conditions for solving Extended Linear Complementarity Problems (XLCP), with emphasis on the Horizontal Linear Complementarity Problem (HLCP). Two new strategies for solving complementarity problems are presented, using differentiable and penalized functions, which result in a natural formalization of the horizontal linear case. The computational results of all suggested strategies are also discussed in depth in this paper. In practice, this allows solving and optimizing, in an innovative way, the (forestry) problems of the value chain of Angola's industrial wood sector.
Keywords: complementarity, box constrained, optimality conditions, wood and environment
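One common differentiable reformulation of this kind replaces the complementarity conditions with the Fischer-Burmeister function and minimizes a penalized merit function; the small HLCP instance below is hypothetical, and the approach is a generic sketch rather than the authors' specific strategies.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical HLCP instance: find x, y >= 0 with M x + N y = q and x_i*y_i = 0.
M = np.array([[2.0, 1.0], [1.0, 3.0]])
N = np.eye(2)
q = np.array([4.0, 5.0])

def fb(a, b, eps=1e-12):
    # Smoothed Fischer-Burmeister function: zero iff a >= 0, b >= 0, a*b = 0.
    return np.sqrt(a**2 + b**2 + eps) - a - b

def merit(z):
    # Penalized, differentiable merit: complementarity + feasibility residuals.
    x, y = z[:2], z[2:]
    feas = M @ x + N @ y - q
    return np.sum(fb(x, y)**2) + np.sum(feas**2)

res = minimize(merit, np.ones(4), method="BFGS")
x, y = res.x[:2], res.x[2:]
print("x =", x, "y =", y, "merit =", res.fun)   # merit ~ 0 at a solution
```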
Procedia PDF Downloads 57
3018 An Optimal Control Model for the Dynamics of Visceral Leishmaniasis
Authors: Ibrahim M. Elmojtaba, Rayan M. Altayeb
Abstract:
Visceral leishmaniasis (VL) is a vector-borne disease caused by protozoan parasites of the genus Leishmania. The parasite is transmitted to humans and animals via the bite of adult female sandflies previously infected by biting and sucking the blood of infectious humans or animals. In this paper, we use a previously proposed model and apply two optimal controls, namely treatment and vaccination, as the system control variables to investigate optimal strategies for controlling the spread of the disease. The possible impact of using combinations of the two controls, either one at a time or both together, on the spread of the disease is also examined. Our results provide a framework for vaccination and treatment strategies to reduce the numbers of susceptible and infected VL individuals within five years.
Keywords: visceral leishmaniasis, treatment, vaccination, optimal control, numerical simulation
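Before any optimization, such studies need the controlled dynamics integrated forward in time; the sketch below runs a deliberately simplified host-vector system with constant treatment (u1) and vaccination (u2) rates over a five-year horizon. All rates and the model structure are illustrative assumptions, not the paper's model.

```python
from scipy.integrate import solve_ivp

# Minimal host-vector sketch: susceptible (S) and infected (I) human fractions
# and infectious sandfly fraction (V); u1 = treatment rate, u2 = vaccination rate.
beta_h, beta_v = 0.3, 0.25   # assumed transmission rates (per day)
gamma, mu_v = 0.05, 0.1      # natural recovery, vector turnover (assumed)
u1, u2 = 0.2, 0.1            # control intensities (set both to 0 for no control)

def rhs(t, z):
    S, I, V = z
    dS = -beta_h * S * V - u2 * S            # infection + vaccination outflow
    dI = beta_h * S * V - (gamma + u1) * I   # treatment speeds up recovery
    dV = beta_v * (1 - V) * I - mu_v * V
    return [dS, dI, dV]

sol = solve_ivp(rhs, (0, 5 * 365), [0.99, 0.01, 0.0])   # five-year horizon
print("infected fraction after 5 years:", sol.y[1, -1])
```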
Procedia PDF Downloads 404
3017 A Survey of Attacks and Security Requirements in Wireless Sensor Networks
Authors: Vishnu Pratap Singh Kirar
Abstract:
A wireless sensor network (WSN) is a network of many interconnected sensor nodes, each equipped with limited energy resources and used to detect physical characteristics of the environment. Much research on WSNs has been performed in past decades. WSNs are applicable in many military security systems and in many civilian applications. Thus, the security of WSNs attracts the attention of researchers and opens many avenues for future work. Still, many other open issues remain, related to deployment, overall coverage, scalability, size, energy efficiency, quality of service (QoS), computational power, and more. In this paper, we discuss various applications of WSNs and their security-related issues and requirements.
Keywords: wireless sensor network (WSN), wireless network attacks, wireless network security, security requirements
Procedia PDF Downloads 491
3016 Simulation of Heat Exchanger Behavior during LOCA Accident in THTL Test Loop
Authors: R. Mahmoodi, A. R. Zolfaghari
Abstract:
In nuclear power plants, loss of coolant from the primary system is the type of reduced heat-removal capacity that is given the most attention; such an accident is referred to as a Loss of Coolant Accident (LOCA). In the current study, the behavior of the shell-and-tube THTL heat exchanger during a LOCA is investigated with the ANSYS CFX simulation software, in both steady-state and transient modes of turbulent fluid flow, according to experimental conditions. The numerical results obtained from the ANSYS CFX simulation show good agreement with the experimental data for the THTL heat exchanger. The results illustrate that in a large-break LOCA, as a short-term accident, the heat exchanger cannot respond quickly to temperature variations, but in the long term, the shell-side temperature of the heat exchanger will increase.
Keywords: shell-and-tube heat exchanger, shell-side, CFD, flow and heat transfer, LOCA
Procedia PDF Downloads 441
3015 2D RF ICP Torch Modelling with Fluid Plasma
Authors: Mokhtar Labiod, Nabil Ikhlef, Keltoum Bouherine, Olivier Leroy
Abstract:
A numerical model of a radio-frequency (RF) argon discharge chamber is developed to simulate a low-pressure, low-temperature inductively coupled plasma. This model will be of fundamental importance in the design of the plasma magnetic control system. Electric and magnetic fields inside the discharge chamber are evaluated by solving a magnetic vector potential equation. To start with, the equations of ideal magnetohydrodynamic theory are presented, describing the basic behaviour of magnetically confined plasma; the equations are then discretized with the finite element method in cylindrical coordinates. The discharge chamber is assumed to be axially symmetric, and the plasma is treated as a compressible gas. Plasma generation due to ionization is added to the continuity equation, and the magnetic vector potential equation is solved for the electromagnetic fields. A strong dependence of the plasma properties on the discharge conditions and the gas temperature is obtained.
Keywords: direct-coupled model, magnetohydrodynamic, modelling, plasma torch simulation
Procedia PDF Downloads 434
3014 Computational Tool for Surface Electromyography Analysis: An Easy Way for Non-Engineers
Authors: Fabiano Araujo Soares, Sauro Emerick Salomoni, Joao Paulo Lima da Silva, Igor Luiz Moura, Adson Ferreira da Rocha
Abstract:
This paper presents a tool developed on the Matlab platform. It was developed to simplify the analysis of surface electromyography (S-EMG) signals in a way accessible to users who are not familiar with signal processing procedures. The tool receives data through commands in window fields and generates results as graphics and Excel tables. The underlying math of each S-EMG estimator is presented, and the setup window and result graphics are shown. The tool was presented to four non-engineer users, and all of them managed to use it appropriately after a 5-minute instruction period.
Keywords: S-EMG estimators, electromyography, surface electromyography, ARV, RMS, MDF, MNF, CV
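The amplitude and spectral estimators named in the keywords are a few lines each; the sketch below computes ARV, RMS, MNF, and MDF with numpy/scipy (the original tool is Matlab, so this is a translation of the standard definitions, and the test signal is synthetic).

```python
import numpy as np
from scipy.signal import welch

def semg_estimators(x, fs):
    """Classic S-EMG amplitude and spectral estimators."""
    arv = np.mean(np.abs(x))                     # Average Rectified Value
    rms = np.sqrt(np.mean(x**2))                 # Root Mean Square
    f, pxx = welch(x, fs=fs, nperseg=1024)
    mnf = np.sum(f * pxx) / np.sum(pxx)          # Mean Frequency
    cum = np.cumsum(pxx)
    mdf = f[np.searchsorted(cum, cum[-1] / 2)]   # Median Frequency (half power)
    return arv, rms, mnf, mdf

# Synthetic amplitude-modulated noise standing in for a real S-EMG recording.
fs = 2000.0
t = np.arange(0, 2, 1 / fs)
x = np.random.randn(t.size) * np.sin(2 * np.pi * 1.5 * t)
print(semg_estimators(x, fs))
```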
Procedia PDF Downloads 559
3013 Yarkovsky Effect on the Orbital Dynamics of the Asteroid (101955) Bennu
Authors: Sanjay Narayan Deo, Badam Singh Kushvah
Abstract:
Bennu (101955) is a half-kilometer, potentially hazardous near-Earth asteroid. We analyze the influence of the Yarkovsky effect and the relativistic effect of the Sun on the motion of the asteroid Bennu. The transverse model is used to compute the Yarkovsky force on asteroid Bennu. Our dynamical model includes the Newtonian perturbations of the eight planets, the Moon, the Sun, and three massive asteroids (1 Ceres, 2 Pallas, and 4 Vesta). We show the variation in the orbital elements of the asteroid's nominal orbit. In the presence of the Yarkovsky effect, the semi-major axis of the asteroid's orbit decreases by 350 m over one orbital period. The magnitude of the Yarkovsky force is computed, and we find that its maximum is 0.09 N, at perihelion. We also find that the magnitude of the Sun's relativistic effect on the motion of Bennu is greater than that of the Yarkovsky effect.
Keywords: Bennu, orbital elements, relativistic effect, Yarkovsky effect
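The reported per-orbit drift can be sanity-checked with Gauss's planetary equation for a transverse perturbation, da/dt ≈ 2·a_t/n on a near-circular orbit. The mean transverse force below is an assumed value chosen below the 0.09 N perihelion maximum; the sign of the drift depends on the spin direction (retrograde rotation, as for Bennu, gives a decrease).

```python
import math

GM_sun = 1.327e20           # m^3/s^2
a = 1.126 * 1.496e11        # Bennu semi-major axis, m
m_bennu = 7.33e10           # kg
F_t = 0.045                 # assumed mean transverse Yarkovsky force, N

n = math.sqrt(GM_sun / a**3)       # mean motion, rad/s
T = 2 * math.pi / n                # orbital period, s (~1.2 yr)
a_t = F_t / m_bennu                # transverse acceleration, m/s^2
delta_a = 2 * a_t / n * T          # semi-major-axis change per orbit, m
print(f"|da| per orbit ~ {delta_a:.0f} m")   # same order as the reported 350 m
```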
Procedia PDF Downloads 296
3012 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, uncertainty in the input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different representation approaches result in different outputs; some of the approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed a set of challenge problems in uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (outputs) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter that might be aleatory but for which sufficient data is not available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter has an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
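The type (iii) case and the underestimation issue can be illustrated with a double-loop sketch: an outer loop samples the epistemic interval parameters of a distributional p-box, and an inner loop propagates the aleatory sampling. Because the outer loop only samples the epistemic box instead of optimizing over it, the bounds it returns tend to be too narrow, which is what motivates PBO. The model and intervals below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def response(x1, x2):
    # Stand-in for the (black-box) challenge model output.
    return x1**2 + np.sin(x2)

# Type (iii) parameter: aleatory normal with epistemic interval bounds
# on its mean and standard deviation (a distributional p-box).
mu_lo, mu_hi = -0.5, 0.5
sd_lo, sd_hi = 0.5, 1.5

lo, hi = np.inf, -np.inf
for _ in range(200):                      # outer loop: epistemic sampling
    mu = rng.uniform(mu_lo, mu_hi)
    sd = rng.uniform(sd_lo, sd_hi)
    x1 = rng.normal(mu, sd, 10_000)       # inner loop: aleatory sampling
    x2 = rng.uniform(0, np.pi, 10_000)    # purely aleatory input
    p95 = np.percentile(response(x1, x2), 95)
    lo, hi = min(lo, p95), max(hi, p95)

# PBO would instead optimize p95 over the epistemic box, widening these bounds.
print(f"sampled bounds on the 95th percentile: [{lo:.3f}, {hi:.3f}]")
```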
Procedia PDF Downloads 240
3011 Piracy Killed the Radio Star: A System Archetype Analysis of Digital Music Theft
Authors: Marton Gergely
Abstract:
Digital experience goods, such as music and video, are readily available and easily accessible through a variety of illegal channels. Furthermore, the rate of music theft has been increasing at a seemingly unstoppable pace. Instead of studying the effect of copyright infringement on the affected stakeholders, this paper aims to examine the overall impact that digital music piracy has on society as a whole. Through a system dynamics approach, an archetype is built to model the behavior of both legal and illegal music users, and the effects over time are considered. The conceptual model suggests that if piracy continues to grow at the current pace, industry stakeholders will eventually lose the motivation to supply new music. In turn, this tragedy would affect not only the illegal players but legal consumers as well, by means of a decrease in overall quality of life.
Keywords: music piracy, illegal downloading, tragedy of the commons, system archetypes
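The archetype's feedback loops can be made concrete with a toy stock-and-flow simulation: legal users defect to piracy by word of mouth, revenue comes only from legal users, and the supply of new music tracks what revenue can sustain. All rates below are invented, and the point is only the qualitative collapse that eventually hurts both user groups.

```python
# Euler integration of a minimal "tragedy of the commons" stock-and-flow model.
steps, dt = 3000, 0.1
pirates, legal, supply = 0.1, 0.9, 1.0
for _ in range(steps):
    switch = 0.05 * legal * pirates                 # word-of-mouth adoption of piracy
    revenue = legal                                 # sales come from legal users only
    supply += 0.3 * (2 * revenue - supply) * dt     # supply relaxes to what revenue sustains
    pirates += switch * dt
    legal -= switch * dt
# Once piracy saturates, revenue and hence new-music supply collapse for everyone.
print(f"legal share {legal:.3f}, new-music supply {supply:.3f}")
```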
Procedia PDF Downloads 357
3010 Theoretical and Experimental Investigation of Fe and Ni-TCNQ on Graphene
Authors: A. Shahsavar, Z. Jakub
Abstract:
Owing to the outstanding properties of 2D metal-organic frameworks (MOFs), they have been the subject of intensive computational and experimental study. However, fundamental studies of MOFs on a graphene backbone are lacking. This work studies Fe and Ni as the metals and tetracyanoquinodimethane (TCNQ), which has a high electron affinity, as the organic linker, functionalized on graphene. Here we present DFT calculation results that unveil the electronic and magnetic properties of iron- and nickel-TCNQ physisorbed on graphene. Adsorption and Fermi energies and structural and magnetic properties are reported. Our experimental observations prove that Fe- and Ni-TCNQ@Gr/Ir(111) are thermally stable up to 500 and 250 °C, respectively, making them promising materials for single-atom catalysts or high-density storage media.
Keywords: DFT, graphene, MTCNQ, self-assembly
Procedia PDF Downloads 132
3009 Performance Study of Cascade Refrigeration System Using Alternative Refrigerants
Authors: Gulshan Sachdeva, Vaibhav Jain, S. S. Kachhwaha
Abstract:
Cascade refrigeration systems employ a series of single-stage vapor compression units, which are thermally coupled through evaporator/condenser cascades. A different refrigerant is used in each circuit, depending on the optimum characteristics the refrigerant shows for a particular application. In the present research study, a steady-state thermodynamic model is developed that simulates the working of an actual cascade system. The model provides the COP and all other system parameters, such as total compressor work and the temperature, pressure, enthalpy, and entropy at different state points. The working fluid in the Low Temperature Circuit (LTC) is CO2 (R744), while ammonia (R717), propane (R290), propylene (R1270), R404A, and R12 are the refrigerants in the High Temperature Circuit (HTC). The performance curves of ammonia, propane, propylene, and R404A are compared with those of R12 to find its nearest substitute. Results show that ammonia is the best substitute for R12.
Keywords: cascade system, refrigerants, thermodynamic model, production engineering
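As a minimal stand-in for such a model, the sketch below computes the COP of an idealized single-stage cycle (saturated evaporator and condenser states, isentropic compression) for the HTC candidates using the CoolProp property library; the cycle simplifications, fluid-name spellings, and temperature levels are assumptions, not the paper's model.

```python
from CoolProp.CoolProp import PropsSI

def simple_cycle_cop(fluid, t_evap, t_cond):
    """COP of an ideal cycle: saturated states, isentropic compression."""
    h1 = PropsSI('H', 'T', t_evap, 'Q', 1, fluid)   # evaporator exit (sat. vapor)
    s1 = PropsSI('S', 'T', t_evap, 'Q', 1, fluid)
    p2 = PropsSI('P', 'T', t_cond, 'Q', 1, fluid)   # condensing pressure
    h2 = PropsSI('H', 'P', p2, 'S', s1, fluid)      # isentropic compressor exit
    h3 = PropsSI('H', 'T', t_cond, 'Q', 0, fluid)   # condenser exit (sat. liquid)
    return (h1 - h3) / (h2 - h1)

# HTC candidates screened against R12 (temperature levels assumed).
for fluid in ['Ammonia', 'Propane', 'Propylene', 'R404A', 'R12']:
    print(fluid, round(simple_cycle_cop(fluid, 273.15 - 5, 273.15 + 40), 3))
```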
Procedia PDF Downloads 361
3008 Polarimetric Study of System Gelatin / Carboxymethylcellulose in the Food Field
Authors: Sihem Bazid, Meriem El Kolli, Aicha Medjahed
Abstract:
Proteins and polysaccharides are the two types of biopolymers most frequently used in the food industry to control the mechanical properties, structural stability, and organoleptic properties of products. The textural and structural properties of blends of these two types of polymers depend on their interactions and their ability to form organized structures. From an industrial point of view, a better understanding of protein/polysaccharide mixtures is an important issue, since they are already heavily used in processed food. It is in this context that we have chosen to work on a model system composed of a fibrous protein (gelatin) and an anionic polysaccharide (sodium carboxymethylcellulose). Gelatin, one of the most popular biopolymers, is widely used in food, pharmaceutical, cosmetic, and photographic applications because of its unique functional and technological properties. Sodium carboxymethylcellulose (NaCMC) is an anionic linear polysaccharide derived from cellulose. It is an important industrial polymer with a wide range of applications, and its functional properties can be modified by the presence of proteins with which it might interact. Another factor that may govern the interactions in protein-polysaccharide mixtures is the triple helix of gelatin. Collagen's complex synthesis results in an extracellular assembly organized on several levels: collagen can be in a soluble state or associate into fibrils, which can in turn associate into fibers, and each level corresponds to an organization recognized by the cellular and metabolic system. Gelatin lends itself to this approach: the formation of a gelatin gel involves the triple-helical refolding of denatured collagen chains. This gel has been the subject of numerous studies, and it is now known that its properties depend only on the fraction of triple helices forming the network. Chemical modification of this system is quite well controlled, so observing the dynamics of the triple helix may be relevant to understanding the interactions involved in protein-polysaccharide mixtures. Gelatin is central to many industrial processes, and understanding and analyzing the molecular dynamics induced by the triple helix during gelatin's transitions can have great economic importance in many fields, especially food. The goal is to understand the possible mechanisms involved, depending on the nature of the mixtures obtained. From a fundamental point of view, it is clear that the protective effect of NaCMC on gelatin and the conformational changes of the α helix are strongly influenced by the nature of the medium. Our aim is to minimize, as far as possible, changes in the α-helix structure in order to keep the gelatin more stable and protect it against the denaturation that occurs during conversion processes in the food industry. In order to study the nature of the interactions and assess the properties of the mixtures, polarimetry was used to monitor the optical parameters and to assess the helicity rate of the gelatin.
Keywords: gelatin, sodium carboxymethylcellulose, gelatin-NaCMC interaction, helicity rate, polarimetry
Procedia PDF Downloads 313
3007 The Interdisciplinary Synergy Between Computer Engineering and Mathematics
Authors: Mitat Uysal, Aynur Uysal
Abstract:
Computer engineering and mathematics share a deep and symbiotic relationship, with mathematics providing the foundational theories and models for computer engineering advancements. From algorithm development to optimization techniques, mathematics plays a pivotal role in solving complex computational problems. This paper explores key mathematical principles that underpin computer engineering, illustrating their significance through a case study that demonstrates the application of optimization techniques using Python code. The case study addresses the well-known vehicle routing problem (VRP), an extension of the traveling salesman problem (TSP), and solves it using a genetic algorithm.
Keywords: VRP, TSP, genetic algorithm, computer engineering, optimization
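Since the abstract names Python and a genetic algorithm for the VRP, a compact sketch of that combination is given below for the single-vehicle case (which reduces to a TSP tour from the depot); the instance, operators, and GA parameters are illustrative choices, not the paper's code.

```python
import random

random.seed(1)
# Hypothetical instance: a depot at index 0 plus 12 customers; with one
# vehicle and no capacity limit, the VRP reduces to a TSP tour.
pts = [(0.0, 0.0)] + [(random.uniform(-10, 10), random.uniform(-10, 10))
                      for _ in range(12)]

def tour_length(perm):
    route = [0] + list(perm) + [0]          # leave from and return to the depot
    return sum(((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
               for a, b in zip(route, route[1:]))

def order_crossover(p1, p2):
    # OX: keep a slice of parent 1, fill the remaining genes in parent 2's order.
    i, j = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[i:j])
    rest = [g for g in p2 if g not in hole]
    return rest[:i] + p1[i:j] + rest[i:]

customers = list(range(1, len(pts)))
pop = [random.sample(customers, len(customers)) for _ in range(100)]
for _ in range(300):
    pop.sort(key=tour_length)
    elite = pop[:20]                         # truncation selection
    children = []
    while len(children) < 80:
        child = order_crossover(random.choice(elite), random.choice(elite))
        if random.random() < 0.2:            # swap mutation
            a, b = random.sample(range(len(child)), 2)
            child[a], child[b] = child[b], child[a]
        children.append(child)
    pop = elite + children
best = min(pop, key=tour_length)
print("best tour length:", round(tour_length(best), 2))
```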
Procedia PDF Downloads 15
3006 Flexible Arm Manipulator Control for Industrial Tasks
Authors: Mircea Ivanescu, Nirvana Popescu, Decebal Popescu, Dorin Popescu
Abstract:
This paper addresses the control problem of a class of hyper-redundant arms. In order to avoid discrepancies between the mathematical model and the actual dynamics, a dynamic model with uncertain parameters is inferred for this class of manipulators. A procedure to design a feedback controller that stabilizes the uncertain system is proposed, and a PD boundary control algorithm is used to drive the manipulator to the desired position. This controller is easy to implement from the point of view of measuring techniques and actuation. Numerical simulations verify the effectiveness of the presented methods. In order to verify the suitability of the control algorithm, a platform with a 3D flexible manipulator was employed for testing, and experimental tests on this platform illustrate the applications of the techniques developed in the paper.
Keywords: distributed model, flexible manipulator, observer, robot control
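The appeal of a PD law is visible even on a toy plant: it needs only position and velocity measurements and no model inversion. The sketch below regulates a single joint coordinate modeled as a unit-inertia double integrator with an uncertain damping term; the gains and parameters are arbitrary illustrative values, not the paper's boundary controller.

```python
# PD setpoint regulation on a toy uncertain plant (explicit Euler integration).
kp, kd = 25.0, 8.0          # PD gains (assumed)
q_des = 1.0                 # desired position, rad
q, dq = 0.0, 0.0
dt = 1e-3
for _ in range(5000):
    tau = kp * (q_des - q) - kd * dq    # PD law: measurements only, no model
    ddq = tau - 0.3 * dq                # plant: unit inertia, uncertain damping
    dq += ddq * dt
    q += dq * dt
print(f"final position error: {q_des - q:.4f} rad")
```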
Procedia PDF Downloads 321
3005 Calculation of the Added Mass of a Submerged Object with Variable Sizes at Different Distances from the Wall via Lattice Boltzmann Simulations
Authors: Nastaran Ahmadpour Samani, Shahram Talebi
Abstract:
Added mass is an important quantity in the analysis of the motion of a submerged object, and it can be calculated by solving the equations of potential flow around the object. Here, we consider systems in which a square object is submerged in a channel of fluid and moves parallel to the wall. The corresponding added mass at a given distance from the wall d and for the object size s (the side of the square object) is calculated via lattice Boltzmann simulation. By changing d and s separately, their effects on the added mass are studied systematically. The simulation results reveal that for systems in which d > 4s, the distance no longer influences the added mass. The added mass increases as the object approaches the wall and reaches its maximum value when the object moves along the wall (d → 0); in this case, the added mass is about 73% larger than that for d = 4s. In addition, it is observed that the added mass increases with the object size s and vice versa.
Keywords: lattice Boltzmann simulation, added mass, square, variable size
Procedia PDF Downloads 476
3004 Investigation of Heat Transfer of Nanofluids in Circular Microchannels
Authors: Bayram Sahin, Hourieh Bayramian, Emre Mandev, Murat Ceylan
Abstract:
In industrial applications, the enhancement of heat transfer is a common engineering problem, and adding particles to the heat transfer fluid is one technique applied to enhance the heat transfer performance of base fluids. In this study, the thermal performance of nanofluids consisting of SiO2 particles and deionized water in circular microchannels was investigated experimentally. SiO2 nanoparticles with a diameter of 15 nm were added to water to prepare nanofluids with 0.2% and 0.4% volume fractions. Heat transfer characteristics were calculated using temperature, flow, and pressure measurements; the thermal conductivity and viscosity values required for the calculations were measured separately. It is observed that the Nusselt number increases, at all particle volume fractions, with increasing Reynolds number and particle volume fraction. The highest heat transfer enhancement, 14% under constant pumping power, is obtained at Re = 2160 and 0.4 vol.%.
Keywords: nanofluid, microchannel, heat transfer, SiO2-water nanofluid
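The data reduction from such measurements to the reported dimensionless groups is short: h = q/(A·ΔT), Re = ρuD/μ, Nu = hD/k. All numerical values in the sketch below are assumed placeholders, not the study's measurements.

```python
import math

D = 0.5e-3                 # channel diameter, m (assumed)
L = 0.05                   # heated length, m (assumed)
k = 0.62                   # nanofluid thermal conductivity, W/(m.K) (assumed)
rho, mu = 1005.0, 0.9e-3   # density kg/m3 and viscosity Pa.s (assumed)
mdot = 7e-4                # mass flow rate, kg/s (assumed)
q = 15.0                   # heating power, W (assumed)
tw, tb = 318.0, 305.0      # mean wall and bulk temperatures, K (assumed)

u = mdot / (rho * math.pi * D**2 / 4)      # mean velocity
re = rho * u * D / mu                      # Reynolds number
h = q / (math.pi * D * L * (tw - tb))      # convective coefficient
nu = h * D / k                             # Nusselt number
print(f"Re = {re:.0f}, Nu = {nu:.2f}")
```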
Procedia PDF Downloads 387
3003 Copper Coil Heat Exchanger Performance for Greenhouse Heating: An Experimental and Theoretical Study
Authors: Maha Bakkari, R.Tadili
Abstract:
The present work is a study of the performance of a solar copper coil heating system in a greenhouse microclimate. Our system is based on circulating a heat transfer fluid, water in our case, in a closed loop under the greenhouse's roof in order to store heat throughout the day; this heat then supplies the greenhouse during the night. In order to evaluate the system, we carried out an experimental study in two identical greenhouses, where the first is equipped with the heating system and the second (without heating) is used as a control. A thermal balance of the heating system determines the mass of water necessary for the process to ensure its operation during the night. The results obtained showed that the solar heating system improved the climatic parameters inside the experimental greenhouse, presenting a significant gain compared to the control greenhouse without a heating system. This research is one of the solutions that can help reduce the Earth's greenhouse effect, a problem of worldwide concern.
Keywords: solar energy, energy storage, greenhouse, environment
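The water-mass sizing from the thermal balance is a one-line application of Q = m·c·ΔT; the nightly heat demand and usable temperature drop below are assumed figures for illustration only.

```python
# Sizing the storage water mass from a nightly heat demand via Q = m*c*dT.
Q_night = 40e6        # heat needed to keep the greenhouse warm overnight, J (assumed)
c_water = 4186.0      # specific heat of water, J/(kg.K)
dT = 20.0             # usable temperature drop of the stored water, K (assumed)
m = Q_night / (c_water * dT)
print(f"required water mass: {m:.0f} kg")   # ~478 kg for these figures
```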
Procedia PDF Downloads 79
3002 Stereoscopic Motion Design: Design Futures
Authors: Edgar Teixeira, Eurico Carrapatoso
Abstract:
As 3D displays become increasingly affordable, and the production techniques and computational resources needed to create stereoscopic content become ever more accessible, a new dimension is literally introduced into designing for the screen, along with new expressive and immersive potentialities. Prospective design visionaries already have at hand an innovative and powerful visualization technology that enables them to actively envision future trends and avant-garde directions. This paper explores the aesthetic and informational potentialities of stereoscopic motion graphics, providing insight into the application of 3D displays in design practice, proposing strategies for investigating stereoscopic communication, and discussing potential repercussions for extant theory and impacts on audiences.
Keywords: design, visual communication, technology, stereoscopy, 3D media
Procedia PDF Downloads 407
3001 The Effect of Improvement Programs in the Mean Time to Repair and in the Mean Time between Failures on Overall Lead Time: A Simulation Using the System Dynamics-Factory Physics Model
Authors: Marcel Heimar Ribeiro Utiyama, Fernanda Caveiro Correia, Dario Henrique Alliprandini
Abstract:
The correct allocation of improvement programs has attracted growing interest in recent years. Because their resources are limited, companies must ensure that financial resources are directed to the right workstations in order to be effective and survive strong competition. However, to the best of our knowledge, the literature on the allocation of improvement programs does not analyze this problem in depth when the flow shop process has two capacity constrained resources. This research gap is studied in depth in this work, whose purpose is to identify the best strategy for allocating improvement programs in a flow shop with two capacity constrained resources. Data were collected from a flow shop process with seven workstations in an industrial control and automation company, which processes 13,690 units per month on average. The data were used to conduct a simulation with the System Dynamics-Factory Physics model. The main variables considered, due to their importance for lead time reduction, were the mean time between failures and the mean time to repair; lead time reduction was the output measure of the simulations. Ten different strategies were created: (i) focused time to repair improvement, (ii) focused time between failures improvement, (iii) distributed time to repair improvement, (iv) distributed time between failures improvement, (v) focused time to repair and time between failures improvement, (vi) distributed time to repair and time between failures improvement, (vii) hybrid time to repair improvement, (viii) hybrid time between failures improvement, (ix) time to repair improvement directed at the two capacity constrained resources, and (x) time between failures improvement directed at the two capacity constrained resources. The ten strategies tested are variations of the three main strategies for improvement programs, named focused, distributed, and hybrid. Several comparisons of the effects of the ten strategies on lead time reduction were performed. The results indicated that, for the flow shop analyzed, the focused strategies delivered the best results. When a large investment in the capacity constrained resources is not possible, companies should use hybrid approaches. An important academic contribution is the hybrid approach, which proposes a new way to direct improvement efforts. In addition, the study of a flow shop with two strongly capacity constrained resources (more than 95% utilization) is an important contribution to the literature, as is the allocation problem with two CCRs and the possibility of floating capacity constrained resources. The results provided the best improvement strategies considering the different allocation strategies and the different positions of the capacity constrained resources. Finally, both the hybrid time to repair improvement and hybrid time between failures improvement strategies delivered better results than the respective distributed strategies. The main limitations of this study concern the specific flow shop analyzed.
Future work can further investigate different flow shop configurations, such as a varying number of workstations, different numbers of products, or different positions of the two capacity constrained resources.
Keywords: allocation of improvement programs, capacity constrained resource, hybrid strategy, lead time, mean time to repair, mean time between failures
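The mechanism behind these comparisons is that MTBF and MTTR set a station's availability, A = MTBF/(MTBF + MTTR), and hence its effective capacity; a queueing-style proxy then links utilization to waiting time. The sketch below contrasts a focused strategy (all MTTR improvement on the two CCRs) with a distributed one of equal total effort; every number and the lead-time proxy are illustrative assumptions, not the paper's simulation.

```python
import numpy as np

def eff_rate(rate, mtbf, mttr):
    # Effective capacity = nominal rate x stationary availability.
    return rate * mtbf / (mtbf + mttr)

# A 7-station line with two CCRs (indices 1 and 3); values are hypothetical.
base_rate = np.array([120, 100, 140, 100, 130, 125, 135.0])  # units/h
mtbf = np.full(7, 20.0)   # h
mttr = np.full(7, 2.0)    # h
demand = 80.0             # units/h

def lead_time_proxy(mttr_vec):
    cap = eff_rate(base_rate, mtbf, mttr_vec)
    rho = demand / cap
    # Queueing-style proxy: waiting grows like rho/(1 - rho) at each station.
    return np.sum(rho / (1.0 - rho))

focused = mttr.copy()
focused[[1, 3]] *= 1 - 0.40              # all MTTR-cut effort on the two CCRs
distributed = mttr * (1 - 0.40 * 2 / 7)  # same total effort spread over 7 stations
print("focused    :", round(lead_time_proxy(focused), 3))
print("distributed:", round(lead_time_proxy(distributed), 3))   # worse, as in the paper
```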
Procedia PDF Downloads 124
3000 Heat Transfer Enhancement by Turbulent Impinging Jet with Jet's Velocity Field Excitations Using OpenFOAM
Authors: Naseem Uddin
Abstract:
Impinging jets are used in a variety of engineering and industrial applications. This paper is based on numerical simulations of heat transfer by a turbulent impinging jet with velocity field excitations, using different Reynolds-Averaged Navier-Stokes (RANS) models. Detached Eddy Simulations are also conducted to investigate the differences in the prediction capabilities of these two simulation approaches. The excited jet is simulated in the non-commercial CFD code OpenFOAM with the goal of understanding the influence of the impinging jet's dynamics on heat transfer. The jet's excitation frequencies are altered with the jet's preferred mode in view. The Reynolds number based on the mean velocity and diameter is 23,000, and the jet's outlet-to-target-wall distance is 2 diameters. It is found that heat transfer at the target wall can be influenced by a judicious selection of excitation amplitude and frequency.
Keywords: excitation, impinging jet, natural frequency, turbulence models
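A common way to pick such excitation frequencies is via the preferred-mode Strouhal number, St = fD/U ≈ 0.3 for round jets; the sketch below builds the corresponding sinusoidally modulated inlet velocity signal for the stated Re = 23,000. The nozzle diameter, fluid properties, amplitude, and the St value itself are assumptions for illustration.

```python
import numpy as np

D = 0.02                     # nozzle diameter, m (assumed)
nu = 1.5e-5                  # air kinematic viscosity, m2/s
Re = 23_000
U = Re * nu / D              # mean exit velocity consistent with Re
St = 0.3                     # assumed preferred-mode Strouhal number
f = St * U / D               # excitation frequency, Hz
a = 0.05                     # 5% velocity amplitude (assumed)

t = np.linspace(0, 10 / f, 1000)
u_inlet = U * (1 + a * np.sin(2 * np.pi * f * t))   # modulated inlet velocity
print(f"U = {U:.2f} m/s, excitation frequency f = {f:.1f} Hz")
```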
Procedia PDF Downloads 274
2999 A Nonstandard Finite Difference Method for Weather Derivatives Pricing Model
Authors: Clarinda Vitorino Nhangumbe, Fredericks Ebrahim, Betuel Canhanga
Abstract:
The price of a weather derivative option can be approximated as the solution of a two-dimensional, convection-dominated convection-diffusion partial differential equation derived from the Ornstein-Uhlenbeck process, where one variable represents the weather dynamics and the other represents the underlying weather index. With appropriate financial boundary conditions, the solution of the pricing equation is approximated using a nonstandard finite difference method. It is shown that the proposed numerical scheme preserves positivity as well as stability and consistency. In order to illustrate the accuracy of the method, the numerical results are compared with those of other methods, and the model is tested on real weather data.
Keywords: nonstandard finite differences, Ornstein-Uhlenbeck process, partial differential equations approach, weather derivatives
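The defining trick of nonstandard finite differences is replacing the step size h by a denominator function φ(h); for the Ornstein-Uhlenbeck drift dT/dt = κ(θ − T), the choice φ(h) = (1 − e^(−κh))/κ keeps the scheme stable and positivity-preserving for any h (and is exact for this linear drift), as the comparison with plain Euler below shows. This is a one-dimensional illustration of the principle, not the paper's 2D pricing scheme.

```python
import numpy as np

kappa, theta = 0.2, 15.0      # mean-reversion speed and long-run level
h, n = 12.0, 20               # deliberately large step: Euler is unstable here
phi = (1 - np.exp(-kappa * h)) / kappa   # Mickens' denominator function

T_nsfd, T_euler = 2.0, 2.0
for _ in range(n):
    T_nsfd += phi * kappa * (theta - T_nsfd)    # NSFD step
    T_euler += h * kappa * (theta - T_euler)    # standard Euler step

exact = theta + (2.0 - theta) * np.exp(-kappa * h * n)
print(f"exact {exact:.4f}  NSFD {T_nsfd:.4f}  Euler {T_euler:.1f}")  # Euler blows up
```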
Procedia PDF Downloads 110
2998 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs
Authors: André Augusto Ceballos Melo
Abstract:
This work addresses design that facilitates the recognition of congruent prosthetic movements: context-to-motion translations guided by images, verbal prompts, and the user's nonverbal communication, such as facial expressions, gestures, paralinguistics, scene context, and object recognition. The approach can also be applied to other tasks, such as walking, positioning prosthetic limbs as assistive technology driven by gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use. Second, it can improve the accuracy and precision of prosthetic limb movements, which is particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback; this means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time: the model receives sensory input from the system and uses it to generate the control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering users a possibility to maximize their quality of life, social interaction, and gesture communication.
Keywords: stable diffusion, neural interface, smart prosthetic, augmenting
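The collect-train-deploy pipeline described above fits in a few lines; the sketch below trains a small network to map sensory observations to control inputs and then runs one mock real-time control step. The dimensions, architecture, and random stand-in data are all placeholder assumptions.

```python
import torch
import torch.nn as nn

obs_dim, ctrl_dim = 24, 6            # assumed sensor and actuator dimensions
model = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, ctrl_dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for a recorded dataset of (sensory observation, desired control) pairs.
obs = torch.randn(4096, obs_dim)
ctrl = torch.randn(4096, ctrl_dim)

for epoch in range(50):              # offline training on the collected dataset
    loss = loss_fn(model(obs), ctrl)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                # mock "real-time" control step
    new_obs = torch.randn(1, obs_dim)   # latest sensor reading
    command = model(new_obs)            # control signal for the actuators
print(command.shape)
```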
Procedia PDF Downloads 101
2997 In silico Model of Transamination Reaction Mechanism
Authors: Sang-Woo Han, Jong-Shik Shin
Abstract:
ω-Transaminase (ω-TA) is broadly used for synthesizing chiral amines with high enantiopurity. However, the reaction mechanism of ω-TA has not been well studied, in contrast to α-transaminases (α-TA) such as AspTA. Here, we propose an in silico model of the reaction mechanism of ω-TA. The modeling results showed large free energy gaps between the external aldimine and the quinonoid on deamination (or between the ketimine and the quinonoid on amination), so the withdrawal of the Cα-H appears to be the critical step that determines the reaction rate in both the amination and deamination reactions, which is consistent with previous research. Hyperconjugation was also observed in both the external aldimine and the ketimine, which weakens the Cα-H bond and facilitates Cα-H abstraction.
Keywords: computational modeling, reaction intermediates, ω-transaminase, in silico model
Procedia PDF Downloads 545
2996 Correlation to Predict the Effect of Particle Type on Axial Voidage Profile in Circulating Fluidized Beds
Authors: M. S. Khurram, S. A. Memon, S. Khan
Abstract:
Bed voidage behavior across different flow regimes was investigated for Geldart A, B, and D particles (fluid catalytic cracking (FCC) catalyst, particle A, and glass beads) with diameters of 57-872 μm, apparent densities of 1470-3092 kg/m³, and bulk densities of 890-1773 kg/m³, in a plexiglass gas-solid circulating fluidized bed of 0.1 m i.d. and 2.56 m height. The effects of gas velocity, particle properties, and static bed height on bed voidage were analyzed. The axial voidage profile showed a typical trend along the riser: a dense bed at the lower part, followed by a transition in the splash zone and a lean phase in the freeboard. Bed expansion and dense bed voidage increased with gas velocity, as expected. From the experimental results, a generalized correlation based on the inverse fluidization number was presented for dense bed voidage from the bubbling to the fast fluidization regime.
Keywords: axial voidage, circulating fluidized bed, splash zone, static bed
Procedia PDF Downloads 286
2995 Enhanced Face Recognition with Daisy Descriptors Using 1BT Based Registration
Authors: Sevil Igit, Merve Meric, Sarp Erturk
Abstract:
In this paper, it is proposed to improve Daisy-descriptor-based face recognition using a novel One-Bit Transform (1BT) based pre-registration approach. The 1BT-based pre-registration procedure is fast and has low computational complexity. It is shown that the face recognition accuracy is improved with the proposed approach, which enables highly accurate face recognition using the Daisy descriptor with simple matching, thereby yielding a low-complexity solution.
Keywords: face recognition, Daisy descriptor, One-Bit Transform, image registration
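A 1BT reduces an image to a binary map by thresholding it against a filtered version of itself, so alignment can be scored with cheap XOR counts. The sketch below uses a uniform (box) filter and an exhaustive integer-shift search on synthetic data; the filter choice, window size, and search range are assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(img, size=17):
    # 1BT: threshold the image against a low-pass filtered version of itself.
    return img > uniform_filter(img.astype(float), size=size)

def register_1bt(ref, probe, max_shift=8):
    # Pre-registration: exhaustive integer-shift search minimizing the
    # Hamming distance (XOR count) between the two binary maps.
    b_ref, b_probe = one_bit_transform(ref), one_bit_transform(probe)
    best, best_cost = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost = np.count_nonzero(np.roll(b_probe, (dy, dx), axis=(0, 1)) ^ b_ref)
            if cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best

rng = np.random.default_rng(3)
face = rng.random((64, 64))                  # stand-in for a face image
probe = np.roll(face, (3, -2), axis=(0, 1))  # synthetically misaligned copy
print("shift that re-aligns the probe:", register_1bt(face, probe))  # (-3, 2)
```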
Procedia PDF Downloads 367
2994 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows a large telescope to fit within a reduced volume (JWST) or an even smaller one (a standard CubeSat). CubeSats have tight constraints on the available computational budget and on the allowed payload volume; at the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet one of the challenges for wavefront sensing is the non-linearity between the image intensity and the phase aberrations; moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NN) for wavefront sensing, especially convolutional NNs, which are well known as non-linear, image-friendly problem solvers. Aims: In this paper, we study the prospect of using a NN to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without dedicated wavefront sensing. Methods: In our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below λ/50 is typically required. The telescope's focal-plane detector, used for imaging, is also used as the wavefront sensor. In this work, we study a point source, i.e., the point spread function (PSF) of the optical system, as the input to a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual wavefront error, below λ/50, for 40-100 nm RMS of input wavefront error) with a computation time of less than 30 ms, which translates into a small computational burden. These results motivate further study with larger aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
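Architecturally, the regression task is an image-in, coefficients-out network; the scaled-down VGG-style sketch below maps a PSF frame to per-segment piston estimates. The layer sizes, input resolution, and six-segment count are invented stand-ins for the paper's actual VGG-net.

```python
import torch
import torch.nn as nn

n_segments = 6                           # assumed segment count
net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
    nn.Linear(128, n_segments),          # regression head: one piston per segment
)
psf = torch.rand(1, 1, 64, 64)           # stand-in for a measured focal-plane PSF
print(net(psf).shape)                    # torch.Size([1, 6])
```

Trained on simulated (PSF, piston) pairs, a single forward pass like this is what keeps the inference-time computational burden within the stated 30 ms budget.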
Procedia PDF Downloads 105
2993 Sensitivity Analysis during the Optimization Process Using Genetic Algorithms
Authors: M. A. Rubio, A. Urquia
Abstract:
Genetic algorithms (GA) are applied to the solution of high-dimensional optimization problems. Additionally, sensitivity analysis (SA) is usually carried out to determine the effect of changes in the parameter values of the objective function on the optimal solutions. These two analyses (i.e., optimization and sensitivity analysis) are computationally intensive when applied to high-dimensional functions. The approach presented in this paper consists in performing the SA during the GA execution, by statistically analyzing the data obtained from running the GA. The advantage is that in this case the SA does not involve making additional evaluations of the objective function; consequently, the proposed approach requires less computational effort than conducting optimization and SA in two consecutive steps.
Keywords: optimization, sensitivity, genetic algorithms, model calibration
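The idea can be shown with a toy GA whose every (parameter vector, fitness) evaluation is logged and then mined for sensitivities, here as simple absolute correlations per parameter; no objective evaluations are made beyond those the GA performs anyway. The objective, GA operators, and the correlation-based sensitivity measure are illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(7)

def objective(x):
    # Toy objective: parameter 0 matters much more than parameter 2.
    return 10 * (x[:, 0] - 0.3) ** 2 + (x[:, 1] - 0.7) ** 2 + 0.01 * x[:, 2]

pop = rng.random((60, 3))
evals_x, evals_f = [], []
for _ in range(100):                       # plain real-coded GA
    f = objective(pop)
    evals_x.append(pop.copy()); evals_f.append(f.copy())   # log every evaluation
    parents = pop[np.argsort(f)[:30]]      # truncation selection
    children = (parents[rng.integers(0, 30, 60)]
                + parents[rng.integers(0, 30, 60)]) / 2    # arithmetic crossover
    pop = np.clip(children + rng.normal(0, 0.05, children.shape), 0, 1)  # mutation

X = np.vstack(evals_x); F = np.concatenate(evals_f)
# SA "for free": correlate each parameter with fitness over the logged data.
sens = [abs(np.corrcoef(X[:, j], F)[0, 1]) for j in range(X.shape[1])]
print("sensitivity estimates:", np.round(sens, 3))
```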
Procedia PDF Downloads 436