Search results for: multi-objective particle swarm optimization
3014 Vibration Control of Hermetic Compressors Using Flexible Multi-Body Dynamics Theory
Authors: Armin Amindari
Abstract:
Hermetic compressors are widely used in refrigeration, heat pump, and air conditioning applications. With increasingly stringent energy conservation and environmental protection requirements, inverter compressors that operate at different speeds have become increasingly attractive in the industry. Although speed change capability is more efficient, passing through resonant frequencies may lead to excessive vibrations. In this work, an integrated vibration control approach based on flexible multi-body dynamics theory is used to optimize the vibration amplitudes of the compressor at different operating speeds. To examine the compressor vibrations, all the forces and moments exerted on the cylinder block were identified and minimized using balancers attached to the upper and lower ends of the motor rotor and crankshaft. The vibration response of the system was simulated using MotionView™ software. In addition, mass-spring optimization was adopted to shift the resonant frequencies out of the range of operating speeds. The modal shapes of the system were studied using the OptiStruct™ solver. Using this approach, the vibrations were reduced by up to 56% in dynamic simulations. The results were in close agreement with various experimental test data. In addition, the vibration resonance problem observed at low speeds was solved by shifting the resonant frequencies through optimization studies.
Keywords: vibration, MBD, compressor, hermetic
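The mass-spring retuning step above hinges on the single-degree-of-freedom relation f = (1/2π)√(k/m): adding balancer mass or softening a mount shifts a resonance out of the operating band. A minimal sketch, with stiffness and mass values that are purely illustrative (nothing here is taken from the paper):

```python
import math

def natural_frequency_hz(k, m):
    """Natural frequency (Hz) of a single-DOF mass-spring system:
    f = (1 / 2*pi) * sqrt(k / m)."""
    return math.sqrt(k / m) / (2 * math.pi)

# Hypothetical values: 50 kN/m mount stiffness, 8 kg suspended mass.
f0 = natural_frequency_hz(50e3, 8.0)

# Adding tuning mass lowers the resonance, moving it below an
# (equally hypothetical) operating band.
f_shifted = natural_frequency_hz(50e3, 12.0)

print(round(f0, 1), round(f_shifted, 1))
```

The same relation explains why the optimization can move a troublesome low-speed resonance: any change that raises m or lowers k pushes f down, and vice versa.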
Procedia PDF Downloads 101
3013 Method and Apparatus for Optimized Job Scheduling in the High-Performance Computing Cloud Environment
Authors: Subodh Kumar, Amit Varde
Abstract:
Typical on-premises high-performance computing (HPC) environments consist of a fixed number and a fixed set of computing hardware. During the design of the HPC environment, the hardware components, including but not limited to CPU, memory, GPU, and networking, are carefully chosen from select vendors for optimal performance. The high capital cost of building the environment is a prime factor influencing its design. A class of software called “job schedulers” is critical to maximizing these resources and running multiple workloads to extract the maximum value from the high capital cost. In principle, schedulers work by queuing jobs in a workload, preventing workloads and users from monopolizing the finite hardware resources. A cloud-based HPC environment does not have the limitations of fixed (type of and quantity of) hardware resources. In theory, users and workloads could spin up any number and type of hardware resources. This paper discusses the limitations of using traditional scheduling algorithms for cloud-based HPC workloads. It proposes a new set of features, called “HPC optimizers,” for maximizing the benefits of the elasticity and scalability of the cloud with the goal of cost-performance optimization of the workload.
Keywords: high performance computing, HPC, cloud computing, optimization, schedulers
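The queuing behavior the abstract attributes to job schedulers can be sketched as a priority queue over a fixed pool of hardware slots; the slot limit is exactly the on-premises constraint that cloud elasticity removes. The job names, priorities, and slot count below are hypothetical:

```python
import heapq

def schedule(jobs, slots):
    """Greedy scheduler sketch: admit the highest-priority queued jobs first,
    up to a fixed number of hardware slots (the on-premises constraint the
    paper contrasts with cloud elasticity). Jobs are (priority, name) pairs;
    a lower number means a higher priority."""
    heap = list(jobs)
    heapq.heapify(heap)
    running, queued = [], []
    while heap:
        prio, name = heapq.heappop(heap)
        # Fill free slots first; everything else waits in the queue.
        (running if len(running) < slots else queued).append(name)
    return running, queued

running, queued = schedule([(2, "sim"), (1, "train"), (3, "etl")], slots=2)
print(running, queued)
```

In a cloud setting the `slots` bound effectively disappears, which is why the paper argues the interesting problem shifts from fair queuing to cost-performance optimization.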
Procedia PDF Downloads 94
3012 Microfluidic Fluid Shear Mechanotransduction Device Using Linear Optimization of Hydraulic Channels
Authors: Sanat K. Dash, Rama S. Verma, Sarit K. Das
Abstract:
A logarithmic microfluidic shear device was designed and fabricated for cellular mechanotransduction studies. The device contains four cell culture chambers in which flow was modulated to achieve a logarithmic increment. Resistance values were optimized to make the device compact. The network of resistances was developed according to a unique combination of series and parallel resistances, as found via optimization. Simulations performed in ANSYS 16.1 matched the analytical calculations and showed the shear stress distribution at different inlet flow rates. Fabrication of the device was carried out using conventional photolithography and PDMS soft lithography. The flow profile was validated using DI water as the working fluid and measuring the volume collected at all four outlets. Volumes collected at the outlets were in accordance with the simulation results at inlet flow rates ranging from 1 ml/min to 0.1 ml/min. The device can exert fluid shear stresses spanning four orders of magnitude on the culture chamber walls, covering shear stress values from interstitial flow to blood flow. This allows cell behavior to be studied over the full physiological range of shear stress in a single run, reducing the number of experiments.
Keywords: microfluidics, mechanotransduction, fluid shear stress, physiological shear
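The series/parallel resistance idea can be illustrated numerically: parallel branches share one pressure drop, so each chamber's flow, and hence its wall shear, scales with 1/R. The viscosity, geometry, and decade-spaced resistances below are illustrative stand-ins, not the device's optimized values:

```python
# Sketch of how flow divides among parallel hydraulic resistances and of the
# resulting wall shear stress in each rectangular culture chamber.
MU = 1.0e-3            # Pa*s, water viscosity
W, H = 1.0e-3, 100e-6  # chamber width and height, m (illustrative)

def branch_flows(q_total, resistances):
    """Parallel branches see one pressure drop, so Q_i is proportional to 1/R_i."""
    g = [1.0 / r for r in resistances]
    return [q_total * gi / sum(g) for gi in g]

def wall_shear(q):
    """Plane-Poiseuille approximation for a wide rectangular channel:
    tau = 6 * mu * Q / (w * h^2)."""
    return 6 * MU * q / (W * H ** 2)

# Resistances spaced in decades so the four chambers see ~logarithmic shear.
flows = branch_flows(1.667e-8, [1e12, 1e13, 1e14, 1e15])  # ~1 ml/min total
print([f"{wall_shear(q):.2e}" for q in flows])
```

With decade-spaced resistances each successive chamber receives one tenth of the previous chamber's flow, which is exactly the logarithmic increment the device targets.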
Procedia PDF Downloads 132
3011 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design
Authors: Mohammad Bagher Anvari, Arman Shojaei
Abstract:
Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for practical usage.
In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.
Keywords: incremental launching, bridge construction, finite element model, optimization
Procedia PDF Downloads 106
3010 A Simulation-Optimization Approach to Control Production, Subcontracting and Maintenance Decisions for a Deteriorating Production System
Authors: Héctor Rivera-Gómez, Eva Selene Hernández-Gress, Oscar Montaño-Arango, Jose Ramon Corona-Armenta
Abstract:
This research studies the joint production, maintenance and subcontracting control policy for an unreliable, deteriorating manufacturing system. Production activities are controlled by a derivation of the Hedging Point Policy; because the system is subject to deterioration, its capacity to satisfy product demand is progressively reduced. Multiple deterioration effects are considered, reflected mainly in the quality of the parts produced and the reliability of the machine. Subcontracting is available as support to satisfy product demand, and overhaul maintenance can be conducted to reduce the effects of deterioration. The main objective of the research is to determine simultaneously the production, maintenance and subcontracting rates that minimize the total incurred cost. A stochastic dynamic programming model is developed and solved through a simulation-based approach composed of statistical analysis and optimization with the response surface methodology. The obtained results highlight the strong interactions between production, deterioration and quality, which justify the development of an integrated model. A numerical example and a sensitivity analysis are presented to validate our results.
Keywords: subcontracting, optimal control, deterioration, simulation, production planning
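The Hedging Point Policy from which the control is derived has a simple threshold form: produce at capacity below a safety stock z, track demand exactly at z, and idle above it. A toy sketch with hypothetical rates:

```python
def production_rate(inventory, z, u_max, demand):
    """Hedging point policy sketch: build stock toward the hedging threshold z,
    hold it there, and idle above it. z, u_max and demand are illustrative."""
    if inventory < z:
        return u_max          # produce at full capacity
    if inventory == z:
        return demand         # track demand exactly at the hedging point
    return 0.0                # surplus stock: stop producing

# Simple discrete-time simulation of stock under the policy (made-up numbers).
stock, z, u_max, d = 0.0, 5.0, 3.0, 1.0
for _ in range(10):
    stock += production_rate(stock, z, u_max, d) - d
print(stock)
```

The stock overshoots once, then settles at the hedging point; in the paper's richer setting the same threshold logic is coupled with deterioration, subcontracting and overhaul decisions.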
Procedia PDF Downloads 581
3009 Analysis of the Torque Required for Mixing LDPE with Natural Fibre and DCP
Authors: A. E. Delgado, W. Aperador
Abstract:
This study evaluated the effect of natural fibre concentration, as well as the effect of adding a crosslinking agent, on the torque when those components are mixed with low density polyethylene (LDPE). The natural fibre has a particle size of between 0.8-1.2 mm and a moisture content of 0.17%. An internal mixer was used to measure the torque required to mix the polymer with the fibre. The effect of the fibre content and crosslinking agent on the torque was also determined. A change was observed in the morphology of the mixes using scanning electron microscopy (SEM).
Keywords: WPC, DCP, LDPE, natural fibre, torque
Procedia PDF Downloads 420
3008 A Mathematical Model for Reliability Redundancy Optimization Problem of K-Out-Of-N: G System
Authors: Gak-Gyu Kim, Won Il Jung
Abstract:
Owing to the remarkable development of science and technology, the function and role of engineering systems have recently diversified. Systems have become increasingly complex and precise, and thus system designers who intend to maximize reliability concentrate more effort at the design stage. This study deals with the reliability redundancy optimization problem (RROP) for a k-out-of-n: G system configuration with cold standby and warm standby components. This paper further presents an optimal mathematical model through which the following three elements may be combined in order to maximize the reliability of the system: (i) multiple component choices, (ii) redundant component quantity, and (iii) the choice of redundancy strategy. Therefore, we focus on the following three issues. First, we consider an RROP in which components have a warm standby state as well as a cold standby state. Second, rather than the approximation approaches of previous RROP studies, we construct a precise model for system reliability. Third, given the transition time when the state of components changes, we present not simply a workable solution but an advanced method. For the wide applicability of RROPs, moreover, we use an absorbing continuous time Markov chain and matrix analytic methods in the suggested mathematical model.
Keywords: RROP, matrix analytic methods, k-out-of-n: G system, MTTF, absorbing continuous time Markov chain
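The MTTF of an absorbing continuous-time Markov chain, the quantity underlying this kind of reliability model, follows from the generator restricted to the transient states: solving T t = -1 gives the expected time to absorption from each state. A two-state sketch with illustrative failure and repair rates (not the paper's model):

```python
import numpy as np

# MTTF of a small absorbing CTMC. States 0..1 are working states and the
# (implicit) state 2 is absorbing (system failed). Rates are illustrative:
# lam = failure rate, mu = repair rate.
lam, mu = 0.01, 0.1
T = np.array([[-lam,        lam ],
              [  mu, -(mu + lam)]])  # generator restricted to transient states

# Expected times to absorption t solve T @ t = -1 (vector of ones).
t = np.linalg.solve(T, -np.ones(2))
print(round(t[0], 1))  # MTTF starting from the fully working state
```

The matrix-analytic machinery in the paper generalizes this to many states; the linear solve above is the scalar heart of it.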
Procedia PDF Downloads 255
3007 Crow Search Algorithm-Based Task Offloading Strategies for Fog Computing Architectures
Authors: Aniket Ganvir, Ritarani Sahu, Suchismita Chinara
Abstract:
The rapid digitization of various aspects of life is leading to the creation of smart IoT ecosystems, where interconnected devices generate significant amounts of valuable data. However, these IoT devices face constraints such as limited computational resources and bandwidth. Cloud computing emerges as a solution by offering ample resources for offloading tasks efficiently, but it introduces latency issues, especially for time-sensitive applications. Fog computing (FC) addresses these latency concerns by bringing computation and storage closer to the network edge, minimizing data travel distance and enhancing efficiency. Offloading tasks to fog nodes or the cloud can conserve energy and extend IoT device lifespan. The offloading process is intricate, with tasks categorized as full or partial, and its optimization presents an NP-hard problem. Traditional greedy search methods struggle to address the complexity of task offloading efficiently. To overcome this, the efficient crow search algorithm (ECSA) has been proposed as a meta-heuristic optimization algorithm. ECSA aims to effectively optimize computation offloading, providing solutions to this challenging problem.
Keywords: IoT, fog computing, task offloading, efficient crow search algorithm
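A bare-bones crow search iteration (the classic algorithm, not the paper's ECSA variant) can be sketched as follows: each crow either follows the memorized food cache of a random other crow or, with awareness probability ap, flies to a random position. The cost function here is a toy sphere surface standing in for an offloading cost:

```python
import random

def crow_search(cost, dim, n=20, iters=200, fl=2.0, ap=0.1, lo=-5.0, hi=5.0):
    """Minimal crow search algorithm sketch. fl is the flight length and ap
    the awareness probability; all parameter values are illustrative."""
    rnd = random.Random(42)                      # fixed seed for repeatability
    X = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    M = [x[:] for x in X]                        # memories: best-known positions
    for _ in range(iters):
        for i in range(n):
            j = rnd.randrange(n)
            if rnd.random() >= ap:               # follow crow j's memory
                cand = [X[i][d] + fl * rnd.random() * (M[j][d] - X[i][d])
                        for d in range(dim)]
            else:                                # crow j was aware: fly randomly
                cand = [rnd.uniform(lo, hi) for _ in range(dim)]
            if all(lo <= c <= hi for c in cand): # keep feasible moves only
                X[i] = cand
                if cost(X[i]) < cost(M[i]):
                    M[i] = X[i][:]               # update memory on improvement
    return min(M, key=cost)

# Toy stand-in for an offloading cost surface: sphere function, optimum at 0.
sphere = lambda x: sum(v * v for v in x)
best = crow_search(sphere, dim=2)
print(sphere(best))
```

In an actual offloading formulation the decision vector would encode which (sub)tasks run locally, on a fog node, or in the cloud, and the cost would combine latency and energy terms.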
Procedia PDF Downloads 59
3006 The Choicest Design of InGaP/GaAs Heterojunction Solar Cell
Authors: Djaafar Fatiha, Ghalem Bachir, Hadri Bagdad
Abstract:
We studied mainly the influence of temperature, thickness, molar fraction and the doping of the various layers (emitter, base, BSF and window) on the performance of a photovoltaic solar cell. In a first stage, we optimized the performance of the InGaP/GaAs dual-junction solar cell while varying its operating temperature from 275 K to 375 K in increments of 25 K using the virtual wafer fabrication tool TCAD Silvaco. The optimization at 300 K led to the following result: Icc = 14.22 mA/cm2, Voc = 2.42 V, FF = 91.32%, η = 22.76%, which is close to values found in the literature. In a second stage, we varied the molar fraction of the different layers as well as their thickness and the doping of both emitters and bases, and we recorded the result of each variation until obtaining an optimal efficiency of the proposed solar cell at 300 K of Icc = 14.35 mA/cm2, Voc = 2.47 V, FF = 91.34%, and η = 23.33% for an In(1-x)Ga(x)P molar fraction of x = 0.5. Eliminating the BSF layer on the back face of our cell produced a remarkable improvement in the short-circuit current (Icc = 14.70 mA/cm2) together with a decrease in the open circuit voltage Voc and efficiency η, which reached 1.46 V and 11.97%, respectively. We could therefore determine the critical parameters of the cell and optimize its various technological parameters to obtain the best performance for a dual-junction solar cell. This work opens the way to new prospects in the photovoltaic field. Such structures will simplify the manufacturing processes of the cells and reduce costs while producing high photovoltaic conversion efficiencies.
Keywords: modeling, simulation, multijunction, optimization, Silvaco ATLAS
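The reported figures of merit combine through the standard relation η = Jsc · Voc · FF / Pin (with Pin = 100 mW/cm² for AM1.5G illumination). A minimal sketch using made-up inputs rather than the abstract's values:

```python
def efficiency(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    """Cell efficiency (%) from the three standard figures of merit:
    eta = Jsc * Voc * FF / Pin. Jsc in mA/cm2, Voc in V, FF as a fraction,
    Pin in mW/cm2 (default 100, the AM1.5G standard). Example numbers
    below are illustrative only, not the paper's."""
    return 100.0 * jsc_ma_cm2 * voc_v * ff / p_in_mw_cm2

print(round(efficiency(10.0, 2.0, 0.85), 2))
```

The formula also makes the reported BSF-removal trade-off intuitive: a higher Jsc cannot compensate for a large drop in Voc, so the overall efficiency falls.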
Procedia PDF Downloads 505
3005 Prediction of Physical Properties and Sound Absorption Performance of Automotive Interior Materials
Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Seong-Jin Cho, Tae-Hyeon Oh, Dae-Kyu Park
Abstract:
The sound absorption coefficient is an important design consideration because noise affects the perceived quality of a car. In practice, it is tuned through extensive field experiments because predicting it for multi-layer materials is unreliable. In this paper, we present the design of sound absorption for automotive interior materials with multiple layers using software that estimates the sound absorption coefficient for a reverberation chamber. Additionally, we introduce a method to estimate the physical properties required to predict the sound absorption coefficient of car interior materials with multiple layers. These properties are calculated by an inverse algorithm, which is very economical because information about physical properties is obtained without expensive equipment. A correlation test is carried out to ensure reliability and accuracy; the data used for the correlation are sound absorption coefficients measured in the reverberation chamber. In this way, automotive interior materials can be designed economically and efficiently, and design optimization for the sound absorption coefficient also becomes easy to implement.
Keywords: sound absorption coefficient, optimization design, inverse algorithm, automotive interior material, multiple layers nonwoven, scaled reverberation chamber, sound impedance tubes
Procedia PDF Downloads 310
3004 Coordinated Renewal Planning of Civil Infrastructure Systems
Authors: Hesham Osman
Abstract:
The challenges facing aging urban infrastructure systems require a more holistic and comprehensive approach to their management. The large number of urban infrastructure renewal activities occurring in cities throughout the world leads to social, economic and environmental impacts on the communities in their vicinity. As such, a coordinated effort is required to streamline these activities. This paper presents a framework to enable temporal (time-based) coordination of water, sewer and road intervention activities. Intervention activities include routine maintenance, renewal, and replacement of physical assets. The coordination framework considers 1) life-cycle costs, 2) infrastructure level-of-service, and 3) risk exposure to system operators. The model enables infrastructure asset managers to trade off delaying versus bringing forward intervention activities of one system so that they are executed in conjunction with another co-located system in the right-of-way. The framework relies on a combination of meta-heuristics and goal-based optimization. To demonstrate the applicability of the framework, a case study of a major infrastructure corridor in Cairo, Egypt is taken as an example. Results show that the framework can be scaled up to include other infrastructure systems located in the right-of-way, such as electricity, gas and telecom, provided that information can be shared among these entities.
Keywords: infrastructure, rehabilitation, construction, optimization
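The delay-versus-advance trade-off at the heart of the framework can be caricatured in a few lines: aligning co-located interventions in one year saves repeated mobilization (excavation, traffic control) at the price of deviating from each asset's ideal renewal year. All costs and years below are hypothetical:

```python
def best_plan(ideal_years, mobilization=100.0, penalty=15.0):
    """Compare executing each co-located intervention in its own ideal year
    (one mobilization each) against aligning all of them in one shared year
    (one mobilization plus a per-year deviation penalty). Toy model only."""
    separate = mobilization * len(ideal_years)
    # Try every candidate shared year between the earliest and latest ideal.
    best_shared = min(
        mobilization + sum(penalty * abs(y - t) for t in ideal_years)
        for y in range(min(ideal_years), max(ideal_years) + 1)
    )
    if best_shared < separate:
        return ("coordinate", best_shared)
    return ("separate", separate)

# Water, sewer and road works ideally due in consecutive years.
print(best_plan([2026, 2027, 2028]))
```

The paper's framework replaces this brute-force scan with meta-heuristics and goal-based optimization, and the scalar penalty with life-cycle cost, level-of-service and risk terms.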
Procedia PDF Downloads 298
3003 A Mixed Integer Programming Model for Optimizing the Layout of an Emergency Department
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
In recent years, demand for healthcare services has dramatically increased. As the demand for healthcare services increases, so does the necessity of constructing new healthcare buildings and redesigning and renovating existing ones. Increasing demands necessitate the use of optimization techniques to improve the overall service efficiency in healthcare settings. However, the high complexity of care processes remains the major challenge to accomplishing this goal. This study proposes a method based on process mining results to address the high complexity of care processes and to find the optimal layout of the various medical centers in an emergency department. The ProM framework is used to discover clinical pathway patterns and relationships between activities. A sequence clustering plug-in is used to remove infrequent events and to derive the process model in the form of a Markov chain. The process mining results serve as input for the next phase, which consists of the development of the optimization model. Comparison of the current ED design with the one obtained from the proposed method indicated that a carefully designed layout can significantly decrease the distances that patients must travel.
Keywords: mixed integer programming, facility layout problem, process mining, healthcare operations management
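The layout objective can be illustrated with a tiny quadratic-assignment-style toy: place ED units in rooms so that mined patient flow times walking distance is minimized. The units, flows and distances below are invented for illustration, and where the paper builds a mixed integer program, this sketch simply enumerates permutations:

```python
from itertools import permutations

# Toy version of the layout objective: assign ED units to rooms so that
# patient flow (as would come from mined clinical pathways) times walking
# distance is minimized. All numbers are illustrative stand-ins.
units = ["triage", "imaging", "labs"]
flow = {("triage", "imaging"): 30, ("triage", "labs"): 10, ("imaging", "labs"): 5}
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]  # pairwise distances between rooms 0..2

def total_travel(assign):
    """Flow-weighted travel distance for a unit-to-room assignment."""
    pos = {u: i for i, u in enumerate(assign)}
    return sum(f * dist[pos[a]][pos[b]] for (a, b), f in flow.items())

best = min(permutations(units), key=total_travel)
print(best, total_travel(best))
```

Enumeration works only for toy sizes; for a real ED the same objective is encoded as an MIP, as in the paper, so that a solver can handle the combinatorial explosion.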
Procedia PDF Downloads 343
3002 Li2O Loss of Lithium Niobate Nanocrystals during High-Energy Ball-Milling
Authors: Laura Kocsor, Laszlo Peter, Laszlo Kovacs, Zsolt Kis
Abstract:
The aim of our research is to prepare rare-earth-doped lithium niobate (LiNbO3) nanocrystals having only a few dopant ions in the focal point of an exciting laser beam. These samples will be used to achieve individual addressing of the dopant ions by light beams in a confocal microscope setup. One method for the preparation of nanocrystalline materials is to reduce the particle size by mechanical grinding. High-energy ball-milling has been used in several works to produce nano lithium niobate. Previously, it was reported that dry high-energy ball-milling of lithium niobate in a shaker mill results in the partial reduction of the material, which leads to a balanced formation of bipolarons and polarons yielding a gray color, together with oxygen release and Li2O segregation on the open surfaces. In the present work, we focus on preparing LiNbO3 nanocrystals by high-energy ball-milling using a Fritsch Pulverisette 7 planetary mill. Every ball-milling process was carried out in a zirconia vial with zirconia balls of different sizes (from 3 mm to 0.1 mm), wet grinding with water, with grinding times of less than an hour. Gradually decreasing the ball size to 0.1 mm, an average particle size of about 10 nm could be obtained, as determined by dynamic light scattering and verified by scanning electron microscopy. High-energy ball-milling resulted in sample darkening, evidenced by optical absorption spectroscopy measurements, indicating that the material underwent partial reduction. The unwanted lithium oxide loss decreases the Li/Nb ratio in the crystal, strongly influencing the spectroscopic properties of lithium niobate. Zirconia contamination was found in the ground samples, as proved by energy-dispersive X-ray spectroscopy measurements; however, it cannot be explained based on the hardness properties of the materials involved in the ball-milling process. It can be understood by taking into account the presence of lithium hydroxide, formed from the segregated lithium oxide and water during the ball-milling process, acting through chemically induced abrasion. The quantity of the segregated Li2O was measured by coulometric titration. During the wet milling process in the planetary mill, it was found that the lithium oxide loss increases linearly in the early phase of the milling process, after which the Li2O loss saturates. This change goes along with the disappearance of the relatively large particles until a relatively narrow size distribution is achieved, in accord with the dynamic light scattering measurements. With the 3 mm ball size and 1100 rpm rotation rate, the mean particle size achieved is 100 nm, and the total Li2O loss is about 1.2 wt.% of the original LiNbO3. Further investigations have been done to minimize the Li2O segregation during the ball-milling process. Since the Li2O loss was observed to increase with the growing total surface of the particles, the influence of the ball-milling parameters on its quantity has also been studied.
Keywords: high-energy ball-milling, lithium niobate, mechanochemical reaction, nanocrystals
Procedia PDF Downloads 136
3001 NaOH/Pumice and LiOH/Pumice as Heterogeneous Solid Base Catalysts for Biodiesel Production from Soybean Oil: An Optimization Study
Authors: Joy Marie Mora, Mark Daniel De Luna, Tsair-Wang Chung
Abstract:
The transesterification reaction of soybean oil with methanol was carried out to produce fatty acid methyl esters (FAME) using calcined alkali metals (Na and Li) supported on pumice silica as the solid base catalyst. The pumice silica catalyst was activated by loading alkali metal ions onto its surface via an ion-exchange method. Response surface methodology (RSM) in combination with a Box-Behnken design (BBD) was used to optimize the operating parameters in biodiesel production, namely: reaction temperature, methanol to oil molar ratio, reaction time, and catalyst concentration. Using the optimized sets of parameters, FAME yields using the sodium and lithium silicate catalysts were 98.80% and 98.77%, respectively. A pseudo-first order kinetic equation was applied to evaluate the kinetic parameters of the reaction. The prepared catalysts were characterized by several techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), Brunauer-Emmett-Teller (BET) sorptometry, and scanning electron microscopy (SEM). In addition, the reusability of the catalysts was successfully tested in two subsequent cycles.
Keywords: alkali metal, biodiesel, Box-Behnken design, heterogeneous catalyst, kinetics, optimization, pumice, transesterification
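The response-surface step can be sketched in one dimension: fit a second-order model to measured yields and solve for the stationary point of the fitted surface. The temperature/yield pairs below are fabricated for illustration, not the study's data:

```python
import numpy as np

# Sketch of the RSM step: fit a second-order model to measured FAME yields
# and solve for the stationary (optimal) factor setting. A real BBD fit
# covers four factors; one factor is shown here for clarity.
temp = np.array([45.0, 55.0, 65.0, 75.0, 85.0])    # reaction temperature, C
yld = np.array([90.1, 95.4, 98.2, 96.8, 91.5])     # FAME yield, % (made up)

# Least-squares fit of yield = b0 + b1*T + b2*T^2.
b2, b1, b0 = np.polyfit(temp, yld, 2)
t_opt = -b1 / (2 * b2)  # vertex of the fitted parabola
print(round(t_opt, 1))
```

A negative quadratic coefficient confirms the surface is concave, so the vertex is a maximum; the multi-factor analogue solves the gradient system of the fitted quadratic instead.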
Procedia PDF Downloads 306
3000 Energy Efficient Retrofitting and Optimization of Dual Mixed Refrigerant Natural Gas Liquefaction Process
Authors: Muhammad Abdul Qyyum, Kinza Qadeer, Moonyong Lee
Abstract:
Globally, liquefied natural gas (LNG) has drawn interest as a green energy source in comparison with other fossil fuels, mainly because of its ease of transport and low carbon dioxide emissions. It is expected that demand for LNG will grow steadily over the next few decades. In addition, because the demand for clean energy is increasing, LNG production facilities are expanding into new natural gas reserves across the globe. However, LNG production is an energy- and cost-intensive process because of the huge power requirements for compression and refrigeration. Therefore, one of the major challenges in the LNG industry is to improve the energy efficiency of existing LNG processes through economic and ecological strategies. Advances in expansion devices such as the two-phase cryogenic expander (TPE) and the cryogenic hydraulic turbine (HT) can be exploited for energy and cost benefits in natural gas liquefaction. Retrofitting the conventional Joule-Thomson (JT) valve with a TPE or HT has the potential to improve the energy efficiency of LNG processes. This research investigated the feasibility of retrofitting a dual mixed refrigerant (DMR) process by replacing the isenthalpic expansion with isentropic expansion for energy-efficient LNG production. To take full advantage of the proposed retrofitting, the proposed DMR schemes were optimized using a Coggins optimization approach, which was implemented in a Microsoft Visual Studio (MVS) environment and linked to the rigorous HYSYS® model. The results showed that the energy requirement of the proposed isentropic expansion based DMR process could be reduced by up to 26.5% in comparison with the conventional isenthalpic based DMR process using JT valves. Utilizing the recovered energy to boost the natural gas feed pressure could further improve the energy efficiency of the LNG process, up to 34% compared to the base case.
This work will help process engineers overcome the challenges relating to energy efficiency and safety concerns of LNG processes. Furthermore, the proposed retrofitting scheme can also be implemented to improve the energy efficiency of other isenthalpic expansion based energy-intensive cryogenic processes.
Keywords: cryogenic liquid turbine, Coggins optimization, dual mixed refrigerant, energy efficient LNG process, two-phase expander
Procedia PDF Downloads 148
2999 Bionaut™: A Minimally Invasive Microsurgical Platform to Treat Non-Communicating Hydrocephalus in Dandy-Walker Malformation
Authors: Suehyun Cho, Darrell Harrington, Florent Cros, Olin Palmer, John Caputo, Michael Kardosh, Eran Oren, William Loudon, Alex Kiselyov, Michael Shpigelmacher
Abstract:
The Dandy-Walker malformation (DWM) represents a clinical syndrome manifesting as a combination of posterior fossa cyst, hypoplasia of the cerebellar vermis, and obstructive hydrocephalus. Anatomic hallmarks include hypoplasia of the cerebellar vermis, enlargement of the posterior fossa, and cystic dilatation of the fourth ventricle. Current treatments of DWM, including shunting of the cerebrospinal fluid ventricular system and endoscopic third ventriculostomy (ETV), are frequently clinically insufficient, require additional surgical interventions, and carry risks of infections and neurological deficits. Bionaut Labs is developing an alternative way to treat Dandy-Walker malformation (DWM) associated with non-communicating hydrocephalus. We utilize our discreet microsurgical Bionaut™ particles, which are controlled externally and remotely, to perform safe, accurate, effective fenestration of the Dandy-Walker cyst, specifically in the posterior fossa of the brain, to directly normalize intracranial pressure. Bionaut™ allows for complex non-linear trajectories not feasible with any conventional surgical technique. The microsurgical particle safely reaches targets in the lower occipital section of the brain. Bionaut™ offers a minimally invasive surgical alternative to highly involved posterior craniotomy or shunts via direct fenestration of the fourth ventricular cyst at the locus defined by the individual anatomy. Our approach offers significant advantages over the current standards of care in patients exhibiting anatomical challenges as a manifestation of DWM and, therefore, is intended to replace conventional therapeutic strategies. Current progress, including platform optimization, Bionaut™ control, and real-time imaging, as well as in vivo safety studies of the Bionauts™ in large animals, specifically the spine and the brain of ovine models, will be discussed.
Keywords: Bionaut™, cerebral spinal fluid, CSF, cyst, Dandy-Walker, fenestration, hydrocephalus, micro-robot
Procedia PDF Downloads 223
2998 Solitons and Universes with Acceleration Driven by Bulk Particles
Authors: A. C. Amaro de Faria Jr, A. M. Canone
Abstract:
Considering a scenario where our universe is taken as a 3D domain wall embedded in a 5D Minkowski space-time, we explore the existence of a richer class of solitonic solutions and their consequences for accelerating universes driven by collisions of bulk particle excitations with the walls. In particular, it is shown that some of these solutions should play a fundamental role at the beginning of the expansion process. We present some of these solutions in cosmological scenarios that can be applied to models describing the inflationary period of the Universe.
Keywords: solitons, topological defects, branes, kinks, accelerating universes in brane scenarios
Procedia PDF Downloads 141
2997 Modelling Soil Inherent Wind Erodibility Using Artificial Intelligence and Hybrid Techniques
Authors: Abbas Ahmadi, Bijan Raie, Mohammad Reza Neyshabouri, Mohammad Ali Ghorbani, Farrokh Asadzadeh
Abstract:
In recent years, vast areas of Urmia Lake in Dasht-e-Tabriz has dried up leading to saline sediments exposure on the surface lake coastal areas being highly susceptible to wind erosion. This study was conducted to investigate wind erosion and its relevance to soil physicochemical properties and also modeling of wind erodibility (WE) using artificial intelligence techniques. For this purpose, 96 soil samples were collected from 0-5 cm depth in 414000 hectares using stratified random sampling method. To measure the WE, all samples (<8 mm) were exposed to 5 different wind velocities (9.5, 11, 12.5, 14.1 and 15 m s-1 at the height of 20 cm) in wind tunnel and its relationship with soil physicochemical properties was evaluated. According to the results, WE varied within the range of 76.69-9.98 (g m-2 min-1)/(m s-1) with a mean of 10.21 and coefficient of variation of 94.5% showing a relatively high variation in the studied area. WE was significantly (P<0.01) affected by soil physical properties, including mean weight diameter, erodible fraction (secondary particles smaller than 0.85 mm) and percentage of the secondary particle size classes 2-4.75, 1.7-2 and 0.1-0.25 mm. Results showed that the mean weight diameter, erodible fraction and percentage of size class 0.1-0.25 mm demonstrated stronger relationship with WE (coefficients of determination were 0.69, 0.67 and 0.68, respectively). This study also compared efficiency of multiple linear regression (MLR), gene expression programming (GEP), artificial neural network (MLP), artificial neural network based on genetic algorithm (MLP-GA) and artificial neural network based on whale optimization algorithm (MLP-WOA) in predicting of soil wind erodibility in Dasht-e-Tabriz. Among 32 measured soil variable, percentages of fine sand, size classes of 1.7-2.0 and 0.1-0.25 mm (secondary particles) and organic carbon were selected as the model inputs by step-wise regression. 
Findings showed MLP-WOA to be the most powerful artificial intelligence technique (R2=0.87, NSE=0.87, ME=0.11 and RMSE=2.9) for predicting soil wind erodibility in the study area, followed by MLP-GA, MLP, GEP and MLR; the differences between these methods were significant according to the MGN test. Based on these findings, MLP-WOA may be used as a promising method to predict soil wind erodibility in the study area.
Keywords: wind erosion, erodible fraction, gene expression programming, artificial neural network
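The goodness-of-fit measures quoted above (NSE, RMSE) can be reproduced for any model's predictions with a few lines; the observed/predicted values below are hypothetical illustrations, not the study's data:

```python
import numpy as np

def nse(obs, pred):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
    than predicting the mean of the observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, pred):
    """Root mean square error, in the units of WE: (g m-2 min-1)/(m s-1)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

# Hypothetical measured vs. model-predicted erodibility values
obs = [12.0, 8.5, 20.1, 15.3, 9.7, 31.2]
pred = [11.2, 9.0, 19.5, 16.0, 10.1, 29.8]
print(round(nse(obs, pred), 3), round(rmse(obs, pred), 3))
```

Comparing candidate models (MLR, GEP, MLP, MLP-GA, MLP-WOA) then amounts to evaluating these metrics on a common held-out set.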
Procedia PDF Downloads 73
2996 An Optimization of Machine Parameters for Modified Horizontal Boring Tool Using Taguchi Method
Authors: Thirasak Panyaphirawat, Pairoj Sapsmarnwong, Teeratas Pornyungyuen
Abstract:
This paper presents the findings of an experimental investigation of important machining parameters for a horizontal boring tool modified to mount on a horizontal lathe machine to bore an overlength workpiece. To verify the usability of the modified tool, a design of experiments based on the Taguchi method was performed. The parameters investigated were spindle speed, feed rate, depth of cut and length of workpiece. A Taguchi L9 orthogonal array was selected for the four factors at three levels each, in order to minimize the surface roughness (Ra and Rz) of S45C steel tubes. Signal-to-noise ratio analysis and analysis of variance (ANOVA) were performed to study the effect of these parameters and to optimize the machine settings for the best surface finish. The controlled factors with the most effect were depth of cut, spindle speed, length of workpiece, and feed rate, in that order. A confirmation test was performed to verify the optimal settings obtained from the Taguchi method, and the result was satisfactory.
Keywords: design of experiment, Taguchi design, optimization, analysis of variance, machining parameters, horizontal boring tool
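The smaller-is-better signal-to-noise ratio used in this kind of analysis, together with the standard L9(3^4) array, can be sketched as follows; the roughness replicates are invented for illustration only:

```python
import numpy as np

# Standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

def sn_smaller_is_better(y):
    """Taguchi S/N ratio (dB) for a response to be minimized, e.g. Ra/Rz."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical Ra replicates (um) for one of the nine runs
print(round(sn_smaller_is_better([1.2, 1.3, 1.1]), 2))
```

In the array, each column contains every level three times, which is what lets the main effect of each factor be separated with only nine runs instead of the full 3^4 = 81.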
Procedia PDF Downloads 440
2995 Optimization of Hemp Fiber Reinforced Concrete for Various Environmental Conditions
Authors: Zoe Chang, Max Williams, Gautham Das
Abstract:
The purpose of this study is to evaluate the incorporation of hemp fibers (HF) in concrete. Hemp fiber reinforced concrete (HFRC) is becoming more popular as an alternative to regular mix designs. This study evaluated the compressive strength of HFRC with respect to the mix procedure. Hemp fibers were obtained from the manufacturer and hand-processed to ensure uniformity in width and length. The fibers were added to the concrete in both wet and dry mixes to investigate and optimize the mix design process. Results indicated that the dry mix had a compressive strength of 1157 psi, compared to 985 psi for the wet mix. The dry-mix compressive strength was within range of the standard mix compressive strength of 1533 psi. Statistical analysis revealed that the mix design process needs further optimization and uniformity concerning the addition of HF. Regression analysis gave the standard mix design a coefficient of 0.9, compared to 0.375 for the dry mix, indicating variation in the mixing process. During the dry mix, the plain hemp fibers intertwined, creating lumps and inconsistency. During the wet mixing process, however, combining water and hemp fibers before incorporation allowed the fibers to disperse uniformly within the mix; accordingly, the regression analysis indicated a better coefficient of 0.55. This study concludes that HFRC is a viable alternative to regular mixes; however, more research surrounding its characteristics needs to be conducted.
Keywords: hemp fibers, hemp reinforced concrete, wet & dry, freeze thaw testing, compressive strength
Procedia PDF Downloads 201
2994 Pegylated Liposomes of Trans Resveratrol, an Anticancer Agent, for Enhancing Therapeutic Efficacy and Long Circulation
Authors: M. R. Vijayakumar, Sanjay Kumar Singh, Lakshmi, Hithesh Dewangan, Sanjay Singh
Abstract:
Trans resveratrol (RES) is a natural molecule with proven cancer-preventive and therapeutic activities and no known serious side effects. However, the therapeutic application of RES in disease management is limited by its rapid elimination from blood circulation and, consequently, its low biological half-life in mammals. Therefore, the main objective of this study is to enhance the circulation time as well as the therapeutic efficacy of RES using PEGylated liposomes. D-α-tocopheryl polyethylene glycol 1000 succinate (vitamin E TPGS) was applied as a steric surface-decorating agent to prepare RES liposomes by the thin film hydration method. The prepared nanoparticles were evaluated by various state-of-the-art techniques: dynamic light scattering (DLS) for particle size and zeta potential, TEM for shape, differential scanning calorimetry (DSC) for interaction analysis, and XRD for changes in drug crystallinity. Encapsulation efficiency and in vitro drug release were determined by the dialysis bag method, and cancer cell viability was assessed by MTT assay. Pharmacokinetic studies were performed in Sprague Dawley rats. The prepared liposomes were found to be spherical in shape. Particle size and zeta potential of the prepared formulations varied from 64.5±3.16 to 262.3±7.45 nm and from -2.1 to 1.76 mV, respectively. The DSC study revealed an absence of potential interaction. The XRD study revealed the presence of the amorphous form of the drug in the liposomes. Entrapment efficiency was found to be 87.45±2.14%, and the drug release was controlled for up to 24 hours.
The minimized MEC in the MTT assay and the tremendous enhancement in circulation time of the RES PEGylated liposomes compared to the pristine drug revealed that sterically stabilized PEGylated liposomes can be an alternative tool to commercialize this molecule for chemopreventive and therapeutic applications in cancer.
Keywords: trans resveratrol, cancer nanotechnology, long circulating liposomes, bioavailability enhancement, liposomes for cancer therapy, PEGylated liposomes
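The entrapment-efficiency figure reported above follows from a simple mass balance on the dialysis-bag assay; the numbers below are illustrative, chosen only to land near the reported ~87%:

```python
def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """Percent of drug entrapped: free (unencapsulated) drug crosses the
    dialysis membrane and is assayed; the remainder counts as entrapped."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# Hypothetical assay: 50 mg RES loaded, 6.3 mg recovered free in dialysate
print(round(encapsulation_efficiency(50.0, 6.3), 2))  # -> 87.4
```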
Procedia PDF Downloads 591
2993 Dynamic Route Optimization in Vehicle Adhoc Networks: A Heuristics Routing Protocol
Authors: Rafi Ullah, Shah Muhammad Emaduddin, Taha Jilani
Abstract:
Vehicle Adhoc Networks (VANET) belong to a special class of Mobile Adhoc Networks (MANET) with high mobility. The network is created by roadside vehicles equipped with communication devices such as GPS and Wi-Fi. Since the environment is highly dynamic, owing to differences in speed, the high mobility of vehicles, and the weak stability of network connections, designing an efficient routing protocol for such an unstable environment is a challenging task. Our proposed algorithm uses a heuristic to calculate the optimal path for routing packets efficiently, in combination with several other parameters such as geographical location, speed, priority, the distance among vehicles, communication range, and network congestion. We have incorporated probabilistic, heuristic and machine learning based approaches, consistent with the relay function of the memory buffer, to keep the packet moving towards the destination. Used in combination, these parameters provide a very strong and admissible heuristic. We have mathematically proved that the proposed technique is efficient for routing packets, especially in medical emergency situations. These networks can be used for medical emergency, security, entertainment and routing purposes.
Keywords: heuristics routing, intelligent routing, VANET, route optimization
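One way the parameters listed above can be fused into a single relay-selection heuristic is a weighted cost, sketched below; the weights, normalizations, and cut-off are invented for illustration and are not the protocol's actual values:

```python
import math

def next_hop_score(dist_to_dest_m, speed_delta_ms, congestion, priority,
                   comm_range_m=300.0, w=(0.5, 0.2, 0.2, 0.1)):
    """Toy relay-selection cost combining the parameters named in the
    abstract (lower is better). All weights are assumptions."""
    if dist_to_dest_m > comm_range_m * 10:
        return math.inf  # prune candidates far beyond a few-hop horizon
    return (w[0] * dist_to_dest_m / comm_range_m   # progress to destination
            + w[1] * abs(speed_delta_ms) / 30.0    # link-stability proxy
            + w[2] * congestion                    # local channel load, 0..1
            + w[3] * (1.0 - priority))             # packet priority, 0..1

# Two hypothetical neighbours: v2 makes more forward progress,
# which outweighs its higher congestion under these weights
neighbours = {
    "v1": next_hop_score(800, 5, 0.2, 0.9),
    "v2": next_hop_score(400, 15, 0.6, 0.5),
}
print(min(neighbours, key=neighbours.get))  # -> v2
```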
Procedia PDF Downloads 180
2992 Study of Launch Recovery Control Dynamics of Retro Propulsive Reusable Rockets
Authors: Pratyush Agnihotri
Abstract:
Space missions are very costly because transportation to space is highly expensive; complete reusability of launch vehicles is therefore needed to make missions economical by recovering the cost of the hardware. Launcher reusability is the most efficient approach to lowering the cost of access to space, but it remains a great technical hurdle for the aerospace industry. The major difficulty lies in the guidance and control procedures and algorithms, specifically those of the controlled landing phase, which must enable a precise landing with low fuel margins. Although state-of-the-art approaches to navigation and control exist, viz. hybrid navigation and robust control, guidance for the powered descent and landing of a launch vehicle's first stage needs to enable on-board optimization. First, a CAD model of the launch vehicle, i.e., the SpaceX Falcon 9 rocket, is presented for a better understanding of the architecture to be considered in the guidance and control solution for launcher recovery. The focus is on providing the landing-phase guidance scheme for the recovery and reusability of the first stage using retro propulsion. After reviewing various GNC solutions, online convex and successive optimization are explored as guidance schemes to achieve landing accuracy.
Keywords: guidance, navigation, control, retro propulsion, reusable rockets
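The low-fuel-margin landing problem can be made concrete with a one-dimensional kinematic sketch of the final burn; the vehicle numbers are invented, and real guidance (the online convex optimization discussed above) must additionally handle mass depletion, drag, attitude, and dispersions:

```python
def burn_start_altitude(v_descent, thrust, mass, g=9.81):
    """Altitude (m) at which a constant-thrust retro burn must begin so the
    vehicle decelerates from v_descent (m/s) to zero exactly at touchdown.
    Point mass, constant mass and thrust, no drag -- all simplifications."""
    a_net = thrust / mass - g  # net deceleration during the burn
    if a_net <= 0:
        raise ValueError("thrust cannot overcome gravity")
    return v_descent ** 2 / (2.0 * a_net)

# Hypothetical first stage: 250 m/s descent, 800 kN thrust, 30 t mass
print(round(burn_start_altitude(250.0, 800e3, 30e3), 1))
```

Starting the burn above this altitude wastes propellant hovering; starting below it means impact, which is why the burn-ignition decision is so sensitive and is solved onboard by optimization.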
Procedia PDF Downloads 94
2991 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications
Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong
Abstract:
This research aims to study a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. Strong convergence is guaranteed in the framework of Banach spaces with some additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. In analogy with the projection technique for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the duality mapping and its inverse to overcome the unavailability of the duality representation that is exploited by Hilbert space theorists. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions. We also show that the solution obtained from our process is the closest to the origin. Finally, we give an illustrative numerical example to support our results.
Keywords: asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space
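The flavor of a two-step fixed-point iteration can be illustrated with the classical Ishikawa scheme on the real line; this is a Hilbert-space toy case for intuition, not the Banach-space generalized-projection process studied in the paper:

```python
def ishikawa(T, x0, alpha=0.5, beta=0.5, iters=200):
    """Two-step fixed-point iteration:
         y_n     = (1 - beta)  * x_n + beta  * T(x_n)
         x_{n+1} = (1 - alpha) * x_n + alpha * T(y_n)
    Converges to a fixed point of T for suitable nonexpansive T."""
    x = x0
    for _ in range(iters):
        y = (1 - beta) * x + beta * T(x)
        x = (1 - alpha) * x + alpha * T(y)
    return x

# T(x) = (x + 2) / 2 is nonexpansive with unique fixed point x* = 2
print(round(ishikawa(lambda x: (x + 2) / 2, 10.0), 6))  # -> 2.0
```

The two averaging parameters play the same structural role as the coefficients of the paper's process; the technical work there is making such a scheme converge strongly for nonself-mappings without a Hilbert inner product.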
Procedia PDF Downloads 262
2990 Understanding the Fundamental Driver of Semiconductor Radiation Tolerance with Experiment and Theory
Authors: Julie V. Logan, Preston T. Webster, Kevin B. Woller, Christian P. Morath, Michael P. Short
Abstract:
Semiconductors, as the base of critical electronic systems, are exposed to damaging radiation while operating in space, nuclear reactors, and particle accelerator environments. What innate property allows some semiconductors to sustain little damage while others accumulate defects rapidly with dose is, at present, poorly understood. This limits the extent to which radiation tolerance can be implemented as a design criterion. To address this problem of determining the driver of semiconductor radiation tolerance, the first step is to generate a dataset of the relative radiation tolerance of a large range of semiconductors (exposed to the same radiation damage and characterized in the same way). To accomplish this, Rutherford backscatter channeling experiments are used to compare the displaced lattice atom buildup in InAs, InP, GaP, GaN, ZnO, MgO, and Si as a function of step-wise alpha particle dose. With this experimental information on radiation-induced incorporation of interstitial defects in hand, hybrid density functional theory electron densities (and their derived quantities) are calculated, and their gradient and Laplacian are evaluated to obtain key fundamental information about the interactions in each material. It is shown that simple, undifferentiated values (which are typically used to describe bond strength) are insufficient to predict radiation tolerance. Instead, the curvature of the electron density at bond critical points provides a measure of radiation tolerance consistent with the experimental results obtained. This curvature and associated forces surrounding bond critical points disfavors localization of displaced lattice atoms at these points, favoring their diffusion toward perfect lattice positions. 
With this criterion to predict radiation tolerance, simple density functional theory simulations can be conducted on potential new materials to gain insight into how they may operate in demanding high-radiation environments.
Keywords: density functional theory, GaN, GaP, InAs, InP, MgO, radiation tolerance, Rutherford backscatter channeling
Procedia PDF Downloads 175
2989 Applications for Additive Manufacturing Technology for Reducing the Weight of Body Parts of Gas Turbine Engines
Authors: Liubov Magerramova, Mikhail Petrov, Vladimir Isakov, Liana Shcherbinina, Suren Gukasyan, Daniil Povalyukhin, Olga Klimova-Korsmik, Darya Volosevich
Abstract:
Aircraft engines are developing along the path of increasing service life, strength, reliability, and safety. Building gas turbine engine body parts is a complex design and technological task. Particularly complex to design and manufacture are the casings of the input stages of helicopter gearboxes and the central drives of aircraft engines. Traditional technologies, such as precision casting or isothermal forging, impose significant limitations on parts production. For parts like housings, additive technologies offer spatial freedom and nearly limitless design flexibility. This article presents the results of computational and experimental studies that justify the applicability of additive technologies (AT) for reducing the weight of aircraft gearbox housing parts by up to 32%. This is possible due to geometrical optimization, in contrast to the classical, less flexible manufacturing methods and as-cast aircraft parts with overly conservative safety factors. Using the example of the housing of the input stage of an aircraft gearbox, a visualization of the layer-by-layer manufacturing of a part, taking thermal deformation into account, was demonstrated.
Keywords: additive technologies, gas turbine engines, topological optimization, synthesis process
Procedia PDF Downloads 119
2988 Inclusion Body Refolding at High Concentration for Large-Scale Applications
Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening
Abstract:
High-level expression of proteins in bacteria often produces insoluble protein aggregates called inclusion bodies (IBs). They contain mainly one type of protein and offer an easy and efficient route to purified protein. On the other hand, proteins in IBs are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein, and for large-scale applications a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor, and the FTF produced was obtained mainly as IBs. For the refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded by batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after dialysis. Particle size was measured by dynamic light scattering. We tested the solubilization properties of the fructosyltransferase IBs: particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, which was confirmed by absorption spectroscopy. All results confirm previous findings that refolding yields depend on the initial protein concentration. In batch dialysis, the yields dropped from 67% to 12%, and in continuous dialysis from 72% to 19%, as initial concentrations rose from 0.2 to 8.5 g/L. Commonly used additives such as sucrose and glycerol had no effect on refolding yields.
Buffer screening indicated a significant increase in the activity, and also in the temperature stability, of FTF with a citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in the batch and 90-74% in the continuous process. Further experiments showed that, in general, a higher ionic strength of the buffer had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved correspondingly high refolding yields while reducing the chamber volume, and thus the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5±0.3 × 10⁴ units/g. However, detailed experiments with native FTF revealed reaggregation of the molecules and a loss in specific activity depending on the enzyme concentration and particle size. For that reason, we are currently focusing on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase in ionic strength can improve the refolding yield of the target protein by 400%. Gentle removal of the chaotrope by continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for successful refolding of solubilized proteins.
Keywords: dialysis, inclusion body, refolding, solubilization
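The yield measurement described above (soluble protein before vs. after dialysis) and the gradual chaotrope removal in continuous dialysis can be sketched as follows; the exponential buffer-exchange model is an idealized well-mixed assumption, not the authors' measured kinetics:

```python
import math

def refolding_yield(c_before_g_l, c_after_g_l):
    """Percent of protein still soluble after dialysis; aggregated protein
    is lost from the clear supernatant."""
    return 100.0 * c_after_g_l / c_before_g_l

def urea_after_exchange(c0_molar, exchanged_volumes):
    """Urea left in a well-mixed chamber after continuously exchanging the
    given number of chamber volumes of refolding buffer (ideal dilution)."""
    return c0_molar * math.exp(-exchanged_volumes)

# E.g. an 8.5 g/L solubilizate keeping 1.6 g/L soluble (~19%), and
# 6 M urea after three exchanged chamber volumes
print(round(refolding_yield(8.5, 1.6), 1), round(urea_after_exchange(6.0, 3.0), 2))
```

The slow exponential decay of urea is what makes continuous dialysis "gentle" compared with batch dilution, where the chaotrope concentration drops in one step.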
Procedia PDF Downloads 295
2987 Development of Algorithms for Solving and Analyzing Special Problems Transports Type
Authors: Dmitri Terzi
Abstract:
The article presents the results of an algorithmic study of a special optimization problem of the transport type (the traveling salesman problem). 1) To solve the problem, a new natural algorithm has been developed based on the decomposition of the initial data into convex hulls. It has a number of advantages: it is applicable to fairly large dimensions, does not require a large amount of memory, and has fairly good performance. The algorithm is relevant because, in practice, programs for problems with no more than twenty traversal points are widely used, while for large-scale problems the availability of algorithms and programs of this kind is limited. The proposed algorithm is called natural because the optimal solution found by an exact algorithm is not always feasible, owing to many other factors that may require additional restrictions. 2) An inverse problem is also solved here: describing a class of traveling salesman problems that have a predetermined optimal solution. The constructed Algorithm 2 allows us to characterize the structure of traveling salesman problems, as well as to construct test problems for evaluating the effectiveness of algorithms, among other purposes. 3) The appendix presents a software implementation of Algorithm 1 (in MATLAB), which can be used to solve practical problems as well as in teaching operations research and optimization methods.
Keywords: traveling salesman problem, solution construction algorithm, convex hulls, optimality verification
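The convex-hull decomposition idea can be illustrated with a classical hull-based tour construction: compute the hull (an optimal Euclidean tour visits hull vertices in hull order) and then insert interior points where they lengthen the tour least. This is a plausible sketch of the general approach, not the article's Algorithm 1:

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone-chain convex hull, vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_length(t):
    return sum(dist(t[i], t[(i + 1) % len(t)]) for i in range(len(t)))

def hull_insertion_tour(pts):
    """Start from the hull, then greedily insert each interior point at the
    position where it increases the tour length least (cheapest insertion)."""
    tour = convex_hull(pts)
    remaining = [p for p in pts if p not in tour]
    while remaining:
        best = None
        for p in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                extra = dist(a, p) + dist(p, b) - dist(a, b)
                if best is None or extra < best[0]:
                    best = (extra, p, i + 1)
        _, p, i = best
        tour.insert(i, p)
        remaining.remove(p)
    return tour

# Unit-square corners plus a centre point
pts = [(0, 0), (0, 2), (2, 2), (2, 0), (1, 1)]
print(round(tour_length(hull_insertion_tour(pts)), 2))  # -> 8.83
```

Like the article's algorithm, this construction needs little memory and scales to far more than twenty points, at the price of returning a good feasible tour rather than a provably optimal one.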
Procedia PDF Downloads 77
2986 On-Chip Sensor Ellipse Distribution Method and Equivalent Mapping Technique for Real-Time Hardware Trojan Detection and Location
Authors: Longfei Wang, Selçuk Köse
Abstract:
Hardware Trojans have become a great concern as integrated circuit (IC) technology advances and not all manufacturing steps of an IC are accomplished within one company. Real-time hardware Trojan detection has proven to be a feasible way to detect randomly activated Trojans that cannot be caught at the testing stage. On-chip sensors are a strong candidate for implementing real-time hardware Trojan detection; however, the optimization of on-chip sensors has not been thoroughly investigated, and the localization of Trojans has not been carefully explored. In this paper, an on-chip sensor ellipse distribution method and an equivalent mapping technique are proposed, based on the characteristics of the on-chip power delivery network, to address the optimization and distribution of on-chip sensors for real-time hardware Trojan detection and to estimate the location and current consumption of a hardware Trojan. Simulation results verify that hardware Trojan activation can be effectively detected and that the location of a hardware Trojan can be efficiently estimated with less than 5% error for a realistic power grid using the proposed methods. These techniques therefore lay a solid foundation for the isolation and even deactivation of hardware Trojans through accurate localization.
Keywords: hardware Trojan, on-chip sensor, power distribution network, power/ground noise
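The localization task can be framed as a small inverse problem: given the voltage-drop readings of a few sensors, find the position and current draw that best explain them. The sketch below uses an invented distance-decay drop model and a brute-force grid search purely for illustration; the paper's equivalent-mapping technique works with the real power-grid characteristics instead:

```python
import itertools

SENSORS = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]  # assumed layout

def drop(src, sensor, current):
    """Toy model: the IR drop a sensor sees falls off with distance to the
    Trojan's extra current draw (a real PDN needs its impedance matrix)."""
    d = ((src[0] - sensor[0]) ** 2 + (src[1] - sensor[1]) ** 2) ** 0.5
    return current / (1.0 + d)

def locate(measured, step=0.05):
    """Grid-search the (x, y) and current that best explain the readings."""
    best = None
    n = round(1 / step) + 1
    for i, j in itertools.product(range(n), repeat=2):
        x, y = i * step, j * step
        m = [1.0 / (1.0 + ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5)
             for sx, sy in SENSORS]
        # closed-form least-squares current for this candidate position
        cur = sum(mi * vi for mi, vi in zip(m, measured)) / sum(mi * mi for mi in m)
        err = sum((cur * mi - vi) ** 2 for mi, vi in zip(m, measured))
        if best is None or err < best[0]:
            best = (err, x, y, cur)
    return best[1:]

# Synthetic Trojan at (0.3, 0.7) drawing 2.0 units of current
true_xy, true_i = (0.3, 0.7), 2.0
readings = [drop(true_xy, s, true_i) for s in SENSORS]
x, y, cur = locate(readings)
print(round(x, 2), round(y, 2), round(cur, 2))  # -> 0.3 0.7 2.0
```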
Procedia PDF Downloads 393
2985 Optical Flow Technique for Supersonic Jet Measurements
Authors: Haoxiang Desmond Lim, Jie Wu, Tze How Daniel New, Shengxian Shi
Abstract:
This paper outlines the development of a novel experimental technique for quantifying supersonic jet flows, in an attempt to avoid the seeding-particle problems frequently associated with particle image velocimetry (PIV) techniques at high Mach numbers. The idea behind the technique is to use high-speed cameras to capture Schlieren images of the supersonic jet shear layers and then process them with an adapted optical flow algorithm based on the Horn-Schunck method to determine the associated flow fields. The proposed method is capable of offering full-field unsteady flow information with potentially higher accuracy and resolution than existing point measurements or PIV techniques. A preliminary study via numerical simulations of a circular de Laval jet nozzle successfully reveals the flow and shock structures typically associated with supersonic jet flows, which serve as useful data for subsequent validation of the optical-flow-based experimental results. For the experimental technique, a Z-type Schlieren setup is proposed, with the supersonic jet operated in cold mode at a stagnation pressure of 8.2 bar and an exit velocity of Mach 1.5. High-speed single-frame or double-frame cameras are used to capture successive Schlieren images. As the application of optical flow techniques to supersonic flows remains rare, the current focus revolves around methodology validation through synthetic images. The results of the validation test offer valuable insight into how the optical flow algorithm can be further improved in robustness and accuracy. Details of the methodology employed and the challenges faced will be elaborated in the final conference paper should the abstract be accepted. Despite these challenges, this novel supersonic flow measurement technique may offer a simpler way to identify and quantify the fine spatial structures within the shock shear layer.
Keywords: Schlieren, optical flow, supersonic jets, shock shear layer
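The synthetic-image validation described above can be mimicked with a minimal textbook Horn-Schunck implementation applied to a translating pattern; this is a didactic sketch, not the adapted algorithm used in the study:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=1.0, iters=500):
    """Minimal Horn-Schunck optical flow: Jacobi-style iterations on the
    Euler-Lagrange equations of the brightness-constancy + smoothness
    energy. Returns per-pixel flow (u, v) in pixels/frame."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Ix = np.gradient(im1, axis=1)   # spatial derivatives
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1                  # temporal derivative
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    def avg(f):  # 4-neighbour average with periodic boundaries
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                + np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(iters):
        u_bar, v_bar = avg(u), avg(v)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v

# Synthetic test: a sinusoidal pattern shifted right by one pixel per frame
x = np.arange(32)
frame1 = np.tile(np.sin(2 * np.pi * x / 32), (32, 1))
frame2 = np.roll(frame1, 1, axis=1)
u, v = horn_schunck(frame1, frame2)
print(round(float(u.mean()), 1))  # mean horizontal flow, ~1 pixel/frame
```

Recovering a near-uniform one-pixel displacement from such a synthetic pair is exactly the kind of sanity check that precedes applying the method to real Schlieren sequences, where shock gradients and turbulence make the problem much harder.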
Procedia PDF Downloads 313