Search results for: computational simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6291

501 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding

Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang

Abstract:

As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional and Transport for London (TfL) strategies. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts.
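The dynamic repair idea above can be sketched in miniature: re-evaluate segment importance as conditions change and repair next the flooded link whose reopening most improves network connectivity. The toy graph, the pair-connectivity metric, and the greedy loop below are illustrative stand-ins, not the paper's actual local importance metric or traffic simulation.

```python
def reachable_pairs(adj, closed):
    """Count connected node pairs, ignoring closed (flooded) road segments."""
    seen, total = set(), 0
    for s in adj:
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if frozenset((u, v)) not in closed and v not in comp:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        total += len(comp) * (len(comp) - 1) // 2
    return total

def dynamic_repair_order(adj, flooded):
    """Greedy dynamic strategy: repair the segment whose reopening restores
    the most origin-destination connectivity, then re-evaluate the rest."""
    closed, order = set(flooded), []
    while closed:
        best = max(closed, key=lambda e: reachable_pairs(adj, closed - {e}))
        closed.remove(best)
        order.append(tuple(sorted(best)))
    return order

# Toy ring network A-B-C-D with segments ('A','B') and ('C','D') flooded.
adj = {'A': ['B', 'D'], 'B': ['A', 'C'], 'C': ['B', 'D'], 'D': ['A', 'C']}
order = dynamic_repair_order(adj, {frozenset(('A', 'B')), frozenset(('C', 'D'))})
```

A real implementation would recompute the metric as the flood front evolves and weight pairs by time-varying traffic demand; the sketch only shows the re-evaluation loop.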

Keywords: Urban resilience, road networks, flood response, dynamic repair strategy, topological analysis

Procedia PDF Downloads 8
500 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems

Authors: Edgar Gasafi, Robert Pardemann, Linus Perander

Abstract:

For the production of lithium carbonate or hydroxide from lithium-bearing ores, a thermal activation (calcination/decrepitation) is required for the phase transition in the mineral, enabling acid or soda leaching, respectively, in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly further developed since. A newer technology, at least for the lithium industry, is fluidized-bed calcination. Decrepitation of lithium ore was investigated at Outotec’s Frankfurt Research Centre. Focusing on fluidized-bed technology, a study of major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed for optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify mass and energy flows for the rotary kiln and fluidized-bed systems. Results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality, and plant footprint. The major conclusion is that a substantial reduction of energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed systems. At the same time, and unlike the rotary kiln process, accurate temperature and residence time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor, which prevents overheating and sintering of the solids and results in uniform product quality.
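The energy argument can be illustrated with a back-of-the-envelope duty calculation: both systems must supply the same sensible and reaction heat, so the difference comes mainly from losses. All numbers below (heat capacity, calcination temperature, transition enthalpy, loss fractions) are illustrative assumptions, not values from the study.

```python
def calcination_duty_kw(feed_kg_h, cp_kj_kg_k, t_in_c, t_calc_c,
                        dh_react_kj_kg, offgas_loss_frac):
    """Gross thermal duty: sensible heat plus phase-transition enthalpy,
    grossed up for the fraction of heat lost with the off-gas."""
    sensible = feed_kg_h * cp_kj_kg_k * (t_calc_c - t_in_c) / 3600.0  # kW
    reaction = feed_kg_h * dh_react_kj_kg / 3600.0                    # kW
    return (sensible + reaction) / (1.0 - offgas_loss_frac)

# Same feed and chemistry; only the assumed off-gas loss fraction differs.
rotary = calcination_duty_kw(400.0, 1.0, 25.0, 1050.0, 350.0, 0.35)
fluid_bed = calcination_duty_kw(400.0, 1.0, 25.0, 1050.0, 350.0, 0.15)
```

With these placeholder loss fractions the fluidized-bed duty comes out noticeably lower, mirroring the qualitative conclusion above; the quantitative comparison in the paper comes from the Aspen Plus flowsheet, not this shortcut.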

Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene

Procedia PDF Downloads 213
499 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the abilities of livestock for breeding based on genomic estimated breeding values, statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways to define haplotypes must be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2,506 cattle. Haplotypes were defined in three different ways: based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20, or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with little difference in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction. When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
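As a concrete illustration of how haplotype "alleles" become predictors, the sketch below groups phased SNP strings into fixed-size blocks and enumerates the distinct alleles per block (the simplest of the three definitions above, by SNP count). The data and block size are made up for illustration.

```python
def haplotype_alleles(haplotypes, snps_per_block):
    """Split phased SNP strings into fixed-size blocks and collect the
    distinct haplotype alleles observed in each block; in a haplotype
    GBLUP these alleles (not single SNPs) become the predictor variables."""
    n = len(haplotypes[0])
    blocks = []
    for start in range(0, n, snps_per_block):
        seen = sorted({h[start:start + snps_per_block] for h in haplotypes})
        blocks.append(seen)
    return blocks

# Four phased haplotypes over six SNPs, defined in blocks of three SNPs.
haps = ['010110', '010011', '110110', '010110']
blocks = haplotype_alleles(haps, 3)
n_predictors = sum(len(b) for b in blocks)
```

The total allele count is what the LD-based clustering above minimizes: fewer observed alleles per block means fewer predictor variables and lower computational cost.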

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 127
498 Screening for Non-hallucinogenic Neuroplastogens as Drug Candidates for the Treatment of Anxiety, Depression, and Posttraumatic Stress Disorder

Authors: Jillian M. Hagel, Joseph E. Tucker, Peter J. Facchini

Abstract:

With the aim of establishing a holistic approach for the treatment of central nervous system (CNS) disorders, we are pursuing a drug development program rapidly progressing through discovery and characterization phases. The drug candidates identified in this program are referred to as neuroplastogens owing to their ability to mediate neuroplasticity, which can be beneficial to patients suffering from anxiety, depression, or posttraumatic stress disorder. These and other related neuropsychiatric conditions are associated with the onset of neuronal atrophy, which is defined as a reduction in the number and/or productivity of neurons. The stimulation of neuroplasticity results in an increase in the connectivity between neurons and promotes the restoration of healthy brain function. We have synthesized a substantial catalogue of proprietary indolethylamine derivatives based on the general structures of serotonin (5-hydroxytryptamine) and psychedelic molecules such as N,N-dimethyltryptamine (DMT) and psilocin (4-hydroxy-DMT) that function as neuroplastogens. A primary objective in our screening protocol is the identification of derivatives associated with a significant reduction in hallucination, which will allow administration of the drug at a dose that induces neuroplasticity and triggers other efficacious outcomes in the treatment of targeted CNS disorders but which does not cause a psychedelic response in the patient. Both neuroplasticity and hallucination are associated with engagement of the 5HT2A receptor, requiring drug candidates differentially coupled to these two outcomes at a molecular level. We use novel and proprietary artificial intelligence algorithms to predict the mode of binding to the 5HT2A receptor, which has been shown to correlate with the hallucinogenic response. 
Hallucination is tested using the mouse head-twitch response model, whereas mouse marble-burying and sucrose preference assays are used to evaluate anxiolytic and anti-depressive potential. Neuroplasticity is assessed using dendritic outgrowth assays and cell-based ELISA analysis. Pharmacokinetics and additional receptor-binding analyses also contribute to the selection of lead candidates. A summary of the program is presented.

Keywords: neuroplastogen, non-hallucinogenic, drug development, anxiety, depression, PTSD, indolethylamine derivatives, psychedelic-inspired, 5-HT2A receptor, computational chemistry, head-twitch response behavioural model, neurite outgrowth assay

Procedia PDF Downloads 109
497 Movable Airfoil Arm (MAA) and Ducting Effect to Increase the Efficiency of a Helical Turbine

Authors: Abdi Ismail, Zain Amarta, Riza Rifaldy Argaputra

Abstract:

The Helical Turbine has the highest efficiency in comparison with other hydrokinetic turbines. However, the efficiency of the Helical Turbine can be further improved so that as much of the kinetic energy of a water current as possible is converted into mechanical energy. This paper explains the effects of adding a Movable Airfoil Arm (MAA) and ducting to a Helical Turbine. The first study analyzed the efficiency of a Plate Arm Helical Turbine (PAHT) versus a Movable Airfoil Arm Helical Turbine (MAAHT) at various water current velocities. The first step is manufacturing a PAHT and an MAAHT. The PAHT and MAAHT have the following specifications (as fixed variables): 80 cm diameter, 88 cm height, 3 blades, NACA 0018 blade profile, 10 cm blade chord, and a 60° inclination angle. The MAAHT uses a NACA 0012 airfoil arm that can move downward by 20°; the PAHT uses a 5 mm plate arm. At current velocities of 0.8, 0.85 and 0.9 m/s, the PAHT generates a mechanical power of 92, 117 and 91 watts, respectively (consecutive efficiencies of 16%, 17% and 11%). At the same current velocities, the MAAHT generates 74, 60 and 43 watts, respectively (consecutive efficiencies of 13%, 9% and 5%). Therefore, the PAHT performs better than the MAAHT. CFD (Computational Fluid Dynamics) analysis shows that the drag force on the MAA is greater than that generated by the plate arm, and that the drag force on the MAA dominates over the lift force; the MAA can therefore be called a drag device. On the helical blade, by contrast, the lift force dominates over the drag force, so it can be called a lift device. Thus, a lift device cannot be combined with a drag device, because the drag device hinders the rotation of the lift device.
The second study compared the efficiency of a Ducted Helical Turbine (DHT) versus a Helical Turbine (HT) experimentally. The first step is manufacturing the DHT and HT. The Helical Turbine specifications (as fixed variables) are: 40 cm diameter, 88 cm height, 3 blades, NACA 0018 blade profile, 10 cm blade chord, and a 60° inclination angle. At current speeds of 0.7, 0.8, 0.9 and 1.1 m/s, the HT generates a mechanical power of 72, 85, 93 and 98 watts, respectively (consecutive efficiencies of 38%, 30%, 23% and 13%). At the same current speeds, the DHT generates a mechanical power of 82, 98, 110 and 134 watts, respectively (consecutive efficiencies of 43%, 34%, 27% and 18%). The usage of ducting causes the water current speed around the turbine to increase.
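The efficiencies quoted above follow the standard hydrokinetic power coefficient, Cp = P_mech / (0.5 ρ A v³). A minimal sketch, assuming the swept frontal area of a vertical-axis helical rotor is diameter × height and a fresh-water density of 1000 kg/m³ (the paper's exact reference-area convention is not stated):

```python
def power_coefficient(p_mech_w, diameter_m, height_m, v_m_s, rho=1000.0):
    """Cp = P_mech / (0.5 * rho * A * v^3), taking the swept frontal
    area A of a vertical-axis helical rotor as diameter * height."""
    p_water = 0.5 * rho * diameter_m * height_m * v_m_s ** 3
    return p_mech_w / p_water

# HT geometry from the abstract: 40 cm diameter, 88 cm height, 98 W at 1.1 m/s.
cp = power_coefficient(98.0, 0.4, 0.88, 1.1)
```

That this comes out well above the quoted 13% suggests the authors use a different reference area or include drivetrain losses; the sketch shows the definition of the power coefficient, not a reproduction of their numbers.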

Keywords: hydrokinetic turbine, helical turbine, movable airfoil arm, ducting

Procedia PDF Downloads 356
496 Effect of Discharge Pressure Conditions on Flow Characteristics in Axial Piston Pump

Authors: Jonghyuk Yoon, Jongil Yoon, Seong-Gyo Chung

Abstract:

In many industries that require a large amount of power, the axial piston pump has been widely used as the main power source of a hydraulic system. The axial piston pump is a type of positive displacement pump that has several pistons in a circular array within a cylinder block. As the cylinder block and pistons rotate, the exposed ends of the pistons are constrained to follow the surface of the swash plate, so the pistons are driven to reciprocate axially, producing hydraulic power. In the present study, a numerical simulation with a three-dimensional full model of the axial piston pump was carried out using a commercial CFD code (Ansys CFX 14.5). To take into account the compression and extension caused by the reciprocating pistons, moving boundary conditions were applied to that region as a function of the rotation angle. In addition, the pump, which uses hydraulic oil as the working fluid, is intentionally designed so that a small amount of oil leaks out to lubricate the moving parts. Since leakage can directly affect pump efficiency, evaluating the effect of oil leakage is very important. To predict the effect of oil leakage on pump efficiency, the leakage between the piston shoe and the swash plate was considered by modeling a cylindrical feature at the end of the cylinder. To validate the numerical method used in this study, the numerical results for the flow rate at the discharge port were compared with experimental data, and good agreement between them was shown. Using the validated numerical method, the effect of the discharge pressure was also investigated. The results of the present study provide useful information on small axial piston pumps used in many different manufacturing industries.
Acknowledgement: This research was financially supported by the “Next-generation construction machinery component specialization complex development program” through the Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT).
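For context, the ideal (leak-free) delivery of an axial piston pump follows from simple kinematics: the swash-plate angle sets the stroke, and leakage shows up as a volumetric efficiency below one. The geometry and speed below are hypothetical, not those of the pump studied in the paper.

```python
import math

def piston_pump_flow_lpm(bore_mm, n_pistons, pitch_dia_mm, swash_deg,
                         rpm, vol_eff=1.0):
    """Ideal axial piston pump delivery: stroke = pitch diameter * tan(swash),
    displacement per revolution = piston area * stroke * piston count,
    scaled by speed and a volumetric efficiency that lumps in leakage."""
    area_mm2 = math.pi / 4.0 * bore_mm ** 2
    stroke_mm = pitch_dia_mm * math.tan(math.radians(swash_deg))
    disp_cm3 = area_mm2 * stroke_mm * n_pistons / 1000.0  # cm^3 per revolution
    return disp_cm3 * rpm * vol_eff / 1000.0              # litres per minute

q_ideal = piston_pump_flow_lpm(17.0, 9, 70.0, 15.0, 1800.0)
q_leaky = piston_pump_flow_lpm(17.0, 9, 70.0, 15.0, 1800.0, vol_eff=0.95)
```

The CFD study above goes beyond this lumped volumetric efficiency by resolving the shoe/swash-plate leakage path geometrically.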

Keywords: axial piston pump, CFD, discharge pressure, hydraulic system, moving boundary condition, oil leaks

Procedia PDF Downloads 232
495 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study

Authors: Thomas Arink, Isam Janajreh

Abstract:

The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method in which the waste material is heated to 450ºC in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock showed that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), making its calorific value too low for gasification on its own. Since the gasification process occurs at 900ºC and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive flow CFD model, and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis), and calorific value measurements (bomb calorimeter) for the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.) for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
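The paper's zero-dimensional model minimizes total Gibbs energy over all product species, with Lagrange multipliers enforcing elemental balances. A full multi-species minimizer is beyond a short sketch, but the same mass-action idea can be shown for a single equilibrium, the water-gas shift, solved by bisection. The Kp correlation (Moe's) is a common textbook fit, and the feed composition is illustrative.

```python
import math

def wgs_extent(n_co, n_h2o, n_co2, n_h2, t_kelvin):
    """Equilibrium extent x of CO + H2O <-> CO2 + H2, found by bisection
    on the mass-action residual; Kp from Moe's correlation
    Kp = exp(4577.8/T - 4.33)."""
    kp = math.exp(4577.8 / t_kelvin - 4.33)

    def residual(x):
        return (n_co2 + x) * (n_h2 + x) - kp * (n_co - x) * (n_h2o - x)

    lo, hi = -min(n_co2, n_h2), min(n_co, n_h2o)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if residual(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

# Equimolar CO/H2O feed at 900 C (1173 K), illustrative of gasifier syngas.
x = wgs_extent(1.0, 1.0, 0.0, 0.0, 1173.0)
```

Extending this to the full Gibbs minimization means treating all species mole numbers as unknowns and minimizing G subject to C, H, O (and S, N) balances rather than fixing a single reaction coordinate.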

Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires

Procedia PDF Downloads 503
494 Design of Large Parallel Underground Openings in Himalayas: A Case Study of Desilting Chambers for Punatsangchhu-I, Bhutan

Authors: Kanupreiya, Rajani Sharma

Abstract:

Construction of a single underground structure is itself a challenging task, and it becomes more critical in tectonically active young mountains such as the Himalayas, which are highly anisotropic. The Himalayan geology mostly comprises incompetent and sheared rock mass in addition to folds/faults, rock burst, and water ingress. Underground tunnels form the most essential and important structures in run-of-river hydroelectric projects. Punatsangchhu I hydroelectric project (PHEP-I), Bhutan (1200 MW) is a run-of-river scheme with four parallel underground desilting chambers. The Punatsangchhu River carries a large quantity of silt load during the monsoon season. Desilting chambers were provided to remove silt particles of size greater than or equal to 0.2 mm with 90% efficiency, thereby minimizing the rate of damage to the turbines. These chambers are 330 m long, 18 m wide at the center, and 23.87 m high, with a 5.87 m hopper portion. The geology of the desilting chambers was known from an exploratory drift, which exposed a low-dipping foliation joint and six joint sets. The RMR and Q values in this reach varied from 40 to 60 and from 1 to 6, respectively. This paper describes the different rock engineering principles applied for safe excavation and rock support of the moderately jointed, blocky and thinly foliated biotite gneiss. For the design of the rock support system of the desilting chambers, empirical and numerical analyses were adopted. Finite element analysis was carried out for cavern design and finalization of the pillar width using Phase2, a powerful tool for simulating stage-wise excavation with simultaneous provision of the support system. As the geology of the region had seven joint sets in total, in addition to the FEM-based approach, safety factors for potentially unstable wedges were checked using UnWedge. The final support recommendations were based on continuous face mapping, numerical modelling, empirical calculations, and practical experience.
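The wedge-stability checks mentioned above (UnWedge) rest on limit equilibrium: a factor of safety comparing resisting to driving forces on the sliding surface. The single-plane sketch below, with made-up block weight, joint dip, cohesion, friction angle, and bolt capacity, shows the form of that calculation; real wedges formed by multiple joint sets require the full 3D analysis, and treating bolt force as purely normal to the plane is a simplification.

```python
import math

def planar_wedge_fos(weight_kn, dip_deg, cohesion_kpa, area_m2,
                     friction_deg, bolt_kn=0.0):
    """Limit-equilibrium factor of safety for a block sliding on one plane:
    (cohesion + frictional resistance incl. bolt normal force) / driving force."""
    w_n = weight_kn * math.cos(math.radians(dip_deg))   # normal component
    w_s = weight_kn * math.sin(math.radians(dip_deg))   # driving component
    resisting = (cohesion_kpa * area_m2
                 + (w_n + bolt_kn) * math.tan(math.radians(friction_deg)))
    return resisting / w_s

fos_unsupported = planar_wedge_fos(500.0, 55.0, 20.0, 8.0, 30.0)
fos_bolted = planar_wedge_fos(500.0, 55.0, 20.0, 8.0, 30.0, bolt_kn=200.0)
```

With these hypothetical numbers the unsupported wedge is unstable (FOS < 1) and bolting raises the FOS above unity, which is the design logic behind the support recommendations described above.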

Keywords: dam siltation, Himalayan geology, hydropower, rock support, numerical modelling

Procedia PDF Downloads 78
493 Research of Stalled Operational Modes of Axial-Flow Compressor for Diagnostics of Pre-Surge State

Authors: F. Mohammadsadeghi

Abstract:

Relevance of research: Axial compressors are used in both aircraft engines and ground-based gas turbine engines. The compressor is considered one of the main gas turbine engine units, defining the absolute and relative performance indicators of the engine as a whole. Compressor failure often leads to drastic consequences, so safe (stable) operation must be maintained when using an axial compressor. Currently, the unit power, productivity, circumferential velocity, and compression ratio of axial compressors in gas turbine engines for aircraft and ground-based applications tend to increase, whereas the metal consumption of their structures tends to fall. This increases the dynamic loads as well as the danger of damage to highly loaded compressor or engine structural elements due to transient processes. In the operating practice of aeronautical engineering and ground units with gas turbine drives, loss of operational stability is one of the relatively frequent causes of gas turbine engine failure and can lead to emergency situations. Surge is considered an absolute loss of stability and is one of the most dangerous and most frequently occurring types of instability. However detailed the research on this phenomenon has been, the development of measures for preventing surge before it occurs is still relevant. This is why the study of transient processes in axial compressors is necessary to provide efficient, stable, and secure operation. The paper addresses the problem of improving the automatic control system by integrating anti-surge algorithms for the axial compressor of an aircraft gas turbine engine. The paper considers the dynamic loss of gas-dynamic stability of a compressor stage, results of numerical simulation of airflow over the airfoil at design and stalling modes, and experimental research to form the criteria that identify the compressor state for pre-surge mode detection. The authors formulated basic approaches for developing surge prevention systems, i.e., algorithms that detect surge onset and systems that implement the proposed algorithms.
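One family of pre-surge detection criteria of the kind described above monitors the growth of pressure-fluctuation intensity as the operating point approaches the stability boundary. The sketch below flags windows whose fluctuation RMS exceeds a multiple of a baseline RMS; the trace, window length, and threshold are illustrative assumptions, not the authors' actual criteria.

```python
def presurge_alarm(pressure_trace, window, k_sigma=3.0):
    """Flag windows whose pressure-fluctuation RMS exceeds k_sigma times
    the RMS of the first (assumed stable) window of the trace."""
    def rms(seg):
        m = sum(seg) / len(seg)
        return (sum((p - m) ** 2 for p in seg) / len(seg)) ** 0.5

    rms0 = rms(pressure_trace[:window])
    return [rms(pressure_trace[i - window:i]) > k_sigma * rms0
            for i in range(window, len(pressure_trace))]

# Quiet trace that develops large pre-surge pressure oscillations halfway.
trace = ([100.0 + 0.1 * (-1) ** i for i in range(50)]
         + [100.0 + 5.0 * (-1) ** i for i in range(50)])
alarms = presurge_alarm(trace, window=20)
```

A production anti-surge controller would combine several such indicators (amplitude, spectral content, correlation between sensors) and act on bleed valves or guide vanes once an alarm fires.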

Keywords: axial compressor, rotating stall, surge, unstable operation of gas turbine engine

Procedia PDF Downloads 391
492 Speckle-Based Phase Contrast Micro-Computed Tomography with Neural Network Reconstruction

Authors: Y. Zheng, M. Busi, A. F. Pedersen, M. A. Beltran, C. Gundlach

Abstract:

X-ray phase contrast imaging has been shown to yield better contrast than conventional attenuation X-ray imaging, especially for soft tissues in the medical imaging energy range. This can potentially lead to better diagnosis for patients. However, phase contrast imaging has mainly been performed using highly brilliant synchrotron radiation, as it requires highly coherent X-rays. Many research teams have demonstrated that it is also feasible using a laboratory source, bringing it one step closer to clinical use. Nevertheless, the requirement of fine gratings and high-precision stepping motors when using a laboratory source prevents it from being widely used. Recently, a random phase object has been proposed as an analyzer. This method requires a much less demanding experimental setup. However, previous studies were done using a particular X-ray source (a liquid-metal-jet micro-focus source) or high-precision motors for stepping. We have been working on a much simpler setup with just a small modification of a commercial bench-top micro-CT (computed tomography) scanner, introducing a piece of sandpaper as the phase analyzer in front of the X-ray source. This, however, requires suitable algorithms for speckle tracking and 3D reconstruction. The precision and sensitivity of the speckle tracking algorithm determine the resolution of the system, while the 3D reconstruction algorithm affects the minimum number of projections required, thus limiting the temporal resolution. As phase contrast imaging methods usually require much longer exposure times than traditional absorption-based X-ray imaging technologies, a dynamic phase contrast micro-CT with high temporal resolution is particularly challenging. Different reconstruction methods, including neural-network-based techniques, will be evaluated in this project to increase the temporal resolution of the phase contrast micro-CT. A Monte Carlo ray tracing simulation (McXtrace) was used to generate a large dataset to train the neural network, addressing the issue that neural networks require large amounts of training data to produce high-quality reconstructions.
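The core of speckle tracking is estimating the local displacement of the reference speckle pattern between the flat-field and sample images, typically by maximizing a cross-correlation. A minimal 1D integer-pixel sketch with a made-up intensity pattern (real implementations work on 2D windows with subpixel refinement and normalization):

```python
def speckle_shift(ref, sample, max_shift):
    """Integer-pixel speckle displacement: the shift s maximizing the
    mean product of overlapping reference and sample intensities."""
    best, best_score = 0, float('-inf')
    n = len(ref)
    for s in range(-max_shift, max_shift + 1):
        score, count = 0.0, 0
        for i in range(n):
            if 0 <= i + s < n:
                score += ref[i] * sample[i + s]
                count += 1
        if score / count > best_score:
            best, best_score = s, score / count
    return best

# Sample pattern is the reference speckle shifted right by 3 pixels.
ref = [0, 1, 0, 2, 5, 1, 0, 3, 1, 4, 0, 2, 1, 0, 3, 1]
sample = [0, 0, 0] + ref[:-3]
shift = speckle_shift(ref, sample, max_shift=5)
```

The recovered displacement field is proportional to the gradient of the phase, which is then integrated and fed to the tomographic reconstruction.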

Keywords: micro-CT, neural networks, reconstruction, speckle-based X-ray phase contrast

Procedia PDF Downloads 240
491 Copper Phthalocyanine Nanostructures: A Potential Material for Field Emission Display

Authors: Uttam Kumar Ghorai, Madhupriya Samanta, Subhajit Saha, Swati Das, Nilesh Mazumder, Kalyan Kumar Chattopadhyay

Abstract:

Organic semiconductors have attracted considerable interest in the last few decades for their significant contributions in various fields such as solar cells, non-volatile memory devices, field effect transistors, and light emitting diodes. The most important advantages of organic materials are mechanical flexibility, light weight, and low-temperature deposition techniques. Recently, with the advancement of nanoscience and technology, one-dimensional organic and inorganic nanostructures such as nanowires, nanorods, and nanotubes have gained tremendous interest due to their very high aspect ratio and large surface area for electron transport. Among them, self-assembled organic nanostructures like copper and zinc phthalocyanines have shown good transport properties and thermal stability due to their π-conjugated bonds and π-π stacking, respectively. Field emission properties of inorganic and carbon-based nanostructures are widely reported in the literature, but there are few reports on the cold cathode emission characteristics of organic semiconductor nanostructures. In this work, the authors report the field emission characteristics of chemically and physically synthesized copper phthalocyanine (CuPc) nanostructures such as nanowires, nanotubes, and nanotips. The as-prepared samples were characterized by X-ray diffraction (XRD), ultraviolet-visible spectroscopy (UV-Vis), Fourier transform infrared spectroscopy (FTIR), field emission scanning electron microscopy (FESEM), and transmission electron microscopy (TEM). The field emission characteristics were measured in our home-designed field emission setup. The registered turn-on field and local field enhancement factor are found to be less than 5 V/μm and greater than 1000, respectively. The field emission behaviour is also stable for 200 minutes. The experimental results are further verified theoretically using a finite displacement method as implemented in the ANSYS Maxwell simulation package. The obtained results strongly indicate that CuPc nanostructures are potential candidates as electron emitters for field emission based display device applications.
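The field enhancement factor quoted above is conventionally extracted from the slope of a Fowler-Nordheim plot, ln(I/E²) versus 1/E. The sketch below recovers β from synthetic FN data by linear regression; the work function (5 eV) and all numbers are illustrative assumptions, not measured CuPc values.

```python
import math

def fn_beta(e_fields, currents, phi_ev):
    """Field-enhancement factor beta from the Fowler-Nordheim plot:
    ln(I/E^2) vs 1/E is linear with slope -B * phi^1.5 / beta,
    with B ~ 6830 for fields in V/um and work function phi in eV."""
    B = 6830.0
    xs = [1.0 / e for e in e_fields]
    ys = [math.log(i / e ** 2) for e, i in zip(e_fields, currents)]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -B * phi_ev ** 1.5 / slope

# Synthetic emitter with beta = 1200 and an assumed 5 eV work function.
beta_true, phi = 1200.0, 5.0
e_fields = [4.0, 5.0, 6.0, 7.0, 8.0]
currents = [e ** 2 * math.exp(-6830.0 * phi ** 1.5 / (beta_true * e))
            for e in e_fields]
beta = fn_beta(e_fields, currents, phi)
```

On real data the same fit is applied to the measured I-E curve, and β > 1000, as reported above, indicates strong geometric field concentration at the nanostructure tips.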

Keywords: organic semiconductor, phthalocyanine, nanowires, nanotubes, field emission

Procedia PDF Downloads 485
490 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding

Authors: Wenya Shu, Ilinca Stanciulescu

Abstract:

Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them promising fillers in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. It is believed that the interactions of CNT and polymer mainly result from van der Waals forces. Interface debonding is a fracture and delamination phenomenon, so cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but brings difficulties in mesh generation and also leads to high computational costs. Homogenized models that smear the fibers in the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained by mixing rules severely overestimates the stiffness of the composite, even compared with CZM results for an artificially very strong interface. A mechanical model that can take into account interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential and polynomial, are considered. These studies indicate that the shapes of the chosen CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived. A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites.
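The assumed bilinear traction-separation law has a simple closed form: linear hardening up to the cohesive strength at δ₀, linear softening to zero traction at δf, and full debonding beyond. A sketch with illustrative parameters (not fitted CNT-polymer values):

```python
def bilinear_traction(delta, delta0, delta_f, t_max):
    """Bilinear cohesive law: linear ramp to (delta0, t_max), linear
    softening to zero traction at delta_f, fully debonded beyond."""
    if delta <= 0.0:
        return 0.0
    if delta < delta0:
        return t_max * delta / delta0
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)
    return 0.0

# Traction at three points along the law, illustrative units.
t_peak = bilinear_traction(0.1, 0.1, 0.5, 50.0)  # at the cohesive strength
t_soft = bilinear_traction(0.3, 0.1, 0.5, 50.0)  # halfway through softening
t_gone = bilinear_traction(0.6, 0.1, 0.5, 50.0)  # fully debonded
```

The three branches of this law correspond to the three phases of the single-CNT debonding process whose analytical solutions are derived above; the area under the curve is the interface fracture energy.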

Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding

Procedia PDF Downloads 112
489 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers

Authors: Shreyas Srinivas Rangan, Jurgis Porins

Abstract:

Internet traffic has increased exponentially due to the high demand for data rates, and the constantly growing metro and access networks have focused attention on improving the maximum transmit distance of long-reach optical networks. One common method to improve the maximum transmit distance of long-reach optical networks at the component level is to use broadband optical amplifiers. The erbium-doped fiber amplifier (EDFA) provides high gain with a low noise figure, but due to its characteristics, its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative effective noise figure values can be achieved; obtaining such results, however, requires high-powered pumping sources. Operating Raman amplifiers with such high-powered optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration that combines EDFA and Raman amplifiers to exploit the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmit distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the bit error rate (BER). This hybrid amplifier configuration is implemented in a wavelength division multiplexing (WDM) system with a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), pumping source efficiency, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. Numerical simulations were implemented in RSoft OptSim simulation software based on the nonlinear Schrödinger equation using the split-step method, the Fourier transform, and the Monte Carlo method for estimating BER.
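The split-step method referenced above advances the nonlinear Schrödinger equation by alternating a dispersion step applied as a phase in the frequency domain with a Kerr nonlinearity step applied as a phase in the time domain. Below is a pure-Python toy (naive O(N²) DFT instead of an FFT, first-order splitting, one common sign convention); the pulse and parameters are illustrative, and because both sub-steps are unitary the pulse energy is conserved.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform (stand-in for an FFT)."""
    n = len(x)
    sign = 1j if inverse else -1j
    out = [sum(x[k] * cmath.exp(sign * 2 * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def split_step_nlse(a, beta2, gamma, dz, dt, n_steps):
    """First-order split-step: dispersion as a spectral phase, then the
    Kerr nonlinearity as a time-domain phase, repeated n_steps times."""
    n = len(a)
    w = [2 * cmath.pi * (k if k < n // 2 else k - n) / (n * dt)
         for k in range(n)]
    disp = [cmath.exp(0.5j * beta2 * wk ** 2 * dz) for wk in w]
    for _ in range(n_steps):
        spec = [s * d for s, d in zip(dft(a), disp)]
        a = dft(spec, inverse=True)
        a = [ak * cmath.exp(1j * gamma * abs(ak) ** 2 * dz) for ak in a]
    return a

# Gaussian pulse on a 64-point grid; both sub-steps are pure phases,
# so the total pulse energy should be conserved.
n, dt = 64, 0.25
a0 = [cmath.exp(-((i - n / 2) * dt) ** 2) for i in range(n)]
a1 = split_step_nlse(a0, beta2=-1.0, gamma=1.0, dz=0.01, dt=dt, n_steps=20)
energy0 = sum(abs(x) ** 2 for x in a0)
energy1 = sum(abs(x) ** 2 for x in a1)
```

Production tools like OptSim add loss, amplifier gain and noise, and symmetric (second-order) splitting with adaptive step sizes on top of this basic scheme.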

Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers

Procedia PDF Downloads 47
488 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System

Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu

Abstract:

Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never serve as a prognostic model to predict battery state-of-health and avert safety risks before they occur. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM would be appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here.
The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics enables capturing the current density distribution within an electrode under any type of electrical load. To maintain computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics.
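For contrast with the physics-based models discussed above, the first-order RC equivalent-circuit model that BMS implementations commonly use can be sketched in a few lines. The parameter values below are illustrative, not fitted to any cell; `v1` is the voltage across the single RC branch.

```python
import math

def rc_cell_voltage(i_load, dt, ocv, r0, r1, c1):
    """Terminal voltage of a first-order RC equivalent-circuit cell model:
    V = OCV - I*R0 - V1, where the RC-branch voltage obeys
    dV1/dt = -V1/(R1*C1) + I/C1 (discharge current I > 0)."""
    alpha = math.exp(-dt / (r1 * c1))   # exact discretization of the RC branch
    v1, out = 0.0, []
    for i in i_load:
        v1 = alpha * v1 + r1 * (1.0 - alpha) * i
        out.append(ocv - i * r0 - v1)
    return out
```

Under a constant discharge current the terminal voltage shows an instantaneous IR drop and then relaxes toward OCV - I*(R0+R1); such cheap state updates are exactly why the RC model suits embedded BMS code, and exactly why it cannot resolve in-electrode current distributions the way the multi-particle model does.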

Keywords: battery management system, physics-based li-ion cell model, reduced-order model, single-particle and multi-particle model

Procedia PDF Downloads 93
487 Current Drainage Attack Correction via Adjusting the Attacking Saw-Function Asymmetry

Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap

Abstract:

The current drainage attack suggested previously is further studied in the regular setting of a closed-loop controlled Brushless DC (BLDC) motor with a Kalman filter in the feedback loop. Modeling and simulation experiments are conducted in a Matlab environment, implementing the closed-loop control model of BLDC motor operation in position-sensorless mode under Kalman filter drive. The current increase in the motor windings is caused by the controller (a P-controller in our case) being affected by a false data injection that substitutes the angular velocity estimates with distorted values. The distortion is applied by multiplication with a distortion coefficient whose values are taken from a distortion function synchronized in its periodicity with the rotor’s position change. A saw function with a triangular tooth shape is studied here as the carrier of the bias injection with current drainage consequences. The specific focus is on how the asymmetry of the tooth in the saw function affects the flow of current drainage. The purpose is two-fold: (i) to produce and collect the signature of an asymmetric saw attack for a further pattern recognition process, and (ii) to determine conditions for improving the stealthiness of such an attack by regulating the asymmetry of the saw function used. It is found that modifying the symmetry of the saw tooth affects the periodicity of the current drainage modulation. Specifically, the modulation frequency of the drained current for a fully asymmetric tooth shape coincides with the modulation frequency of the saw function itself. Increasing the symmetry parameter of the triangular tooth shape leads to an increase in the modulation frequency of the drained current. Moreover, this frequency reaches the switching frequency of the motor windings for fully symmetric triangular shapes, thus becoming undetectable and improving the stealthiness of the attack.
Therefore, the collected signatures of the attack can serve for attack parameter identification via the pattern recognition route.
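A minimal sketch of the asymmetric triangular-tooth saw function and its use as a multiplicative distortion of the angular velocity estimate. The symmetry parameter `s` sweeps from a fully asymmetric sawtooth (s near 0) to a symmetric triangle (s = 0.5); the unit amplitude, the `depth` scaling, and the synchronization convention are illustrative assumptions, not the paper's exact parameterization.

```python
import math

def saw_tooth(phase, s):
    """Unit-amplitude periodic tooth: rises linearly over fraction s of the
    period and falls over the remaining 1 - s. s near 0 gives a fully
    asymmetric (instant-rise) sawtooth; s = 0.5 a symmetric triangle."""
    p = phase % 1.0
    if p < s:
        return p / s
    return (1.0 - p) / (1.0 - s)

def distorted_velocity(omega_est, rotor_angle, s, depth=0.1):
    """Angular-velocity estimate scaled by a distortion coefficient taken
    from the saw function, synchronized with the rotor position."""
    return omega_est * (1.0 + depth * saw_tooth(rotor_angle / math.tau, s))
```

Sweeping `s` in such a model is what produces the family of drained-current signatures whose modulation frequency shifts with tooth symmetry.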

Keywords: bias injection attack, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, current drainage, saw-function, asymmetry

Procedia PDF Downloads 63
486 Comparison of Extended Kalman Filter and Unscented Kalman Filter for Autonomous Orbit Determination of Lagrangian Navigation Constellation

Authors: Youtao Gao, Bingyu Jin, Tanran Zhao, Bo Xu

Abstract:

The history of satellite navigation dates back to the 1960s. From the U.S. Transit system and the Russian Tsikada system to the modern Global Positioning System (GPS) and the Globalnaya Navigatsionnaya Sputnikovaya Sistema (GLONASS), the performance of satellite navigation has been greatly improved. Nowadays, the navigation accuracy and coverage of these existing systems already fulfill the requirements of near-Earth users, but the systems remain beyond the reach of deep space targets. Due to renewed interest in space exploration, a novel high-precision satellite navigation system is becoming even more important. The increasing demand for such a deep space navigation system has contributed to the emergence of a variety of new constellation architectures, such as the Lunar Global Positioning System. Apart from a Walker constellation similar to the one adopted by GPS on Earth, a novel constellation architecture consisting of libration point satellites in the Earth-Moon system is also available for constructing a lunar navigation system, which can accordingly be called the libration point satellite navigation system. The concept of using Earth-Moon libration point satellites for lunar navigation was first proposed by Farquhar and then followed by many other researchers. Moreover, due to the special characteristics of libration point orbits, an autonomous orbit determination technique called ‘LiAISON navigation’ can be adopted by the libration point satellites. Using only scalar satellite-to-satellite tracking data, the orbits of both the user and the libration point satellites can be determined autonomously. In this way, extensive Earth-based tracking measurements can be eliminated, and an autonomous satellite navigation system can be developed for future space exploration missions.
The state estimation method is a non-negligible factor affecting orbit determination accuracy, alongside the orbit type, initial state accuracy, and measurement accuracy. We apply the extended Kalman filter (EKF) and the unscented Kalman filter (UKF) to determine the orbits of the Lagrangian navigation satellites, and the autonomous orbit determination errors are compared. The simulation results illustrate that the UKF can improve the accuracy and the z-axis convergence to some extent.
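The core structural difference between the EKF and the UKF is that the UKF propagates a deterministic set of sigma points instead of a linearized covariance. The construction can be sketched as follows (the Van der Merwe scaled form with conventional default constants, not necessarily the paper's tuning):

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and weights of the unscented
    transform for mean x and covariance P (Van der Merwe scaling)."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)   # matrix square root of (n+lam)P
    pts = np.vstack([x, x + S.T, x - S.T])  # center point plus +/- spreads
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + 1.0 - alpha**2 + beta
    return pts, wm, wc
```

By construction, the weighted mean and covariance of the points reproduce x and P exactly; after pushing the points through the nonlinear libration-point dynamics, the same weights yield the predicted mean and covariance without any Jacobian, which is the source of the UKF's accuracy advantage reported above.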

Keywords: extended Kalman filter, autonomous orbit determination, unscented Kalman filter, navigation constellation

Procedia PDF Downloads 266
485 Remote Sensing Reversion of Water Depths and Water Management for Waterbird Habitats: A Case Study on the Stopover Site of Siberian Cranes at Momoge, China

Authors: Chunyue Liu, Hongxing Jiang

Abstract:

Traditional water depth surveys of wetland habitats used by waterbirds require intensive labor, time, and money. Optical remote sensing imagery based on passive multispectral scanner data has been widely employed to estimate water depth. This paper presents an innovative method for developing a water depth model based on the characteristics of the visible and thermal infrared spectra of a Landsat ETM+ image, combined with 441 field water depth measurements at the Etoupao shallow wetland. The wetland is located in the Momoge National Nature Reserve of Northeast China, which hosts the largest stopover habitat of the globally critically endangered Siberian Crane along its eastern flyway. The cranes mainly feed on the tubers of emergent aquatic plants such as Scirpus planiculmis and S. nipponicus. Effective water control is a critical step for maintaining tuber production and food availability for this crane. The model, employing a multi-band approach, can effectively simulate water depth for this shallow wetland. The model parameters NDVI and GREEN indicate that vegetation growth and coverage affect the reflectance from the water column unevenly. Combined with the field-observed water level on the date of image acquisition, a digital elevation model (DEM) of the underwater terrain was generated. The wetland area and water volume at different water levels were then calculated from the DEM using the Area and Volume Statistics function under the 3D Analyst of ArcGIS 10.0. The findings provide good references for effectively monitoring changes in water level and water demand, developing practical plans for water level regulation and water management, and creating the best foraging habitats for the cranes. The methods here can be adopted for bottom topography simulation and water management in waterbird habitats, especially in shallow wetlands.
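The multi-band depth model can be sketched as an ordinary least-squares fit of field depth measurements against per-pixel band predictors. Treating GREEN and NDVI as two generic predictor columns is an illustrative simplification; the study's actual band combination and fitted coefficients are its own.

```python
import numpy as np

def fit_depth_model(bands, depths):
    """Least-squares fit of water depth against per-pixel band predictors
    (e.g. GREEN reflectance and NDVI): depth = b0 + b1*x1 + b2*x2 + ..."""
    X = np.column_stack([np.ones(len(depths)), bands])  # add intercept column
    coef, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return coef

def predict_depth(coef, bands):
    """Apply the fitted model to new pixels to map depth across the wetland."""
    X = np.column_stack([np.ones(len(bands)), bands])
    return X @ coef
```

Subtracting the predicted depth surface from the field-observed water level at the acquisition date is what yields the underwater-terrain DEM described above.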

Keywords: remote sensing, water depth reversion, shallow wetland habitat management, siberian crane

Procedia PDF Downloads 242
484 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process

Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics (brick) industry are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions taken into account during the design and commissioning of an installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach studies, by simulation, the process of obtaining bricks from silicone-based materials, i.e., the modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions, represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of the selected algorithms and automatically assigns more processing capacity to the more efficient algorithms. Their efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity.
The proposed adaptive ensemble strategy outperforms the individual optimizers and the non-adaptive ensemble strategy in convergence speed, and the obtained results provide lower error values.
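The adaptive capacity assignment can be sketched as a proportional budget split over the member optimizers. The proportional rule and the diversity floor below are an illustrative reading of the strategy, not the paper's exact scheme.

```python
def allocate_capacity(fitness_gains, total_evals, floor=0.05):
    """Split a per-iteration evaluation budget among ensemble members in
    proportion to their recent fitness improvement, keeping a small floor
    share so less efficient algorithms still contribute diversity."""
    n = len(fitness_gains)
    gains = [max(g, 0.0) for g in fitness_gains]   # ignore regressions
    s = sum(gains)
    if s == 0.0:                                   # no progress anywhere: equal split
        shares = [1.0 / n] * n
    else:
        shares = [floor + (1.0 - n * floor) * g / s for g in gains]
    return [int(round(total_evals * w)) for w in shares]
```

Re-running this allocation at each ensemble iteration lets the budget track which algorithm is currently most efficient, matching the adaptivity described above (rounding can leave the total off by a unit in general).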

Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization

Procedia PDF Downloads 90
483 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis

Authors: Elcin Timur Cakmak, Ayse Oguzlar

Abstract:

This study presents the results of analyzing, by bigram and trigram methods, the tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to this war, protested, and expressed their deep concern, feeling that the safety of their families and their futures was at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, the most popular of which is social media. Many people prefer to convey their feelings on Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted from many countries of the world. These tweets, accumulated in data sources, are extracted through the Twitter API using various scripts and analyzed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English. The data set consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. Tweets containing the hashtags #ukraine, #russia, #war, #putin, and #zelensky together were captured as raw data, and the tweets remaining after the preprocessing (cleaning) stage were included in the analysis. In the data analysis part, sentiment analysis is performed to characterize the messages people send about the war on Twitter. Negative messages make up the majority of all tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups are found.
Regarding the results, the most frequently used word groups are “he, is”, “I, do”, “I, am” for bigrams, and “I, do, not”, “I, am, not”, “I, can, not” for trigrams. In the machine learning phase, the accuracy of classification is measured by the Classification and Regression Trees (CART) and Naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the NB algorithm yields the highest accuracy and F-measure values, while the CART algorithm yields the highest precision and recall values. For trigrams, the CART algorithm achieves the highest accuracy, precision, and F-measure values, while NB achieves the highest recall.
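The bigram/trigram counting step itself is straightforward; a sketch over already-preprocessed tweet texts (illustrative, not the study's pipeline):

```python
from collections import Counter

def top_ngrams(texts, n, k=3):
    """Count word n-grams across a list of (preprocessed) texts and
    return the k most frequent, as in a bigram/trigram keyword analysis."""
    counts = Counter()
    for text in texts:
        words = text.lower().split()
        counts.update(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    return counts.most_common(k)
```

The resulting frequency tables are exactly what feeds the CART and NB classifiers in the machine learning phase.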

Keywords: classification algorithms, machine learning, sentiment analysis, Twitter

Procedia PDF Downloads 57
482 Effect of Varying Zener-Hollomon Parameter (Temperature and Flow Stress) and Stress Relaxation on Creep Response of Hot Deformed AA3104 Can Body Stock

Authors: Oyindamola Kayode, Sarah George, Roberto Borrageiro, Mike Shirran

Abstract:

Our industrial partner has identified a phenomenon in which the AA3104 can body stock (CBS) transfer bar experiences sag during transportation of the slab from the breakdown mill to the finishing mill. Excessive sag results in bottom scuffing of the slab on the roller table, causing surface defects in the final product. It has been found that increasing the strain rate on the breakdown mill final pass produces a slab resistant to sag. The creep response of materials hot deformed at different Zener-Hollomon parameter values needs to be evaluated experimentally to gain a better understanding of the operating mechanism. This study investigates the identified phenomenon through laboratory simulation of the breakdown mill conditions at various strain rates, utilizing the Gleeble at the UCT Centre for Materials Engineering. The experiments determine the creep response for a range of conditions, as well as quantifying the associated material microstructure (sub-grain size, grain structure, etc.). The experimental matrices were determined based on conditions approximating industrial hot breakdown rolling and were carried out on the Gleeble 3800 at the Centre for Materials Engineering, University of Cape Town. Plane strain compression samples were used for this series of tests, at an applied load that allows for better contact and exaggerated creep displacement. A tantalum barrier layer was used for increased conductivity and decreased risk of anvil welding. One set of tests with no in-situ hold time was performed, in which the samples were quenched after deformation. The samples were retained for microstructural analysis: micrographs from light microscopy (LM), quantitative data and images from scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX), and sub-grain size and grain structure from electron backscattered diffraction (EBSD).
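The Zener-Hollomon (temperature-compensated strain rate) parameter that indexes these deformation conditions is Z = strain_rate * exp(Q / (R * T)). A minimal sketch, in which the activation energy value (156 kJ/mol, a figure commonly quoted for hot deformation of aluminium alloys) is an illustrative assumption rather than a measured AA3104 value:

```python
import math

def zener_hollomon(strain_rate, temp_k, q_act=156e3, r_gas=8.314):
    """Zener-Hollomon parameter Z = strain_rate * exp(Q / (R * T)),
    with strain_rate in 1/s, temp_k in K, q_act in J/mol."""
    return strain_rate * math.exp(q_act / (r_gas * temp_k))
```

Z rises both with faster deformation and with lower temperature, which is why a higher final-pass strain rate shifts the hot-deformed state toward a more sag-resistant condition at constant rolling temperature.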

Keywords: aluminium alloy, can-body stock, hot rolling, creep response, Zener-Hollomon parameter

Procedia PDF Downloads 70
481 Using Mathematical Models to Predict the Academic Performance of Students from Initial Courses in Engineering School

Authors: Martín Pratto Burgos

Abstract:

The Engineering School of the University of the Republic in Uruguay has offered an Introductory Mathematical Course since the second semester of 2019. This course has been designed to assist students in preparing for the math courses that are essential for Engineering Degrees, namely Math1, Math2, and Math3 in this research. The research proposes to build a model that can accurately predict students' activity and academic progress based on their performance in the three essential mathematical courses. Additionally, there is a need for a model that can forecast the effect of the Introductory Mathematical Course on approval of the three essential courses during the first academic year. The techniques used are Principal Component Analysis and predictive modelling using the Generalised Linear Model. The dataset includes information on 5135 engineering students and 12 different characteristics based on activity and course performance. Two models are created for data that follow a binomial distribution, using the R programming language. Model 1 retains variables whose p-values are less than 0.05, and Model 2 uses the stepAIC function to remove variables and reach the lowest AIC score. After Principal Component Analysis, the main components represented on the y-axis are the approval of the Introductory Mathematical Course, and on the x-axis the approval of the Math1 and Math2 courses as well as student activity three years after taking the Introductory Mathematical Course. Model 2, which considered student activity, performed the best, with an AUC of 0.81 and an accuracy of 84%. According to Model 2, students' engagement in school activities will continue for three years after approval of the Introductory Mathematical Course, because they have successfully completed the Math1 and Math2 courses. Passing the Math3 course does not have any effect on student activity. Concerning academic progress, the best fit is Model 1.
It has an AUC of 0.56 and an accuracy rate of 91%. The model indicates that if students pass the three first-year courses, they will progress according to the timeline set by the curriculum. Both models show that the Introductory Mathematical Course does not directly affect student activity and academic progress. The best model to explain the impact of the Introductory Mathematical Course on the three first-year courses was Model 1, with an AUC of 0.76 and 98% accuracy. The model shows that if students pass the Introductory Mathematical Course, it helps them to pass the Math1 and Math2 courses without affecting their performance in the Math3 course. Matching the three predictive models: if students pass the Math1 and Math2 courses, they will stay active for three years after taking the Introductory Mathematical Course and will continue following the recommended engineering curriculum. Additionally, the Introductory Mathematical Course helps students to pass Math1 and Math2 when they start Engineering School. The models obtained in the research do not consider the time students took to pass the three Math courses, but they can successfully assess courses in the university curriculum.
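The binomial GLM plus AUC evaluation used above can be sketched as follows. The document's models are fitted in R with glm/stepAIC; this Python stand-in uses plain gradient descent for the logistic fit and the rank (Mann-Whitney) formula for the AUC, and all data here are synthetic.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=2000):
    """Binomial GLM (logistic regression) fitted by plain gradient descent,
    a stand-in for R's glm(..., family = binomial)."""
    Xb = np.column_stack([np.ones(len(y)), X])   # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def auc(scores, y):
    """Area under the ROC curve via the rank (Mann-Whitney U) formula;
    assumes continuous scores with no ties."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)
```

An AUC of 1.0 means the model ranks every approving student above every non-approving one; values near 0.5, like Model 1's 0.56 for academic progress, indicate ranking power barely better than chance even when raw accuracy is high.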

Keywords: machine-learning, engineering, university, education, computational models

Procedia PDF Downloads 64
480 Modelling Tyre Rubber Materials for High Frequency FE Analysis

Authors: Bharath Anantharamaiah, Tomas Bouda, Elke Deckers, Stijn Jonckheere, Wim Desmet, Juan J. Garcia

Abstract:

Automotive tyres are gaining importance recently in terms of their noise emission, not only with respect to noise reduction but also to noise perception and detection. Tyres exhibit a mechanical noise generation mechanism up to 1 kHz. However, owing to the fact that a tyre is a composite of several materials, it has been difficult to model it using finite elements to predict noise at high frequencies. The currently available FE models are reliable up to about 500 Hz, a limit which, however, is not enough to capture the roughness or sharpness of tyre noise. These noise components are important for alerting pedestrians on the street to passing vehicles, especially slow and electric vehicles. In order to model tyre noise behaviour up to 1 kHz, the dynamic behaviour of the tyre must be modelled accurately up to that limit using finite elements. Materials play a vital role in modelling the dynamic tyre behaviour precisely. Since a tyre is a composition of several components, their precise definition in finite element simulations is necessary. However, during the tyre manufacturing process, these components are subjected to various pressures and temperatures, due to which their properties can change. Hence, material definitions are better derived from the tyre responses. In this work, the hyperelasticity of the tyre component rubbers is calibrated, using the design of experiments technique, from the tyre characteristic responses measured on a stiffness measurement machine. The viscoelasticity of the rubbers is defined by Prony series, which are determined from the loss factor relationship between the loss and storage moduli, assuming that the rubbers are excited within their linear viscoelastic ranges. These loss factor values are measured and theoretically expressed as a function of rubber shore hardness or hyperelasticity.
The results of the work show a good correlation between the test and simulation vibrational transfer functions up to 1 kHz. The model also allows flexibility, i.e., the frequency limit can be extended, if required, by calibrating the Prony parameters of the rubbers for the frequency range of interest. As future work, these tyre models will be used for noise generation at high frequencies and thus for tyre noise perception.
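The loss factor relationship between loss and storage moduli that determines the Prony terms can be sketched with the standard frequency-domain expressions for a generalized Maxwell (Prony) model; the parameter values in the test are illustrative, not calibrated rubber data.

```python
def prony_loss_factor(omega, g_inf, terms):
    """Loss factor tan(delta) = G''/G' of a Prony (generalized Maxwell)
    series: g_inf is the long-term modulus and terms is a list of
    (g_i, tau_i) pairs, using the standard storage/loss modulus formulas
    G'  = g_inf + sum g_i (w tau)^2 / (1 + (w tau)^2)
    G'' =         sum g_i (w tau)   / (1 + (w tau)^2)."""
    gp = g_inf
    gpp = 0.0
    for g_i, tau_i in terms:
        wt2 = (omega * tau_i) ** 2
        gp += g_i * wt2 / (1.0 + wt2)
        gpp += g_i * (omega * tau_i) / (1.0 + wt2)
    return gpp / gp
```

Inverting this relation against measured loss factors over the frequency band of interest is what "calibrating the Prony parameters" amounts to, and re-fitting the (g_i, tau_i) pairs at higher frequencies is how the model's 1 kHz limit can be extended.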

Keywords: tyre dynamics, rubber materials, prony series, hyperelasticity

Procedia PDF Downloads 178
479 Variables, Annotation, and Metadata Schemas for Early Modern Greek

Authors: Eleni Karantzola, Athanasios Karasimos, Vasiliki Makri, Ioanna Skouvara

Abstract:

Historical linguistics unveils the historical depth of languages and traces variation and change by analyzing linguistic variables over time. This field of linguistics usually deals with a closed data set that can only be expanded by the (re)discovery of previously unknown manuscripts or editions. In some cases, it is possible to use (almost) the entire closed corpus of a language for research, as is the case with the Thesaurus Linguae Graecae digital library for Ancient Greek, which contains most of the extant ancient Greek literature. However, for ‘dynamic’ periods when the production and circulation of texts in printed as well as manuscript form have not been fully mapped, representative samples and corpora of texts are needed. Such material and tools are utterly lacking for Early Modern Greek (16th-18th c.). In this study, the principles of the creation of EMoGReC, a pilot representative corpus of Early Modern Greek (16th-18th c.), are presented. Its design follows the fundamental principles of historical corpora. The selection of texts aims to create a representative and balanced corpus that gives insight into diachronic, diatopic, and diaphasic variation. The pilot sample includes data derived from fully machine-readable vernacular texts, which belong to 4-5 different textual genres and come from different geographical areas. We develop a hierarchical linguistic annotation scheme, further customized to fit the characteristics of our text corpus. Regarding variables and their variants, we use as a point of departure the bundle of twenty-four features (or categories of features) established for prose demotic texts of the 16th c. Tags are introduced bearing the variants [+old/archaic] or [+novel/vernacular]. In addition, further phenomena of change in progress (cf. The Cambridge Grammar of Medieval and Early Modern Greek) are selected for tagging.
The annotated texts are enriched with metalinguistic and sociolinguistic metadata to provide a testbed for the development of the first comprehensive set of tools for the Greek language of that period. Based on a relational management system with interconnection of data, annotations, and their metadata, the EMoGReC database aspires to join a state-of-the-art technological ecosystem for the research of observed language variation and change using advanced computational approaches.

Keywords: early modern Greek, variation and change, representative corpus, diachronic variables

Procedia PDF Downloads 47
478 An Analysis of Pick Travel Distances for Non-Traditional Unit Load Warehouses with Multiple P/D Points

Authors: Subir S. Rao

Abstract:

Existing models of warehouses with non-traditional aisle designs assume a central P/D point, which is mathematically simple but less practical. Many warehouses use multiple P/D points to avoid congestion for pickers, and different warehouses have different flow policies and infrastructure for using those P/D points. Standard warehouse models introduce one-sided multiple P/D points in a flying-V warehouse and minimize the pick distance for a one-way travel between an active P/D point and a pick location, assuming uniform flow rates. Simulations of the mathematical model generally use four fixed configurations of P/D points on two different sides of the warehouse. It can easily be proved that if the source and destination P/D points are both chosen uniformly at random, then minimizing the one-way travel is the same as minimizing the two-way travel. Another warehouse configuration analytically models the warehouse for multiple one-sided P/D points while keeping the angles of the cross-aisles and picking aisles as decision variables. Minimizing the one-way pick travel distance from the P/D point to the pick location, by finding the optimal position/angle of the cross-aisle and picking aisles for warehouses with different numbers of P/D points and variable flow rates, is also one of the objectives. Most models of warehouses with multiple P/D points are one-way travel models; we extend these analytical models to minimize the two-way pick travel distance, wherein the destination P/D point is chosen optimally for the return route, which is not equivalent to minimizing the one-way travel. In most warehouse models, the return P/D point is chosen randomly, but in our research, the return-route P/D point is chosen optimally.
Such warehouses are common in practice, where the flow rates at the P/D points are flexible and depend totally on the position of the picks. A good warehouse management system is efficient in consolidating orders over multiple P/D points in warehouses where the P/D is flexible in function. In the latter arrangement, pickers and shrink-wrap processes are not assigned to particular P/D points, which ultimately makes the P/D points more flexible and easy to use interchangeably for picking and deposits. The number of P/D points considered in this research uniformly increases from a single-central one to a maximum of each aisle symmetrically having a P/D point below it.
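The difference between a random and an optimal return P/D choice can be illustrated with a small sketch. Rectilinear distances and a uniformly random source P/D are simplifying assumptions here, not the paper's full aisle-geometry model.

```python
def rect(a, b):
    """Rectilinear (aisle-style) travel distance between two (x, y) points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def expected_two_way(pick, pd_points, dist=rect):
    """Expected two-way travel to one pick when the source P/D is drawn
    uniformly at random but the return P/D is chosen optimally (nearest
    to the pick) -- the flexible-P/D policy studied here."""
    to_pick = [dist(p, pick) for p in pd_points]
    return sum(to_pick) / len(to_pick) + min(to_pick)

def expected_random_return(pick, pd_points, dist=rect):
    """Baseline: both source and return P/D chosen uniformly at random."""
    to_pick = [dist(p, pick) for p in pd_points]
    return 2.0 * sum(to_pick) / len(to_pick)
```

The optimal-return policy is never worse than the random-return baseline, and the gap grows with the spread of the P/D points, which is why the two-way objective is not equivalent to twice the one-way one.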

Keywords: non-traditional warehouse, V cross-aisle, multiple P/D point, pick travel distance

Procedia PDF Downloads 17
477 Stress-Strain Relation for Hybrid Fiber Reinforced Concrete at Elevated Temperature

Authors: Josef Novák, Alena Kohoutková

Abstract:

The performance of concrete structures in fire depends on several factors, which include, among others, the change in material properties due to the fire. Today, fiber reinforced concrete (FRC) belongs to the materials that have been widely used for various structures and elements. While knowledge of and experience with FRC behavior at ambient temperature are extensive, the effect of elevated temperature on its behavior has to be investigated in depth. This paper deals with an experimental investigation and stress-strain relations for hybrid fiber reinforced concrete (HFRC), which contains siliceous aggregates, polypropylene fibers, and steel fibers. The main objective of the experimental investigation is to enhance the database of mechanical properties of fiber-reinforced concrete composites subjected to elevated temperature, as well as to validate existing stress-strain relations for HFRC. Within the investigation, a unique heat transport test, a compressive test, and a splitting tensile test were performed on 150 mm cubes heated up to 200, 400, and 600 °C, with the aim of determining the time period for uniform heat distribution in the test specimens and the mechanical properties of the investigated concrete composite, respectively. The findings from the presented experimental tests, together with experimental data collected from scientific papers so far, served to validate the computational accuracy of the investigated stress-strain relations for HFRC, which have been developed during the last few years. Owing to the presence of steel and polypropylene fibers, HFRC becomes a unique material whose structural performance differs from that of conventional plain concrete when exposed to elevated temperature. Polypropylene fibers in HFRC lower the risk of concrete spalling, as the fibers burn out quickly with increasing temperature owing to their low ignition point, and as a consequence the pore pressure decreases.
On the contrary, the resulting increase in concrete porosity might affect the mechanical properties of the material. Validating this hypothesis requires enhancing the existing database of results, which is very limited and does not contain enough data. As a result of this limited database, only a few stress-strain relations have been developed so far to describe the structural performance of HFRC at elevated temperature. Moreover, many of them are inconsistent and need to be refined. Most of them also do not take into account the effects of both fiber type and fiber content. Such an approach might be inaccurate, especially when high amounts of polypropylene fibers are used. Therefore, the existing relations should be validated in detail against further experimental results.

Keywords: elevated temperature, fiber reinforced concrete, mechanical properties, stress-strain relation

Procedia PDF Downloads 321
476 On the Optimality Assessment of Nano-Particle Size Spectrometry and Its Association to the Entropy Concept

Authors: A. Shaygani, R. Saifi, M. S. Saidi, M. Sani

Abstract:

Particle size distribution, the most important characteristic of aerosols, is obtained through electrical characterization techniques. The dynamics of charged nano-particles under the influence of the electric field in an electrical mobility spectrometer (EMS) reveals the size distribution of these particles. The accuracy of this measurement is influenced by the flow conditions, geometry, electric field and particle charging process, and therefore by the transfer function (transfer matrix) of the instrument. In this work, a wire-cylinder corona charger was designed, and the combined field-diffusion charging process of injected poly-disperse aerosol particles was numerically simulated as a prerequisite for the study of a multi-channel EMS. The result, a cloud of particles with a non-uniform charge distribution, was introduced to the EMS. The flow pattern and electric field in the EMS were simulated using computational fluid dynamics (CFD) to obtain the particle trajectories in the device and thereby calculate the signal reported by each electrometer. Based on the output signals (resulting from the bombardment of particles, which transfer their charges as currents), we proposed a modification to the size of the detecting rings (which are connected to the electrometers) in order to evaluate particle size distributions more accurately. Based on the capability of the system to transfer information about the size distribution of the injected particles, we proposed a benchmark for assessing the optimality of the design. This method applies the concept of von Neumann entropy and borrows the definition of entropy from information theory (Shannon entropy) to measure optimality. Entropy, in Shannon's sense, is the "average amount of information contained in an event, sample or character extracted from a data stream".
Evaluating the responses (signals) obtained with various configurations of detecting rings, the modified configuration gave the best predictions of the size distributions of the injected particles; it was also the configuration with the maximum entropy. A reasonable consistency was also observed between the accuracy of the predictions and the entropy content of each configuration. In this method, the entropy is extracted from the transfer matrix of the instrument for each configuration. Finally, various clouds of particles were introduced to the simulations, and the predicted size distributions were compared to the exact size distributions.
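The entropy benchmark described above can be sketched as follows: normalize the eigenvalues of the transfer matrix to a probability distribution and take its Shannon entropy (the von Neumann construction). A near-diagonal transfer matrix, whose rings resolve size classes well, spreads its eigenvalues and scores high; a rank-deficient, blurred one scores low. The 2x2 matrices below are illustrative stand-ins, not results from the paper.

```python
# Sketch: scoring a detecting-ring configuration by the entropy of the
# normalized eigenvalues of its transfer matrix (von Neumann-style).
# Matrix values are illustrative; in practice each entry would come from the
# CFD/trajectory simulation.
import math

def shannon_entropy(ps):
    """Entropy in bits of a discrete distribution; near-zero terms skipped."""
    return -sum(p * math.log2(p) for p in ps if p > 1e-12)

def von_neumann_entropy_2x2(m):
    """Entropy of the normalized eigenvalues of a symmetric 2x2 matrix."""
    a, b, d = m[0][0], m[0][1], m[1][1]
    mean = (a + d) / 2.0
    r = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    eig = [mean + r, mean - r]
    trace = sum(eig)
    return shannon_entropy([e / trace for e in eig])

sharp   = [[0.9, 0.1], [0.1, 0.9]]   # near-diagonal: rings resolve sizes
blurred = [[0.5, 0.5], [0.5, 0.5]]   # rank-deficient: sizes indistinguishable
print(von_neumann_entropy_2x2(sharp), von_neumann_entropy_2x2(blurred))
```

The sharp configuration scores close to the 1-bit maximum while the blurred one scores zero, consistent with the abstract's observation that the maximum-entropy configuration gave the best size-distribution predictions.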

Keywords: aerosol nano-particle, CFD, electrical mobility spectrometer, von Neumann entropy

Procedia PDF Downloads 320
475 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows

Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid

Abstract:

Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, however, experimental work in hydraulics can be very demanding in both time and cost, and computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem identifies the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined and the model parameters are determined iteratively. The proposed technique can be interpreted as a fractional-stage scheme: in the first stage, the forward problem is solved to determine the measurable parameters from known data; in the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
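The second-stage update described above can be sketched in its simplest form: a scalar Ensemble Kalman Filter analysis step that corrects an ensemble of bed-height samples using one free-surface observation. A real implementation would be vectorial (one entry per bed node) and would use the shallow-water solver as the forward model; the linear forward model below is a hypothetical stand-in.

```python
# Sketch: one analysis step of a scalar Ensemble Kalman Filter updating a bed
# parameter from a free-surface observation. The forward model here is an
# illustrative linear stand-in for the shallow-water solver.
import random

def enkf_scalar_update(ensemble, observation, obs_var, forward, rng):
    """Return the updated ensemble of bed parameters.

    ensemble    : list of bed-height samples b_i (the prior)
    observation : measured free-surface value y
    obs_var     : observation-error variance R
    forward     : callable mapping bed height -> predicted free surface
    """
    predictions = [forward(b) for b in ensemble]
    n = len(ensemble)
    b_mean = sum(ensemble) / n
    h_mean = sum(predictions) / n
    # Sample cross-covariance cov(b, h) and prediction variance var(h)
    cov_bh = sum((b - b_mean) * (h - h_mean)
                 for b, h in zip(ensemble, predictions)) / (n - 1)
    var_h = sum((h - h_mean) ** 2 for h in predictions) / (n - 1)
    gain = cov_bh / (var_h + obs_var)          # Kalman gain
    # Each member is nudged toward a perturbed copy of the observation
    return [b + gain * (observation + rng.gauss(0.0, obs_var ** 0.5) - h)
            for b, h in zip(ensemble, predictions)]

rng = random.Random(0)
prior = [rng.gauss(0.0, 1.0) for _ in range(200)]   # uncertain bed height
forward = lambda b: 2.0 - b                          # stand-in forward model
posterior = enkf_scalar_update(prior, 1.2, 0.01, forward, rng)
print(sum(posterior) / len(posterior))               # should approach 0.8
```

With the observation y = 1.2 and the stand-in model h = 2 - b, the consistent bed height is b = 0.8, and the posterior ensemble mean moves close to it in a single update.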

Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil

Procedia PDF Downloads 115
474 Development of Immersive Virtual Reality System for Planning of Cargo Loading Operations

Authors: Eugene Y. C. Wong, Daniel Y. W. Mo, Cosmo T. Y. Ng, Jessica K. Y. Chan, Leith K. Y. Chan, Henry Y. K. Lau

Abstract:

Real-time planning visualisation, precise allocation and loading optimisation in air cargo load planning operations are increasingly important as more considerations are needed for dangerous cargo loading, locations of lithium batteries, weight declaration and limited aircraft capacity. The planning of unit load devices (ULDs) can often be carried out only in the limited number of hours before flight departure. A dynamic air cargo load planning system is proposed that optimises the cargo load plan and visualises the planning results in a virtual reality system. The system aims to optimise cargo load planning and visualise the simulated load planning decisions in air cargo terminal operations. Adopting simulation tools, a Cave Automatic Virtual Environment (CAVE) and virtual reality technologies, the planning results with reference to weight and balance, ULD dimensions, gateway, cargo nature and aircraft capacity are optimised and presented. The virtual reality system facilitates planning, operations, education and training. Terminal staff are usually trained through traditional push-approach demonstrations with enormous amounts of manual paperwork. With the support of the newly customised immersive visualisation environment, users can master complex air cargo load planning techniques in problem-based training, with the results instantly and immersively visualised. The virtual reality system is developed with three-dimensional (3D) projectors, screens, workstations, a truss system, 3D glasses, and a demonstration platform and software. The content focuses on the cargo planning and loading operations in an air cargo terminal. The system can assist the decision-making process during cargo load planning in the complex operations of an air cargo terminal. The processes of cargo loading, cargo build-up, security screening and system monitoring can be further visualised.
Scenarios are designed to support and demonstrate the daily operations of the air cargo terminal, including dangerous goods, pets and animals, and other special cargo.
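The core of the load planning step above, stripped of weight-and-balance geometry, is an assignment of pieces to ULDs under a weight limit with segregation of incompatible cargo natures (e.g. dangerous goods versus live animals). The greedy first-fit sketch below illustrates that step only; the nature codes, capacity and incompatibility set are illustrative, not the paper's optimisation model or IATA rules.

```python
# Sketch: greedy first-fit assignment of cargo pieces to ULDs, respecting a
# weight limit and keeping incompatible cargo natures apart. Field names and
# the incompatibility set are illustrative placeholders.

INCOMPATIBLE = {("DG", "AVI"), ("AVI", "DG")}  # dangerous goods vs live animals

def assign_to_ulds(pieces, uld_capacity_kg):
    """pieces: list of (weight_kg, nature) tuples; returns list of ULD loads."""
    ulds = []  # each ULD: {"weight": float, "natures": set, "pieces": list}
    for weight, nature in sorted(pieces, reverse=True):  # heaviest first
        placed = False
        for uld in ulds:
            over = uld["weight"] + weight > uld_capacity_kg
            clash = any((nature, n) in INCOMPATIBLE for n in uld["natures"])
            if not over and not clash:
                uld["weight"] += weight
                uld["natures"].add(nature)
                uld["pieces"].append((weight, nature))
                placed = True
                break
        if not placed:  # open a new ULD for this piece
            ulds.append({"weight": weight, "natures": {nature},
                         "pieces": [(weight, nature)]})
    return ulds

pieces = [(900, "GEN"), (400, "DG"), (700, "AVI"), (300, "GEN"), (500, "DG")]
plan = assign_to_ulds(pieces, uld_capacity_kg=1500)
print(len(plan))
```

In the VR system described above, the resulting ULD loads would then be rendered in the CAVE so planners can inspect and adjust the assignment interactively.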

Keywords: air cargo load planning, optimisation, virtual reality, weight and balance, unit load device

Procedia PDF Downloads 329
473 Rapid Strategic Consensus Building in Land Readjustment in Kabul

Authors: Nangialai Yousufzai, Eysosiyas Etana, Ikuo Sugiyama

Abstract:

Kabul's population has been growing continually since 2001, reaching six million in 2025, due to the rapid inflow of people from neighboring countries. As a result of this population growth, the lack of living facilities supported by infrastructure services is becoming a serious social and economic problem. However, about 70% of the city is still occupied illegally, and the government has little information on the infrastructure demands. To improve this situation, land readjustment is one of the most powerful development tools, because land readjustment itself does not require a large governmental budget. Instead, the method relies on cooperation among stakeholders such as landowners, developers and the local government. It is therefore becoming crucial for both the government and citizens to implement land readjustment to provide tidy urban areas with sufficient public services and realize a more livable city as a whole. However, traditional land readjustment has tended to take a long time to reach consensus on the new plan among stakeholders. One reason is that individual land areas (land parcels) are reduced by the contribution to public uses, such as roads, parks and squares, that improve the urban environment. The second reason is that it is difficult for dwellers to imagine their new life after readjustment, because the paper-based plan is made by an authority not for dwellers but for the specialists who carry out the project. This paper aims to shorten the time needed to reach consensus among stakeholders. The first improvement is utilizing questionnaires to assess the demands and preferences of the landowners. The second is utilizing a 3D model that allows dwellers to easily visualize the new environment after readjustment. In addition, the 3D model reflects the demands and preferences of the residents so that they can select a land parcel according to their own values.
The two improvements mentioned above are carried out after evaluating the total land prices of the candidate plans in order to select the plan that maximizes the project value. The land price forecasting formula is derived from current market prices in Kabul. Finally, it is stressed that rapid consensus-building in land readjustment utilizing ICT and open data analysis is essential for redeveloping slums and illegally occupied areas in Kabul.
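The plan-selection step above can be sketched as scoring each candidate plan by its total forecast land value and keeping the maximum. The study's actual forecasting formula is derived from Kabul market prices and is not reproduced here; the linear, hedonic-style coefficients below are purely hypothetical stand-ins.

```python
# Sketch: choosing the readjustment plan with the maximum total land value.
# The price model is a HYPOTHETICAL stand-in (price per m2 rising with road
# width and park access), not the paper's formula.

def parcel_price(area_m2, road_width_m, park_within_500m):
    base = 100.0  # hypothetical base price per m2
    price_per_m2 = base + 8.0 * road_width_m + (30.0 if park_within_500m else 0.0)
    return area_m2 * price_per_m2

def plan_value(parcels):
    """Total forecast value of a plan; parcels are (area, road, park) tuples."""
    return sum(parcel_price(*p) for p in parcels)

# Plan A keeps all land private (narrow roads); Plan B contributes land to
# public uses, shrinking parcels but widening roads and adding a park.
plan_a = [(300, 4, False), (250, 4, False), (250, 4, False)]
plan_b = [(280, 8, True), (240, 8, True), (230, 8, False)]

best = max([("A", plan_a), ("B", plan_b)], key=lambda kv: plan_value(kv[1]))
print(best[0])
```

Under these illustrative coefficients, plan B outvalues plan A despite its smaller total parcel area, which mirrors the rationale for contributing land to roads, parks and squares.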

Keywords: land readjustment, consensus building, land price formula, 3D simulation

Procedia PDF Downloads 316
472 The Effect of Foot Progression Angle on Human Lower Extremity

Authors: Sungpil Ha, Ju Yong Kang, Sangbaek Park, Seung-Ju Lee, Soo-Won Chae

Abstract:

The growing number of obese patients in aging societies has led to an increase in the number of patients with knee medial osteoarthritis (OA). Artificial joint insertion is the most common treatment for knee medial OA. Surgery is effective for patients with serious arthritic symptoms, but it is costly and risky, and it is an inappropriate way to treat the disease at an early stage. Therefore, non-operative treatments such as toe-in gait have recently been proposed. Toe-in gait is a non-surgical intervention that restrains the progression of arthritis and relieves pain by reducing the knee adduction moment (KAM), thereby shifting load laterally away from the knee medial cartilage. Numerous studies have measured KAM at various foot progression angles (FPA), and KAM data can be obtained by motion analysis. However, variations in stress in the knee cartilage cannot be directly observed or evaluated in these KAM measurement experiments. Therefore, this study applied motion analysis at the major gait points (1st peak, mid-stance, 2nd peak) with regard to FPA and employed the finite element (FE) method to evaluate the effects of FPA on the human lower extremity. Three types of gait analysis (toe-in, toe-out and baseline gait) were performed with markers placed on the lower extremity. Ground reaction forces (GRF) were obtained from the force plates. The forces associated with the major muscles were computed using the GRF and marker trajectory data. MRI data provided by the Visible Human Project were used to develop a human lower extremity FE model. FE analyses of the three gait simulations were performed based on the calculated muscle forces and GRF. Comparing the FE results at the 1st peak across gait types, we observed that the maximum stress during toe-in gait was lower than in the other gait types. This matches the trend in KAM measured through motion analysis in other papers.
This indicates that the progression of knee medial OA could be suppressed by adopting a toe-in gait. This study integrated motion analysis with FE analysis. One advantage of this method is that re-modeling is not required even when the posture changes; therefore, other types of gait simulation or various lower-extremity motions can easily be analyzed using this method.
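The KAM that toe-in gait reduces can be sketched in its simplest form as a 2D frontal-plane moment: the cross product of the ground-reaction force with the lever arm from the knee joint centre to the centre of pressure. The marker and force values below are illustrative, and a full gait analysis uses 3D inverse dynamics rather than this planar simplification.

```python
# Sketch: simplified frontal-plane knee moment from force-plate and marker
# data. Its magnitude stands in for KAM here; values are illustrative.

def kam_2d(knee_xy, cop_xy, grf_xy):
    """Frontal-plane moment (N*m) of the GRF about the knee joint centre.

    knee_xy: knee joint centre (m), cop_xy: centre of pressure (m),
    grf_xy: ground reaction force (N); x medio-lateral, y vertical.
    """
    rx = cop_xy[0] - knee_xy[0]   # medio-lateral lever arm (m)
    ry = cop_xy[1] - knee_xy[1]   # vertical lever arm (m)
    return rx * grf_xy[1] - ry * grf_xy[0]   # 2D cross product r x F

knee = (0.00, 0.50)               # knee joint centre
grf  = (10.0, 700.0)              # small lateral + large vertical force

baseline_cop = (-0.06, 0.0)       # CoP well medial of the knee
toein_cop    = (-0.03, 0.0)       # toe-in shifts the CoP toward the knee

print(abs(kam_2d(knee, baseline_cop, grf)),
      abs(kam_2d(knee, toein_cop, grf)))
```

Shrinking the medio-lateral lever arm, as toe-in gait does, shrinks the moment magnitude, which is the mechanism behind the lower medial-cartilage stress observed in the FE results above.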

Keywords: finite element analysis, gait analysis, human model, motion capture

Procedia PDF Downloads 318