Search results for: software modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8016

6006 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology

Authors: Yonggu Jang, Jisong Ryu, Woosik Lee

Abstract:

The study addresses the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data processing time, and human-centered interpretation of ground penetrating radar (GPR) images. Its aim is to establish a precision exploration system for underground facilities based on 3D absolute positioning technology, capable of accurately surveying to a depth of 5 m and measuring the 3D absolute location of underground facilities. The study developed both the software and the hardware for this system. The software technologies include absolute positioning, ground-surface location synchronization of the GPR exploration equipment, AI interpretation of GPR exploration images, and composite data processing based on an integrated underground space map. The hardware comprises a vehicle-type and a cart-type exploration system. Data were collected with the developed system, the GPR exploration images were analyzed using AI technology, and the 3D location information of the detected facilities was compared against the integrated underground space map.
The software builds a precise 3D DEM, synchronizes the GPR sensor's ground-surface 3D location coordinates, automatically detects underground facility information in GPR exploration images, and improves accuracy through comparative analysis of the 3D location information. These findings and technological advancements are essential for underground safety management in Korea: the proposed system contributes to establishing precise location information for underground facilities, which is crucial for underground safety management, and improves the accuracy and efficiency of exploration.
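The depth-to-absolute-location step the abstract describes can be sketched with the standard GPR relations: a reflector's depth follows from the two-way travel time and the ground's relative permittivity, and the synchronized antenna position anchors it in absolute coordinates. The permittivity, travel time, and coordinate values below are illustrative assumptions, not figures from the study, and a vertical ray path is assumed.

```python
# Sketch: convert a GPR two-way travel time to depth and anchor the
# reflection to 3D absolute coordinates (assumed vertical ray path).

C = 0.3  # speed of light in m/ns

def gpr_depth(two_way_time_ns, rel_permittivity):
    """Depth of a reflector (m) from two-way travel time."""
    v = C / rel_permittivity ** 0.5   # wave velocity in the ground
    return v * two_way_time_ns / 2.0  # one-way distance

def absolute_position(antenna_xyz, depth):
    """3D absolute location of the reflector below the antenna."""
    x, y, z = antenna_xyz
    return (x, y, z - depth)

# Wet soil (relative permittivity ~16) quarters the in-air velocity:
d = gpr_depth(100.0, 16.0)  # 100 ns round trip
print(d)                    # 3.75 m
print(absolute_position((312500.0, 4149800.0, 38.2), d))
```

Depths near the study's 5 m target require correspondingly long travel times; at this permittivity a 5 m reflector returns after about 133 ns.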

Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities

Procedia PDF Downloads 49
6005 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units

Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov

Abstract:

Insulating glass units (IGU) are widely used in new and renovated buildings to reduce the energy needed for heating and cooling. Rules for choosing an IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads, gauge or vacuum pressure in the hermetically sealed gas space, requires additional attention in the design of facades. Internal loads arise from variations in altitude, meteorological pressure, and gas temperature relative to the conditions at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and orientation, and its fixing on the facade, and it varies with the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and the internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of radiation heat transfer at solar and infrared wavelengths, indoor and outdoor convection heat transfer, and free convection in the sealed gas space, treating the gas as compressible. The algorithm allows prediction of the temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparing the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of 3D temperature and fluid flow fields, thermal performance, and internal loads of an IGU in a window system are implemented.
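The origin of the internal load can be illustrated with a zeroth-order estimate: the sealed gas follows the ideal gas law from its sealing conditions, and the net load on the panes is the difference between cavity and ambient pressure. This sketch assumes a rigid cavity (the paper's model resolves pane deflection and gas stratification, which this ignores), and all numbers are illustrative.

```python
# Zeroth-order internal load on a sealed IGU cavity: isochoric ideal-gas
# pressure versus ambient pressure at the installation altitude.
import math

def cavity_pressure(p_seal_pa, t_seal_k, t_now_k):
    """Isochoric ideal-gas pressure of the sealed gas space (Pa)."""
    return p_seal_pa * t_now_k / t_seal_k

def ambient_pressure(p0_pa, altitude_m):
    """Barometric formula, isothermal approximation (scale height ~8.4 km)."""
    return p0_pa * math.exp(-altitude_m / 8434.0)

# Sealed at 101325 Pa and 20 C; gas heated to 50 C by solar absorption,
# unit installed 500 m above the sealing site:
p_gas = cavity_pressure(101325.0, 293.15, 323.15)
p_amb = ambient_pressure(101325.0, 500.0)
print(p_gas - p_amb)  # net outward (gauge) load on the panes, Pa
```

Even this crude estimate shows why coatings that raise the cavity gas temperature, or installation at a different altitude, generate structurally significant loads.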

Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis

Procedia PDF Downloads 261
6004 A Two-Week and Six-Month Stability of Cancer Health Literacy Classification Using the CHLT-6

Authors: Levent Dumenci, Laura A. Siminoff

Abstract:

Health literacy has been shown to predict a variety of health outcomes. Reliable identification of persons with limited cancer health literacy (LCHL) has proved questionable with existing instruments, which use an arbitrary cut point along a continuum. The CHLT-6, however, uses a latent mixture modeling approach to identify persons with LCHL. The purpose of this study was to estimate the two-week and six-month stability of identifying persons with LCHL using the CHLT-6, with a discrete latent variable approach as the underlying measurement structure. Using a test-retest design, the CHLT-6 was administered to cancer patients at two-week (N=98) and six-month (N=51) intervals. The two-week and six-month latent test-retest agreements were 89% and 88%, respectively. The chance-corrected latent agreements estimated from Dumenci's latent kappa were 0.62 (95% CI: 0.41-0.82) and 0.47 (95% CI: 0.14-0.80) for the two-week and six-month intervals, respectively. High levels of latent test-retest agreement between the limited and adequate categories of the cancer health literacy construct, coupled with moderate to good levels of chance-corrected latent agreement, indicate that the CHLT-6 classification of limited versus adequate cancer health literacy is relatively stable over time. In conclusion, the measurement structure underlying the instrument allows classification errors to be estimated, circumventing the limitations of the arbitrary cut-point approach adopted by all other instruments. The CHLT-6 can be used to identify persons with LCHL in oncology clinics and in intervention studies to accurately estimate treatment effectiveness.
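The chance-corrected agreement idea behind the reported kappas can be illustrated with ordinary Cohen's kappa on a 2x2 limited/adequate classification table. Dumenci's latent kappa corrects agreement at the latent-class level, which this simpler manifest-level sketch does not do, and the counts below are invented, not the study's data.

```python
# Cohen's kappa for a test-retest 2x2 table: observed agreement minus
# the agreement expected by chance, rescaled to [(-pc)/(1-pc), 1].

def cohens_kappa(table):
    """table[i][j]: count classified i at test and j at retest."""
    n = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(len(table))) / n
    p_chance = sum(
        sum(table[i]) * sum(row[i] for row in table) for i in range(len(table))
    ) / n ** 2
    return (p_obs - p_chance) / (1 - p_chance)

# 98 patients: e.g. 20 limited/limited, 70 adequate/adequate, 8 discordant
table = [[20, 5], [3, 70]]
print(cohens_kappa(table))  # about 0.78 despite 92% raw agreement
```

The gap between raw agreement (92% here) and kappa shows why the paper reports chance-corrected values alongside the 89% and 88% raw latent agreements.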

Keywords: limited cancer health literacy, the CHLT-6, discrete latent variable modeling, latent agreement

Procedia PDF Downloads 165
6003 One or More Building Information Modeling Managers in France: The Confusion of the Kind

Authors: S. Blanchard, D. Beladjine, K. Beddiar

Abstract:

Since 2015, the arrival of BIM in the French building sector has turned the industry upside down. Not only construction practices but also roles, and the people who fill them, have undergone important changes. The new collaborative mode generated by BIM and the digital model has challenged the supremacy of some construction actors, because the process requires working together while taking into account the needs of the other contributors. New BIM tools have emerged, and the actors involved in the act of building must take ownership of them. In this context, under the impetus of a European directive and with the French government's encouragement, new missions and job profiles have appeared. Moreover, concurrent engineering requires that each actor be able to advance at the same time as the others, according to the information that reaches them and the information they have to transmit. However, the French legal framework for public procurement does not yet provide for this way of working, so a substantial evolution is needed to adapt it to the methodology. The new missions generated by BIM in France require a good mastery of the tools and the process. To meet the objectives of the BIM approach, it should be possible to define a typical job profile around BIM, adapted to the various sectors concerned. The multitude of job offers using the same terms for very different objectives, and the complexity of the proposed missions, motivated our approach. To strengthen exchanges with professionals and specialists, we carried out a statistical study to address this problem. Five topics are discussed around the business area: BIM in the company, the function (business), the software used, and the BIM missions practiced (39 items). About 1,400 professionals were interviewed. They work in construction companies (micro-businesses, SMEs, and groups), engineering offices, or architectural agencies; 77% of respondents have employee status.
All participants are qualified in their trade, the majority at level 1. Most have less than a year of experience in BIM, but some have ten years. The results of our survey help explain why it is not possible to define a single type of BIM manager: the specificities of companies are so numerous and complex, and the missions so varied, that no single model fits the function. On the other hand, it was possible to define three main professions around BIM (manager, coordinator, and modeler) and three main missions for the BIM manager (deployment of the method, assistance to project management, and management of a project).

Keywords: BIM manager, BIM modeler, BIM coordinator, project management

Procedia PDF Downloads 154
6002 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

Direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output model involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR), so DoA estimation methods rely heavily on generalization across a large number of training data sets. We therefore compare two optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) implementation of the multiclass least-squares support vector machine (LS-SVM), and (2) a deep neural network (DNN) with radial basis functions (RBF). We verify that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs into the three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling, and background radiation, so the method may fail to deliver high precision. This work therefore further contributes a DNN-RBF model for DoA estimation that overcomes the limitations of non-parametric and data-driven methods with respect to array imperfection and generalization. The numerical results confirm the better performance of the DNN-RBF model compared with the LS-SVM algorithm. Finally, we evaluate the performance of the two optimization methods for DoA estimation in terms of the mean squared error (MSE).
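The array output model and the MSE criterion named in the abstract can be sketched end to end: generate noisy uniform-linear-array snapshots for a known DoA, recover the angle with a simple beamforming scan (a deliberately basic estimator standing in for the paper's LS-SVM/DNN models), and compute the MSE of the repeated estimates. Array geometry, SNR, and the estimator are illustrative assumptions.

```python
# Array snapshot model + DoA estimation by conventional beamforming,
# scored by mean squared error over repeated noisy trials.
import numpy as np

def steering(theta_deg, n_elem, d=0.5):
    """ULA steering vector; element spacing d in wavelengths."""
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_elem))

def estimate_doa(snapshot, n_elem, grid=np.arange(-90.0, 90.5, 0.5)):
    """Pick the scan angle with maximum beamformer output power."""
    powers = [abs(steering(t, n_elem).conj() @ snapshot) for t in grid]
    return grid[int(np.argmax(powers))]

rng = np.random.default_rng(0)
n_elem, true_doa, snr_db = 8, 20.0, 10.0
noise_std = 10 ** (-snr_db / 20)
estimates = []
for _ in range(50):
    x = steering(true_doa, n_elem) + noise_std * (
        rng.standard_normal(n_elem) + 1j * rng.standard_normal(n_elem))
    estimates.append(estimate_doa(x, n_elem))
mse = np.mean((np.array(estimates) - true_doa) ** 2)
print(mse)  # small at this SNR; grows as SNR drops or coupling is added
```

The array-imperfection sensitivity discussed in the abstract would enter this sketch as perturbations of `steering` (gain/phase errors, mutual coupling), which is exactly what degrades model-based estimators and motivates the learned DNN-RBF approach.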

Keywords: DoA estimation, adaptive antenna array, deep neural network, LS-SVM optimization model, radial basis function, MSE

Procedia PDF Downloads 83
6001 Development and Verification of the Idom Shielding Optimization Tool

Authors: Omar Bouhassoun, Cristian Garrido, César Hueso

Abstract:

Radiation shielding design is an optimization problem with multiple constrained objective functions (radiation dose, weight, price, etc.) that depend on several parameters (material, thickness, position, etc.). The classical approach to shielding design consists of a brute-force trial-and-error process guided by previous designer experience. The result is therefore an empirical but not optimal solution, which can degrade the overall performance of the shielding. In order to automate the shielding design procedure, the IDOM Shielding Optimization Tool (ISOT) has been developed. This software combines optimization algorithms with the capability to read and write input files, run calculations, and parse output files for different radiation transport codes. In a first stage, the software was set up to adjust the input files of two well-known Monte Carlo codes (MCNP and Serpent) and to optimize the result (weight, volume, price, dose rate) using multi-objective genetic algorithms. Its modular implementation nevertheless allows the easy inclusion of further radiation transport codes and optimization algorithms. The work related to the development of ISOT and its verification on a simple 3D multi-layer shielding problem using both MCNP and Serpent will be presented. ISOT looks very promising for achieving optimal solutions to complex shielding problems.
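The optimization loop such a tool wraps around a transport code can be shown in miniature: a genetic algorithm searches layer thicknesses trading transmitted dose (here, simple exponential attenuation in place of an MCNP/Serpent run) against shield mass. ISOT uses true multi-objective GAs; this single-objective weighted-sum sketch with made-up attenuation coefficients only illustrates the structure.

```python
# Toy GA over a two-layer shield: minimize transmitted dose + weighted mass.
import math
import random

MU = [0.06, 0.7]    # attenuation coefficients per material, 1/cm (assumed)
RHO = [1.0, 11.3]   # densities, g/cm3 (e.g. polyethylene, lead)

def cost(thicknesses, w_dose=1.0, w_mass=0.01):
    dose = math.exp(-sum(mu * t for mu, t in zip(MU, thicknesses)))
    mass = sum(r * t for r, t in zip(RHO, thicknesses))
    return w_dose * dose + w_mass * mass

def genetic_search(generations=200, pop_size=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 20), rng.uniform(0, 20)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # elitist selection
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)     # crossover + mutation
            child = [(x + y) / 2 + rng.gauss(0, 0.5) for x, y in zip(a, b)]
            children.append([max(0.0, t) for t in child])
        pop = survivors + children
    return min(pop, key=cost)

best = genetic_search()
print(best, cost(best))  # well below the unshielded cost of 1.0
```

In ISOT the `cost` call would instead rewrite a transport-code input deck, run it, and parse the resulting dose rate, which is why the read/write/parse plumbing the abstract describes matters as much as the GA itself.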

Keywords: optimization, shielding, nuclear, genetic algorithm

Procedia PDF Downloads 98
6000 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers, because various diagenetic processes produce a variety of properties throughout the reservoir. A good estimate of reservoir heterogeneity, defined as the variation in rock properties with location in a reservoir or formation, helps in modeling the reservoir and thus offers a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity matters in modeling reservoir flow and in well testing. Geological methods describe the variations in rock properties through the similarity of the environments in which the different beds were deposited. To characterize the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum engineering from that point of view. In this paper, we correlate the statistics-based Lorenz method with a petroleum concept, i.e., the Kozeny-Carman equation, and derive the straight-line Lorenz plot for a homogeneous system. Finally, we apply the two methods to a heterogeneous field in southern Iran and discuss each separately, with numbers and figures. As expected, these methods show great departure from homogeneity, so for future investment the reservoir needs to be treated carefully.
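The Lorenz coefficient itself is easy to sketch: order the layers by permeability/porosity ratio, build the cumulative flow-capacity versus storage-capacity curve, and take twice the area between that curve and the 45-degree homogeneous line. The layer data below are invented, not from the South Iran field.

```python
# Lorenz coefficient from layer permeability (k), porosity (phi),
# and optional thickness (h): L = 0 homogeneous, L -> 1 heterogeneous.

def lorenz_coefficient(perm, poro, h=None):
    h = h or [1.0] * len(perm)
    layers = sorted(zip(perm, poro, h), key=lambda v: v[0] / v[1], reverse=True)
    flow = [k * t for k, p, t in layers]    # kh, flow capacity
    store = [p * t for k, p, t in layers]   # phi*h, storage capacity
    F, C = [0.0], [0.0]
    for f, s in zip(flow, store):
        F.append(F[-1] + f)
        C.append(C[-1] + s)
    F = [x / F[-1] for x in F]
    C = [x / C[-1] for x in C]
    # trapezoidal area under the flow-vs-storage curve
    area = sum((C[i + 1] - C[i]) * (F[i + 1] + F[i]) / 2 for i in range(len(flow)))
    return 2 * (area - 0.5)

# Perfectly homogeneous stack: the curve is the straight 45-degree line.
print(lorenz_coefficient([50, 50, 50], [0.2, 0.2, 0.2]))   # 0.0
# Strong permeability contrast bows the curve away from the diagonal.
print(lorenz_coefficient([500, 50, 1], [0.2, 0.2, 0.2]))
```

The homogeneous case reproduces the straight-line Lorenz plot the paper derives; the paper's contribution is tying the departure from that line to the Kozeny-Carman relation between permeability and porosity.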

Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)

Procedia PDF Downloads 205
5999 Analysis of Key Factors Influencing Muslim Women’s Buying Intentions of Clothes: A Study of UK’s Ethnic Minorities and Modest Fashion Industry

Authors: Nargis Ali

Abstract:

Although the modest fashion market is growing in the UK, Muslim consumers remain poorly understood by researchers and marketers. The present study was therefore designed to explore the critical factors influencing Muslim women's intention to purchase clothing and to identify differences in purchase intention among ethnic minority groups in the UK. The conceptual framework is built on the theory of planned behavior and social identity theory. To satisfy the research objectives, a structured online questionnaire was published on Facebook from 20 November to 21 March. As a result, 1,087 usable questionnaires were received and used to assess the fit of the proposed model through structural equation modeling. Results revealed that social media influences the purchase intention of Muslim women. Muslim women search for stylish clothes that provide comfort during summer, and they prefer soft and subdued colors. Furthermore, religious knowledge, religious practice, and fashion uniqueness strongly influence their purchase intention, while hybrid identity is negatively related to it. This research contributes to the literature on Muslim consumers at a time when the UK's large retailers are seeking to attract them with modestly designed outfits. It will also be helpful for formulating or revising product and marketing strategies according to the tastes and needs of Muslim women in the UK.

Keywords: fashion uniqueness, hybrid identity, religiosity, social media, social identity theory, structural equation modeling, theory of planned behavior

Procedia PDF Downloads 214
5998 The Use of Rule-Based Cellular Automata to Track and Forecast the Dispersal of Classical Biocontrol Agents at Scale, with an Application to the Fopius arisanus Fruit Fly Parasitoid

Authors: Agboka Komi Mensah, John Odindi, Elfatih M. Abdel-Rahman, Onisimo Mutanga, Henri Ez Tonnang

Abstract:

Ecosystems are networks of organisms and populations that form a community of species interacting within their habitats. Such habitats are defined by the abiotic and biotic conditions that set the initial limits on a population's growth, development, and reproduction. The habitat's conditions shape the context in which species interact to access resources such as food, water, space, shelter, and mates, allowing for feeding, dispersal, and reproduction. Dispersal is an essential life-history strategy that affects gene flow, resource competition, population dynamics, and species distributions. Despite its importance to population dynamics and survival, the mechanisms underpinning the dispersal of organisms remain challenging to understand. For instance, when an organism moves into an ecosystem for survival and resource competition, its progression is highly influenced by factors such as its physiological state, climatic variables, and its ability to evade predation. Greater spatial detail is therefore necessary to understand organism dispersal dynamics. Organism dispersal can be addressed using empirical or mechanistic modelling approaches, with the choice depending on the study's purpose. Cellular automata (CA) are one such approach that has been successfully used in biological studies to analyze the dispersal of living organisms. A cellular automaton can be briefly described as a grid of cells, each of which may be occupied by an individual, that evolves according to a set of rules based on the states of neighbouring cells. However, for modelling the dispersal of individual organisms at the landscape scale, we lack user-friendly tools that do not require expertise in mathematical modelling and computing, such as a visual analytics framework for tracking and forecasting the dispersal behaviour of organisms.
The term "visual analytics" (VA) describes a semi-automated approach to electronic data processing that is guided by users who can interact with the data via an interface. Essentially, VA converts large amounts of quantitative or qualitative data into graphical formats that can be customized to the operator's needs. This approach can also enhance the ability of users from various backgrounds to understand data, communicate results, and disseminate information across a wide range of disciplines. To support effective analysis of organism dispersal at the landscape scale, we therefore designed Pydisp, a free visual analytics tool for spatiotemporal dispersal modeling built in Python. Its user interface allows a quick, interactive spatiotemporal analysis of species dispersal using bioecological and climatic data. Pydisp enables reuse and upgrading through simple principles such as fuzzy cellular automata algorithms. The potential of the dispersal modeling is demonstrated in a case study predicting the dispersal of Fopius arisanus (Sonan), an endoparasitoid used to control Bactrocera dorsalis (Hendel) (Diptera: Tephritidae), in Kenya. The results clearly illustrate the parasitoid's dispersal process at the landscape level and confirm that dynamic processes in an agroecosystem are better understood when designed with mechanistic modelling approaches. Furthermore, as the example demonstrates, the software portrays the dispersal of organisms effectively even when detailed data on the species' dispersal mechanisms are unavailable.
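A minimal rule-based CA of the kind Pydisp builds on can be written in a few lines: each cell of a habitat grid is occupied or empty, and an occupied cell colonises a neighbouring cell whenever that neighbour's habitat suitability exceeds a threshold. The grid, suitability values, and crisp rule are illustrative; Pydisp itself uses fuzzy CA rules driven by bioecological and climatic layers.

```python
# One CA update: occupied cells spread to 4-neighbours whose habitat
# suitability meets a threshold; unsuitable cells block the spread.

def step(occupied, suitability, threshold=0.5):
    rows, cols = len(occupied), len(occupied[0])
    new = [row[:] for row in occupied]
    for r in range(rows):
        for c in range(cols):
            if not occupied[r][c]:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and suitability[nr][nc] >= threshold):
                    new[nr][nc] = 1
    return new

suit = [[0.9, 0.8, 0.2],
        [0.7, 0.9, 0.6],
        [0.1, 0.8, 0.9]]
occ = [[1, 0, 0], [0, 0, 0], [0, 0, 0]]   # release point in the top-left cell
for _ in range(3):
    occ = step(occ, suit)
print(occ)  # the organism spreads along high-suitability cells only
```

Replacing the crisp `>= threshold` rule with a fuzzy membership of suitability, climate, and physiological state is the step that turns this sketch toward the fuzzy CA the paper describes.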

Keywords: cellular automata, fuzzy logic, landscape, spatiotemporal

Procedia PDF Downloads 67
5997 Development of a Coupled Thermal-Mechanical-Biological Model to Simulate Impacts of Temperature on Waste Stabilization at a Landfill in Quebec, Canada

Authors: Simran Kaur, Paul J. Van Geel

Abstract:

A coupled thermal-mechanical-biological (TMB) model was developed to analyze the impact of temperature on waste stabilization at a municipal solid waste (MSW) landfill in Quebec, Canada, using COMSOL Multiphysics, a finite element-based software package. For waste placed in landfills in northern climates during the winter months, it can take months or even years before the waste approaches the ideal temperatures for biodegradation. The proposed model therefore links biodegradation-induced strain in MSW to waste temperatures and the corresponding heat generation rates from anaerobic degradation, providing a link between the thermal-biological and the mechanical behavior of MSW. The thermal properties of MSW are further linked to density, which is tracked and updated in the mechanical component of the model, providing a mechanical-thermal link. The settlement of MSW is modelled based on the concept of viscoelasticity; the specific model used is a single Kelvin-Voigt viscoelastic body, in which the finite element response is controlled by the elastic material parameters, Young's modulus and Poisson's ratio. The numerical model was validated with 10 years of temperature and settlement data collected from a landfill in Ste. Sophie, Quebec. The coupled TMB modelling framework, which simulates waste lifts as they are placed progressively in the landfill, allows several thermal and mechanical parameters to be optimized throughout the depth of the waste profile and supports a better understanding of the temperature dependence of MSW stabilization. The model is able to illustrate how waste placed in the winter months can delay biodegradation-induced settlement and the generation of landfill gas. A delay in waste stabilization will affect the utilization of the approved airspace prior to the placement of a final cover and will affect post-closure maintenance.
The model provides a valuable tool for assessing different waste placement strategies in order to increase airspace utilization within landfills operating under different climates, and for understanding the conditions for increased gas generation for recovery as a green and renewable energy source.
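The mechanical element named in the abstract, a single Kelvin-Voigt body, has a closed-form creep response under constant stress: eps(t) = (sigma/E)(1 - exp(-E t / eta)). A quick evaluation shows the delayed-settlement behaviour the model captures. The stress, modulus, and viscosity values are illustrative, not the calibrated Ste. Sophie parameters, and biodegradation-induced strain is omitted.

```python
# Creep strain of a Kelvin-Voigt element (spring E parallel to dashpot eta)
# under a constant overburden stress sigma.
import math

def kelvin_voigt_strain(sigma, E, eta, t):
    """Strain at time t (s); approaches the elastic limit sigma/E."""
    return (sigma / E) * (1.0 - math.exp(-E * t / eta))

sigma = 100e3   # overburden stress, Pa (assumed)
E = 5e6         # Young's modulus, Pa (assumed)
eta = 5e14      # viscosity, Pa*s (assumed)
year = 365.25 * 24 * 3600
for t_years in (1, 5, 10):
    print(t_years, kelvin_voigt_strain(sigma, E, eta, t_years * year))
# strain rises toward the elastic limit sigma/E = 0.02 over about a decade
```

In the full TMB model the effective parameters would themselves depend on temperature and density, which is what couples this settlement law to the thermal and biological components.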

Keywords: coupled model, finite element modeling, landfill, municipal solid waste, waste stabilization

Procedia PDF Downloads 121
5996 Numerical Investigation of Pressure Drop in Core Annular Horizontal Pipe Flow

Authors: John Abish, Bibin John

Abstract:

Liquid-liquid flow in a horizontal pipe is investigated in order to reveal the flow patterns arising from the co-existing flow of oil and water. The main focus of the study is to assess the feasibility of reducing the pumping power requirements of petroleum transportation lines by maintaining an annular flow of water around a thick oil core, which would make oil transportation cheaper and easier. The present study uses computational fluid dynamics techniques to model oil-water flows with liquids of similar density and varying viscosity. The flow is simulated with the commercial package Ansys Fluent; flow-domain modeling and grid generation were accomplished with ICEM CFD. The horizontal pipe is modeled with two separate inlets and meshed with an O-grid mesh. The standard k-ε turbulence scheme, together with the volume of fluid (VOF) multiphase modeling method, is used to simulate the oil-water flow. Transient flow simulations carried out over a total period of 30 s showed a significant reduction in pressure drop when the core annular flow concept was employed. The study also reveals the effect of the viscosity ratio, the mass flow rates of the individual fluids, and the ratio of superficial velocities on the pressure drop along the pipe. Contours of velocity and volume fraction are employed along with pressure predictions to assess the effectiveness of the proposed concept quantitatively as well as qualitatively. The outcome of the present study is highly relevant to the petrochemical industries.
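A back-of-envelope check shows why water lubrication pays off: in laminar pipe flow the Hagen-Poiseuille pressure drop scales linearly with viscosity, so a perfect water annulus bounds the drop near the water-only value rather than the oil-only one. The CFD in the paper resolves the actual two-phase interface; this is only the limiting single-phase comparison, with illustrative fluid properties.

```python
# Hagen-Poiseuille pressure drop: dp = 8 mu L Q / (pi R^4), laminar flow.
import math

def hagen_poiseuille_dp(mu, L, Q, R):
    """Laminar pressure drop (Pa) for viscosity mu over pipe length L."""
    return 8.0 * mu * L * Q / (math.pi * R ** 4)

L, Q, R = 100.0, 1e-3, 0.05     # pipe length m, flow rate m3/s, radius m
mu_oil, mu_water = 0.5, 1e-3    # Pa*s (heavy oil vs water, assumed)
dp_oil = hagen_poiseuille_dp(mu_oil, L, Q, R)
dp_water = hagen_poiseuille_dp(mu_water, L, Q, R)
print(dp_oil / dp_water)  # 500: the viscosity ratio bounds the saving
```

The actual reduction the simulations report is smaller than this ideal bound, since the oil core still occupies most of the cross-section and the interface is not perfectly concentric.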

Keywords: computational fluid dynamics, core-annular flows, frictional flow resistance, oil transportation, pressure drop

Procedia PDF Downloads 388
5995 Lockit: A Logic Locking Automation Software

Authors: Nemanja Kajtez, Yue Zhan, Basel Halak

Abstract:

The significant rise in the cost of manufacturing nanoscale integrated circuits (ICs) has led the majority of IC design companies to outsource the fabrication of their products to other companies, often located in different countries. This multinational nature of the hardware supply chain has led to a host of security threats, including IP piracy, IC overproduction, and Trojan insertion. To combat these, researchers have proposed logic locking techniques to protect the intellectual property of a design and increase the difficulty of maliciously modifying its functionality. However, the adoption of logic locking approaches has been rather slow, due to the lack of integration with the IC production process and the limited efficacy of existing algorithms. This work automates the logic locking process with software, developed in Python, that performs the locking on a gate-level netlist and can be integrated with existing digital synthesis tools. Analysis of the latest logic locking algorithms showed that SFLL-HD is among the most secure and versatile in trading off levels of protection against different types of attacks, and it was thus selected for implementation. The presented tool can also be extended with the latest locking mechanisms to keep up with the fast-paced development in this field. The paper also presents a case study demonstrating the functionality of the tool and how it can be used to explore the design space and compare different locking solutions. The source code of this tool is freely available from (https://www.researchgate.net/publication/353195333_Source_Code_for_The_Lockit_Tool).
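The core netlist transformation such a tool automates can be shown on a toy gate-level netlist: XOR key gates are inserted on selected wires, so the circuit computes the original function only when the correct key is applied. Lockit implements SFLL-HD rather than this plain XOR insertion; the sketch, with an invented netlist format, only illustrates the locking idea and the netlist rewriting.

```python
# Insert an XOR key gate on chosen wires of a (gate, output, inputs) netlist
# and evaluate the locked circuit with correct and wrong keys.

def lock_netlist(netlist, wires_to_lock):
    locked = []
    for gate, out, ins in netlist:
        if out in wires_to_lock:
            i = wires_to_lock.index(out)
            locked.append((gate, out + "_pre", ins))          # original gate
            locked.append(("XOR", out, [out + "_pre", f"key{i}"]))  # key gate
        else:
            locked.append((gate, out, ins))
    return locked

def evaluate(netlist, inputs):
    ops = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
           "XOR": lambda a, b: a ^ b}
    vals = dict(inputs)
    for gate, out, ins in netlist:   # netlist assumed topologically ordered
        vals[out] = ops[gate](vals[ins[0]], vals[ins[1]])
    return vals

netlist = [("AND", "n1", ["a", "b"]), ("OR", "y", ["n1", "c"])]
locked = lock_netlist(netlist, ["n1"])
good = evaluate(locked, {"a": 1, "b": 1, "c": 0, "key0": 0})
bad = evaluate(locked, {"a": 1, "b": 1, "c": 0, "key0": 1})
print(good["y"], bad["y"])  # correct key preserves the function: 1 0
```

Plain XOR locking like this is vulnerable to SAT attacks, which is precisely the weakness SFLL-HD is designed to resist; the netlist-rewriting plumbing, however, is the same.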

Keywords: design automation, hardware security, IP piracy, logic locking

Procedia PDF Downloads 165
5994 Modeling Breathable Particulate Matter Concentrations over Mexico City Retrieved from Landsat 8 Satellite Imagery

Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Magnolia G. Martinez-Rivera, Pablo de J. Angeles-Salto, Carlos Herrera-Ventosa

Abstract:

To diminish health risks, it is of major importance to monitor air quality; monitoring, however, carries high costs in physical and human resources. In this context, this research was carried out with the main objective of developing a predictive model of inhalable particle concentrations (PM10-2.5) using remote sensing. To develop the model, satellite images of Mexico City's metropolitan area, mainly from Landsat 8, were used. Using historical PM10 and PM2.5 measurements from RAMA (the Automatic Environmental Monitoring Network of Mexico City) and processing the available satellite images, a preliminary model was generated that revealed critical opportunity areas for building a robust model. Applying the preliminary model to scenes of Mexico City identified three areas of particular interest due to presumably high PM concentrations: zones with high plant density, bodies of water, and bare soil without constructions or vegetation. Work continues on this line to improve the proposed preliminary model. In addition, a brief analysis was made of six models reported from different parts of the world, in order to identify the optimal bands for a model suited to Mexico City. Infrared bands have helped modeling in other cities, but their effectiveness under the geographic and climatic conditions of Mexico City is still being evaluated.
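The kind of empirical model the study fits can be sketched as a linear regression of ground PM10 measurements on Landsat 8 band reflectances. The reflectance and PM values below are synthetic placeholders; actual RAMA station records and atmospherically corrected surface reflectances would replace them, and the band selection is only one of the candidates the brief model survey considers.

```python
# Least-squares fit: PM10 ~ band reflectances + intercept.
import numpy as np

# columns: band 2 (blue), band 4 (red), band 5 (NIR) surface reflectance
X = np.array([[0.08, 0.10, 0.25],
              [0.12, 0.15, 0.22],
              [0.15, 0.19, 0.20],
              [0.10, 0.13, 0.24],
              [0.18, 0.22, 0.18]])
pm10 = np.array([35.0, 55.0, 72.0, 44.0, 85.0])   # synthetic station values

A = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(A, pm10, rcond=None)
pred = A @ coef
print(coef)          # per-band weights + intercept
print(pred - pm10)   # residuals of the fit
```

With real data the residuals over held-out stations, not the training fit, would decide whether the infrared bands help under Mexico City's conditions.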

Keywords: air quality, modeling pollution, particulate matter, remote sensing

Procedia PDF Downloads 143
5993 Diversity for Safety and Security of Autonomous Vehicles against Accidental and Deliberate Faults

Authors: Anil Ranjitbhai Patel, Clement John Shaji, Peter Liggesmeyer

Abstract:

The safety and security of autonomous vehicles (AVs) is a growing concern: first, because of the increased number of safety-critical functions taken over by automotive embedded systems; second, because of the increased exposure of these software-intensive systems to potential attackers; and third, because dynamic interaction with an uncertain and unknown environment at runtime changes the functional and non-functional properties of the system. Frequently occurring environmental uncertainties, random component failures, and compromised security might result in hazardous events, sometimes even in an accident, if left undetected. Beyond these technical issues, we argue that the safety and security of AVs against accidental and deliberate faults are poorly understood and rarely implemented. One possible way to overcome this is the well-known diversity approach. As an effective means of increasing safety and security, diversity has been widely used in the aviation, railway, and aerospace industries. The paper therefore proposes a fault-tolerance-by-diversity model that mitigates accidental and deliberate faults through the application of structural and variant redundancy. The model can be used to design AVs with various types of diversity in hardware- and software-based multi-version systems. The paper evaluates the presented approach with an example from adaptive cruise control, followed by a discussion of the case study and initial findings.
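Variant redundancy in a multi-version system can be illustrated with a 2-out-of-3 voter over diverse implementations of the same check, here a toy ACC headway decision. The three variants, the injected fault, and the threshold are invented for illustration and are not the paper's design.

```python
# Three diverse variants of a "safe headway" check, combined by a
# 2-out-of-3 majority voter that masks one faulty (or compromised) variant.

def variant_a(gap_m, speed_mps):      # time-headway formulation
    return gap_m / max(speed_mps, 0.1) >= 2.0

def variant_b(gap_m, speed_mps):      # equivalent distance formulation
    return gap_m >= 2.0 * speed_mps

def variant_c_faulty(gap_m, speed_mps):  # injected fault: always "safe"
    return True

def vote(results):
    """2-out-of-3 majority voter over boolean verdicts."""
    return sum(results) >= 2

gap, speed = 20.0, 25.0   # 20 m gap at 25 m/s: clearly unsafe
results = [v(gap, speed) for v in (variant_a, variant_b, variant_c_faulty)]
print(vote(results))  # False: the two healthy variants outvote the fault
```

The safety argument rests on the variants failing independently, which is why the paper pairs this structural redundancy with design diversity (different hardware, algorithms, or implementations) rather than three copies of one implementation.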

Keywords: autonomous vehicles, diversity, fault-tolerance, adaptive cruise control, safety, security

Procedia PDF Downloads 115
5992 Identification of Natural Liver X Receptor Agonists as the Treatments or Supplements for the Management of Alzheimer and Metabolic Diseases

Authors: Hsiang-Ru Lin

Abstract:

Cholesterol plays an essential role in the progression of numerous important diseases, including atherosclerosis and Alzheimer's disease, so the development of suitable cholesterol-lowering agents is urgent. Liver X receptor (LXR) is a ligand-activated transcription factor whose natural ligands are cholesterols, oxysterols, and glucose. Once activated, LXR transactivates various genes involved in lipid metabolism, glucose metabolism, and the inflammatory pathway, including CYP7A1, ABCA1, and SREBP1c. Essentially, the upregulation of ABCA1 facilitates cholesterol efflux from cells and attenuates the production of amyloid-beta (Abeta) 42 in the brain, so LXR is a promising target for developing cholesterol-lowering agents and preventative treatments for Alzheimer's disease. Engelhardia roxburghiana is a deciduous tree growing in India, China, and Taiwan; its chemical constituents have so far only been reported to exhibit antitubercular and anti-inflammatory effects. In this study, four compounds isolated from the root of Engelhardia roxburghiana — engelheptanoxides A and C, and engelhardiol A and B — were evaluated for agonistic activity against LXR by transient transfection reporter assays in HepG2 cells. Furthermore, their interaction modes with the LXR ligand-binding pocket were generated by molecular modeling programs. In the cell-based assays, engelheptanoxides A and C and engelhardiol A and B, which showed no cytotoxic effect on the proliferation of HepG2 cells, exerted clear LXR agonistic effects with activity similar to that of T0901317, a synthetic LXR agonist. Further modeling studies, including docking and structure-activity relationship (SAR) analysis, showed that these compounds occupy the LXR ligand-binding pocket in a manner similar to T0901317. LXR is thus among the nuclear receptors targeted by the pharmaceutical industry for developing treatments for Alzheimer's disease and atherosclerosis.
Importantly, the cell-based assays, together with molecular modeling studies suggesting a plausible binding mode, demonstrate that engelheptanoxides A and C and engelhardiol A and B function as LXR agonists. This is the first report to demonstrate that the extract of Engelhardia roxburghiana contains LXR agonists. As such, these active components of Engelhardia roxburghiana, or subsequent analogs, may show important therapeutic effects through selective modulation of the LXR pathway.

Keywords: Liver X receptor (LXR), Engelhardia roxburghiana, CYP7A1, ABCA1, SREBP1c, HepG2 cells

Procedia PDF Downloads 410
5991 Urban Development Criteria with a Focus on Resilience to Pandemics: A Case Study of Corona Virus (Covid-19)

Authors: Elham Zabetian Targhi, Niusha Fardnava, Saba Saghafi

Abstract:

Urban resilience to the coronavirus has become a major concern for cities. Our country has not been safe from the destructive effects of this virus in its social, economic, physical, governance, and management dimensions; according to official statistics, hundreds of thousands of people in Iran have been infected with this virus and tens of thousands have died so far. Therefore, to measure urban resilience to this pandemic, criteria and sub-criteria were developed based on the authors' documentary and field studies, and their weights were determined from the viewpoint of experts in urban sciences and urban development using a questionnaire of paired (Saaty) comparisons and the analytic hierarchy process (AHP) in Expert Choice software. Then, using a questionnaire with a five-point Likert scale, the satisfaction of Tehran residents with the extracted criteria and sub-criteria was measured, and the correlation between the important criteria in each dimension was assessed using correlation tests in SPSS 16. According to the results of the AHP analysis and the scores of each sub-criterion, the weights of all criteria were normal. In the next stage, according to the pairwise correlation tests between the important criteria in each dimension from the viewpoint of urban science experts and Tehran residents, it was concluded that the correlations between the criteria are reliable at the 99% level. In all cases, the p-value was less than 0.05, indicating the significance of the pairwise relations between the variables.
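The AHP weighting step described above can be sketched as extracting the principal eigenvector of a Saaty pairwise-comparison matrix and checking its consistency ratio. The 3x3 matrix below is a made-up example, not the study's expert judgments.

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP criterion weights as the normalized principal eigenvector of a
    Saaty pairwise-comparison matrix, plus the consistency ratio (CR)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                 # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                             # normalize weights to 1
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
    return w, ci / ri

# Hypothetical comparison of three resilience dimensions
# (e.g., social vs. economic vs. physical)
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)   # cr < 0.1 means the judgments are consistent
```

Expert Choice performs essentially this computation internally; the sketch only shows the arithmetic behind the reported weights.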

Keywords: urban resilience, pandemics, corona virus (COVID-19), criteria

Procedia PDF Downloads 72
5990 3D Steady and Transient Centrifugal Pump Flow within Ansys CFX and OpenFOAM

Authors: Clement Leroy, Guillaume Boitel

Abstract:

This paper presents a comparative benchmarking review of steady and transient three-dimensional (3D) flow computations in a centrifugal pump using commercial (Ansys CFX) and open-source (OpenFOAM) computational fluid dynamics (CFD) software. In a centrifugal rotor-dynamic pump, the fluid enters the impeller along the rotation axis and is accelerated to increase the pressure, flowing radially outward into the next stage, a vaned diffuser or volute casing, from where it finally exits into a downstream pipe. Simulations are carried out at the best efficiency point (BEP) and at part load, for single-phase flow with several turbulence models. The results are compared with the overall performance report from experimental data. The use of CFD technology in industry is still limited by the high computational costs, and even more by the high cost of commercial CFD software and high-performance computing (HPC) licenses. The main objective of the present study is to define an OpenFOAM methodology for high-quality 3D steady and transient turbomachinery CFD simulation and to conduct a thorough time-accurate performance analysis. In addition, a detailed comparison between the computational methods and features of the latest Ansys release (18) and OpenFOAM is carried out to assess the accuracy and industrial applicability of those solvers. Finally, an automated connected workflow (IoT) for turbine blade applications is presented.

Keywords: benchmarking, CFX, internet of things, openFOAM, time-accurate, turbomachinery

Procedia PDF Downloads 191
5989 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard, causing loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modeling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles to represent the urban area topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three real flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation in the Reyran river valley as a consequence of the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with notable efficiency speedup due to parallelization.

Keywords: dam-break, discontinuous Galerkin scheme, flood modeling, shallow water equations

Procedia PDF Downloads 163
5988 Numerical Performance Evaluation of a Savonius Wind Turbine Using Resistive Torque Modeling

Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines

Abstract:

The Savonius vertical-axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the developed power is lower than that of other types of wind turbines, such as the Darrieus. To increase its performance, several studies have been carried out, such as optimizing the blade shape, using passive controls, and minimizing sources of power loss like the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing into the CFD model a User Defined Function that accounts for the resisting torque. This User Defined Function is developed to simulate the action of the wind on the rotor; it receives the moment coefficient as an input and computes the rotational velocity to be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied on the turbine and on the resisting torque, which is modeled as a linear function. Linking the implemented User Defined Function with the CFD solver allows simulating the real operation of the Savonius turbine exposed to wind. The wind turbine takes a while to reach the stationary regime where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
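The rotor-dynamics logic that such a UDF implements can be sketched as integrating the rotational speed from the net torque, with the resisting torque assumed linear in the angular velocity, until the stationary regime is reached. The inertia, torque model, and coefficients below are illustrative placeholders, not the paper's values; in the actual setup the aerodynamic moment would come from the CFD solver at each time step.

```python
# Illustrative rotor spin-up: omega is integrated from the net torque until
# the aerodynamic torque balances the linear resisting torque.
J = 0.05                 # rotor inertia, kg*m^2 (hypothetical)
c0, c1 = 0.01, 0.005     # linear resisting-torque model: T_res = c0 + c1*omega

def aero_torque(omega):
    # Placeholder for the CFD-computed moment; decreases as the rotor
    # speeds up, as is typical for a Savonius rotor.
    return 0.4 - 0.02 * omega

omega, dt = 0.0, 0.01
for _ in range(20000):   # 200 s of simulated time, well past the transient
    t_net = aero_torque(omega) - (c0 + c1 * omega)
    omega += dt * t_net / J
# Stationary regime: omega settles where t_net = 0, i.e. 15.6 rad/s here;
# the tip speed ratio and power coefficient would then be evaluated.
```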

Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances

Procedia PDF Downloads 147
5987 Modeling Route Selection Using Real-Time Information and GPS Data

Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento

Abstract:

Understanding the behavior of individuals, and the human factors that influence their choices when faced with a complex system such as transportation, is one of the most complicated aspects of modeling route choice, since various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual transport service applications, among others; this has generated interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision making. This paper proposes and estimates a hybrid model that integrates route choice models and latent variables, based on observations of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by individual transport service applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities.
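The discrete-choice core of such a hybrid model is typically a logit form: each route alternative gets a systematic utility (into which a latent-variable term enters), and choice probabilities follow from the utilities. The utilities and the latent 'habit' effect below are invented for illustration; the paper's estimated specification is not reproduced here.

```python
import math

def logit_probabilities(utilities):
    """Multinomial logit: probability of each route alternative given its
    systematic utility, P_i = exp(V_i) / sum_j exp(V_j)."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for three alternatives: two app-suggested routes
# and the driver's habitual route, whose utility includes an illustrative
# latent-variable (habit) contribution of +0.6.
v = [-1.2, -1.5, -1.0 + 0.6]
p = logit_probabilities(v)   # habitual route gets the highest probability
```

In the full hybrid model, the latent variables themselves are linked to driver characteristics through structural equations and to survey indicators through measurement equations, and all parameters are estimated jointly.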

Keywords: behavior choice model, human factors, hybrid model, real time data

Procedia PDF Downloads 137
5986 A Geosynchronous Orbit Synthetic Aperture Radar Simulator for Moving Ship Targets

Authors: Linjie Zhang, Baifen Ren, Xi Zhang, Genwang Liu

Abstract:

Ship detection is of great significance for both military and civilian applications. Synthetic aperture radar (SAR), with its all-day, all-weather, ultra-long-range characteristics, has been used widely. In view of the low time resolution of low-orbit SAR and the need for SAR data of high time resolution, geosynchronous orbit (GEO) SAR is receiving more and more attention. Since GEO SAR has a short revisit period and a large coverage area, it is expected to be well suited to monitoring marine ship targets. However, the height of the orbit increases the integration time by almost two orders of magnitude, so the utility and efficacy of GEO SAR for moving marine vessels are still uncertain. This paper examines the feasibility of GEO SAR by presenting a GEO SAR simulator for moving ships. The presented simulator is a geometry-based radar imaging simulator, which focuses on geometric quality rather than radiometric accuracy. Its inputs are a 3D ship model (.obj format, produced by most 3D design software, such as 3ds Max), the ship's velocity, and the parameters of the satellite orbit and SAR platform. Its outputs are simulated GEO SAR raw signal data and a SAR image. The simulation is accomplished in four steps. (1) Reading the 3D model, including the ship rotation (pitch, yaw, and roll) and velocity (speed and direction) parameters, and extracting the primitives (triangles) that are visible from the SAR platform. (2) Computing the radar scattering from the ship with the physical optics (PO) method. In this step, the vessel is sliced into many small rectangular primitives along the azimuth, and the radiometric calculation of each primitive is carried out separately. Since the simulator focuses on the complex structure of ships, only single-bounce and double-bounce reflections are considered. (3) Generating the raw data with GEO SAR signal modeling. Since the usual 'stop and go' approximation is not valid for GEO SAR, the range model is reformulated. (4) Finally, generating the GEO SAR image with an improved Range-Doppler method. Numerical simulations of a fishing boat and a cargo ship are given. GEO SAR images for different postures, velocities, satellite orbits, and SAR platforms are simulated. By analyzing these simulated results, the effectiveness of GEO SAR for the detection of moving marine vessels is evaluated.

Keywords: GEO SAR, radar, simulation, ship

Procedia PDF Downloads 163
5985 Estimation of Source Parameters and Moment Tensor Solution through Waveform Modeling of 2013 Kishtwar Earthquake

Authors: Shveta Puri, Shiv Jyoti Pandey, G. M. Bhat, Neha Raina

Abstract:

The Jammu and Kashmir region of the Northwest Himalaya has witnessed many devastating earthquakes in the recent past and has remained unexplored by seismic investigations, except for scanty records of earthquakes that occurred in this region. In this study, we used local seismic data of the year 2013 recorded by the network of broadband seismographs in J&K. During this period, our seismic stations recorded about 207 earthquakes, including two moderate events of Mw 5.7 on 1 May 2013 and Mw 5.1 on 2 August 2013. We analyzed the events of Mw 3-4.6 and the main events only (to minimize error) for source parameters, b-value, and sense of movement through waveform modeling, in order to understand the seismotectonics and seismic hazard of the region. Most of the events are bounded between 32.9° N - 33.3° N latitude and 75.4° E - 76.1° E longitude; moment magnitude (Mw) ranges from 3 to 5.7, source radius (r) from 0.21 to 3.5 km, stress drop from 1.90 to 71.1 bars, and corner frequency from 0.39 to 6.06 Hz. The b-value for this region was found to be 0.83±0 from these events, which is lower than the normal value (b=1), indicating that the area is under high stress. The travel-time inversion and waveform inversion methods suggest focal depths up to 10 km, probably above the detachment depth of the Himalayan region. The moment tensor solution of the main event of 2 August (Mw 5.1, 02:32:47 UTC) suggests that the source fault strikes at 295° with a dip of 33° and a rake of 85°. These events form an intense cluster of small to moderate events within a narrow zone between the Panjal Thrust and the Kishtwar Window. The moment tensor solutions of the main events and their aftershocks indicate that thrust-type movement is occurring in this region.
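The quantities reported above (source radius, stress drop, b-value) are conventionally derived with Brune-type relations and Aki's maximum-likelihood estimator. The sketch below uses the standard textbook forms and constants, which are not necessarily the exact ones used by the authors; units must be kept consistent by the caller.

```python
import math

def source_radius(beta, fc, k=0.37):
    """Brune-model source radius r = k * beta / fc
    (beta: shear-wave speed, fc: corner frequency)."""
    return k * beta / fc

def stress_drop(m0, r):
    """Static stress drop for a circular crack: 7*M0 / (16*r^3)
    (consistent units, e.g., M0 in N*m and r in m give Pa)."""
    return 7.0 * m0 / (16.0 * r ** 3)

def b_value(mags, mc):
    """Aki's maximum-likelihood b-value for magnitudes at or above the
    completeness magnitude Mc: b = log10(e) / (mean(M) - Mc)."""
    m = [x for x in mags if x >= mc]
    return math.log10(math.e) / (sum(m) / len(m) - mc)
```

A b-value below 1, as reported for this catalog, corresponds to a magnitude distribution with a relatively larger share of bigger events, which is commonly interpreted as higher differential stress.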

Keywords: b-value, moment tensor, seismotectonics, source parameters

Procedia PDF Downloads 304
5984 Insight2OSC: Using Electroencephalography (EEG) Rhythms from the Emotiv Insight for Musical Composition via Open Sound Control (OSC)

Authors: Constanza Levicán, Andrés Aparicio, Rodrigo F. Cádiz

Abstract:

The artistic usage of brain-computer interfaces (BCI), initially intended for medical purposes, has increased in the past few years as they have become more affordable and available to the general population. One interesting question that arises from this practice is whether it is possible to compose or perform music by using only the brain as a musical instrument. To approach this question, we propose a BCI for musical composition based on the representation of mental states as the musician thinks about sounds. We developed software, called Insight2OSC, that allows the Emotiv Insight device to be used as a musical instrument by sending its EEG data to audio processing software such as Max/MSP through the OSC protocol. We provide two compositional applications bundled with the software, which we call Mapping your Mental State and Thinking On. The signals produced by the brain have different frequencies (or rhythms) depending on the level of activity, and they are classified as one of the following waves: delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz), gamma (30-50 Hz). These rhythms have been found to be related to recognizable mental states. For example, the delta rhythm is predominant in deep sleep, while beta and gamma rhythms have higher amplitudes when the person is awake and very concentrated. Our first application (Mapping your Mental State) produces different sounds representing the mental state of the person — focused, active, relaxed, or in a state similar to deep sleep — selected according to the dominant rhythms provided by the EEG device. The second application relies on the physiology of the brain, which is divided into several lobes: frontal, temporal, parietal, and occipital.
The frontal lobe is related to abstract thinking and high-level functions, the parietal lobe conveys the stimuli of the body senses, the occipital lobe contains the primary visual cortex and processes visual stimuli, and the temporal lobe processes auditory information and is important for memory tasks. Consequently, our second application (Thinking On) processes the audio output depending on the user's brain activity as it activates a specific area of the brain that can be measured using the Insight device.
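The rhythm classification described above reduces to computing spectral power in each named band and picking the dominant one. The sketch below does this with a plain FFT on a synthetic signal; the 128 Hz sampling rate is the Insight's nominal rate, and the band edges are those given in the abstract.

```python
import numpy as np

# EEG bands as defined in the abstract (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}

def band_powers(signal, fs):
    """Mean spectral power of an EEG trace in each named band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

fs = 128.0                          # Insight sampling rate (nominal)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)    # synthetic 10 Hz tone: alpha-band activity
powers = band_powers(eeg, fs)
dominant = max(powers, key=powers.get)   # the rhythm that drives the mapping
```

In Insight2OSC's setting, each incoming EEG buffer would be classified this way and the dominant rhythm forwarded over OSC to the sound-generation patch.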

Keywords: BCI, music composition, emotiv insight, OSC

Procedia PDF Downloads 303
5983 Manual Wheelchair Propulsion Efficiency on Different Slopes

Authors: A. Boonpratatong, J. Pantong, S. Kiattisaksophon, W. Senavongse

Abstract:

In this study, an integrated sensing and modeling system for manual wheelchair propulsion measurement and propulsion efficiency calculation was used to indicate the level of overuse. Seven subjects participated in the measurement. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. By contrast, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5. The results are supported by previously reported wheeling resistance and propulsion torque relationships, implying a margin of overuse. Upper-limb musculoskeletal injuries and syndromes in manual wheelchair riders are common and chronic, and may be caused at different levels by overuse, e.g., repetitive riding on steep inclines. Quantitative analysis, such as the mechanical effectiveness of manual wheeling, to establish the relationship between riding difficulty, mechanical effort, and propulsion output is scarce, possibly due to the challenge of simultaneously measuring those factors in conventional manual wheelchairs and everyday environments. In this study, the integrated sensing and modeling system was used to measure manual wheelchair propulsion efficiency in conventional manual wheelchairs and everyday environments. The sensing unit comprises contact pressure and inertia sensors, which are portable and universal. Four healthy male and three healthy female subjects participated in the measurement on level and 15-degree incline surfaces. Subjects were asked to perform manual wheelchair riding at three different self-selected speeds on the level surface and only at the preferred speed on the 15-degree incline. Five trials were performed in each condition. The kinematic data of the subject's dominant hand, a spoke, and the trunk of the wheelchair were collected through the inertia sensors.
The compression force applied by the thumb of the dominant hand to the push rim was collected through the contact pressure sensors. The signals from all sensors were recorded synchronously. The subject-selected speeds for slow, preferred, and fast riding on the level surface and the subject-preferred speed on the 15-degree incline were recorded. The propulsion efficiency, defined as the ratio between the pushing force in the tangential direction of the push rim and the net force resulting from the three-dimensional riding motion, was derived by inverse dynamics in the modeling unit. The intra-subject variability of the riding speed did not differ significantly as the self-selected speed increased on the level surface. Since the riding speed on the 15-degree incline was difficult to regulate, intra-subject variability was not assessed there. On the level surface, the propulsion efficiencies did not differ significantly as the riding speed increased. However, the propulsion efficiencies on the 15-degree incline were restricted to around 0.5 for all subjects at their preferred speed. The results are supported by the previously reported relationship between wheeling resistance and propulsion torque, in which the wheelchair axle torque increased but muscle activity did not when the resistance was high. This implies that the margin of dynamic effort at relatively high resistance is similar to the margin of overuse indicated by the restricted propulsion efficiency on the 15-degree incline.
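The efficiency measure defined above — the tangential push-rim force over the resultant of the full three-dimensional force — can be written down directly. The force components below are hypothetical numbers for illustration, not measured data.

```python
import math

def propulsion_efficiency(ft, fr, fa):
    """Ratio of the tangential push-rim force ft to the resultant of the
    three-dimensional hand force (tangential, radial, axial components).
    Only the tangential component propels the wheelchair forward."""
    return ft / math.sqrt(ft ** 2 + fr ** 2 + fa ** 2)

# Hypothetical push: 40 N tangential, 30 N radial, negligible axial force
eff = propulsion_efficiency(40.0, 30.0, 0.0)
```

An efficiency near 0.5, as observed on the 15-degree incline, means roughly half of the applied resultant force is directed tangentially; the rest is mechanically wasted effort, which is why restricted efficiency is read as a marker of overuse.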

Keywords: contact pressure sensor, inertia sensor, integrating sensing and modeling system, manual wheelchair propulsion efficiency, manual wheelchair propulsion measurement, tangential force, resultant force, three-dimensional riding motion

Procedia PDF Downloads 281
5982 Statistical Model of Water Quality in Estero El Macho, Machala-El Oro

Authors: Rafael Zhindon Almeida

Abstract:

Surface water quality is an important concern, requiring the evaluation and prediction of water quality conditions. The objective of this study is to develop a statistical model that can accurately predict the water quality of the El Macho estuary in the city of Machala, El Oro province. The methodology is basic research, involving a thorough review of theoretical foundations to improve the understanding of statistical modeling for water quality analysis. The research design is correlational, using a multivariate statistical model involving multiple linear regression and principal component analysis. The results indicate that water quality parameters such as fecal coliforms, biochemical oxygen demand, chemical oxygen demand, iron, and dissolved oxygen exceed the allowable limits; the water of the El Macho estuary falls below the required water quality criteria. The multiple linear regression model, based on chemical oxygen demand and total dissolved solids, explains 99.9% of the variance of the dependent variable. In addition, principal component analysis shows that the model has an explanatory power of 86.242%. The study successfully developed a statistical model to evaluate the water quality of the El Macho estuary. The estuary did not meet the water quality criteria, with several parameters exceeding the allowable limits. The multiple linear regression model and principal component analysis provide valuable information on the relationships between the various water quality parameters. The findings emphasize the need for immediate action to improve the water quality of the El Macho estuary, to ensure the preservation and protection of this valuable natural resource.
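The two statistical tools named above can be sketched on synthetic data: a multiple linear regression of a quality index on chemical oxygen demand (COD) and total dissolved solids (TDS), and the explained-variance ratios from PCA. All numbers are invented for illustration; they are not the study's measurements or coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
cod = rng.uniform(20, 200, n)     # chemical oxygen demand, mg/L (synthetic)
tds = rng.uniform(100, 900, n)    # total dissolved solids, mg/L (synthetic)
wqi = 0.3 * cod + 0.05 * tds + rng.normal(0, 0.5, n)   # synthetic index

# Multiple linear regression via least squares: wqi ~ 1 + cod + tds
X = np.column_stack([np.ones(n), cod, tds])
beta, *_ = np.linalg.lstsq(X, wqi, rcond=None)
resid = wqi - X @ beta
r2 = 1 - (resid ** 2).sum() / ((wqi - wqi.mean()) ** 2).sum()

# PCA: explained-variance ratios from the covariance eigenvalues of the
# standardized variables
Z = np.column_stack([cod, tds, wqi])
Z = (Z - Z.mean(0)) / Z.std(0)
eigvals = np.linalg.eigvalsh(np.cov(Z.T))[::-1]   # descending order
explained = eigvals / eigvals.sum()
```

The reported figures (99.9% regression variance explained; 86.242% PCA explanatory power) correspond to `r2` and the cumulative share of the leading components in this sketch.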

Keywords: statistical modeling, water quality, multiple linear regression, principal component analysis

Procedia PDF Downloads 78
5981 Design and Construction Demeanor of a Very High Embankment Using Geosynthetics

Authors: Mariya Dayana, Budhmal Jain

Abstract:

Kannur International Airport Ltd. (KIAL) is a new greenfield airport project with airside development on undulating terrain with an average height of 90 m above mean sea level (MSL) and a maximum height of 142 m. Accommodating the desired runway length and runway end safety areas (RESA) at both ends along the proposed alignment resulted in 45.5 million cubic meters of cutting and filling. The insufficient availability of land for the construction of a free-slope embankment at the RESA 07 end led to the design and construction of a reinforced soil slope (RSS) with a maximum slope of 65 degrees. An embankment fill of 70 m average height with steep slopes, located in a high-rainfall area, is a unique feature of this project. The design and construction were challenging, the alignment being asymmetrical with curves and bends. The fill was reinforced with high-strength uniaxial geogrids laid perpendicular to the slope. Weld mesh wrapped with coir mat acted as the facia units to protect the slope against surface failure. Face anchorage was also provided by wrapping the geogrids along the facia units where the slope angle was steeper than 45 degrees. Considering the high rainfall received at this tabletop airport site, an extensive drainage system was designed for the high embankment fill. Gabion walls up to 10 m high were also designed and constructed along the boundary to accommodate the toe of the RSS fill beside the jeepable track at the base level. The design of the RSS fill was done using ReSSA software and verified by PLAXIS 2D modeling. Both slip-surface and wedge failure cases were considered, in static and seismic analyses, for local and global failure. Site-won excavated laterite soil was used as the fill material for the construction. Extensive field and laboratory tests were conducted during the construction of the RSS system for quality assurance.
This paper presents a case study detailing the design and construction of a very high embankment using geosynthetics for the provision of the runway length and RESA.

Keywords: airport, embankment, gabion, high-strength uniaxial geogrid, KIAL, laterite soil, PLAXIS 2D

Procedia PDF Downloads 150
5980 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties

Authors: G. Martino, F. Silva, E. Marchal

Abstract:

Control over the characteristics of the delivered iron ore blend is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process, so end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made without coordination, the beneficiation plant struggles to deliver the best blend possible. Technological improvements in several fields have allowed bridging the gap between departments and boosting integrated decision-making processes. Clusterization and classification algorithms over historical production data generate reasonable predictions of the quality and volume of iron ore produced for each pile of run-of-mine (ROM) processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions on the process. This study shows how flexibilization, combined with a planning optimization model between the mine and the ore beneficiation processes, can reduce the risk of out-of-specification deliveries. The model's capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing cost or increasing profit by optimizing process indicators across the production chain and integrating the different plans with the sales decisions.
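The blending idea behind such an optimization model can be shown in its simplest form: choosing the proportions of ROM piles so the delivered grade meets a client specification. The two-pile lever rule below is a toy version of the full mathematical model; the pile grades and target are hypothetical.

```python
def blend_fraction(grade_a, grade_b, target):
    """Mass fraction of pile A such that the two-pile blend hits the target
    grade (lever rule); raises if the target is outside the attainable
    range, i.e., an out-of-specification delivery would be unavoidable."""
    lo, hi = sorted((grade_a, grade_b))
    if not lo <= target <= hi:
        raise ValueError("target grade not attainable from these piles")
    return (target - grade_b) / (grade_a - grade_b)

# Hypothetical piles: 64.5% Fe and 60.0% Fe, client spec 62.0% Fe
x = blend_fraction(grade_a=64.5, grade_b=60.0, target=62.0)
```

The full model generalizes this to many piles, multiple quality variables (Fe, silica, alumina, moisture), and a delivery schedule, which is where mathematical programming replaces the closed-form rule.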

Keywords: clusterization and classification algorithms, integrated planning, mathematical modeling, optimization, penalty minimization

Procedia PDF Downloads 117
5979 Finite Element Modeling of Global Ti-6Al-4V Mechanical Behavior in Relationship with Microstructural Parameters

Authors: Fatna Benmessaoud, Mohammed Cheikh, Vencent Velay, Vanessa Vedal, Farhad Rezai-Aria, Christine Boher

Abstract:

The global mechanical behavior of materials is strongly linked to their microstructure, especially their crystallographic texture and grain morphology. These material aspects determine the character of the mechanical fields (heterogeneous or homogeneous) and thus give the global behavior a degree of anisotropy according to the initial microstructure. For these reasons, the prediction of the global behavior of materials in relation to their microstructure must be performed with a multi-scale approach; multi-scale modeling in the context of crystal plasticity is therefore widely used. In the present contribution, a phenomenological elasto-viscoplastic model developed in the crystal plasticity framework, together with the finite element method, is used to investigate the effects of crystallographic texture and grain size on the global behavior of a polycrystalline equiaxed Ti-6Al-4V alloy. The constitutive equations of this model are written at the local scale for each slip system within each grain, while the strain and stress mechanical fields are investigated at the global scale via a finite element scale transition. The beta phase of the modeled Ti-6Al-4V alloy is neglected, as its fraction is less than 10%. Three families of slip systems of the alpha phase are considered: the basal and prismatic families with an <a> Burgers vector and the pyramidal family with a <c+a> Burgers vector. The twinning mechanism of plastic strain is not observed in Ti-6Al-4V and is therefore not considered in the present modeling. Nine representative elementary volumes (REV) are generated with Voronoi tessellations. For each individual equiaxed grain, its own crystallographic orientation with respect to the loading is taken into account. The meshing strategy is optimized so as to eliminate meshing effects and, at the same time, to allow calculating the individual grain size. The stress and strain fields are determined at each Gauss point of the mesh elements.
A post-treatment is used to calculate the local behavior (in each grain), and then, by appropriate homogenization, the macroscopic behavior is calculated. The developed model is validated by comparing the numerical simulation results with experimental data reported in the literature. The present model is able to predict the global mechanical behavior of the Ti-6Al-4V alloy and to investigate the effects of the microstructural parameters. According to the simulations performed on the generated volumes (REV), the macroscopic mechanical behavior of Ti-6Al-4V is strongly linked to the active slip system family (prismatic, basal, or pyramidal). The crystallographic texture determines which family of slip systems can be activated; it therefore gives the plastic strain a heterogeneous character and hence an anisotropic macroscopic mechanical behavior. The average grain size also influences the mechanical properties of Ti-6Al-4V, especially the yield stress: as the average grain size decreases, the yield strength increases according to the Hall-Petch relationship. The grain size distribution gives the strain fields considerable heterogeneity: with increasing grain size, scattering in the localization of plastic strain is observed, and in certain areas the stress concentrations are stronger than in other regions.
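The grain-size effect invoked above follows the Hall-Petch relation, sigma_y = sigma_0 + k / sqrt(d). The sketch below uses illustrative constants, not fitted Ti-6Al-4V values from this work.

```python
import math

def hall_petch(sigma0, k, d):
    """Hall-Petch relation: yield stress sigma_y = sigma_0 + k / sqrt(d),
    rising as the average grain size d decreases.
    sigma0 in MPa, k in MPa*sqrt(m), d in metres (illustrative units)."""
    return sigma0 + k / math.sqrt(d)

# Illustrative constants only: halving the average grain size from 20 to
# 10 micrometres raises the predicted yield stress.
coarse = hall_petch(sigma0=800.0, k=0.5, d=20e-6)
fine = hall_petch(sigma0=800.0, k=0.5, d=10e-6)
```

In the crystal-plasticity simulations, this trend emerges from the grain-level constitutive response rather than being imposed; the relation is how the homogenized yield stresses of the nine REVs can be checked against expectation.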

Keywords: microstructural parameters, multi-scale modeling, crystal plasticity, Ti-6Al-4V alloy

Procedia PDF Downloads 115
5978 The Design Optimization for Sound Absorption Material of Multi-Layer Structure

Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Tae-Hyeon Oh, Dae-Kyu Park

Abstract:

Sound-absorbing materials are used as automotive interior materials, and their sound absorption coefficient must be predicted at the design stage. This prediction is difficult, however, because such a material comprises several layers; in practice, performance targets are reached through repeated experimental tuning, which costs considerable time and money. In this paper, we propose a process to estimate the sound absorption coefficient of a multi-layer structure. The estimation uses the physical properties of each material, which are themselves predicted with the Foam-X software from sound absorption coefficient data measured in an impedance tube. Because there are many physical properties and the measurement equipment is expensive, the software-predicted values are used: by measuring the sound absorption coefficient of each material, its physical properties are calculated inversely. The properties of each material are then used to calculate the sound absorption coefficient of the multi-layer material. Since this coefficient can be calculated, design optimization becomes possible through simulation. We then compare and analyze the calculated sound absorption coefficient with data measured for a prototype in a scaled reverberation chamber and in impedance tubes. If this method is used when developing automotive interior materials with a multi-layer structure, the development effort can be reduced because the design can be optimized by simulation, saving cost and time.
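One common way to carry out the layer-by-layer calculation described above is the transfer-matrix (transmission-line) method, with each porous layer represented by an equivalent-fluid model. The sketch below substitutes the empirical Delany-Bazley model for the Foam-X-characterized properties, so the flow-resistivity and thickness values are illustrative assumptions only, not the authors' data.

```python
import cmath
import math

RHO0, C0 = 1.204, 343.0   # density of air (kg/m^3), speed of sound (m/s)
Z_AIR = RHO0 * C0         # characteristic impedance of air (Pa*s/m)

def delany_bazley(f, sigma):
    """Characteristic impedance and wavenumber of a porous layer
    (Delany-Bazley empirical model; sigma = airflow resistivity, N*s/m^4)."""
    X = RHO0 * f / sigma
    Zc = Z_AIR * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    kc = (2 * math.pi * f / C0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    return Zc, kc

def absorption(f, layers):
    """Normal-incidence absorption coefficient of a rigidly backed stack.
    layers: list of (flow_resistivity, thickness_m), front layer first."""
    Z = None                            # None marks the rigid backing wall
    for sigma, d in reversed(layers):   # recurse from the wall outwards
        Zc, kc = delany_bazley(f, sigma)
        t = cmath.tan(kc * d)
        if Z is None:
            Z = -1j * Zc / t            # layer sitting directly on the wall
        else:
            # transmission-line recursion for the surface impedance
            Z = Zc * (Z + 1j * Zc * t) / (Zc + 1j * Z * t)
    R = (Z - Z_AIR) / (Z + Z_AIR)       # pressure reflection coefficient
    return 1 - abs(R) ** 2
```

Sweeping the frequency and the layer thicknesses with such a routine yields the simulated absorption curve that can then be compared against impedance-tube and reverberation-chamber measurements of a prototype.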

Keywords: sound absorption material, sound impedance tube, sound absorption coefficient, optimization design

Procedia PDF Downloads 277
5977 An Investigation into the Impacts of High-Frequency Electromagnetic Fields Utilized in the 5G Technology on Insects

Authors: Veriko Jeladze, Besarion Partsvania, Levan Shoshiashvili

Abstract:

This paper addresses a highly topical issue. The frequency range of 2.5-100 GHz contains frequencies that are already used or will be used in modern 5G technologies, and the corresponding wavelengths are close to the body dimensions of small biological objects, particularly insects. Because the dimensions of insect bodies and body parts are comparable with the wavelength at these frequencies, high absorption of EMF energy in the body tissues can occur (body resonance), which can cause harmful effects, possibly the extinction of some species. Investigating the impact of the radio-frequency non-ionizing electromagnetic fields (EMFs) used in future 5G networks on insects is of great importance, as the very large number of 5G network components will increase the total EMF exposure in the environment. All ecosystems of the earth are interconnected: if one component of an ecosystem is disrupted, the whole system is affected, which could cause cascading effects. Studying these problems is an important challenge for scientists today because the existing studies are incomplete and insufficient. Consequently, the purpose of this proposed research is to investigate the possible hazardous impact of RF EMFs (including 5G EMFs) on insects. The project will study the effects of these fields on insects of different body sizes through computer modeling at frequencies from 2.5 to 100 GHz; the selected insects are the honey bee, the wasp, and the ladybug. For this purpose, detailed 3D discrete models of the insects are created for electromagnetic and thermal modeling with the FDTD method, and whole-body specific absorption rates (SAR) will be evaluated at the selected frequencies. These studies are novel. The proposed work will promote new investigations into the bio-effects of 5G EMFs and will contribute to the harmonization of safe exposure levels and frequencies for 5G EMFs.
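Once an FDTD solve has produced steady-state fields on a voxelized insect model, the whole-body SAR follows from the local relation SAR = σ|E_rms|²/ρ, with the total absorbed power divided by the total body mass. A minimal post-processing sketch is shown below; the field, conductivity, and density arrays stand in for real FDTD output, and all numbers in the example are illustrative.

```python
import numpy as np

def whole_body_sar(E_rms, sigma, rho, dx):
    """Whole-body SAR (W/kg) = total absorbed power / total body mass,
    from the local relation SAR = sigma * |E_rms|^2 / rho per voxel.

    E_rms : RMS electric-field magnitude per voxel (V/m)
    sigma : tissue conductivity per voxel (S/m)
    rho   : tissue mass density per voxel (kg/m^3); zero in air voxels
    dx    : edge length of a cubic voxel (m)
    """
    body = rho > 0                                   # tissue voxels only
    power = np.sum(sigma[body] * E_rms[body] ** 2) * dx**3   # watts
    mass = np.sum(rho[body]) * dx**3                          # kilograms
    return power / mass

# Toy example: a uniform 2x2x2-voxel "body" in otherwise empty space.
E = np.full((2, 2, 2), 50.0)      # V/m
s = np.full((2, 2, 2), 1.0)       # S/m
r = np.full((2, 2, 2), 1000.0)    # kg/m^3
print(whole_body_sar(E, s, r, 1e-3))  # uniform case: sigma*E^2/rho = 2.5 W/kg
```

In practice the conductivity and density arrays would be assigned per tissue type from the segmented 3D insect model, and the computation repeated at each exposure frequency of interest.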

Keywords: electromagnetic field, insect, FDTD, specific absorption rate (SAR)

Procedia PDF Downloads 79