Search results for: computational lexicography
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2035

145 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originated from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textile, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are interesting to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal sizes. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of the crystals by area size and perimeter. This methodological process resulted in a high segmentation capacity for graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since the test set includes significant variations in image acquisition quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee lower measurement error without greater data-handling effort. All in all, the developed method is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
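A minimal sketch of the crystal-measurement stage described above, assuming a binary mask already produced by a trained U-net; the scikit-image calls are standard, but the scale factor and dictionary fields are illustrative.

```python
# Sketch: delimit and measure crystals in a binary segmentation mask.
import numpy as np
from skimage import measure

def characterize_crystals(mask: np.ndarray, nm_per_px: float = 1.0):
    """Label connected crystals and extract position, area, perimeter,
    and lateral (bounding-box) measures for each one."""
    labels = measure.label(mask > 0)          # distinguish crystals with different labels
    crystals = []
    for region in measure.regionprops(labels):
        minr, minc, maxr, maxc = region.bbox
        crystals.append({
            "centroid": tuple(c * nm_per_px for c in region.centroid),
            "area": region.area * nm_per_px ** 2,
            "perimeter": region.perimeter * nm_per_px,
            "width": (maxc - minc) * nm_per_px,
            "height": (maxr - minr) * nm_per_px,
        })
    return crystals

# The frequency-distribution plots then follow from, e.g.:
# areas = [c["area"] for c in characterize_crystals(mask, nm_per_px=2.5)]
```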

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 158
144 Numerical and Experimental Investigation of Air Distribution System of Larder Type Refrigerator

Authors: Funda Erdem Şahnali, Ş. Özgür Atayılmaz, Tolga N. Aynur

Abstract:

Almost all domestic refrigerators operate on the principle of the vapor compression refrigeration cycle, and removal of heat from the refrigerator cabinets is done via one of two methods: natural convection or forced convection. In this study, airflow and temperature distributions inside a 375 L no-frost type larder cabinet, in which cooling is provided by forced convection, are evaluated both experimentally and numerically. Airflow rate, compressor capacity, and temperature distribution in the cooling chamber are known to be some of the most important factors that affect the cooling performance and energy consumption of a refrigerator. The objective of this study is to evaluate the original temperature distribution in the larder cabinet and to investigate system optimizations that could provide a more uniform temperature distribution throughout the refrigerator domain. The flow visualization and airflow velocity measurements inside the original refrigerator are performed via Stereoscopic Particle Image Velocimetry (SPIV). In addition, airflow and temperature distributions are investigated numerically with Ansys Fluent. In order to study the heat transfer inside the aforementioned refrigerator, forced convection theory is applied to a closed rectangular cavity representing the refrigerating compartment. The cavity volume is represented with finite volumes and solved computationally with the appropriate momentum and energy (Navier-Stokes) equations. The 3D model is analyzed as transient, with the k-ε turbulence model and SIMPLE pressure-velocity coupling for the turbulent flow. The results obtained with the 3D numerical simulations are in quite good agreement with the experimental airflow measurements using the SPIV technique. After Computational Fluid Dynamics (CFD) analysis of the baseline case, the effects of three parameters (compressor capacity, fan rotational speed, and type of shelf, glass or wire) on the energy consumption, pull-down time, and temperature distribution in the cabinet are studied. For each case, energy consumption is calculated based on experimental results. After the analysis, the main parameters affecting temperature distribution inside the cabinet and energy consumption are determined based on the CFD simulations, and the simulation results are supplied to a Design of Experiments (DOE) study as input data for optimization. The best configuration with minimum energy consumption that provides the minimum temperature difference between the shelves inside the cabinet is determined.

Keywords: air distribution, CFD, DOE, energy consumption, experimental, larder cabinet, refrigeration, uniform temperature

Procedia PDF Downloads 108
143 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus

Authors: Yesim Tumsek, Erkan Celebi

Abstract:

In order to simulate the infinite soil medium in the soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the value of the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loads makes it difficult to estimate the accurate value in the computation of foundation stiffness for a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use the appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation of shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Herein, the impedance functions consist of springs and dashpots that represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strengths, should be considered in the calculation of impedance functions to achieve a more realistic dynamic soil-foundation interaction model. In this study, a MATLAB code has been written to address these purposes. The case-study example chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames, with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soil with different plasticity (herein, PI = 13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm. The static stiffnesses, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, are obtained for the translational and rocking vibrational modes. Afterwards, their dynamic impedance functions have been calculated for the reduced shear modulus through the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. It is easy to see from the analysis results that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that when the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at the corrected dynamic shear modulus for earthquake analysis including soil-structure interaction.
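As a hedged illustration of the impedance idea above: static stiffnesses of a rigid rectangular surface footing on a half-space from the Pais & Kausel (1988) expressions, degraded by a strain-dependent modulus reduction factor G/G0. The coefficients are quoted from memory and the inputs are only loosely based on the 2 m x 17 m footing of the case study; treat this as a sketch, not the authors' MATLAB code.

```python
# Sketch: footing stiffness with a reduced shear modulus (half-dimensions B <= L).
def footing_stiffness(G0, nu, B, L, g_over_g0=1.0):
    """G0: small-strain shear modulus [Pa]; B, L: half-width/half-length [m]."""
    G = G0 * g_over_g0                                           # reduced shear modulus
    Kz = (G * B / (1.0 - nu)) * (3.1 * (L / B) ** 0.75 + 1.6)    # vertical
    Ky = (G * B / (2.0 - nu)) * (6.8 * (L / B) ** 0.65 + 2.4)    # horizontal
    return Kz, Ky

# Effect of strain-dependent degradation, e.g. G/G0 = 0.35 for strong shaking:
Kz_small, _ = footing_stiffness(G0=60e6, nu=0.45, B=1.0, L=8.5)
Kz_large, _ = footing_stiffness(G0=60e6, nu=0.45, B=1.0, L=8.5, g_over_g0=0.35)
print(f"vertical stiffness drops by {100 * (1 - Kz_large / Kz_small):.0f}%")
```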

Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus

Procedia PDF Downloads 266
142 Evaluation of Suspended Particles Impact on Condensation in Expanding Flow with Aerodynamic Waves

Authors: Piotr Wisniewski, Sławomir Dykas

Abstract:

Condensation has a negative impact on turbomachinery efficiency in many energy processes. In technical applications, it is often impossible to dry the working fluid at the nozzle inlet. One of the most popular working fluids is atmospheric air, which always contains water in the form of vapour, liquid, or ice crystals. Moreover, it always contains some amount of suspended particles which influence the phase change process. It is known that the phenomena of evaporation and condensation are connected with the release or absorption of latent heat, which influences the fluid's physical properties and may affect machinery efficiency; therefore, the phase transition has to be taken into account. This research presents an attempt to evaluate the impact of solid and liquid particles suspended in the air on the expansion of moist air at a low expansion rate, i.e., P≈1000 s⁻¹. A numerical study supported by analytical and experimental research is presented in this work. The experimental study was carried out using an in-house experimental test rig, in which a nozzle was examined for inlet air relative humidity values in the range of 25 to 51%. The nozzle was tested for supersonic flow as well as for flow with shock waves induced by elevated back pressure. The Schlieren photography technique and measurements of static pressure on the nozzle wall were used for qualitative identification of both condensation and shock waves. A numerical model validated against experimental data available in the literature was used for the analysis of the occurring flow phenomena. The analysis of the number, diameter, and character (solid or liquid) of the suspended particles revealed their connection with the importance of heterogeneous condensation. If the expansion of a fluid without suspended particles is considered, condensation triggers a so-called condensation wave that appears downstream of the nozzle throat. If solid particles are considered, then with an increasing number of them condensation is triggered upstream of the nozzle throat, decreasing the condensation wave strength. Due to the release of latent heat during condensation, the fluid temperature and pressure increase, leading to a shift of the normal shock upstream. Owing to the relatively large diameters of the droplets created during heterogeneous condensation, they evaporate partially at the shock and continue to evaporate downstream of the nozzle. If liquid water particles are considered, due to their larger radius they do not affect the expanding flow significantly; however, they may be of major importance when considering compression phenomena, as they tend to evaporate at the shock wave. This research proves the need for further study of phase change phenomena in supersonic flow, especially considering the interaction of droplets with the aerodynamic waves in the flow.
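For reference, the expansion rate quoted above is commonly defined as P = -(1/p)(dp/dt) along a fluid particle's path; the snippet below evaluates it on a synthetic pressure trace (the trace itself is illustrative).

```python
# Sketch: expansion rate P = -(1/p) dp/dt from a pressure history.
import numpy as np

t = np.linspace(0.0, 2e-3, 200)          # time along the particle path [s]
p = 1.0e5 * np.exp(-1000.0 * t)          # synthetic trace built so that P = 1000 1/s
P = -np.gradient(p, t) / p               # expansion rate [1/s]
print(f"mean expansion rate ~ {P.mean():.0f} 1/s")
```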

Keywords: aerodynamics, computational fluid dynamics, condensation, moist air, multi-phase flows

Procedia PDF Downloads 116
141 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications

Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini

Abstract:

This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and onboard processing capabilities. The space mission errors and uncertainties are summarized in categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding in real time the solution to constrained optimization problems, computational aspects are also examined. In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators based on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications that could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
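To make the generic constrained MPC formulation concrete, here is a hedged sketch in CVXPY using a double-integrator stand-in for the linearized relative dynamics; the weights, bounds, and horizon are illustrative, not the paper's values.

```python
# Sketch: one MPC solve for an approach maneuver with input and state constraints.
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20                               # step [s], prediction horizon
A = np.array([[1.0, dt], [0.0, 1.0]])         # state: [range, range-rate]
B = np.array([[0.5 * dt**2], [dt]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([100.0, 0.0])                   # 100 m from the target, at rest

cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[:, k]) <= 0.5,        # physical (actuator) constraint
               x[0, k] >= 0.0]                # geometric constraint: no overshoot
cp.Problem(cp.Minimize(cost), constr).solve()
print("first commanded acceleration:", u.value[:, 0])
# in closed loop, only this first input is applied and the problem is re-solved
```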

Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy

Procedia PDF Downloads 110
140 Energy Atlas: Geographic Information Systems-Based Energy Analysis and Planning Tool

Authors: Katarina Pogacnik, Ursa Zakrajsek, Nejc Sirk, Ziga Lampret

Abstract:

Due to an increase in living standards along with global population growth and a trend of urbanization, municipalities and regions are faced with an ever-rising energy demand. A challenge has arisen for cities around the world to modify the energy supply chain in order to reduce its consumption and CO₂ emissions. The aim of our work is the development of a computational-analytical platform for dynamic support in decision-making and the determination of economic and technical indicators of energy efficiency in a smart city, named Energy Atlas. Similar products in this field focus on a narrower approach, whereas in order to achieve its aim, this platform encompasses a wider spectrum of beneficial and important information for energy planning on a local or regional scale. GIS-based interactive maps provide an extensive database on the potential, use and supply of energy and renewable energy sources along with climate, transport and spatial data of the selected municipality. Beneficiaries of Energy Atlas are local communities, companies, investors, contractors as well as residents. The Energy Atlas platform consists of three modules named E-Planning, E-Indicators and E-Cooperation. The E-Planning module is a comprehensive data service, which supports optimal decision-making and offers a set of solutions, together with the feasibility of measures and their effects, in the area of efficient use of energy and renewable energy sources. The E-Indicators module identifies, collects and develops optimal data and key performance indicators and provides an analytical application service for dynamic support in managing a smart city in regard to energy use and a sustainable environment. In order to support cooperation and direct involvement of citizens of the smart city, the E-Cooperation module is developed with the purpose of integrating the interdisciplinary and sociological aspects of energy end-users. Interaction of all the above-described modules contributes to regional development because it enables a precise assessment of the current situation, strategic planning, detection of potential future difficulties and also the possibility of public involvement in decision-making. The implementation of the technology in the Slovenian municipalities of Ljubljana, Piran, and Novo mesto suggests that the set goals are being achieved to a great extent. Such a thorough urban energy planning tool is viewed as an important piece of the puzzle towards achieving a low-carbon society, a circular economy and, therefore, a sustainable society.

Keywords: circular economy, energy atlas, energy management, energy planning, low-carbon society

Procedia PDF Downloads 304
139 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of these SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, that is, at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the FDTD (Finite-Difference Time-Domain) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for such spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrogging of the Yee algorithm; however, it is known that for the field update, the stability of the complete FDTD procedure depends on factors other than just the stability of the Yee algorithm, because the FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are Absorbing Boundary Conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation over ultra-wide frequency ranges. The models of the resistive source, the resistor, the capacitor, the inductor, and the diode are evaluated, among the mathematical models for lumped components in the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) method, through a parametric analysis of the size of the Yee cells which discretize the lumped components. In this way, the aim is to find an ideal cell size so that the analysis in the FDTD environment agrees more closely with the expected circuit behavior while maintaining the stability conditions of this method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the computational implementation of the models is carried out in a Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The validation of the model is done through a comparison between the results obtained by the FDTD method (the electric field values and the currents in the components) and the analytical results using circuit parameters.
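A hedged 1-D sketch of the LE-FDTD idea: a standard Ez/Hy Yee leapfrog in which one cell is overwritten with the semi-implicit lumped-resistor update (I = V/R folded into the E-field equation). The transverse cell area and all dimensions are illustrative, and simple PEC ends stand in for the Mur ABCs used in the paper.

```python
# Sketch: 1-D FDTD with one lumped-resistor cell.
import numpy as np

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nz, dz = 400, 1e-3
dt = dz / (2 * c0)                            # half the Courant limit of the Yee scheme
R, k_res = 50.0, 300                          # 50-ohm resistor at cell k_res
A = dz * dz                                   # illustrative cell cross-section
beta = dt * dz / (2 * R * eps0 * A)           # semi-implicit resistor factor

Ez, Hy = np.zeros(nz), np.zeros(nz - 1)
for n in range(2000):
    Hy += dt / (mu0 * dz) * np.diff(Ez)       # H leapfrog half-step
    curl = np.diff(Hy)                        # dHy/dz at interior E nodes
    ez_old = Ez[k_res]
    Ez[1:-1] += dt / (eps0 * dz) * curl       # ordinary E update (PEC ends stay 0)
    Ez[k_res] = ((1 - beta) / (1 + beta)) * ez_old \
        + dt / (eps0 * dz * (1 + beta)) * curl[k_res - 1]   # lumped-resistor cell
    Ez[50] += np.exp(-((n - 80) / 20.0) ** 2) # soft Gaussian source
```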

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 151
138 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating the parameters representing the engine operating conditions with respect to the measured NOx, which limits the prediction of purely empirical models to the region in which they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed using steady-state data collected across the entire operating region of the engine and a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered in the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea level and altitude conditions with different ambient conditions. The advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, the focus of this work is towards establishing a framework for future model development for various other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
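As a hedged illustration of the statistical step above, a stacked ensemble in scikit-learn mapping in-cylinder quantities to NOx; the file name and feature columns are invented placeholders for the combustion-model outputs, not the authors' actual data schema.

```python
# Sketch: ensemble regression of NOx on physical combustion features.
import pandas as pd
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("steady_state_points.csv")          # hypothetical dataset
features = ["burned_zone_temp", "unburned_zone_temp",
            "o2_conc_burned_zone", "trapped_fuel_mass", "egr_rate"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["nox_ppm"], test_size=0.2, random_state=0)

model = StackingRegressor(                            # ensemble of individual learners
    estimators=[("gbm", GradientBoostingRegressor(random_state=0)),
                ("rf", RandomForestRegressor(random_state=0))],
    final_estimator=Ridge())
model.fit(X_tr, y_tr)
print("R^2 on held-out sweeps:", r2_score(y_te, model.predict(X_te)))
```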

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 113
137 Evolutionary Advantages of Loneliness with an Agent-Based Model

Authors: David Gottlieb, Jason Yoder

Abstract:

The feeling of loneliness is not uncommon in modern society, and yet there is a fundamental lack of understanding of its origins and purpose in nature. One interpretation of loneliness is that it is a subjective experience that punishes a lack of social behavior, and thus its emergence in human evolution is seemingly tied to the survival of early human tribes. Still, a common counterintuitive response to loneliness is a state of hypervigilance, resulting in social withdrawal, which may appear maladaptive to modern society. So far, no computational model of loneliness's effect during evolution exists; however, agent-based models (ABM) can be used to investigate social behavior, and applying evolution to agents' behaviors can demonstrate selective advantages for particular behaviors. We propose an ABM where each agent contains four social behaviors and one goal-seeking behavior, letting evolution select the best behavioral patterns for resource allocation. In our paper, we use an algorithm similar to the boid model to guide the behavior of agents, but expand the set of rules that govern their behavior. While we use cohesion, separation, and alignment for simple social movement, our expanded model adds goal-oriented behavior, which is inspired by particle swarm optimization, such that agents move relative to their personal best position. Since agents are given the ability to form connections by interacting with each other, our final behavior guides agent movement toward their social connections. Finally, we introduce a mechanism to represent a state of loneliness, which engages when an agent's perceived social involvement does not meet its expected social involvement. This enables us to investigate a minimal model of loneliness, and using evolution we attempt to elucidate its value in human survival. Agents are placed in an environment in which they must acquire resources, as their fitness is based on the total resources collected. With these rules in place, we are able to run evolution under various conditions, including resource-rich environments and environments where disease is present. Our simulations indicate that there is strong selection pressure for social behavior under circumstances where there is a clear discrepancy between initial resource locations, and against social behavior when disease is present, mirroring hypervigilance. This not only provides an explanation for the emergence of loneliness but also reflects the diversity of responses to loneliness in the real world. In addition, there is evidence of a richness of social behavior when loneliness was present. By introducing just two resource locations, we observed a divergence in social motivation after agents became lonely, where one agent learned to move toward the other, which was in a better resource position. The results and ongoing work from this project show that it is possible to glean insight into the evolutionary advantages of even simple mechanisms of loneliness. The model we developed has produced unexpected results and has led to more questions, such as the impact loneliness would have at a larger scale, or the effect of creating a set of rules governing interaction beyond adjacency.
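A toy sketch of the loneliness mechanism described above: each agent compares its perceived social involvement (connections within a radius) with an evolvable expectation and becomes lonely when perception falls short. Thresholds and the movement rule are illustrative; the full model adds the boid terms and the PSO-style personal-best term.

```python
# Sketch: the loneliness trigger of the agent-based model.
import numpy as np

rng = np.random.default_rng(0)
n_agents = 30
pos = rng.uniform(0, 100, size=(n_agents, 2))
expected = rng.integers(1, 6, size=n_agents)    # evolvable expectation per agent

def step(pos, radius=10.0):
    lonely = np.zeros(n_agents, dtype=bool)
    for i in range(n_agents):
        d = np.linalg.norm(pos - pos[i], axis=1)
        perceived = np.count_nonzero((d > 0) & (d < radius))  # neighbours in range
        lonely[i] = perceived < expected[i]     # loneliness engages here
    pos[lonely] += 0.1 * (pos.mean(axis=0) - pos[lonely])     # seek connection
    return pos, lonely

pos, lonely = step(pos)
print(f"{lonely.sum()} of {n_agents} agents are lonely this step")
```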

Keywords: agent-based, behavior, evolution, loneliness, social

Procedia PDF Downloads 94
136 Online Allocation and Routing for Blood Delivery in Conditions of Variable and Insufficient Supply: A Case Study in Thailand

Authors: Pornpimol Chaiwuttisak, Honora Smith, Yue Wu

Abstract:

Blood is a perishable product which suffers physical deterioration and has a specific fixed shelf life. Although its value during the shelf life is constant, fresh blood is preferred for treatment. However, transportation costs are a major factor to be considered by administrators of Regional Blood Centres (RBCs), which act as blood collection and distribution centres. A trade-off must therefore be reached between transportation costs and short-term holding costs. In this paper, we propose a number of algorithms for online allocation and routing of blood supplies, for use in conditions of variable and insufficient blood supply. A case study in northern Thailand provides an application of the allocation and routing policies tested. The plan proposed for daily allocation and distribution of blood supplies consists of two components. Firstly, fixed routes are determined for the supply of hospitals which are far from an RBC. Over the planning period of one week, each hospital on the fixed routes is visited once. A robust allocation of blood is made to hospitals on the fixed routes that can be guaranteed on a suitably high percentage of days, despite variable supplies. Secondly, a variable daily route is employed for close-by hospitals, for which more than one visit per week may be needed to fulfil targets. The variable routing takes into account the amount of blood available for each day's deliveries, which is only known on the morning of delivery. For hospitals on the variable routes, the days and amounts of deliveries cannot be guaranteed, but they are designed to attain targets over the six-day planning horizon. In the conditions of blood shortage encountered in Thailand, and commonly in other developing countries, it is often the case that hospitals request more blood than is needed, in the knowledge that only a proportion of all requests will be met. Our proposal is for blood supplies to be allocated and distributed to each hospital according to equitable targets based on historical demand data, calculated with regard to expected daily blood supplies. We suggest several policies that could be chosen by the decision makers for the daily distribution of blood. The different policies provide different trade-offs between transportation and holding costs. Variations in the costs of transportation, such as the price of petrol, could make different policies the most beneficial at different times. We present an application of the policies to a realistic case study of the RBC in Chiang Mai province, which is located in the northern region of Thailand. The analysis includes a total of more than 110 hospitals, with 29 hospitals considered in the variable route. The study is expected to be a pilot for other regions of Thailand. Computational experiments are presented. Concluding remarks include the benefits gained by the online methods and future recommendations.
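A minimal sketch of the equitable-target allocation rule described above: the day's available units, known only on the morning of delivery, are split in proportion to targets derived from historical demand. Hospital names and numbers are illustrative.

```python
# Sketch: proportional allocation of today's blood supply to hospital targets.
def allocate(supply_today: int, targets: dict) -> dict:
    total = sum(targets.values())
    shares = {h: supply_today * t / total for h, t in targets.items()}
    alloc = {h: int(s) for h, s in shares.items()}
    leftover = supply_today - sum(alloc.values())
    # hand out units lost to rounding, largest fractional share first
    for h in sorted(shares, key=lambda h: shares[h] % 1, reverse=True)[:leftover]:
        alloc[h] += 1
    return alloc

print(allocate(60, {"Hospital A": 30, "Hospital B": 20, "Hospital C": 25}))
# {'Hospital A': 24, 'Hospital B': 16, 'Hospital C': 20}
```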

Keywords: online algorithm, blood distribution, developing country, insufficient blood supply

Procedia PDF Downloads 330
135 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. The modification of such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics, given their effects on the level of vibration detection and reduction and on the amount of energy provided by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled this problem directly, measuring the fitness function based on eigenvalues and eigenvectors achieved with numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for small- and large-scale structures when a number of s/a pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle. The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. It is shown that the resulting optimal sensor locations agree well with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
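A minimal sketch of the sensor-effectiveness ranking described above, with a synthetic voltage matrix standing in for the finite element model's sensor outputs (rows are excited modes, columns are candidate locations).

```python
# Sketch: rank candidate sensor locations by average percentage effectiveness.
import numpy as np

V = np.abs(np.random.default_rng(1).normal(size=(6, 40)))  # 6 modes x 40 sites (synthetic)
effectiveness = 100.0 * V / V.max(axis=1, keepdims=True)   # % of best sensor, per mode
avg_eff = effectiveness.mean(axis=0)                       # average over the six modes
best_sites = np.argsort(avg_eff)[::-1][:6]                 # locations for six s/a pairs
print("chosen sensor/actuator locations:", best_sites)
```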

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 230
134 Computational Code for Solving the Navier-Stokes Equations on Unstructured Meshes Applied to the Leading Edge of the Brazilian Hypersonic Scramjet 14-X

Authors: Jayme R. T. Silva, Paulo G. P. Toro, Angelo Passaro, Giannino P. Camillo, Antonio C. Oliveira

Abstract:

An in-house C++ code has been developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies (Brazil) to estimate the aerothermodynamic properties around the Hypersonic Vehicle Integrated to the Scramjet. In the future, this code will be applied to the design of the Brazilian Scramjet Technological Demonstrator 14-X B. The first step towards accomplishing this objective is to apply the in-house C++ code to the leading edge of a flat plate, simulating the leading edge of the 14-X Hypersonic Vehicle and making it possible to analyze the wave phenomena of the oblique shock and the boundary layer. The development of modern hypersonic space vehicles requires knowledge of the characteristics of hypersonic flows in the vicinity of the leading edge of lifting surfaces. The strong interaction between a shock wave and a boundary layer in a high supersonic, Mach number 4, viscous flow close to the leading edge of the plate, considering the no-slip condition, is numerically investigated. The small slip region is neglected. The study consists of solving the fluid flow equations on unstructured meshes applying the SIMPLE algorithm within the Finite Volume Method. Unstructured meshes are generated by the in-house software 'Modeler', which was developed at the Virtual Engineering Laboratory of the Institute of Advanced Studies, initially for Finite Element problems and, in this work, adapted to the resolution of the Navier-Stokes equations based on the SIMPLE pressure-correction scheme for all-speed flows. The in-house C++ code is based on the two-dimensional Navier-Stokes equations, considering non-steady flow with no body forces, no volumetric heating, and no mass diffusion. Air is considered a calorically perfect gas, with a constant Prandtl number and Sutherland's law for the viscosity. Solutions of the flat plate problem for Mach number 4 include pressure, temperature, density and velocity profiles as well as 2-D contours. Also, the boundary layer thickness, boundary conditions, and mesh configurations are presented. The same problem has been solved with an academic license of the software Ansys Fluent and with another in-house C++ code, which solves the fluid flow equations on structured meshes applying the MacCormack Finite Difference Method, and the results will be compared.
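The gas model stated above fits in a few lines: calorically perfect air with Sutherland's law for viscosity and a constant Prandtl number fixing the conductivity. The reference constants below are the standard values for air; the temperature is only an example.

```python
# Sketch: Sutherland's law and Prandtl-number closure for calorically perfect air.
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Dynamic viscosity of air [Pa.s]."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

gamma, R, Pr = 1.4, 287.0, 0.72
T = 216.0                                # e.g. a static temperature in the Mach 4 flow
mu = sutherland_viscosity(T)
cp = gamma * R / (gamma - 1.0)
k = cp * mu / Pr                         # thermal conductivity from constant Prandtl
print(f"mu = {mu:.3e} Pa.s, k = {k:.4f} W/(m.K)")
```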

Keywords: boundary-layer, scramjet, simple algorithm, shock wave

Procedia PDF Downloads 487
133 Approach on Conceptual Design and Dimensional Synthesis of the Linear Delta Robot for Additive Manufacturing

Authors: Efrain Rodriguez, Cristhian Riano, Alberto Alvares

Abstract:

In recent years, robot manipulators with parallel architectures have been used in additive manufacturing processes (3D printing). These robots have advantages such as speed and lightness that make them suitable for improving the efficiency and productivity of these processes. Consequently, interest in the development of parallel robots for additive manufacturing applications has increased. This article deals with the conceptual design and dimensional synthesis of the linear delta robot for additive manufacturing. Firstly, a methodology based on structured processes for the development of products through the phases of informational design, conceptual design and detailed design is adopted: a) In the informational design phase, the Mudge diagram and the QFD matrix are used to elicit a set of technical requirements and to define the form, functions and features of the robot. b) In the conceptual design phase, functional modeling of the system through an IDEF0 diagram is performed, and the solution principles for the requirements are formulated using a morphological matrix. This phase includes the description of the mechanical, electro-electronic and computational subsystems that constitute the general architecture of the robot. c) In the detailed design phase, a digital model of the robot is drawn in CAD software. A list of commercial and manufactured parts is detailed. Tolerances and adjustments are defined for some parts of the robot structure. The necessary manufacturing processes and tools are also listed, including milling, turning and 3D printing. Secondly, a dimensional synthesis method applied to the design of the linear delta robot is presented. One of the most important factors in the design of a parallel robot is the useful workspace, which strongly depends on the joint space, the dimensions of the mechanism bodies and the possible interferences between these bodies. The objective function is based on the verification of the kinematic model for a prescribed cylindrical workspace, considering geometric constraints that could lead to singularities of the mechanism. The aim is to determine the minimum dimensional parameters of the mechanism bodies for the proposed workspace. A method based on genetic algorithms was used to solve this problem. The method uses a cloud of points with the cylindrical shape of the workspace and checks the kinematic model for each of the points within the cloud. The evolution of the population provides the optimal parameters for the design of the delta robot. The development process of the linear delta robot with optimal dimensions for additive manufacturing is presented. The dimensional synthesis enabled the design of the delta robot mechanism as a function of the prescribed workspace. Finally, the implementation of the robotic platform, based on a linear delta robot, in an additive manufacturing application using the Fused Deposition Modeling (FDM) technique is presented.
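A hedged sketch of the synthesis loop above: fill the prescribed cylindrical workspace with a point cloud, test a (deliberately simplified) reachability model at every point, and score candidate dimensions so that feasibility dominates and smaller arms/rails win. The feasibility test is a placeholder, not the full linear-delta kinematic model, and only the selection step of a GA-style search is shown.

```python
# Sketch: point-cloud workspace check inside a GA-style dimensional synthesis.
import numpy as np

rng = np.random.default_rng(0)

def workspace_cloud(radius, height, n=500):
    r = radius * np.sqrt(rng.uniform(size=n))          # uniform over the disc
    th = rng.uniform(0, 2 * np.pi, size=n)
    z = rng.uniform(0, height, size=n)
    return np.stack([r * np.cos(th), r * np.sin(th), z], axis=1)

def reachable(p, arm_len, rail_len):
    """Placeholder test: arm reach and carriage travel on a vertical rail."""
    if arm_len**2 < p[0]**2 + p[1]**2:
        return False
    carriage_z = p[2] + np.sqrt(arm_len**2 - p[0]**2 - p[1]**2)
    return 0.0 <= carriage_z <= rail_len

def fitness(params, cloud):
    arm_len, rail_len = params
    misses = sum(not reachable(p, arm_len, rail_len) for p in cloud)
    return misses * 1e3 + arm_len + rail_len           # feasibility first, then size

cloud = workspace_cloud(radius=0.15, height=0.20)      # prescribed cylinder [m]
pop = rng.uniform([0.1, 0.2], [0.5, 0.6], size=(40, 2))  # candidate (arm, rail) pairs
best = min(pop, key=lambda q: fitness(q, cloud))       # GA selection (variation omitted)
print("best candidate [arm, rail] =", best)
```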

Keywords: additive manufacturing, delta parallel robot, dimensional synthesis, genetic algorithms

Procedia PDF Downloads 189
132 Numerical Simulation of the Heat Transfer Process in a Double Pipe Heat Exchanger

Authors: J. I. Corcoles, J. D. Moya-Rico, A. Molina, J. F. Belmonte, J. A. Almendros-Ibanez

Abstract:

One of the most common heat exchanger technologies in engineering processes is the double-pipe heat exchanger (DPHx), used mainly in the food industry. To improve the heat transfer performance, several passive geometrical devices can be used, such as wall corrugation of the tubes, which increases the wetted perimeter while maintaining a constant cross-sectional area, consequently increasing the convective surface area. This contributes to enhanced heat transfer in forced convection by promoting secondary recirculating flows. One of the most widely used tools to analyse heat exchangers' efficiency is computational fluid dynamics (CFD), a complementary activity to experimental studies as well as a previous step in the design of heat exchangers. In this study, the behaviour of a double-pipe heat exchanger with two different inner tubes, smooth and spirally corrugated, has been analysed. Hence, experimental analysis and steady 3-D numerical simulations using the commercial code ANSYS Workbench v. 17.0 are carried out to analyse the influence of geometrical parameters for spirally corrugated tubes in turbulent flow. To validate the numerical results, an experimental setup has been used. To heat up or cool down the fluids as they pass through the heat exchanger, the installation includes heating and cooling loops served by an electric boiler with a heating capacity of 72 kW and a chiller with a cooling capacity of 48 kW. Two tests have been carried out for the smooth tube and two for the corrugated one. In all the tests, the hot fluid has a constant flowrate of 50 l/min and an inlet temperature of 59.5°C. For the cold fluid, the flowrate is 25 l/min (Test 1) or 30 l/min (Test 2), with an inlet temperature of 22.1°C. The heat exchanger is made of stainless steel, with an external diameter of 35 mm and a wall thickness of 1.5 mm. Both inner tubes have an external diameter of 24 mm and a 1 mm stainless steel wall thickness, with a length of 2.8 m. The corrugated tube has a corrugation height (H) of 1.1 mm and a helical pitch (P) of 25 mm. It is characterized using three non-dimensional parameters: the ratio of corrugation height to diameter (H/D), the ratio of helical pitch to diameter (P/D), and the severity index (SI = H²/(P·D)). The results showed good agreement between the numerical and the experimental data; the smallest differences were found for the fluid temperatures. In all the analysed tests and for both tubes, the temperature obtained numerically was slightly higher than the experimental result, with differences ranging between 0.1% and 0.7%. Regarding the pressure drop, the maximum differences between the numerical and experimental values were close to 16%. Based on the experimental and numerical results, it can be highlighted that for the corrugated tube the temperature difference between the inlet and the outlet of the cold fluid is 42% higher than for the smooth tube.
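The three non-dimensional parameters from the abstract, evaluated for the tube actually tested (H = 1.1 mm, P = 25 mm, and the 24 mm external diameter of the inner tube):

```python
# Sketch: corrugation descriptors H/D, P/D and the severity index.
H, P, D = 1.1, 25.0, 24.0            # mm
print(f"H/D = {H / D:.4f}")          # 0.0458
print(f"P/D = {P / D:.3f}")          # 1.042
print(f"SI  = {H**2 / (P * D):.5f}") # severity index H^2/(P.D) = 0.00202
```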

Keywords: corrugated tube, heat exchanger, heat transfer, numerical simulation

Procedia PDF Downloads 144
131 Explaining Irregularity in Music by Entropy and Information Content

Authors: Lorena Mihelac, Janez Povh

Abstract:

In 2017, we conducted a research study using data consisting of 160 musical excerpts from different musical styles to analyze the impact of the entropy of the harmony on the acceptability of music. In measuring the entropy of harmony, we were interested in unigrams (individual chords in the harmonic progression) and bigrams (the connection of two adjacent chords). In that study, it was found that 53 of the 160 musical excerpts were evaluated by participants as very complex, although the entropy of the harmonic progression (unigrams and bigrams) was calculated as low. We explained this by particularities of the chord progression, which affect the listener's feeling of complexity and acceptability. We evaluated the same data twice with new participants in 2018 and with the same participants for the third time in 2019. These three evaluations have shown that the same 53 musical excerpts, found to be difficult and complex in the 2017 study, again elicited a strong feeling of complexity. We proposed that the content of these musical excerpts, defined as “irregular,” does not meet the listener's expectancy and the basic perceptual principles, creating a greater feeling of difficulty and complexity. As the “irregularities” in these 53 musical excerpts seem to be perceived by the participants without their being aware of it, affecting pleasantness and the feeling of complexity, they have been defined as “subliminal irregularities” and the 53 musical excerpts as “irregular.” In our recent study (2019) of the same data used in previous research, we proposed a new measure of the complexity of harmony, “regularity,” based on the irregularities in the harmonic progression and other plausible particularities in the musical structure found in previous studies. In that study, we also proposed a list of 10 different particularities which we assumed impact the participants' perception of complexity in harmony. These ten particularities are tested in this paper by extending the analysis of our 53 irregular musical excerpts from harmony to melody. In examining melody, we used the computational model “Information Dynamics of Music” (IDyOM) and two information-theoretic measures: entropy, the uncertainty of the prediction before the next event is heard, and information content, the unexpectedness of an event in a sequence. In order to describe the features of melody in these musical examples, we used four different viewpoints: pitch, interval, duration, and scale degree. The results show that the texture of the melody (e.g., multiple voices, homorhythmic structure) and the structure of the melody (e.g., huge interval leaps, syncopated rhythm, implied harmony in compound melodies) in these musical excerpts affect the participants' perception of complexity. High information content values were found in compound melodies in which implied harmonies seem to have suggested additional harmonies, affecting the participants' perception of the chord progression in harmony by creating a sense of an ambiguous musical structure.
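A minimal sketch of the unigram/bigram entropy measure described above, applied to a toy chord progression (the Roman-numeral labels are illustrative):

```python
# Sketch: Shannon entropy of chord unigrams and bigrams.
import math
from collections import Counter

def entropy(events):
    counts = Counter(events)
    n = len(events)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

chords = ["I", "IV", "V", "I", "vi", "IV", "V", "I"]
H_uni = entropy(chords)                            # uncertainty of single chords
H_bi = entropy(list(zip(chords, chords[1:])))      # uncertainty of adjacent pairs
print(f"unigram entropy = {H_uni:.3f} bits, bigram entropy = {H_bi:.3f} bits")
```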

Keywords: entropy and information content, harmony, subliminal (ir)regularity, IDyOM

Procedia PDF Downloads 131
130 Inherent Difficulties in Countering Islamophobia

Authors: Imbesat Daudi

Abstract:

Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was made possible by advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows a sine curve; it has three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, ideas are quickly examined in a center in the prefrontal lobe. When an idea is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe; there, it is critically examined. Once it takes a final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective. The dynamics of spreading ideas are different once they are stored in the occipital lobe. The human brain is incapable of further evaluating ideas once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will need a coordinated effort of intellectuals, writers, and the media.

Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam

Procedia PDF Downloads 47
129 Molecular Modeling and Prediction of the Physicochemical Properties of Polyols in Aqueous Solution

Authors: Maria Fontenele, Claude-Gilles Dussap, Vincent Dumouilla, Baptiste Boit

Abstract:

Roquette Frères is a producer of plant-based ingredients that employs many processes to extract relevant molecules and often transforms them through chemical and physical processes to create desired ingredients with specific functionalities. In this context, Roquette encounters numerous multi-component complex systems in their processes, including fibers, proteins, and carbohydrates, in an aqueous environment. To develop, control, and optimize both new and existing processes, Roquette aims to develop new in silico tools. Currently, Roquette uses process modelling tools which include specific thermodynamic models and is willing to develop computational methodologies such as molecular dynamics simulations to gain insight into the interactions in such complex media, especially hydrogen bonding interactions. The issue at hand concerns aqueous mixtures of polyols with high dry matter content. The polyols mannitol and sorbitol are diastereoisomers that have nearly identical chemical structures but very different physicochemical properties: for example, the solubility of sorbitol in water is 2.5 kg/kg of water, while mannitol has a solubility of 0.25 kg/kg of water at 25°C. Therefore, predicting liquid-solid equilibrium properties in this case requires sophisticated solution models that cannot be based solely on chemical group contributions, knowing that for mannitol and sorbitol the chemical constitutive groups are the same. Recognizing the significance of solvation phenomena in polyols, the GePEB (Chemical Engineering, Applied Thermodynamics, and Biosystems) team at Institut Pascal has developed the COSMO-UCA model, which has the structural advantage of using quantum mechanics tools to predict formation and phase equilibrium properties. In this work, we use molecular dynamics simulations to elucidate the behavior of polyols in aqueous solution. Specifically, we employ simulations to compute essential metrics such as radial distribution functions and hydrogen bond autocorrelation functions. Our findings illuminate a fundamental contrast: sorbitol and mannitol exhibit disparate hydrogen bond lifetimes within aqueous environments. This observation serves as a cornerstone in elucidating the divergent physicochemical properties inherent to each compound, shedding light on the nuanced interplay between their molecular structures and water interactions. We also present a methodology to predict the physicochemical properties of complex solutions, taking as sole input the three-dimensional structure of the molecules in the medium. Finally, by developing knowledge models, we represent some physicochemical properties of aqueous solutions of sorbitol and mannitol.
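A hedged sketch of the hydrogen-bond lifetime analysis mentioned above: given a boolean existence matrix h[bond, frame] (True when a donor-acceptor pair is bonded in that trajectory frame), the intermittent autocorrelation C(t) = <h(0)h(t)>/<h> decays more slowly for longer-lived bonds. The matrix here is a synthetic stand-in for data extracted from an MD trajectory.

```python
# Sketch: intermittent hydrogen-bond autocorrelation from an existence matrix.
import numpy as np

def hbond_autocorrelation(h: np.ndarray, max_lag: int) -> np.ndarray:
    h = h.astype(float)
    norm = h.mean()                               # <h>
    c = np.empty(max_lag)
    for t in range(max_lag):
        c[t] = (h[:, : h.shape[1] - t] * h[:, t:]).mean() / norm
    return c

h = np.random.default_rng(2).random((200, 1000)) < 0.3   # synthetic stand-in
print(hbond_autocorrelation(h, max_lag=5))
# a slower decay for sorbitol-water than for mannitol-water pairs would mirror
# the longer hydrogen-bond lifetimes reported above
```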

Keywords: COSMO models, hydrogen bond, molecular dynamics, thermodynamics

Procedia PDF Downloads 41
128 Computational Investigation on Structural and Functional Impact of Oncogenes and Tumor Suppressor Genes on Cancer

Authors: Abdoulie K. Ceesay

Abstract:

Across the whole genome, it is known that 99.9% of the human genome is identical between individuals, whilst our differences lie in just 0.1%. Among these minor dissimilarities, the most common type of genetic variation that occurs in a population is the SNP, which arises from a nucleotide substitution in a protein sequence and can lead to protein destabilization, alterations in dynamics, and distortions of other physicochemical properties. While causing variation, SNPs are equally responsible for differences in the way we respond to a treatment or a disease, including various cancer types. There are two types of SNPs: synonymous single nucleotide polymorphisms (sSNPs) and non-synonymous single nucleotide polymorphisms (nsSNPs). sSNPs occur in the gene coding region without causing a change in the encoded amino acid, while nsSNPs can be deleterious due to the replacement of a nucleotide residue in the gene sequence that results in a change in the encoded amino acid. Predicting the effects of cancer-related nsSNPs on protein stability, function, and dynamics is important due to the significance of the phenotype-genotype association in cancer. In this study, data for 5 oncogenes (ONGs) (AKT1, ALK, ERBB2, KRAS, BRAF) and 5 tumor suppressor genes (TSGs) (ESR1, CASP8, TET2, PALB2, PTEN) were retrieved from ClinVar. Five common in silico tools, PolyPhen, PROVEAN, Mutation Assessor, SuSPect, and FATHMM, were used to predict and categorize nsSNPs as deleterious, benign, or neutral. To understand the impact of each variation on the phenotype, the Maestro, PremPS, CUPSAT, and mCSM-NA in silico structural prediction tools were used. This study comprises an in-depth analysis of the variants of 10 cancer genes downloaded from ClinVar. Various analyses of the genes were conducted to derive a meaningful conclusion from the data. Our research shows that pathogenic and destabilizing variants are more common among ONGs than TSGs. Moreover, our data indicate that ALK (409) and BRAF (86) have the highest benign counts among ONGs, whilst among TSGs, the PALB2 (1308) and PTEN (318) genes have the highest benign counts. Looking at the individual genes' frequencies of pathogenic variants in our data, KRAS (76%), BRAF (55%), and ERBB2 (36%) among ONGs, and PTEN (29%) and ESR1 (17%) among TSGs, have the highest tendencies of causing cancer. The obtained results can shed light on future research in order to pave new frontiers in cancer therapies.
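An illustrative sketch of combining the five predictors named above by majority vote; the per-tool labels are mocked, since each tool is normally queried through its own service, and the consensus rule itself is an assumption rather than the author's stated procedure.

```python
# Sketch: consensus call across in silico nsSNP predictors.
from collections import Counter

def consensus(predictions: dict) -> str:
    """predictions: tool name -> 'deleterious' | 'benign' | 'neutral'."""
    label, votes = Counter(predictions.values()).most_common(1)[0]
    return label if votes > len(predictions) / 2 else "inconclusive"

variant = {"PolyPhen": "deleterious", "PROVEAN": "deleterious",
           "Mutation Assessor": "deleterious", "SuSPect": "neutral",
           "FATHMM": "deleterious"}
print(consensus(variant))   # deleterious (4 of 5 tools agree)
```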

Keywords: tumor suppressor genes (TSGs), oncogenes (ONGs), non synonymous single nucleotide polymorphism (nsSNP), single nucleotide polymorphism (SNP)

Procedia PDF Downloads 84
127 Modeling of Anisotropic Hardening Based on Crystal Plasticity Theory and Virtual Experiments

Authors: Bekim Berisha, Sebastian Hirsiger, Pavel Hora

Abstract:

Advanced material models involving several sets of model parameters require a big experimental effort. As models become more and more complex, such as the so-called Homogeneous Anisotropic Hardening (HAH) model for the description of yielding behavior in the 2D/3D stress space, the number and complexity of the required experiments also increase continuously. In the context of sheet metal forming, these requirements are even more pronounced because of the anisotropic behavior of sheet materials. In addition, some of the experiments are very difficult to perform, e.g., the plane stress biaxial compression test. Accordingly, tensile tests in at least three directions, biaxial tests, and tension-compression or shear-reverse-shear experiments are performed to determine the parameters of the macroscopic models. Therefore, determining the macroscopic model parameters from virtual experiments is a very promising strategy to overcome these difficulties. For this purpose, in the framework of multiscale material modeling, a dislocation-density-based crystal plasticity model in combination with an FFT-based spectral solver is applied to perform virtual experiments. Modeling the plastic behavior of metals based on crystal plasticity theory is a well-established methodology. However, in general, the computation time is very high, and therefore the computations are restricted to simplified microstructures as well as simple polycrystal models. In this study, a dislocation-density-based crystal plasticity model, including an implementation of the backstress, is used in a spectral solver framework to generate virtual experiments for three deep drawing materials: DC05 steel and the AA6111-T4 and AA4045 aluminum alloys. For this purpose, uniaxial as well as multiaxial loading cases, including various pre-strain histories, have been computed and validated against real experiments. These investigations showed that crystal plasticity modeling in the framework of Representative Volume Elements (RVEs) can be used to replace most of the expensive real experiments. Further, model parameters of advanced macroscopic models like the HAH model can be determined from virtual experiments, even for multiaxial deformation histories. It was also found that crystal plasticity modeling can capture anisotropic hardening more accurately by considering the backstress, similarly to well-established macroscopic kinematic hardening models. It can be concluded that an efficient coupling of crystal plasticity models and the spectral solver leads to a significant reduction in the number of real experiments needed to calibrate macroscopic models. This advantage also leads to a significant reduction of the computational effort needed for the optimization of metal forming processes. Further, due to the time-efficient spectral solver used in the computation of the RVE models, detailed modeling of the microstructure is possible.
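A compact sketch of the backstress idea, in its classical macroscopic Armstrong-Frederick form (the paper embeds an analogous kinematic term inside the crystal plasticity model); the parameters C and gamma are illustrative.

```python
# Sketch: 1-D Armstrong-Frederick backstress, dX = C*dep - gamma*X*|dep|.
import numpy as np

def af_backstress(dep_history, C=5000.0, gamma=50.0):
    X, Xs = 0.0, []
    for dep in dep_history:
        X += C * dep - gamma * X * abs(dep)   # hardening minus dynamic recovery
        Xs.append(X)
    return np.array(Xs)

# tension then compression: the backstress saturates towards C/gamma, then
# reverses, giving the Bauschinger response isotropic hardening alone misses
path = np.concatenate([np.full(100, 1e-4), np.full(100, -1e-4)])
X = af_backstress(path)
print(f"backstress at reversal: {X[99]:.1f} MPa, at end: {X[-1]:.1f} MPa")
```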

Keywords: anisotropic hardening, crystal plasticity, microstructure, spectral solver

Procedia PDF Downloads 313
126 Slope Stability and Landslides Hazard Analysis, Limitations of Existing Approaches, and a New Direction

Authors: Alisawi Alaa T., Collins P. E. F.

Abstract:

The analysis and evaluation of slope stability and landslide hazards are critically important in civil engineering projects and broader considerations of safety. The level of slope stability risk should be identified due to its significant and direct financial and safety effects. Slope stability hazard analysis is performed considering static and/or dynamic loading circumstances. To reduce and/or prevent the failure hazard caused by landslides, a sophisticated and practical hazard analysis method using advanced constitutive modeling should be developed and linked to an effective solution that corresponds to the specific type of slope stability and landslide failure risk. Previous studies on slope stability analysis methods identify the failure mechanism and its corresponding solution. The commonly used approaches include limit equilibrium methods, empirical approaches for rock slopes (e.g., slope mass rating and Q-slope), finite element or finite difference methods, and distinct element codes. This study presents an overview and evaluation of these analysis techniques. Contemporary source materials are used to examine the various methods on the basis of hypotheses, factor of safety estimation, soil types, load conditions, and analysis conditions and limitations. Limit equilibrium methods play a key role in assessing the level of slope stability hazard. The slope stability safety level can be defined by identifying the equilibrium of the shear stress and shear strength. The slope is considered stable when the movement resistance forces are greater than those that drive the movement, with a factor of safety (the ratio of the resisting forces to the driving forces) greater than 1.00. However, popular and practical methods, including limit equilibrium approaches, are not effective when the slope experiences complex failure mechanisms, such as progressive failure, liquefaction, internal deformation, or creep. The present study represents the first episode of an ongoing project that involves the identification of the types of landslide hazards, assessment of the level of slope stability hazard, development of a sophisticated and practical hazard analysis method, linkage of the failure type of specific landslide conditions to the appropriate solution, and application of an advanced computational method for mapping slope stability properties in the United Kingdom and elsewhere through a geographical information system (GIS) and the inverse distance weighted (IDW) spatial interpolation technique. This study investigates and assesses the different analysis and solution techniques to enhance knowledge of the mechanisms of slope stability and landslide hazard analysis and to determine the available solutions for each potential landslide failure risk.
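As a concrete example of the limit equilibrium principle, the classic infinite-slope formula (a textbook method, not this study's advanced approach) computes the factor of safety as the ratio of resisting to driving shear stress:

import math

def infinite_slope_fs(c, gamma, z, beta_deg, phi_deg, u=0.0):
    # Infinite-slope limit equilibrium: Mohr-Coulomb shear strength over
    # driving shear stress on a slip plane parallel to the slope at depth z.
    # c and u in kPa, gamma in kN/m^3, z in m, angles in degrees.
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# Example: 30-degree slope, 5 m slip depth, c' = 10 kPa, phi' = 32 degrees.
print(round(infinite_slope_fs(10.0, 19.0, 5.0, 30.0, 32.0), 2))  # about 1.33

A factor of safety above 1.00 indicates stability under the assumed conditions; the limitations noted above (progressive failure, liquefaction, creep) are exactly the cases where such a single-ratio check breaks down.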

Keywords: slope stability, finite element analysis, hazard analysis, landslide hazard

Procedia PDF Downloads 98
125 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring

Authors: Zheng Wang, Zhenhong Li, Jon Mills

Abstract:

Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay in producing displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study in order to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit, where the images within a window form a basic unit. By taking this strategy, the RAM requirement is reduced to only one unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily-coherent pixels, which are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, and the delay in creating displacement maps can be shortened because there is no need to wait for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence. The temporal-averaged images are then processed by a particular interferometry procedure integrated with advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially-coherent pixels. Experiments are conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres are achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring across a wide range of scientific and practical applications.
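The unit-by-unit strategy of the first chain can be sketched in a few lines; the file names and window sizes below are hypothetical, and process_unit is a placeholder for the actual interferometric processing:

def units(image_paths, window=20, step=10):
    # Yield overlapping windows ("units") of acquisitions so that only one
    # unit of images ever needs to be held in RAM at a time.
    for start in range(0, max(len(image_paths) - window + 1, 1), step):
        yield image_paths[start:start + window]

def process_unit(paths):
    # Placeholder for interferogram formation and the selection of
    # temporarily-coherent pixels within a single unit.
    return {"n_images": len(paths), "first": paths[0], "last": paths[-1]}

acquisitions = ["scene_%04d.slc" % i for i in range(100)]  # hypothetical
for unit in units(acquisitions):
    print(process_unit(unit))

Because each window is processed and released before the next arrives, the memory footprint stays bounded regardless of how long the continuous acquisition runs, which is what allows near-real-time generation of displacement maps.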

Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring

Procedia PDF Downloads 159
124 Skull Extraction for Quantification of Brain Volume in Magnetic Resonance Imaging of Multiple Sclerosis Patients

Authors: Marcela De Oliveira, Marina P. Da Silva, Fernando C. G. Da Rocha, Jorge M. Santos, Jaime S. Cardoso, Paulo N. Lisboa-Filho

Abstract:

Multiple sclerosis (MS) is an immune-mediated disease of the central nervous system characterized by neurodegeneration, inflammation, demyelination, and axonal loss. Magnetic resonance imaging (MRI), due to the richness of the information it provides, is the gold standard exam for the diagnosis and follow-up of neurodegenerative diseases such as MS. Brain atrophy, the gradual loss of brain volume, is quite extensive in multiple sclerosis, nearly 0.5-1.35% per year, far beyond the limits of normal aging. Thus, brain volume quantification becomes an essential task for the subsequent analysis of atrophy. The analysis of MRI has become a tedious and complex task for clinicians, who have to manually extract important information. This manual analysis is prone to error, time-consuming, and subject to intra- and inter-operator variability. Nowadays, computerized methods for MRI segmentation are extensively used to assist doctors in quantitative analyses for disease diagnosis and monitoring. Thus, the purpose of this work was to evaluate the brain volume in MRI of MS patients. We used MRI scans, each with 30 slices, of five patients diagnosed with multiple sclerosis according to the McDonald criteria. The computational analysis of the images was carried out in two steps: segmentation of the brain and brain volume quantification. The first image processing step was to perform brain extraction by skull stripping from the original image. In the skull stripper for brain MRI, the algorithm registers a grayscale atlas image to the grayscale patient image. The associated brain mask is propagated using the registration transformation. This mask is then eroded and used for a refined brain extraction based on level sets (the edge of the brain-skull border with dedicated expansion, curvature, and advection terms). In the second step, the brain volume was quantified by counting the voxels belonging to the segmentation mask and converting the count to cubic centimetres (cc). We observed an average brain volume of 1469.5 cc. We conclude that the automatic method applied in this work can be used for brain extraction and brain volume quantification in MRI. The development and use of computer programs can help health professionals in the diagnosis and monitoring of patients with neurodegenerative diseases. In future work, we expect to implement more automated methods for the assessment of cerebral atrophy and the quantification of brain lesions, including machine-learning approaches. Acknowledgements: This work was supported by a grant from the Brazilian agency Fundação de Amparo à Pesquisa do Estado de São Paulo (number 2019/16362-5).
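The volume quantification step reduces to counting mask voxels and scaling by the voxel volume. A minimal sketch using the nibabel library (the file name is hypothetical, and the mask is assumed to be the output of the skull-stripping step):

import nibabel as nib

# Load a binary brain mask produced by skull stripping.
mask_img = nib.load("patient01_brain_mask.nii.gz")
mask = mask_img.get_fdata() > 0

# Voxel volume in mm^3 from the header spacing; 1 cc = 1000 mm^3.
dx, dy, dz = mask_img.header.get_zooms()[:3]
volume_cc = mask.sum() * (dx * dy * dz) / 1000.0
print("brain volume: %.1f cc" % volume_cc)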

Keywords: brain volume, magnetic resonance imaging, multiple sclerosis, skull stripper

Procedia PDF Downloads 145
123 USBware: A Trusted and Multidisciplinary Framework for Enhanced Detection of USB-Based Attacks

Authors: Nir Nissim, Ran Yahalom, Tomer Lancewiki, Yuval Elovici, Boaz Lerner

Abstract:

Background: Attackers increasingly take advantage of innocent users who tend to use USB devices casually, assuming these devices are benign when, in fact, they may carry embedded malicious behavior or hidden malware. USB devices have many properties and capabilities that have become the subject of malicious operations. Many of the recent attacks targeting individuals, and especially organizations, utilize popular and widely used USB devices, such as mice, keyboards, flash drives, printers, and smartphones. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched via USB devices. Significance: We propose USBWARE, a project that focuses on the vulnerabilities of USB devices and centers on the development of a comprehensive detection framework that relies upon a crucial attack repository. USBWARE will allow researchers and companies to better understand the vulnerabilities and attacks associated with USB devices, as well as provide a comprehensive platform for developing detection solutions. Methodology: The USBWARE framework is aimed at the accurate detection of both known and unknown USB-based attacks via a process that efficiently enhances the framework's detection capabilities over time. The framework will integrate two main security approaches in order to enhance the detection of USB-based attacks associated with a variety of USB devices. The first approach is aimed at the detection of known attacks and their variants, whereas the second approach focuses on the detection of unknown attacks. USBWARE will consist of six independent but complementary detection modules, each detecting attacks based on a different approach or discipline. These modules include novel ideas and algorithms inspired by or already developed within our team's domains of expertise, including cyber security, electrical and signal processing, machine learning, and computational biology. The establishment and maintenance of USBWARE's dynamic and up-to-date attack repository will strengthen the capabilities of the detection framework. The attack repository's infrastructure will enable researchers to record, document, create, and simulate existing and new USB-based attacks. This data will be used to keep the detection framework up to date by incorporating knowledge regarding new attacks. Based on our experience in the cyber security domain, we aim to design the USBWARE framework so that it has the characteristics crucial for this type of cyber-security detection solution: it should be novel, multidisciplinary, trusted, lightweight, extendable, modular, updatable, and adaptable. Major Findings: Based on our initial survey, we have already found more than 23 types of USB-based attacks, divided into six major categories. Our preliminary evaluation and proofs of concept showed that our detection modules can be used for the efficient detection of several basic known USB attacks. Further research, development, and enhancement are required so that USBWARE will be capable of covering all of the major known USB attacks and of detecting unknown attacks. Conclusion: USBWARE is a crucial detection framework that must be further enhanced and developed.
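As a generic illustration of the unknown-attack (anomaly detection) approach, the following sketch trains an isolation forest on feature vectors describing benign USB device behavior and flags outliers; the features and values are purely hypothetical and do not represent USBWARE's actual modules:

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device features, e.g. interface count, endpoint count,
# and mean keystroke inter-arrival time in milliseconds.
rng = np.random.default_rng(0)
benign = rng.normal([2.0, 3.0, 80.0], [0.3, 0.5, 10.0], size=(200, 3))
suspect = np.array([[2.1, 3.2, 2.0]])  # inhumanly fast "keyboard" typing

model = IsolationForest(contamination=0.01, random_state=0).fit(benign)
print(model.predict(suspect))  # -1 flags an anomaly, +1 looks benign

A keystroke-injection device such as a malicious "keyboard" typing far faster than any human would fall outside the learned benign region and be flagged, even if that specific attack has never been catalogued.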

Keywords: USB, device, cyber security, attack, detection

Procedia PDF Downloads 396
122 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model

Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle

Abstract:

In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach is useful when it is challenging to perform experiments. One of the simplest models is the well-mixed room (WMR) model, which has proven useful for predicting inhalation exposure in many situations. However, since the standard WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective of this work is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model was modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition, with three deposition models implemented. The time-dependent airborne particle concentrations predicted by the model were compared to experimental results obtained in a 0.512 m³ chamber. Polystyrene particles of 1, 2, and 3 µm aerodynamic diameter were generated with a nebulizer under two air-change-per-hour (ACH) conditions. The well-mixed condition and the chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables of the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period); generation then stopped, and concentration measurements continued until reaching the background concentration (decay period). The tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the average mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) × 10⁻² m/s and (8.88 ± 0.38) × 10⁻² m/s, respectively. The numerical results indicated that the difference between the deposition rates predicted by the three deposition models was less than 2%. The experimental and numerical aerosol concentrations were compared for the emission period and the decay period. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, a difference between the measured and predicted values remains. In the emission period, the modified WMR results closely follow the experimental data, but the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model, together with uncertainties related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, whereas the rate obtained from the deposition mechanisms considered in the model was ten times lower than the experimental value. Thus, particle deposition is significant, will affect airborne concentrations in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should also be investigated.
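For reference, the modified balance reduces to a single ordinary differential equation, dC/dt = S/V - (lambda + beta)C, where S is the emission rate, V the room volume, lambda the ventilation rate, and beta the aggregate deposition loss rate. A minimal sketch with illustrative (not the study's) parameter values:

from scipy.integrate import solve_ivp

V = 0.512           # chamber volume, m^3
lam = 3.0 / 3600.0  # ventilation rate for 3 ACH, 1/s
beta = 2.0e-4       # particle deposition loss rate, 1/s (illustrative)
S = 5.0e3           # particle emission rate, particles/s (illustrative)

def wmr(t, C):
    # Emission for the first hour, then decay after generation stops.
    source = S / V if t < 3600.0 else 0.0
    return [source - (lam + beta) * C[0]]

sol = solve_ivp(wmr, (0.0, 7200.0), [0.0], max_step=10.0)
print("steady state: %.3g particles/m^3" % (S / (V * (lam + beta))))
print("after decay hour: %.3g particles/m^3" % sol.y[0, -1])

Setting beta = 0 recovers the classic gas-phase WMR model, which is precisely why the unmodified model overpredicts aerosol concentrations once deposition becomes significant.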

Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model

Procedia PDF Downloads 101
121 Investigating the Flow Physics within Vortex-Shockwave Interactions

Authors: Frederick Ferguson, Dehua Feng, Yang Gao

Abstract:

No doubt, current CFD tools have a great many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as direct numerical simulation (DNS). This approach requires a spatial grid that is fine enough to capture the smallest length scale of the turbulent fluid motion, the so-called Kolmogorov scale. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large, and the available computational resources are usually inadequate for DNS-related tasks. At this stage in its development, DNS is therefore not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique that is capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible, and both high and low Reynolds number flow fields. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described. Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time, thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave-vortex interaction problem for low supersonic conditions, and the reflected oblique shockwave-vortex interaction problem. The IDS solutions obtained in each of these cases will be explored further in efforts to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effects of the Mach number on the intensity of vortex-shockwave interactions.
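For context, the viscous Burgers equation u_t + u u_x = nu u_xx is a standard one-dimensional testbed for shock-capturing schemes. The sketch below uses a plain explicit upwind/central finite-difference discretization purely for illustration; it is not the IDS formulation:

import numpy as np

nx, nu = 401, 0.01
x = np.linspace(0.0, 2.0 * np.pi, nx)
dx = x[1] - x[0]
u = np.sin(x) + 1.0  # smooth initial condition that steepens into a shock
dt = 0.2 * min(dx / np.abs(u).max(), dx * dx / (2.0 * nu))  # stability bound

t = 0.0
while t < 1.0:
    un = u.copy()
    # Upwind convection plus central diffusion, periodic boundaries.
    u = (un - dt / dx * un * (un - np.roll(un, 1))
         + nu * dt / dx ** 2 * (np.roll(un, -1) - 2.0 * un + np.roll(un, 1)))
    t += dt
print("max |u| at t = 1: %.4f" % np.abs(u).max())

How little numerical dissipation a scheme adds to such steepening solutions is one reason the Burgers equation is used above to demonstrate the IDS's unsteady accuracy.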

Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme

Procedia PDF Downloads 136
120 Pre-Cooling Strategies for the Refueling of Hydrogen Cylinders in Vehicular Transport

Authors: C. Hall, J. Ramos, V. Ramasamy

Abstract:

Hydrocarbon-based fuel vehicles are a major contributor to air pollution due to the harmful emissions they produce, leading to a demand for cleaner fuel types. A leader in this pursuit is hydrogen, whose application in vehicles produces zero harmful emissions, the only by-product being water. To compete with the performance of conventional vehicles, hydrogen gas must be stored on board in cylinders at high pressure (35-70 MPa) and have a short refueling duration (approximately 3 minutes). However, the fast filling of hydrogen cylinders causes a significant rise in temperature due to the combination of the negative Joule-Thomson effect and the compression of the gas. This can lead to structural failure, and therefore a maximum allowable internal temperature of 85°C has been imposed by the International Organization for Standardization. The technological solution to the issue of rapid temperature rise during refueling is to decrease the temperature of the gas entering the cylinder. Pre-cooling of the gas uses a heat exchanger and requires energy for its operation. Thus, it is imperative to determine the least amount of energy input required to lower the gas temperature, for cost savings. A validated universal thermodynamic model is used to identify an energy-efficient pre-cooling strategy. The model requires negligible computational time and is applied to previously validated experimental cases to optimize pre-cooling requirements. The pre-cooling characteristics include its location within the refueling timeline and its duration. A constant pressure-ramp rate is imposed to eliminate the effects of rapid changes in mass flow rate, and a pre-cooled gas temperature of -40°C, the lowest allowable temperature, is applied. The heat exchanger is assumed to be ideal, with no energy losses. The refueling of the cylinders is modeled with the pre-cooling split into ten-percent time intervals, and varying burst durations are applied in both the early and late stages of the refueling procedure. The model shows that pre-cooling in the later stages of the refueling process is more energy-efficient than early pre-cooling. In addition, the efficiency of pre-cooling towards the end of the refueling process is independent of the pressure profile at the inlet. This leads to the hypothesis that pre-cooled gas should be applied as late as possible in the refueling timeline and at very low temperatures. The model showed a 31% reduction in energy demand, while achieving the same final gas temperature, for a refueling scenario in which pre-cooling was applied towards the end of the process. Identifying the most energy-efficient refueling approaches while adhering to the safety guidelines is imperative to reducing the operating cost of hydrogen refueling stations. Heat exchangers are energy-intensive, and thus reducing the energy requirement would lead to a cost reduction. This investigation shows that pre-cooling should be applied as late as possible and for short durations.
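The character of the temperature rise can be seen from a lumped, ideal-gas energy balance for adiabatic filling, d(m*cv*T)/dt = mdot*cp*T_in. The sketch below uses illustrative values only (quantitative hydrogen work needs a real-gas equation of state and wall heat transfer, unlike this toy model) and applies -40°C pre-cooling only during the final 30% of the fill:

cp, cv = 14300.0, 10180.0   # J/(kg K), hydrogen near ambient conditions
mdot, t_fill = 0.03, 180.0  # inlet mass flow (kg/s), fill duration (s)
m, T = 0.5, 293.15          # initial gas mass (kg) and temperature (K)

dt = 0.01
for step in range(int(t_fill / dt)):
    t = step * dt
    # Pre-cool the inlet to -40 C during the last 30% of the fill only.
    T_in = 233.15 if t > 0.7 * t_fill else 293.15
    # Energy balance: dT/dt = mdot*(cp*T_in - cv*T) / (m*cv).
    T += mdot * (cp * T_in - cv * T) / (m * cv) * dt
    m += mdot * dt
print("final gas temperature: %.1f C" % (T - 273.15))

Moving the pre-cooling window earlier in the loop and comparing final temperatures reproduces, qualitatively, the finding above that late pre-cooling achieves a lower end-of-fill temperature for the same cooling effort.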

Keywords: cylinder, hydrogen, pre-cooling, refueling, thermodynamic model

Procedia PDF Downloads 95
119 A Brazilian Study Applied to the Regulatory Environmental Issues of Nanomaterials

Authors: Luciana S. Almeida

Abstract:

Nanotechnology has revolutionized the world of science and technology, bringing great expectations due to its great potential for application in the most varied industrial sectors. However, the same characteristics that make nanoparticles interesting from the point of view of technological application may be undesirable when they are released into the environment. The small size of nanoparticles facilitates their diffusion and transport in the atmosphere, water, and soil, as well as their entry into and accumulation in living cells. The main objective of this study is to evaluate the environmental regulatory process for nanomaterials in the Brazilian scenario. Three specific objectives were outlined. The first is to carry out a global scientometric study on a research platform, with the purpose of identifying the main lines of study of nanomaterials in the environmental area. The second is to verify, by means of a bibliographic review, how environmental agencies in other countries have been working on this issue. The third is to carry out an assessment of the Brazilian Nanotechnology Draft Law 6741/2013 with the state environmental agencies, with the aim of identifying the agencies' knowledge of the subject and the resources available in the country for the implementation of the policy. A questionnaire will be used as the assessment tool to identify the operational elements and build indicators, administered through the Environment of Evaluation Application, a computational application developed for the creation of questionnaires. At the end, the need to propose changes to the Draft Law of the National Nanotechnology Policy will be verified. With respect to the first specific objective, initial studies have already identified that Brazil stands out in the production of scientific publications in the area of nanotechnology, although only a minority are focused on environmental impact studies. Regarding the general panorama in other countries, some findings have also been raised. The United States has included the nanoforms of substances in an existing EPA (Environmental Protection Agency) program under the Toxic Substances Control Act (TSCA). The European Union issued a draft document amending Regulation 1907/2006 of the European Parliament and Council to cover the nanoforms of substances. Both programs are based on the study and identification of the environmental risks associated with nanomaterials, taking the product life cycle into consideration. In relation to Brazil, regarding the third specific objective, it is notable that the country does not yet have any regulations applicable to nanostructures, although a Draft Law is in progress. In this document, it is possible to identify some requirements related to the environment, such as environmental inspection and licensing, industrial waste management, notification of accidents, and the application of sanctions. However, it is not known whether these requirements are sufficient for the prevention of environmental impacts, nor whether the national environmental agencies will know how to apply them correctly. This study is intended to serve as a basis for future actions regarding environmental management applied to the use of nanotechnology in Brazil.

Keywords: environment, management, nanotechnology, policy

Procedia PDF Downloads 121
118 Topology Optimization Design of Transmission Structure in Flapping-Wing Micro Aerial Vehicle via 3D Printing

Authors: Zuyong Chen, Jianghao Wu, Yanlai Zhang

Abstract:

The flapping-wing micro aerial vehicle (FMAV) is a new type of aircraft that mimics the flying behavior of small birds or insects. Compared to traditional fixed-wing or rotor-type aircraft, an FMAV only needs to control the motion of its flapping wings, changing the size and direction of the lift to control the flight attitude. Its transmission system should therefore be designed to be very compact. Lightweight design can effectively extend its endurance time, but engineering experience alone can hardly meet the FMAV requirements for structural strength and mass simultaneously. Current research still lacks guidance on accounting for the non-linear factors of 3D-printed material when carrying out topology optimization, especially for the tiny FMAV transmission system. The coupling of non-linear material properties and non-linear contact behaviors in the FMAV transmission system poses a great challenge to the reliability of topology optimization results. In this paper, a topology optimization design based on the FEA solver package Altair OptiStruct was carried out for the transmission system of an FMAV manufactured by 3D printing. Firstly, the isotropic constitutive behavior of the ultraviolet (UV) curable resin used to fabricate the FMAV structure was evaluated and confirmed through tensile tests. Secondly, a numerical model describing the mechanical behavior of the FMAV transmission structure was established and verified by experiments. A topology optimization modeling method considering non-linear factors was then presented, and the optimization results were verified by dynamic simulation and experiments. Finally, detailed discussions of different load states and constraints were carried out to explore the leading factors affecting the optimization results. The contributions of this article, helpful for guiding the lightweight design of FMAVs, are summarized as follows. First, a dynamic simulation modeling method for obtaining the load states is presented. Second, a verification method for optimized results considering non-linear factors is introduced. Third, optimization based on a properly selected single load state can achieve a better weight-reduction effect and improve computational efficiency compared with taking multiple load states into account. Fourth, considering the non-linear factors improves the ability of the optimized structure to resist bending deformation. Fifth, a displacement constraint helps to improve the structural stiffness of the optimized result. The results and engineering guidance in this paper may shed light on the structural optimization and lightweight design of future advanced FMAVs.
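As an illustration of the density-update step used in compliance-based topology optimization, here is the classic SIMP optimality-criteria update with dummy sensitivities (a textbook scheme shown for illustration, not the OptiStruct algorithm or the paper's non-linear formulation):

import numpy as np

def oc_update(x, dc, dv, volfrac, move=0.2):
    # Scale densities by sqrt(-dc / (lmid * dv)), clip to move limits and
    # [0, 1], and bisect the Lagrange multiplier lmid until the volume
    # constraint is met.
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-3:
        lmid = 0.5 * (l1 + l2)
        xnew = np.clip(x * np.sqrt(-dc / (lmid * dv)),
                       np.maximum(x - move, 0.0),
                       np.minimum(x + move, 1.0))
        if xnew.mean() > volfrac:
            l1 = lmid
        else:
            l2 = lmid
    return xnew

rng = np.random.default_rng(1)
x = np.full((10, 10), 0.5)      # uniform initial density field
dc = -rng.random((10, 10))      # dummy compliance sensitivities (<= 0)
dv = np.ones_like(x)            # uniform volume sensitivities
x = oc_update(x, dc, dv, volfrac=0.5)
print("volume fraction after update: %.3f" % x.mean())

In a full loop, dc would come from a finite element analysis of the transmission structure at each iteration, which is where the material and contact non-linearities discussed above enter.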

Keywords: flapping-wing micro aerial vehicle, 3D printing, topology optimization, finite element analysis, experiment

Procedia PDF Downloads 167
117 The Influence of Thermal Radiation and Chemical Reaction on MHD Micropolar Fluid in The Presence of Heat Generation/Absorption

Authors: Binyam Teferi

Abstract:

A numerical and theoretical analysis of the mixed convection flow of a magnetohydrodynamic (MHD) micropolar fluid with a stretching capillary in the presence of thermal radiation, chemical reaction, viscous dissipation, and heat generation/absorption has been carried out. The non-linear partial differential equations for momentum, angular velocity, energy, and concentration are converted into ordinary differential equations using similarity transformations, which can then be solved numerically. The dimensionless governing equations are solved using fourth-fifth order Runge-Kutta integration along with the shooting method. The effects of the physical parameters, viz. the micropolar parameter, unsteadiness parameter, thermal buoyancy parameter, concentration buoyancy parameter, Hartmann number, spin gradient viscosity parameter, microinertial density parameter, thermal radiation parameter, Prandtl number, Eckert number, heat generation or absorption parameter, Schmidt number, and chemical reaction parameter, on the flow variables, viz. the velocity of the micropolar fluid, microrotation, temperature, and concentration, have been analyzed and discussed graphically. MATLAB code is used for the numerical and theoretical analysis. From the simulation study, it can be concluded that increments in the micropolar parameter, Hartmann number, unsteadiness parameter, and thermal and concentration buoyancy parameters result in a decrement of the velocity of the micropolar fluid; the microrotation of the micropolar fluid decreases with increments in the micropolar parameter, unsteadiness parameter, microinertial density parameter, and spin gradient viscosity parameter; the temperature profile of the micropolar fluid decreases with increments in the thermal radiation parameter, Prandtl number, micropolar parameter, unsteadiness parameter, heat absorption parameter, and viscous dissipation parameter; and the concentration of the micropolar fluid decreases as the unsteadiness parameter, Schmidt number, and chemical reaction parameter increase. Furthermore, computed values of the local skin friction coefficient, local wall couple stress coefficient, local Nusselt number, and local Sherwood number for different values of the parameters have been investigated. In this paper, the following important results are obtained: an increment of the micropolar parameter and Hartmann number results in a decrement of the velocity of the micropolar fluid; microrotation decreases with an increment of the microinertial density parameter; temperature decreases with increasing values of the thermal radiation parameter and viscous dissipation parameter; and concentration decreases as the values of the Schmidt number and chemical reaction parameter increase. The local skin friction coefficient is enhanced with increases in both the unsteadiness parameter and the micropolar parameter, and increasing values of these two parameters also result in an increment of the local couple stress. An increment in the values of the unsteadiness parameter and thermal radiation parameter results in an increment of the rate of heat transfer. As the values of the Schmidt number and unsteadiness parameter increase, the Sherwood number decreases.
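The numerical approach named above, an adaptive fourth-fifth order Runge-Kutta integrator combined with shooting, can be sketched in Python on a simpler stand-in problem, the Blasius boundary-layer equation f''' + 0.5 f f'' = 0 with f(0) = f'(0) = 0 and f'(inf) = 1 (an illustration of the method, not the paper's MATLAB code or its coupled micropolar equations):

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(s):
    # Shoot with guessed curvature f''(0) = s; return miss in f'(eta_max).
    sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, s], method="RK45",
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

s_star = brentq(residual, 0.1, 1.0)   # root-find the missing condition
print("f''(0) = %.5f" % s_star)       # classic Blasius value, about 0.33206

The coupled momentum, microrotation, energy, and concentration equations are treated the same way, except that several unknown initial slopes are adjusted simultaneously until all far-field boundary conditions are satisfied.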

Keywords: thermal radiation, chemical reaction, viscous dissipation, heat absorption/ generation, similarity transformation

Procedia PDF Downloads 125
116 A Computational Approach to Screen Antagonist’s Molecule against Mycobacterium tuberculosis Lipoprotein LprG (Rv1411c)

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Tuberculosis (TB), caused by the bacillus Mycobacterium tuberculosis (Mtb), continues to take a disturbing toll on human life and healthcare facilities worldwide. The global burden of TB remains enormous, and the alarming rise of multi-drug resistant strains of Mycobacterium tuberculosis calls for increased research efforts towards the development of new target-specific therapeutics against diverse strains of M. tuberculosis. Therefore, the discovery of new molecular scaffolds targeting new drug sites should be a priority in any workable plan for fighting resistance in Mycobacterium tuberculosis (Mtb). The Mtb non-acylated lipoprotein LprG (Rv1411c) has a Toll-like receptor 2 (TLR2) agonist action that depends on its association with triacylated glycolipids, which bind specifically to the hydrophobic pocket of the Mtb LprG lipoprotein. The detection of a glycolipid carrier function has important implications for the role of LprG in mycobacterial physiology and virulence. Therefore, considering the pivotal role of glycolipids in mycobacterial physiology and host-pathogen interactions, designing competitive antagonist ligands (chemotherapeutics) that bind competitively to the glycolipid-binding domain of the LprG lipoprotein should lead to the inhibition of tuberculosis infection in humans. In this study, a unified approach was implemented involving a ligand-based virtual screening protocol with the USRCAT (Ultrafast Shape Recognition) software and molecular docking studies with AutoDock Vina 1.1.2 using the X-ray crystal structure of the Mtb LprG protein. The docking results were further confirmed with DSX (DrugScore eXtended), a robust program for evaluating the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein; the more negative the score, the higher the predicted affinity. Based on the USRCAT and Lipinski's rule values and the molecular docking results, [(2R)-2,3-di(hexadecanoyl oxy)propyl][(2S,3S,5S,6R)-3,4,5-trihydroxy-2,6-bis[[(2R,3S,4S,5R,6S)-3,4,5-trihydroxy-6 (hydroxymethyl)tetrahydropyran-2-yl]oxy]cyclohexyl] phosphate (XPX) was confirmed as a promising drug-like lead compound (antagonist) binding specifically to the hydrophobic domain of the LprG protein with an affinity greater than that of PIM2 (an agonist of the LprG protein), with a reported free binding energy of -9.98e+006 kcal/mol and a binding affinity of -132 kcal/mol, respectively. Further in vitro assays of this compound are required to establish its potency in inhibiting the molecular evasion mechanisms of Mtb within infected host macrophages. These results will certainly be helpful in future anti-TB drug discovery efforts against multidrug-resistant tuberculosis (MDR-TB).
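A typical docking run of the kind described can be driven from Python via the AutoDock Vina command line; the receptor and ligand file names and the search-box coordinates below are hypothetical placeholders, not values from this study:

import subprocess

cmd = [
    "vina",
    "--receptor", "lprg.pdbqt",       # prepared Mtb LprG structure
    "--ligand", "candidate.pdbqt",    # candidate antagonist
    "--center_x", "10.0", "--center_y", "12.5", "--center_z", "-3.0",
    "--size_x", "24", "--size_y", "24", "--size_z", "24",
    "--exhaustiveness", "8",
    "--out", "candidate_docked.pdbqt",
]
subprocess.run(cmd, check=True)  # writes ranked poses with Vina scores

Centering the search box on the hydrophobic glycolipid-binding pocket restricts the sampling to the site of interest, so the ranked poses can be compared directly against the PIM2 agonist baseline.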

Keywords: antagonist, agonist, binding affinity, chemotherapeutics, drug-like, multidrug-resistant tuberculosis (MDR-TB), Rv1411c protein, toll-like receptor (TLR2)

Procedia PDF Downloads 269