Search results for: computational linguistics
314 Two-Level Graph Causality to Detect and Predict Random Cyber-Attacks
Authors: Van Trieu, Shouhuai Xu, Yusheng Feng
Abstract:
Tracking attack trajectories is difficult when only limited information about the nature of an attack is available. It is even more difficult when the attack information is collected by Intrusion Detection Systems (IDSs), because current IDSs have limitations in identifying malicious and anomalous traffic. Moreover, IDSs only flag suspicious events; they do not show how the events relate to each other or which event may have caused another to happen. It is therefore important to investigate new methods that can track attack trajectories quickly, with less attack information and less dependency on IDSs, in order to prioritize actions during incident response. This paper proposes a two-level graph causality framework for tracking attack trajectories in internet networks, leveraging observable malicious behaviors to detect the attack events most likely to cause other events in the system. Technically, given a time series of malicious events, the framework extracts events with useful features, such as attack time and port number, and applies conditional independence tests to detect relationships between attack events. Using academic datasets collected by IDSs, experimental results show that the framework can quickly detect causal pairs that offer meaningful insights into the nature of the network, given only reasonable restrictions on network size and structure. Without the framework's guidance, these insights could not be discovered by existing tools such as IDSs, and would cost expert human analysts significant time, if they could be obtained at all. The computational results from the proposed two-level graph network model reveal clear patterns and trends. In fact, for more than 85% of causal pairs, the average time difference between the causal and effect events, in both computed and observed data, is within 5 minutes. This result can be used as a preventive measure against future attacks.
Although the forecast window may be short, from 0.24 seconds to 5 minutes, it is long enough to be used to design a prevention protocol to block those attacks.
Keywords: causality, multilevel graph, cyber-attacks, prediction
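As a rough illustration of the kind of conditional independence testing the framework relies on, the sketch below applies a hand-rolled Pearson chi-square test to two aligned binary event series; the data and the 3.84 threshold (95% level, one degree of freedom) are illustrative and not taken from the paper.

```python
from collections import Counter

def chi_square_independence(x, y):
    """Pearson chi-square statistic for two aligned binary event series.

    A large statistic (e.g. above 3.84 at one degree of freedom, the 95%
    level) suggests the two event types are not independent, hinting at a
    possible causal link worth inspecting further.
    """
    n = len(x)
    counts = Counter(zip(x, y))
    chi2 = 0.0
    for a in (0, 1):
        for b in (0, 1):
            observed = counts.get((a, b), 0)
            expected = x.count(a) * y.count(b) / n
            if expected > 0:
                chi2 += (observed - expected) ** 2 / expected
    return chi2

# Toy example: event B fires exactly when event A fires (strong
# dependence), while event C is unrelated to A.
a = [1, 0, 1, 0, 1, 0, 1, 0]
b = [1, 0, 1, 0, 1, 0, 1, 0]
c = [1, 1, 0, 0, 1, 1, 0, 0]
```

A real deployment would test conditional independence given other events, not just pairwise dependence, but the statistic above is the basic building block.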
Procedia PDF Downloads 156
313 Design and Development of an Autonomous Beach Cleaning Vehicle
Authors: Mahdi Allaoua Seklab, Süleyman BaşTürk
Abstract:
In the quest to enhance coastal environmental health, this study introduces a fully autonomous beach cleaning machine, a breakthrough in leveraging green energy and advanced artificial intelligence for ecological preservation. Designed to operate independently, the machine is propelled by a solar-powered system, underscoring a commitment to sustainability and the use of renewable energy in autonomous robotics. The vehicle's autonomous navigation is achieved through a sophisticated integration of LIDAR and a camera system, utilizing an SSD MobileNet V2 object detection model for accurate and real-time trash identification. The SSD framework, renowned for its efficiency in detecting objects in various scenarios, is coupled with the lightweight and highly precise MobileNet V2 architecture, making it particularly suited for the computational constraints of on-board processing in mobile robotics. Training of the SSD MobileNet V2 model was conducted on Google Colab, harnessing cloud-based GPU resources to facilitate a rapid and cost-effective learning process. The model was refined with an extensive dataset of annotated beach debris, optimizing the parameters using the Adam optimizer and a cross-entropy loss function to achieve high-precision trash detection. This capability allows the machine to intelligently categorize and target waste, leading to more effective cleaning operations. This paper details the design and functionality of the beach cleaning machine, emphasizing its autonomous operational capabilities and the novel application of AI in environmental robotics. The results showcase the potential of such technology to fill existing gaps in beach maintenance, offering a scalable and eco-friendly solution to the growing problem of coastal pollution.
The deployment of this machine represents a significant advancement in the field, setting a new standard for the integration of autonomous systems in the service of environmental stewardship.
Keywords: autonomous beach cleaning machine, renewable energy systems, coastal management, environmental robotics
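The abstract mentions training with the Adam optimizer; as a minimal, framework-free sketch of what that optimizer does (not the actual SSD MobileNet V2 training code), the update rule can be written as:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and
    its square, bias correction, then a scaled gradient step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize the toy loss f(theta) = (theta - 3)^2, gradient 2*(theta - 3).
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * (theta - 3), m, v, t)
```

In real training the scalar `theta` would be millions of network weights and `grad` would come from backpropagating the cross-entropy loss.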
Procedia PDF Downloads 27
312 Applying Computer Simulation Methods to a Molecular Understanding of Flaviviruses Proteins towards Differential Serological Diagnostics and Therapeutic Intervention
Authors: Sergio Alejandro Cuevas, Catherine Etchebest, Fernando Luis Barroso Da Silva
Abstract:
The flavivirus genus contains several organisms responsible for various human diseases. Especially in Brazil, the Zika (ZIKV), Dengue (DENV) and Yellow Fever (YFV) viruses have raised great health concerns due to the high number of cases affecting the area in recent years. Diagnosis is still a difficult issue since the clinical symptoms are highly similar. Understanding their common and distinct structural, dynamical and biomolecular interaction features might suggest alternative strategies for differential serological diagnostics and therapeutic intervention. Due to its immunogenicity, the primary focus of this study was the ZIKV, DENV and YFV non-structural protein 1 (NS1). By means of computational studies, we calculated the main physicochemical properties of this protein, for different strains, that are directly responsible for its biomolecular interactions and can therefore be related to the differential infectivity of the strains. We also mapped the electrostatic differences, at both the sequence and structural levels, between strains from Uganda and Brazil, which could suggest possible molecular mechanisms for the increased virulence of ZIKV. It is interesting to note that, despite the small changes in the protein sequence due to the high sequence identity among the studied strains, the electrostatic properties are strongly affected by pH, which also impacts their biomolecular interactions with partners and, consequently, the molecular biology of the virus. African and Asian strains are distinguishable. By exploring the interfaces used by NS1 to self-associate in different oligomeric states, and to interact with membranes and the antibody, we could map the strategy used by the ZIKV during its evolutionary process. This indicates possible molecular mechanisms that can explain the different immunological responses.
By comparison with the known antibody structure available for the West Nile virus, we demonstrated that the antibody would have difficulty neutralizing the NS1 of the Brazilian strain. The present study also opens up perspectives for computationally designing high-specificity antibodies.
Keywords: zika, biomolecular interactions, electrostatic interactions, molecular mechanisms
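The pH dependence of the electrostatic properties ultimately comes from the titration of individual ionizable sites; a textbook single-site Henderson-Hasselbalch sketch (the actual study presumably used far more sophisticated constant-pH methods, and the pKa below is a generic value, not from the paper) is:

```python
def average_charge(pH, pKa, acidic=True):
    """Mean charge of a single titratable site from the
    Henderson-Hasselbalch relation: acids titrate between 0 and -1,
    bases between +1 and 0."""
    frac_deprotonated = 1.0 / (1.0 + 10 ** (pKa - pH))
    return -frac_deprotonated if acidic else 1.0 - frac_deprotonated

# A glutamate-like acidic site (pKa ~ 4.25): nearly neutral at pH 2,
# close to -1 at physiological pH 7.
q_low = average_charge(2.0, 4.25, acidic=True)
q_high = average_charge(7.0, 4.25, acidic=True)
```

Summing such per-site charges over a protein sequence gives a first crude picture of how the net charge, and hence the electrostatic interactions, shift with pH.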
Procedia PDF Downloads 132
311 Investigating Anti-Tumourigenic and Anti-Angiogenic Effects of Resveratrol in Breast Carcinogenesis Using in-Silico Algorithms
Authors: Asma Zaib, Saeed Khan, Ayaz Ahmed Noonari, Sehrish Bint-e-Mohsin
Abstract:
Breast cancer is the most common cancer among females worldwide; it is estimated that more than 450,000 deaths are reported each year, accounting for about 14% of all female cancer deaths. Angiogenesis plays an essential role in breast cancer development, invasion, and metastasis. Breast cancer predominantly begins in the luminal epithelial cells lining the normal breast ducts, and breast carcinoma likely requires coordinated efforts of both increased proliferation and increased motility to progress to metastatic stages. Resveratrol, a natural stilbenoid, has anti-inflammatory and anticancer effects and inhibits the proliferation of a variety of human cancer cell lines, including breast, prostate, stomach, colon, pancreatic, and thyroid cancers. The objective of this study is to investigate the anti-neoangiogenic effects of resveratrol in breast cancer and to analyze its inhibitory effects on aromatase, ERα, HER2/neu, and VEGFR2. Docking is the computational determination of the binding affinity between a molecule (protein structure) and a ligand. We performed molecular docking using SwissDock to determine the docking effects of resveratrol with (1) aromatase, (2) ERα, (3) HER2/neu, and (4) VEGFR2. The docking results show inhibitory effects of resveratrol on aromatase, with a binding energy of -7.28 kcal/mol, indicating anticancer potential against estrogen-dependent breast tumors. Resveratrol also shows inhibitory effects on ERα and HER2/neu, with binding energies of -8.02 and -6.74 kcal/mol respectively, revealing anti-cytoproliferative effects upon breast cancer. On the other hand, resveratrol versus VEGFR2 showed potential inhibitory effects on neo-angiogenesis, with a binding energy of -7.68 kcal/mol; angiogenesis is an important phenomenon that promotes tumor development and metastasis.
In silico studies thus support resveratrol as an anti-breast-cancer agent: resveratrol can inhibit breast cancer cell proliferation by acting as a competitive inhibitor of aromatase, ERα and HER2/neu, while neo-angiogenesis is restricted by its binding to VEGFR2, supporting the anti-carcinogenic effects of resveratrol against breast cancer.
Keywords: angiogenesis, anti-cytoproliferative, molecular docking, resveratrol
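Binding energies like those reported can be translated into order-of-magnitude dissociation constants via ΔG = RT ln(Kd); the conversion below is a standard back-of-the-envelope estimate at room temperature, not part of the paper's analysis.

```python
import math

def dissociation_constant(delta_g_kcal, temp_k=298.15):
    """Estimate Kd (mol/L) from a binding free energy in kcal/mol via
    delta_G = RT ln(Kd); more negative delta_G means tighter binding."""
    R = 1.987204e-3  # gas constant in kcal/(mol*K)
    return math.exp(delta_g_kcal / (R * temp_k))

# Binding energies from the abstract (kcal/mol).
kd_aromatase = dissociation_constant(-7.28)
kd_her2 = dissociation_constant(-6.74)
kd_vegfr2 = dissociation_constant(-7.68)
```

On this scale, -7.28 kcal/mol corresponds to a Kd of a few micromolar, a typical ballpark for docking-predicted natural-product binders.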
Procedia PDF Downloads 326
310 Simulation of the FDA Centrifugal Blood Pump Using High Performance Computing
Authors: Mehdi Behbahani, Sebastian Rible, Charles Moulinec, Yvan Fournier, Mike Nicolai, Paolo Crosetto
Abstract:
Computational Fluid Dynamics blood-flow simulations are increasingly used to develop and validate blood-contacting medical devices. This study shows that numerical simulations can provide additional and accurate estimates of relevant hemodynamic indicators (e.g., recirculation zones or wall shear stresses), which may be difficult and expensive to obtain from in-vivo or in-vitro experiments. The most recent FDA (Food and Drug Administration) benchmark consists of a simplified centrifugal blood pump model that contains the fluid flow features commonly found in these devices, with a clear focus on highly turbulent phenomena. The FDA centrifugal blood pump study comprises six test cases with different volumetric flow rates, ranging from 2.5 to 7.0 liters per minute, pump speeds, and Reynolds numbers ranging from 210,000 to 293,000. Within the frame of this study, different turbulence models were tested, including RANS models (e.g. k-omega, k-epsilon and a Reynolds Stress Model (RSM)) and LES. The partitioners Hilbert, METIS, ParMETIS and SCOTCH were used to create an unstructured mesh of 76 million elements and compared in their efficiency. Computations were performed on the JUQUEEN BG/Q architecture with the highly parallel flow solver Code_Saturne, typically using 32768 or more processors in parallel. Visualisations were performed by means of ParaView. All six flow cases could be successfully analysed with the different turbulence models and validated against analytical considerations and other databases. The results showed that an RSM is an appropriate choice for modeling high-Reynolds-number flow cases; in particular, the Rij-SSG (Speziale, Sarkar, Gatski) variant turned out to be a good approach. Visualisations of complex flow features could be obtained, and the flow situation inside the pump could be characterized.
Keywords: blood flow, centrifugal blood pump, high performance computing, scalability, turbulence
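The Reynolds numbers quoted for the pump cases scale with rotor speed and size in the usual way; the sketch below uses a rotational Reynolds number with entirely placeholder values, since the benchmark's actual geometry and blood properties are not restated in the abstract.

```python
import math

def rotational_reynolds(rpm, radius_m, kinematic_visc):
    """Rotational Reynolds number Re = omega * R^2 / nu, a common way to
    characterize the flow regime of rotating machinery."""
    omega = rpm * 2 * math.pi / 60.0  # shaft speed in rad/s
    return omega * radius_m ** 2 / kinematic_visc

# Placeholder rotor radius and blood kinematic viscosity (~3.3e-6 m^2/s).
re_low = rotational_reynolds(2500, 0.04, 3.3e-6)
re_high = rotational_reynolds(3500, 0.04, 3.3e-6)
```

Because Re is linear in the shaft speed, a 40% speed increase raises Re by exactly 40%, which is the kind of spread seen across the six benchmark cases.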
Procedia PDF Downloads 382
309 Numerical Analysis of the Response of Thin Flexible Membranes to Free Surface Water Flow
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This work is part of a major research project concerning the design of a light, temporarily installable textile flood control structure. The motivation for this work is the great need for light structures to protect coastal areas from the detrimental effects of rapid water runoff. The prime objective of the study is the numerical analysis of the interaction between free-surface water flow and slender, pliable structures, which plays a key role in the safety performance of the intended system. First, the behavior of a down-scaled membrane is examined under hydrostatic pressure with the Abaqus explicit solver, which is part of the finite-element-based, commercially available SIMULIA software. Then the procedure to achieve a stable and convergent solution for strongly coupled media including fluids and structures is explained. A partitioned strategy is imposed, so that structure and fluid are each discretized and solved with appropriate formulations and solvers. In this regard, the finite element method is again selected to analyze the structural domain, while computational fluid dynamics algorithms are used for the flow domain by means of the commercial package STAR-CCM+. The SIMULIA co-simulation engine and an implicit coupling algorithm, which are available communication tools in STAR-CCM+, enable powerful transmission of data between the two codes. This approach is discussed for two different cases and compared with available experimental records. In the first case, the down-scaled membrane interacts with an open channel flow whose velocity increases with time. The second case illustrates how the full-scale flexible flood barrier behaves when a massive piece of flotsam is accelerated towards it.
Keywords: finite element formulation, finite volume algorithm, fluid-structure interaction, light pliable structure, VOF multiphase model
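The implicit partitioned coupling described above can be caricatured as an under-relaxed fixed-point iteration between a "fluid" load and a "structure" displacement; the toy system below sketches the iteration pattern only, not the actual STAR-CCM+/Abaqus co-simulation, and all constants are arbitrary.

```python
def coupled_displacement(stiffness=100.0, load0=50.0, feedback=20.0,
                         relax=0.5, tol=1e-10, max_iter=200):
    """Toy partitioned implicit coupling: the structural displacement d
    changes the fluid load (load0 - feedback*d); under-relaxed fixed-point
    iterations repeat both solves until the interface value converges.

    The exact solution of d = (load0 - feedback*d)/stiffness is
    load0 / (stiffness + feedback).
    """
    d = 0.0
    for _ in range(max_iter):
        load = load0 - feedback * d      # "fluid" solve at frozen geometry
        d_new = load / stiffness         # "structure" solve at frozen load
        if abs(d_new - d) < tol:
            return d_new
        d = d + relax * (d_new - d)      # under-relaxation for stability
    return d

d = coupled_displacement()
```

The under-relaxation factor plays the same stabilizing role here that it does in real strongly coupled FSI, where too-aggressive updates can make the interface iteration diverge.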
Procedia PDF Downloads 186
308 A Damage-Plasticity Concrete Model for Damage Modeling of Reinforced Concrete Structures
Authors: Thanh N. Do
Abstract:
This paper addresses the modeling of two critical behaviors of concrete material in reinforced concrete components: (1) the increase in strength and ductility due to confining stresses from surrounding transverse steel reinforcements, and (2) the progressive deterioration in strength and stiffness due to high strain and/or cyclic loading. To improve the state-of-the-art, the author presents a new 3D constitutive model of concrete material based on plasticity and continuum damage mechanics theory to simulate both the confinement effect and the strength deterioration in reinforced concrete components. The model defines a yield function of the stress invariants and a compressive damage threshold based on the level of confining stresses to automatically capture the increase in strength and ductility when subjected to high compressive stresses. The model introduces two damage variables to describe the strength and stiffness deterioration under tensile and compressive stress states. The damage formulation characterizes well the degrading behavior of concrete material, including the nonsymmetric strength softening in tension and compression, as well as the progressive strength and stiffness degradation under primary and follower load cycles. The proposed damage model is implemented in a general purpose finite element analysis program allowing an extensive set of numerical simulations to assess its ability to capture the confinement effect and the degradation of the load-carrying capacity and stiffness of structural elements. It is validated against a collection of experimental data of the hysteretic behavior of reinforced concrete columns and shear walls under different load histories. These correlation studies demonstrate the ability of the model to describe vastly different hysteretic behaviors with a relatively consistent set of parameters. The model shows excellent consistency in response determination with very good accuracy. 
Its numerical robustness and computational efficiency are also very good and will be further assessed with large-scale simulations of structural systems.
Keywords: concrete, damage-plasticity, shear wall, confinement
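A minimal 1D caricature of the damage idea, stress carried as (1 - d)·E·ε with the damage variable d growing beyond a strain threshold, can be sketched as follows; the constants are hypothetical, and the paper's actual 3D damage-plasticity formulation with confinement dependence is far richer.

```python
def damaged_stress(strain, E=30e3, strain0=1e-4, strain_u=5e-3):
    """1D isotropic damage sketch: linear elasticity up to strain0, then
    linear softening of the carried stress until full damage (d = 1) at
    strain_u. E is a concrete-like modulus in MPa, so stress is in MPa."""
    if strain <= strain0:
        d = 0.0
    elif strain >= strain_u:
        d = 1.0
    else:
        # d chosen so that (1 - d)*E*strain decays linearly to zero
        d = 1.0 - (strain0 / strain) * (strain_u - strain) / (strain_u - strain0)
    return (1.0 - d) * E * strain

peak = damaged_stress(1e-4)       # stress at the damage threshold
softened = damaged_stress(2e-3)   # reduced stress on the softening branch
```

In the full model, separate tensile and compressive damage variables and a confinement-dependent threshold replace the single scalar d used here.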
Procedia PDF Downloads 169
307 Investigation of the Technological Demonstrator 14x B in Different Angle of Attack in Hypersonic Velocity
Authors: Victor Alves Barros Galvão, Israel Da Silveira Rego, Antonio Carlos Oliveira, Paulo Gilberto De Paula Toro
Abstract:
The Brazilian hypersonic aerospace vehicle 14-X B (VHA 14-X B) is a vehicle integrated with a hypersonic airbreathing propulsion system based on supersonic combustion (scramjet), developed at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics, to conduct a demonstration in atmospheric flight at a speed corresponding to Mach number 7 at an altitude of 30 km. The experimental procedure used the hypersonic shock tunnel T3 installed in that laboratory. This device simulates the flow over a model fixed in the test section and can also simulate different atmospheric conditions. Scramjet technology offers substantial advantages for improving the performance of aerospace vehicles flying at hypersonic speed through the Earth's atmosphere, by reducing on-board fuel consumption. Basically, the scramjet is a fully integrated airbreathing engine that uses the oblique/conic shock waves generated during hypersonic flight to promote the deceleration and compression of atmospheric air at the scramjet inlet. During hypersonic flight, the VHA 14-X will be subject to atmospheric influences that cause changes in the vehicle's angle of attack (the angle that the vehicle's mean line makes with respect to the direction of the flow). Based on this, a study is conducted to analyze the influence of changes in the vehicle's angle of attack during atmospheric flight. Theoretical analysis, computational fluid dynamics simulation and experimental investigation are the methodologies used to design a technological demonstrator prior to flight in the atmosphere. This paper presents an analysis of the thermodynamic properties (pressure, temperature, density, speed of sound) on the lower surface of the VHA 14-X B.
It treats air as an ideal gas in chemical equilibrium, with and without a boundary layer, considering changes in the vehicle's angle of attack (positive and negative with respect to the flow) and applying two-dimensional expansion wave theory (Prandtl-Meyer theory) at the expansion section.
Keywords: angle of attack, experimental hypersonic, hypersonic airbreathing propulsion, Scramjet
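The Prandtl-Meyer theory mentioned for the expansion section has a closed-form turning-angle function; a direct implementation for a calorically perfect gas (γ = 1.4) is:

```python
import math

def prandtl_meyer(mach, gamma=1.4):
    """Prandtl-Meyer function nu(M) in degrees: the angle through which a
    sonic flow must turn, via isentropic expansion, to reach Mach M."""
    g = gamma
    a = math.sqrt((g + 1) / (g - 1))
    nu = (a * math.atan(math.sqrt((g - 1) * (mach ** 2 - 1) / (g + 1)))
          - math.atan(math.sqrt(mach ** 2 - 1)))
    return math.degrees(nu)

nu7 = prandtl_meyer(7.0)  # the flight Mach number from the abstract
```

The difference nu(M2) - nu(M1) gives the flow deflection across an expansion fan, which is how the theory is applied at the vehicle's expansion section.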
Procedia PDF Downloads 409
306 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine
Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko
Abstract:
This paper discusses CFD results for heat transfer in the water channels of the engine body. The research engine was a newly designed Diesel combustion engine with three cylinders and three pairs of opposed pistons. The engine will be able to generate 100 kW of mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels run in the engine body along the axes of the three cylinders, surrounding the three combustion chambers, and transfer the combustion heat generated in the cylinders to an external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels so as to maximize the heat flow from the combustion chambers to the external radiator. Based on parallel simulation research, the boundary and initial conditions enabled us to specify average values of the key parameters for our numerical analysis. Our simulation used the averaged momentum equations and the two-equation k-epsilon turbulence model; a realizable k-epsilon model with a standard wall function was also used. The turbulence intensity factor was 10%. The working-fluid mass flow rate was calculated for a single typical value, specified in line with research into the flow rates of automotive engine cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the channel cross-section along the cylinder axis. The results are presented as colour distribution maps of temperature, velocity fields and heat flow through the cylinder walls. Due to limitations of space, our paper presents the results for the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No.
POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.
Keywords: Ansys fluent, combustion engine, computational fluid dynamics CFD, cooling system
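At post-processing level, the heat flow through a patch of cylinder wall reduces to Newton's law of cooling, Q = h·A·(T_wall − T_coolant), with the heat transfer coefficient h extracted from the CFD; the numbers below are placeholders for illustration, not values from the study.

```python
def wall_heat_flow(h, area, t_wall, t_coolant):
    """Convective heat flow (W) from a wall patch to the coolant:
    Q = h * A * (T_wall - T_coolant), with h in W/(m^2*K), A in m^2 and
    temperatures in any consistent scale."""
    return h * area * (t_wall - t_coolant)

# Hypothetical patch: h = 5000 W/(m^2*K), 0.05 m^2 of wall at 250 C,
# coolant at 90 C.
q = wall_heat_flow(h=5000.0, area=0.05, t_wall=250.0, t_coolant=90.0)
```

Comparing such per-patch heat flows across channel geometries is one simple way to rank the cross-section variants studied in the paper.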
Procedia PDF Downloads 219
305 Classification of Forest Types Using Remote Sensing and Self-Organizing Maps
Authors: Wanderson Goncalves e Goncalves, José Alberto Silva de Sá
Abstract:
Human actions are a threat to the balance and conservation of the Amazon forest; environmental monitoring services therefore play an important role in the preservation and maintenance of this environment. This study classified forest types using data from a forest inventory provided by the 'Florestal e da Biodiversidade do Estado do Pará' (IDEFLOR-BIO), located between the municipalities of Santarém, Juruti and Aveiro, in the state of Pará, Brazil, covering an area of approximately 600,000 hectares, together with Bands 3, 4 and 5 of a TM-Landsat satellite image and Self-Organizing Maps. The information from the satellite image was extracted using QGIS 2.8.1 'Wien' and used as a database for training the neural network: the midpoints of each forest inventory sample were linked to the image, and the Digital Numbers of the corresponding pixels were then extracted, composing the database that fed the training and testing of the classifier. The neural network was trained to classify two forest types, Lowland Rain Forest with Emerging Canopy (Dbe) and Lowland Rain Forest with Emerging Canopy plus Open forest with palm trees (Dbe + Abp), in the Mamuru Arapiuns glebes of Pará State. The training set contained 400 examples, 200 for each class (Dbe and Dbe + Abp), and the test set contained 100 examples, 50 for each class, so the full dataset consisted of 500 examples. The classifier was built in the Orange Data Mining 2.7 software and evaluated in terms of confusion matrix indicators. The results of the classifier were considered satisfactory, with a global accuracy of 89%, a Kappa coefficient of 0.78, and an F1 score of 0.88.
The efficiency of the classifier was also evaluated with a ROC (receiver operating characteristic) plot, with results close to the ideal, showing it to be a very good classifier and demonstrating the potential of this methodology to support ecosystem services, particularly in anthropogenic areas of the Amazon.
Keywords: artificial neural network, computational intelligence, pattern recognition, unsupervised learning
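The reported indicators follow directly from a 2×2 confusion matrix; the counts below are hypothetical, but they are chosen to reproduce the stated 89% accuracy and 0.78 Kappa on a 100-example test set with 50 examples per class.

```python
def binary_metrics(tp, fn, fp, tn):
    """Global accuracy, Cohen's kappa and F1 score from a 2x2 confusion
    matrix (tp/fn are the first class's row, fp/tn the second's)."""
    n = tp + fn + fp + tn
    accuracy = (tp + tn) / n
    # chance agreement from the row/column marginals
    pe = ((tp + fn) / n) * ((tp + fp) / n) + ((fp + tn) / n) * ((fn + tn) / n)
    kappa = (accuracy - pe) / (1 - pe)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, kappa, f1

# Hypothetical counts consistent with the reported test set:
acc, kappa, f1 = binary_metrics(tp=44, fn=6, fp=5, tn=45)
```

With these counts the F1 score works out to 88/99 ≈ 0.89, in line with the abstract's reported 0.88 given rounding of the underlying matrix.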
Procedia PDF Downloads 361
304 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels
Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina
Abstract:
The effect of various levels of contamination on the interfacial air-water gas transfer velocity is studied by Direct Numerical Simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain and diffusing upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection, combined with a fourth-order central discretisation for scalar diffusion. The damping effect of the surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is Re Ma/We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (small Re Ma/We) lead to a significant drop in the interfacial gas transfer velocity K_L. It is known that K_L depends on both the Schmidt number Sc (the ratio of the kinematic viscosity to the gas diffusivity in water) and the surface divergence β, i.e. K_L ∝ √(β/Sc). It has previously been shown that this relation works well for surfaces with low to moderate contamination; however, it must break down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for Re Ma/We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of K_L with Sc remains valid also for larger Re Ma/We.
This is an important result, indicating that, for various levels of contamination, the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that, with increasing Re Ma/We, the dependency of K_L on β begins to break down, as the increased damping of near-surface fluctuations results in an increased damping of β. Especially at large levels of contamination, this damping is so severe that K_L is significantly underestimated.
Keywords: contamination, gas transfer, surfactants, turbulence
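The scaling relation K_L ∝ √(β/Sc) under discussion is simple enough to sketch directly; the prefactor c is dimensional and left as a placeholder.

```python
import math

def transfer_velocity(beta, sc, c=1.0):
    """Surface-divergence model for the gas transfer velocity,
    K_L = c * sqrt(beta / Sc); c lumps the dimensional prefactor."""
    return c * math.sqrt(beta / sc)

# Doubling Sc at fixed surface divergence reduces K_L by sqrt(2);
# as beta -> 0 (heavy contamination) the predicted K_L -> 0, which is
# exactly where the paper finds the model to be unreliable.
k1 = transfer_velocity(beta=0.5, sc=2)
k2 = transfer_velocity(beta=0.5, sc=4)
```

The breakdown reported in the abstract is precisely that the true K_L stays finite while the damped β drives this prediction toward zero.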
Procedia PDF Downloads 300
303 Transient Response of Elastic Structures Subjected to a Fluid Medium
Authors: Helnaz Soltani, J. N. Reddy
Abstract:
The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has broad impact on realistic problems encountered in the aerospace, shipbuilding, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate method to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach to study the transient response of structures interacting with a fluid medium. Since beams and plates are the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems in order to demonstrate its efficiency. The formulation combines various structural theories with the solid-fluid interface boundary condition, which represents the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equations govern the fluid domain. For each theory, a coupled set of equations is derived, where the element matrices of both regimes are calculated by Gaussian quadrature. The main feature of the proposed methodology is to model the fluid domain as an added mass, i.e., an external distributed force due to the presence of the fluid. We validate the accuracy of the formulation by means of numerical examples. Since the formulation presented in this study covers several theories from the literature, the applicability of our proposed approach is independent of any particular structural geometry.
The effect of varying parameters such as the structure thickness ratio, fluid density and immersion depth is studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response
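The added-mass idea can be illustrated on a single degree of freedom: fluid inertia added to the structural mass lowers the wet natural frequency relative to the dry one. The stiffness and mass values below are arbitrary, and a real beam or plate would distribute this effect over the element mass matrices.

```python
import math

def natural_frequency(k, m_structure, m_added=0.0):
    """Single-DOF analogue of the added-mass model: the surrounding
    fluid contributes extra inertia m_added, so the natural frequency is
    f = sqrt(k / (m + m_a)) / (2*pi), lower than the dry value."""
    return math.sqrt(k / (m_structure + m_added)) / (2 * math.pi)

f_dry = natural_frequency(k=1000.0, m_structure=2.0)
f_wet = natural_frequency(k=1000.0, m_structure=2.0, m_added=1.0)
```

Denser fluid or deeper immersion increases m_added, which is consistent with the parameter study's finding that fluid density and immersion depth considerably affect the transient response.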
Procedia PDF Downloads 567
302 Conjunctive Management of Surface and Groundwater Resources under Uncertainty: A Retrospective Optimization Approach
Authors: Julius M. Ndambuki, Gislar E. Kifanyi, Samuel N. Odai, Charles Gyamfi
Abstract:
Conjunctive management of surface and groundwater resources is a challenging task due to the spatially and temporally variable nature of the hydrology and hydrogeology of water storage systems. Surface water-groundwater hydrogeology is highly uncertain; it is therefore imperative that this uncertainty be explicitly accounted for when managing water resources. Various methodologies have been developed and applied in attempts to account for this uncertainty. For example, simulation-optimization models are often used for conjunctive water resources management. However, direct application of such an approach, in which all realizations are considered at each iteration of the optimization process, leads to a very expensive optimization in terms of computational time, particularly when the number of realizations is large. The aim of this paper, therefore, is to introduce and apply an efficient approach, referred to as Retrospective Optimization Approximation (ROA), that can be used for optimizing the conjunctive use of surface water and groundwater over multiple hydrogeological model simulations. This work is based on a stochastic simulation-optimization framework using the recently emerged technique of sample average approximation (SAA), a sampling-based method implemented within the ROA approach. The ROA approach solves and evaluates a sequence of generated optimization sub-problems with an increasing number of realizations (sample size). A response matrix technique was used to link the simulation model with the optimization procedure, and the k-means clustering sampling technique was used to map the realizations. The methodology is demonstrated through application to a hypothetical example, in which the generated optimization sub-problems were solved and analysed using the 'Active-Set' core optimizer implemented in the MATLAB 2014a environment.
Through the k-means clustering sampling technique, the ROA Active-Set procedure was able to arrive at a (nearly) converged maximum expected total optimal conjunctive water withdrawal rate within relatively few iterations (6 to 7). The results indicate that the ROA approach is a promising technique for optimizing conjunctive surface water and groundwater withdrawal rates under hydrogeological uncertainty.
Keywords: conjunctive water management, retrospective optimization approximation approach, sample average approximation, uncertainty
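The ROA/SAA pattern, solving a sequence of sample average subproblems with a growing number of realizations, can be sketched on a toy concave "withdrawal profit" objective; the objective, candidate grid and scenario distribution below are all stand-ins, not the paper's groundwater model.

```python
import random

def saa_optimize(objective, sample_sizes, candidates, seed=0):
    """Retrospective optimization sketch: solve a sequence of sample
    average approximation (SAA) subproblems with an increasing number of
    scenario realizations, each maximizing the sampled mean objective
    over the same candidate grid."""
    rng = random.Random(seed)
    solutions = []
    for n in sample_sizes:
        scenarios = [rng.gauss(1.0, 0.2) for _ in range(n)]  # uncertain parameter
        best = max(candidates,
                   key=lambda x: sum(objective(x, s) for s in scenarios) / n)
        solutions.append(best)
    return solutions

# Toy stand-in for conjunctive use: profit 2*s*x - x**2 is maximized at
# x = s, so the expected optimum is x = E[s] = 1.0.
candidates = [i / 10 for i in range(21)]  # candidate withdrawal rates 0.0 .. 2.0
sols = saa_optimize(lambda x, s: 2 * s * x - x ** 2,
                    sample_sizes=[4, 16, 64, 256], candidates=candidates)
```

As in the paper, the early small-sample subproblems are cheap and noisy while the later ones stabilize near the expected optimum, which is what makes the sequence converge in few outer iterations.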
Procedia PDF Downloads 231
301 Numerical Study of Natural Convection in Isothermal Open Cavities
Authors: Gaurav Prabhudesai, Gaetan Brill
Abstract:
The sun's energy comes from a hydrogen-to-helium thermonuclear reaction, generating a temperature of about 5760 K on its outer layer. On account of this high temperature, energy is radiated by the sun, part of which reaches the earth. This sunlight, even after losing part of its energy en route to scattering and absorption, provides a time- and space-averaged solar flux of 174.7 W/m^2 striking the earth's surface. According to one study, the solar energy striking the earth's surface in one and a half hours exceeds the energy consumption recorded in the year 2001 from all sources combined. Thus, technology for the extraction of solar energy holds much promise for solving the energy crisis. Of the many technologies developed in this regard, Concentrating Solar Power (CSP) plants with a central solar tower and receiver system are very impressive because of their capability to provide renewable energy that can be stored in the form of heat. One design of central receiver tower uses an open cavity into which sunlight is concentrated by mirrors (also called heliostats). This concentrated solar flux produces a high temperature inside the cavity, which can be utilized in an energy conversion process. The amount of energy captured is reduced by losses occurring at the cavity through all three modes, viz. radiation to the atmosphere, conduction to the adjoining structure and convection. This study investigates the natural convection losses from the receiver to the environment. Computational fluid dynamics was used to simulate the fluid flow and heat transfer of the receiver, since no analytical solution can be obtained and no empirical correlations exist for the given geometry. The results provide guidelines for predicting natural convection losses for hexagonal and circular open cavities. Additionally, correlations are given for various inclination angles and aspect ratios.
These results provide methods to minimize natural convection through careful design of the receiver geometry and modification of the cavity's inclination angle and aspect ratio. Keywords: concentrated solar power (CSP), central receivers, natural convection, CFD, open cavities
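As a rough illustration of how such correlations are applied, the sketch below estimates the convective loss from a heated open cavity with a generic power-law Nusselt correlation, Nu = A·Ra^n. The coefficients A and n, the air properties, and the cavity dimensions are all placeholder values for illustration, not the correlations derived in the paper:

```python
import math

def convection_loss(T_wall, T_amb, L, area, A=0.1, n=0.33):
    """Estimate natural convection loss from an open cavity (W).

    Uses a generic correlation Nu = A * Ra**n; A and n are placeholders.
    Real values depend on cavity shape, inclination and aspect ratio,
    which is what the paper's correlations capture.
    """
    # Approximate air properties at the film temperature (1 atm)
    T_film = 0.5 * (T_wall + T_amb)   # K
    beta = 1.0 / T_film               # ideal-gas expansion coefficient, 1/K
    nu = 1.5e-5                       # kinematic viscosity, m^2/s
    alpha = 2.2e-5                    # thermal diffusivity, m^2/s
    k = 0.026                         # thermal conductivity, W/(m K)
    g = 9.81                          # m/s^2

    Ra = g * beta * (T_wall - T_amb) * L**3 / (nu * alpha)  # Rayleigh number
    Nu = A * Ra**n                                          # generic correlation
    h = Nu * k / L                                          # heat transfer coefficient
    return h * area * (T_wall - T_amb)

# Hypothetical 900 K cavity wall, 1 m aperture length, 3 m^2 hot area.
q = convection_loss(T_wall=900.0, T_amb=300.0, L=1.0, area=3.0)
```

Once shape-specific coefficients are fitted from CFD, the same one-liner gives design-stage loss estimates without rerunning the simulation.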
Procedia PDF Downloads 288300 CFD Simulation of the Pressure Distribution in the Upper Airway of an Obstructive Sleep Apnea Patient
Authors: Christina Hagen, Pragathi Kamale Gurmurthy, Thorsten M. Buzug
Abstract:
CFD simulations are performed in the upper airway of a patient suffering from obstructive sleep apnea (OSA), a sleep-related breathing disorder characterized by repetitive partial or complete closures of the upper airways. The simulations are aimed at a better understanding of the pathophysiological flow patterns in an OSA patient, and are compared to medical data from a sleep endoscopic examination under sedation. A digital model consisting of surface triangles of the upper airway is extracted from the MR images by a region-growing segmentation process, followed by careful manual refinement. The computational domain includes the nasal cavity, with the nostrils as the inlet areas, and the pharyngeal volume, with an outlet underneath the larynx. At the nostrils, a flat inflow velocity profile is prescribed, with the velocity chosen such that a volume flow rate of 150 ml/s is reached. Behind the larynx, at the outlet, a pressure of -10 Pa is prescribed. The stationary incompressible Navier-Stokes equations are solved numerically using finite elements, and a grid convergence study has been performed. The results show an amplification of the maximal velocity to about 2.5 times the inlet velocity at a constriction of the pharyngeal volume in the area of the tongue. The same region also shows the largest pressure drop, of about 5 Pa. This is in agreement with the sleep endoscopic examination of the same patient under sedation, which showed complete contractions in the area of the tongue. CFD simulation can become a useful tool in the diagnosis and therapy of obstructive sleep apnea: by giving insight into the patient's individual fluid-dynamical situation in the upper airways, it offers a better understanding of the disease where experimental measurements are not feasible.
Within this study, it could be shown, on the one hand, that constriction areas within the upper airway lead to a significant pressure drop and, on the other hand, that the area of the pressure drop agrees well with the area of contraction. Keywords: biomedical engineering, obstructive sleep apnea, pharynx, upper airways
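The prescribed boundary conditions and the reported 2.5x velocity amplification permit a quick back-of-the-envelope check. The sketch below derives the flat inlet velocity from the prescribed 150 ml/s flow rate, assuming a hypothetical combined nostril cross-section of about 140 mm² (the true area comes from the segmented MR geometry), and then makes a Bernoulli estimate of the constriction pressure drop:

```python
def inlet_velocity(Q_ml_s, area_mm2):
    """Flat inflow velocity (m/s) from a volume flow rate and inlet area."""
    Q = Q_ml_s * 1e-6          # m^3/s
    A = area_mm2 * 1e-6        # m^2
    return Q / A

# Assumed combined nostril cross-section (hypothetical value).
v_in = inlet_velocity(150.0, 140.0)

# Bernoulli estimate of the pressure drop at a constriction where the
# velocity is amplified ~2.5x, as reported in the simulation.
rho = 1.2                      # air density, kg/m^3
v_max = 2.5 * v_in
dp = 0.5 * rho * (v_max**2 - v_in**2)   # Pa
```

The resulting few-pascal drop is of the same order as the ~5 Pa reported, which is a useful sanity check, though the full finite-element solution is of course needed for the actual geometry.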
Procedia PDF Downloads 306299 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation
Authors: Kausar Harun, Ahmad Azmin Mohamad
Abstract:
Zinc oxide (ZnO) has been used extensively in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved more accurate and time-efficient. Integration of these two methods is therefore essential to closely resemble the properties of synthesized ZnO. In this study, ZnO nanoparticles were grown experimentally by the sol-gel storage method, with zinc acetate dihydrate as precursor and methanol as solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was recorded as 12 hours. Phase and structural analysis showed that single-phase ZnO with the wurtzite hexagonal structure was produced. Further quantitative analysis was done via the Rietveld refinement method to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions, a = b = 3.2498 Å and c = 5.2068 Å, were later used as the main input for first-principles calculations. By applying density functional theory (DFT) as embedded in the CASTEP computer code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized-gradient-approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) gave the structure with the lowest energy and smallest lattice deviations. In this study, emphasis was also given to modifying the valence-electron energy levels to overcome the underestimation typical of DFT calculations: the Zn and O valence energies were fixed at Ud = 8.3 eV and Up = 7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated with the GGA-PBE+U functional within the ultrasoft pseudopotential method.
In conclusion, the incorporation of Rietveld analysis into first-principles calculation was validated, as the resulting properties were comparable with those reported in the literature. The time needed to evaluate certain properties by physical testing can thus be eliminated, since the simulation can be done computationally. Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles
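The refined lattice parameters fully determine the hexagonal cell geometry that is passed to the DFT code. A minimal sketch of the quantities derivable directly from them:

```python
import math

def hexagonal_cell_volume(a, c):
    """Unit-cell volume of a hexagonal lattice: V = (sqrt(3)/2) * a^2 * c."""
    return math.sqrt(3.0) / 2.0 * a * a * c

# Rietveld-refined lattice parameters from the abstract (Angstrom).
a, c = 3.2498, 5.2068
volume = hexagonal_cell_volume(a, c)   # Angstrom^3
c_over_a = c / a

# Ideal wurtzite has c/a = sqrt(8/3) ~ 1.633; the deviation is one
# simple indicator of lattice distortion in the synthesized powder.
distortion = c_over_a - math.sqrt(8.0 / 3.0)
```

These derived values (cell volume, c/a ratio) are exactly the kind of structural quantities one compares between the refined experimental cell and the DFT-optimized one.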
Procedia PDF Downloads 309298 Four-Electron Auger Process for Hollow Ions
Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola
Abstract:
A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near-K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of the radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrodinger equation in imaginary time, using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential owing to the large memory required to store the coupled wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time, using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained from configuration-average distorted-wave theory. As expected, the double and triple autoionization rates are much smaller than the total autoionization rates. Future work could extend this approach to electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA. Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method
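The imaginary-time relaxation used to prepare initial states can be illustrated in one dimension. The sketch below relaxes an arbitrary wave function toward the ground state of a harmonic oscillator on a uniform lattice; the paper's actual calculation works on a four-dimensional radial grid with Schmidt orthogonalization against interior subshells, which is far beyond this toy:

```python
import numpy as np

# Propagating dpsi/dtau = -H psi in imaginary time damps excited
# components exponentially, leaving the ground state after renormalization.
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2                      # harmonic potential (atomic units)
psi = np.exp(-((x - 1.0) ** 2))     # arbitrary starting guess

dtau = 0.001                        # below the explicit stability limit dx^2
for _ in range(5000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    psi = psi - dtau * (-0.5 * lap + V * psi)   # Euler step of -H psi
    psi /= np.sqrt(np.sum(psi**2) * dx)         # renormalize each step

# Rayleigh quotient estimate of the energy; the exact ground state is 0.5.
lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
E = np.sum(psi * (-0.5 * lap + V * psi)) * dx
```

The same relax-and-renormalize loop, with extra orthogonalization steps to exclude occupied subshells, is the essence of preparing excited initial configurations.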
Procedia PDF Downloads 153297 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine
Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi
Abstract:
Diesel engines are prized for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines, the requirements are altogether different. These engines are mostly used in the maritime industry, in agriculture, and as static engines driving compressors. High-torque low-speed engines are nonetheless quite often neglected and are notorious for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in mixture formation, fuel consumption, combustion efficiency, and soot emissions. A comprehensive understanding of fuel spray characteristics and the atomization process in high-torque low-speed diesel engines is therefore of great importance, and evaporation in the combustion chamber has a strong effect on engine efficiency. In this paper, multiphase evaporation of fuel is modeled for a high-torque low-speed engine using computational fluid dynamics (CFD) codes. Two distinct phases of evaporation are modeled using modeling software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equation, and the O'Rourke model is used to model the evaporation phases. The results showed a considerable effect on engine efficiency: the evaporation rate of a fuel droplet increases with vapor pressure, and an appreciable reduction in droplet size is achieved by adding convective heat effects in the combustion chamber. Overall, an increase in efficiency is observed by modeling the distinct evaporation phases.
This increase in efficiency stems from the reduced droplet size and the increased vapor pressure in the engine cylinder. Keywords: diesel fuel, CFD, evaporation, multiphase
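The reported trends (faster evaporation at higher vapor pressure, smaller droplets with convective heating) can be caricatured with the classical d²-law; the paper's simulations use the richer O'Rourke model, and the rate constants below are purely illustrative:

```python
import math

def droplet_diameter(d0, K, t):
    """d^2-law: the squared droplet diameter decreases linearly in time,
    d(t)^2 = d0^2 - K*t, where K is the evaporation-rate constant.
    K grows with vapor pressure and convective heat transfer, consistent
    with the trends in the paper; values here are hypothetical.
    """
    d2 = d0**2 - K * t
    return math.sqrt(d2) if d2 > 0 else 0.0

d0 = 50e-6                 # 50 micron diesel droplet (illustrative)
K_still = 2.5e-7           # m^2/s, hypothetical rate constant, quiescent gas
K_conv = 2.0 * K_still     # convective heating roughly doubles it here

t = 4e-3                   # after 4 ms
d_still = droplet_diameter(d0, K_still, t)
d_conv = droplet_diameter(d0, K_conv, t)
```

Even this crude model reproduces the qualitative result: adding convective heat effects shrinks the droplet markedly faster.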
Procedia PDF Downloads 343296 Adsorption and Desorption Behavior of Ionic and Nonionic Surfactants on Polymer Surfaces
Authors: Giulia Magi Meconi, Nicholas Ballard, José M. Asua, Ronen Zangi
Abstract:
Experimental and computational studies are combined to elucidate the adsorption properties of ionic and nonionic surfactants on a hydrophobic polymer surface such as poly(styrene). To represent these two types of surfactant, sodium dodecyl sulfate and poly(ethylene glycol)-block-poly(ethylene), both commonly utilized in emulsion polymerization, were chosen. Quartz crystal microbalance with dissipation monitoring shows that, at low surfactant concentrations, ionic surfactants are easier to desorb (as measured by rate) than nonionic surfactants. In molecular dynamics simulations, the effective attractive force of the nonionic surfactants to the surface increases as their concentration decreases, whereas the ionic surfactant mildly exhibits the opposite trend. The contrasting behavior of ionic and nonionic surfactants rests on two observations from the simulations. The first is a large degree of interweaving between head and tail groups in the adsorbed layer formed by the nonionic surfactant (PEO/PE systems). The second is that water molecules penetrate this layer. In the disordered layer these nonionic surfactants form at the surface, only oxygens of the head groups at the interface with the water phase, or oxygens next to the penetrating waters, can form hydrogen bonds. Oxygens inside the layer lose this favorable energy, by an amount that increases with the surfactant density at the interface. This reduced stability of the surfactants diminishes their driving force for adsorption, in accordance with the experimental results on the dynamics of surfactant desorption. The ionic surfactants, by contrast, assemble into an ordered structure, and their attraction to the surface is even slightly augmented at higher surfactant concentration, in agreement with the experimentally determined adsorption isotherm.
The two types of surfactant behave differently because the ionic surfactant has a small, strongly hydrophilic head group, whereas the head groups of the nonionic surfactants are large and only weakly attracted to water. Keywords: emulsion polymerization process, molecular dynamics simulations, polymer surface, surfactants adsorption
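The concentration dependence of adsorption discussed above is commonly summarized by an isotherm. A minimal Langmuir sketch with illustrative (not fitted) binding constants shows how a stronger effective surface attraction translates into higher coverage at low concentration:

```python
def langmuir_coverage(c, K):
    """Langmuir isotherm: fractional surface coverage theta = K*c / (1 + K*c).

    A minimal equilibrium-adsorption sketch; K is an illustrative
    binding constant, not a value fitted to the paper's data.
    """
    return K * c / (1.0 + K * c)

# A stronger effective surface attraction at low concentration (as the
# simulations found for the nonionic surfactant) appears as a larger K.
K_nonionic, K_ionic = 50.0, 5.0          # arbitrary units
c_low = 0.01

theta_nonionic_low = langmuir_coverage(c_low, K_nonionic)
theta_ionic_low = langmuir_coverage(c_low, K_ionic)
```

Fitting such a model to QCM-D frequency shifts at several concentrations is one standard way to quantify the adsorption strengths the study compares.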
Procedia PDF Downloads 343295 Comparison of the Thermal Behavior of Different Crystal Forms of Manganese(II) Oxalate
Authors: B. Donkova, M. Nedyalkova, D. Mehandjiev
Abstract:
Sparingly soluble manganese oxalate is an appropriate precursor for the preparation of nanosized manganese oxides, which have a wide range of technological applications. During the precipitation of manganese oxalate, three crystal forms can be obtained: α-MnC₂O₄.2H₂O (SG C2/c), γ-MnC₂O₄.2H₂O (SG P212121), and orthorhombic MnC₂O₄.3H₂O (SG Pcca). The thermolysis of α-MnC₂O₄.2H₂O has been studied extensively over the years, while literature data for the other two forms are quite scarce. The aim of the present communication is to highlight the influence of the initial crystal structure on the decomposition mechanism of these three forms, their magnetic properties, the structure of the anhydrous oxalates, and the nature of the resulting oxides. The samples were characterized by XRD, SEM, DTA, TG, DSC, nitrogen adsorption, and in situ magnetic measurements. Dehydration proceeds in one step for α-MnC₂O₄.2H₂O and γ-MnC₂O₄.2H₂O, and in three steps for MnC₂O₄.3H₂O. The dehydration enthalpies are 97, 149, and 132 kJ/mol, respectively; to the best of our knowledge, the last two are reported here for the first time. The magnetic measurements show that at room temperature all samples are antiferromagnetic; however, during dehydration the exchange interaction of α-MnC₂O₄.2H₂O is preserved, that of MnC₂O₄.3H₂O changes to ferromagnetic above 35°C, and that of γ-MnC₂O₄.2H₂O changes twice, from antiferromagnetic to ferromagnetic, above 70°C. The experimental magnetic results are in accordance with computational results obtained with the Wien2k code. The difference in the initial crystal structures determines different changes in specific surface area during dehydration and different extents of Mn(II) oxidation during decomposition in air, both being highest for α-MnC₂O₄.2H₂O.
The isothermal decomposition of the different oxalate forms shows that the type and physicochemical properties of the oxides obtained at the same annealing temperature depend on the precursor used. Based on the results of the non-isothermal and isothermal experiments, and on the different characterization methods, a comparison of the nature, mechanism, and peculiarities of the thermolysis of the different crystal forms of manganese oxalate was made, which clearly reveals the influence of the initial crystal structure. Acknowledgment: 'Science and Education for Smart Growth', project BG05M2OP001-2.009-0028, COST Action MP1306 'Modern Tools for Spectroscopy on Advanced Materials', and project DCOST-01/18 (Bulgarian Science Fund). Keywords: crystal structure, magnetic properties, manganese oxalate, thermal behavior
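The dehydration steps seen in the TG curves can be checked against theoretical mass losses. A short worked computation for the dihydrate and trihydrate forms:

```python
# Theoretical thermogravimetric mass-loss steps for the dehydration of
# MnC2O4.2H2O and MnC2O4.3H2O, useful for checking measured TG curves.
M = {"Mn": 54.938, "C": 12.011, "O": 15.999, "H": 1.008}   # g/mol

M_oxalate = 2 * M["C"] + 4 * M["O"]          # C2O4 group
M_water = 2 * M["H"] + M["O"]                # H2O

def water_loss_percent(n_water):
    """Mass-loss fraction (%) on losing n_water H2O from MnC2O4.nH2O."""
    hydrate = M["Mn"] + M_oxalate + n_water * M_water
    return 100.0 * n_water * M_water / hydrate

loss_dihydrate = water_loss_percent(2)    # alpha and gamma forms, ~20.1 %
loss_trihydrate = water_loss_percent(3)   # orthorhombic form, ~27.4 %
```

Agreement between these theoretical steps and the observed TG losses confirms that the mass-loss events are indeed dehydration rather than partial oxalate decomposition.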
Procedia PDF Downloads 171294 Three Issues for Integrating Artificial Intelligence into Legal Reasoning
Authors: Fausto Morais
Abstract:
Artificial intelligence has been widely used in law. Programs are able to classify suits, identify decision-making patterns, predict outcomes, and formalize legal arguments. In Brazil, the artificial intelligence Victor has been classifying cases against the Supreme Court's standards. When programs perform these tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the argument of legal anthropocentrism, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court's usage of the Victor program. The program has generated efficiency and consistency; on the other hand, there is a real risk of over-standardizing factual and normative legal features. Legal clerks and programmers should therefore work together to develop an adequate way of modeling legal language in computational code. If this is possible, intelligent programs may enact legal decisions in easy cases automatically, and at this point the legal anthropocentrism argument takes place. This argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and self-unity. It is nonetheless possible to argue against the anthropocentrism argument and to show how intelligent programs may work around human shortcomings such as misleading cognition, emotions, and lack of memory. In this way, intelligent machines could pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificial intelligence programs can be helpful beyond easy cases.
In hard cases, they are able to identify legal standards and legal arguments by using machine learning. This requires a dataset of legal decisions regarding a particular matter, which is a reality in the Brazilian judiciary. With such a procedure, artificial intelligence programs can support a human decision in hard cases by providing legal standards and arguments based on empirical evidence. These legal features carry argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or overturn a legal standard. Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning
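The kind of supervised classification described here can be sketched in miniature. The toy nearest-precedent classifier below assigns a suit summary to the most similar labeled example via bag-of-words cosine similarity; real systems such as Victor use far richer features and much larger corpora, and the mini-corpus here is entirely hypothetical:

```python
from collections import Counter

def bag_of_words(text):
    """Term-frequency vector of a lowercased, whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = lambda v: sum(x * x for x in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def classify(text, labeled_examples):
    """Assign the label of the most similar precedent: a toy stand-in for
    the supervised classification of suits described in the abstract."""
    scores = {label: max(cosine(bag_of_words(text), bag_of_words(e))
                         for e in examples)
              for label, examples in labeled_examples.items()}
    return max(scores, key=scores.get)

# Hypothetical mini-corpus of suit summaries.
corpus = {
    "tax": ["appeal against federal tax assessment on imported goods"],
    "labor": ["claim for unpaid overtime wages by dismissed employee"],
}
label = classify("employee claims unpaid wages after dismissal", corpus)
```

Because the classifier only mirrors patterns in its training corpus, it also illustrates the hypernormatization worry: whatever standardization exists in the precedents is reproduced automatically.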
Procedia PDF Downloads 145293 Construction of Ovarian Cancer-on-Chip Model by 3D Bioprinting and Microfluidic Techniques
Authors: Zakaria Baka, Halima Alem
Abstract:
Cancer is a major worldwide health problem that caused around ten million deaths in 2020. In addition, efforts to develop new anti-cancer drugs still face a high failure rate. This is partly due to the lack of preclinical models that recapitulate in-vivo drug responses: the conventional cell culture approach (known as 2D cell culture) is far from reproducing the complex, dynamic, and three-dimensional environment of tumors. For setting up more in-vivo-like cancer models, 3D bioprinting is a promising technology owing to its ability to produce 3D scaffolds containing different cell types with controlled distribution and precise architecture. Moreover, the introduction of microfluidic technology makes it possible to simulate in-vivo dynamic conditions through so-called 'cancer-on-chip' platforms. Whereas several cancer types, such as lung cancer and breast cancer, have been modeled through the cancer-on-chip approach, only a few works describe ovarian cancer models. The aim of this work is to combine 3D bioprinting and microfluidic techniques to set up a 3D dynamic model of ovarian cancer. In the first phase, an alginate-gelatin hydrogel containing SKOV3 cells was used to produce tumor-like structures with an extrusion-based bioprinter. The desired form of the tumor-like mass was first designed in 3D CAD software, and the hydrogel composition was then optimized to ensure good and reproducible printability. Cell viability in the bioprinted structures was assessed using Live/Dead and WST1 assays. In the second phase, these bioprinted structures will be included in a microfluidic device that allows simultaneous testing of different drug concentrations. This microfluidic device was first designed through computational fluid dynamics (CFD) simulations to fix its precise dimensions, and was then manufactured by a molding method based on a 3D-printed template.
To confirm the results of the CFD simulations, doxorubicin (DOX) solutions were perfused through the device, and the DOX concentration in each culture chamber was determined. Once fully characterized, this model will be used to assess the efficacy of anti-cancer nanoparticles developed in the Jean Lamour institute. Keywords: 3D bioprinting, ovarian cancer, cancer-on-chip models, microfluidic techniques
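The dimensioning of such a chip typically starts from lumped hydraulic estimates before full CFD. A sketch with hypothetical channel dimensions (not the chip's actual geometry) using the standard approximation for the resistance of a shallow rectangular microchannel:

```python
def hydraulic_resistance(L, w, h, mu=1e-3):
    """Approximate hydraulic resistance of a rectangular microchannel
    with h < w:  R ~ 12*mu*L / (w*h^3*(1 - 0.63*h/w)),  in Pa*s/m^3.
    mu is the dynamic viscosity (water ~1e-3 Pa*s)."""
    return 12.0 * mu * L / (w * h**3 * (1.0 - 0.63 * h / w))

# Hypothetical channel dimensions; the real chip geometry was fixed
# from the CFD simulations described in the abstract.
L, w, h = 10e-3, 500e-6, 100e-6      # length, width, height (m)
R = hydraulic_resistance(L, w, h)

dP = 1000.0                           # applied pressure drop, Pa
Q = dP / R                            # volumetric flow rate, m^3/s
Q_ul_min = Q * 1e9 * 60               # converted to microliters per minute
```

Such estimates bound the per-chamber flow rates, and the subsequent DOX perfusion measurements then verify the concentrations the CFD predicts in each culture chamber.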
Procedia PDF Downloads 196292 Structural Strength Evaluation and Wear Prediction of Double Helix Steel Wire Ropes for Heavy Machinery
Authors: Krunal Thakar
Abstract:
Wire ropes combine high tensile strength and flexibility compared with other general steel products. They are used in various applications, such as cranes, mining, elevators, bridges, and cable cars. The earliest reported use of wire ropes was for mining hoists in the 1830s, and since then there has been substantial advancement in the design of wire ropes for various applications. Under operational conditions, wire ropes are subjected to varying tensile and bending loads, resulting in material wear and eventual structural failure due to fretting fatigue. Conventional inspection methods for detecting wire failure are limited to the outer wires of the rope, and to date there is no effective mathematical model for examining inter-wire contact forces and wear characteristics. The scope of this paper is to present a computational simulation technique for evaluating inter-wire contact forces and wear, which are in many cases responsible for rope failure. Two different rope types, IWRC-6xFi(29) and U3xSeS(48), were taken for structural strength evaluation and wear prediction. Both ropes have a double-helix twisted wire profile per JIS standards and are mainly used in cranes. CAD models of both ropes were developed in general-purpose design software, using an in-house formulation to generate the double-helix profile. Numerical simulation was carried out for two load cases: (a) axial tension and (b) bending over sheave. Parameters such as stresses, contact forces, wear depth, and load-elongation were investigated and compared between the two ropes. The numerical simulation method enables detailed investigation of inter-wire contact and wear characteristics.
In addition, various selection parameters, such as sheave diameter, rope diameter, helix angle, swaging, and maximum load-carrying capacity, can be analyzed quickly. Keywords: steel wire ropes, numerical simulation, material wear, structural strength, axial tension, bending over sheave
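A double-helix wire profile of the kind generated by the in-house formulation can be parametrized as a helix wound around a helical strand centerline. The simplified sketch below (no Frenet-frame correction, illustrative radii and pitches) generates such a wire centerline for export to CAD:

```python
import math

def double_helix_point(t, R_strand, p_strand, r_wire, p_wire):
    """Point on a double-helix wire centerline.

    The strand centerline is a helix of radius R_strand and pitch p_strand
    around the rope axis; the wire winds with radius r_wire and pitch
    p_wire around that strand centerline. This is a simplified
    parametrization in the spirit of, not identical to, the in-house
    formulation mentioned in the abstract.
    """
    a1 = 2.0 * math.pi * t / p_strand          # strand phase about rope axis
    a2 = 2.0 * math.pi * t / p_wire            # wire phase about strand
    # Strand centerline
    sx, sy, sz = R_strand * math.cos(a1), R_strand * math.sin(a1), t
    # Offset the wire radially and axially around the strand centerline
    wx = sx + r_wire * math.cos(a2) * math.cos(a1)
    wy = sy + r_wire * math.cos(a2) * math.sin(a1)
    wz = sz + r_wire * math.sin(a2)
    return wx, wy, wz

# Illustrative dimensions in mm (not the JIS rope geometries of the paper).
pts = [double_helix_point(t / 100.0, R_strand=8.0, p_strand=60.0,
                          r_wire=2.0, p_wire=15.0) for t in range(601)]
```

Sweeping a circular wire cross-section along such curves for every wire in the rope yields the solid CAD model that the contact simulation is meshed from.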
Procedia PDF Downloads 152291 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the disease appeared in the UK in late January 2020, the number of people confirmed to have acquired the illness has increased tremendously across the country. The purpose of this research is to develop a predictive machine learning model that can forecast COVID-19 cases within the UK. This study concentrates on statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new daily cases, total deaths registered, and daily deaths due to coronavirus was collected from the World Health Organisation (WHO). Data preprocessing was carried out to identify any missing values, outliers, or anomalies in the dataset. The data were split in an 8:2 ratio for training and testing in order to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression were chosen to study model performance in the prediction of new COVID-19 cases. The statistical performance of each model was evaluated using metrics such as the r-squared value and the mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest was 4.05e11, lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs the most effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus. Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
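The evaluation pipeline described (8:2 chronological split, fit, then score with MSE and r-squared) can be sketched on synthetic data. Plain linear regression stands in below because it has a closed form; the study's best performer was a Random Forest, and the series here is synthetic, not the WHO data:

```python
import numpy as np

# Synthetic daily case counts with a linear trend plus noise.
rng = np.random.default_rng(0)
days = np.arange(300, dtype=float)
cases = 50.0 * days + 200.0 + rng.normal(0.0, 25.0, size=days.size)

split = int(0.8 * days.size)                      # 8:2 train/test split
X_tr, X_te = days[:split], days[split:]
y_tr, y_te = cases[:split], cases[split:]

# Ordinary least squares fit: cases = a*day + b
a, b = np.polyfit(X_tr, y_tr, 1)
pred = a * X_te + b

# The two evaluation metrics used in the study.
mse = np.mean((y_te - pred) ** 2)
r2 = 1.0 - np.sum((y_te - pred) ** 2) / np.sum((y_te - np.mean(y_te)) ** 2)
```

Swapping the OLS fit for a Random Forest regressor, with the same split and metrics, reproduces the study's comparison protocol.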
Procedia PDF Downloads 121290 Text as Reader Device Improving Subjectivity on the Role of Attestation between Interpretative Semiotics and Discursive Linguistics
Authors: Marco Castagna
Abstract:
The proposed paper inquires into the relation between text and reader, focusing on the concept of 'attestation'. Indeed, despite being widely accepted in semiotic research, even today the concept of text remains uncertainly defined. It seems undeniable that what is called 'text' offers an image of internal cohesion and coherence that makes it possible to analyze it as an object. Nevertheless, this same object becomes problematic when it is pragmatically activated by the act of reading. Like the T.A.R.D.I.S., the unique space-time vehicle used by the well-known BBC character Doctor Who in his adventures, every text appears to its readers not only 'bigger inside than outside' but also as offering spaces that change according to the traveller standing in it. In a few words, this singular condition raises questions about the gnosiological relation between text and reader. How can a text be considered the 'same' even if it can be read in different ways by different subjects? How can readers be previously provided with the knowledge required for 'understanding' a text, yet at the same time learn something more from it? To explain this singular condition, it seems useful to start thinking of text as a device more than an object. In other words, this unique status is more clearly understandable when 'text' ceases to be considered a box designed to move meaning from a sender to a recipient (marking the semiotic priority of the 'code') and starts to be recognized as a performative meaning hypothesis, discursively configured by one or more forms and empirically perceivable by means of one or more substances. Thus, a text appears as a 'semantic hanger', potentially offered to the 'unending deferral of the interpretant' and from time to time fixed as an 'instance of Discourse'.
In this perspective, every reading can be considered an answer to the continuous request for confirming or denying the meaning configuration (the meaning hypothesis) expressed by the text. Finally, 'attestation' is exactly what regulates this dynamic of request and answer, through which the reader is able to confirm his previous hypotheses about reality or perhaps acquire new ones. Keywords: attestation, meaning, reader, text
Procedia PDF Downloads 237289 Multi-omics Integrative Analysis with Genome-Scale Metabolic Model Simulation Reveals Reaction Essentiality data in Human Astrocytes Under the Lipotoxic Effect of Palmitic Acid
Authors: Janneth Gonzalez, Andres Pinzon Velasco, Maria Angarita, Nicolas Mendoza
Abstract:
Astrocytes play an important role in various processes in the brain, including pathological conditions such as neurodegenerative diseases. Recent studies have shown that an increase in saturated fatty acids such as palmitic acid (PA) triggers pro-inflammatory pathways in the brain. The use of synthetic neurosteroids such as tibolone has demonstrated neuroprotective mechanisms; however, there are few studies on these mechanisms, especially at the systemic (omic) level. In this study, we integrated multi-omic data (transcriptome and proteome) into a genome-scale metabolic model of the human astrocyte to study the astrocytic response during palmitate treatment. We evaluated metabolic fluxes in three scenarios (healthy, inflammation induced by PA, and tibolone treatment under PA inflammation), and used control theory to identify the reactions that control the astrocytic system. Our results suggest that PA modulates central and secondary metabolism, changing the energy source used, through inhibition of the folate cycle and fatty acid β-oxidation and upregulation of ketone body formation. We found 25 metabolic switches under PA-mediated cellular regulation, 9 of which were critical only in the inflammatory scenario and not in the protective tibolone one. Within these reactions, inhibitory, total, and directional coupling profiles were key findings, playing a fundamental role in the (de)regulation of metabolic pathways that increase neurotoxicity and representing potential treatment targets. Finally, this study framework facilitates the understanding of metabolic regulation strategies and can be used to explore in silico the mechanisms of astrocytic cell regulation, directing more complex future experimental work in neurodegenerative diseases. Keywords: astrocytes, data integration, palmitic acid, computational model, multi-omics, control theory
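The core computation behind genome-scale metabolic model simulation is a flux-balance linear program. The toy three-reaction sketch below (not the astrocyte model, and using SciPy's generic LP solver rather than a dedicated FBA package) maximizes an objective flux subject to steady-state mass balance, and shows how a reaction knockout reveals essentiality:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux-balance analysis (FBA): maximize an objective flux v3 subject
# to steady-state mass balance S @ v = 0 and flux bounds.
#   R1: -> A (uptake),  R2: A -> B (enzyme-limited),  R3: B -> biomass
S = np.array([[1.0, -1.0, 0.0],      # metabolite A balance
              [0.0, 1.0, -1.0]])     # metabolite B balance
bounds = [(0, 10), (0, 8), (0, None)]
c = [0.0, 0.0, -1.0]                 # maximize v3 == minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
fluxes = res.x                        # optimal flux distribution

# Knocking out R2 (an "essential" reaction) drops the objective to zero,
# the same kind of reaction-essentiality test the study performs at scale.
res_ko = linprog(c, A_eq=S, b_eq=np.zeros(2),
                 bounds=[(0, 10), (0, 0), (0, None)])
```

In the full workflow, the omics data constrain the flux bounds per scenario, and repeating the knockout test for every reaction yields the essentiality and coupling profiles reported.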
Procedia PDF Downloads 121288 Experimental and Numerical Investigation of Micro-Welding Process and Applications in Digital Manufacturing
Authors: Khaled Al-Badani, Andrew Norbury, Essam Elmshawet, Glynn Rotwell, Ian Jenkinson , James Ren
Abstract:
Micro-welding procedures are widely used for joining materials and for developing duplex components or functional surfaces through methods such as micro discharge welding and spot welding, which are found in the engineering, aerospace, automotive, biochemical, biomedical, and numerous other industries. The relationship between material properties, structure, and processing is very important for improving the structural integrity and final performance of welded joints. This includes controlling the shape and size of the welding nugget, the state of the heat-affected zone, residual stress, etc. Nowadays, modern high-volume production requires the welding of far more versatile shapes, sizes, and material systems suitable for various applications. Hence, an improved understanding of the micro-welding process, together with digital tools based on computational numerical modelling that link key welding parameters, dimensional attributes, and functional performance of the weldment, would directly benefit the industry in developing products that meet current and future market demands. This paper introduces recent work on developing an integrated experimental and numerical modelling code for micro-welding techniques, covering similar and dissimilar materials for both ferrous and non-ferrous metals at different scales. The paper also presents a comparative study of the differences between the micro discharge welding process and the spot welding technique with regard to the size effect of the welding zone and the changes in material structure. A numerical modelling method for the micro-welding processes and their effects on material properties, during melting and cooling at different scales, is also presented.
Finally, the applications of the integrated numerical modelling and the material development for the digital manufacturing of welding are discussed with reference to typical application cases such as sensors (thermocouples), energy (heat exchangers) and automotive structures (duplex steel structures).
Keywords: computer modelling, droplet formation, material distortion, materials forming, welding
Procedia PDF Downloads 255
287 Health Risk Assessment of Exposure to Benzene in Office Buildings around a Chemical Industry Based on Numerical Simulation
Authors: Majid Bayatian, Mohammadreza Ashouri
Abstract:
Releasing hazardous chemicals is one of the major problems for office buildings in the chemical industry, and environmental risks are therefore inherent to these environments. The adverse health effects of airborne benzene concentrations have been a matter of significant concern, especially in oil refineries. The chronic and acute adverse health effects caused by benzene exposure have attracted wide attention. Acute exposure to benzene through inhalation can cause headaches, dizziness, drowsiness, and irritation of the skin. Chronic exposure has been reported to cause aplastic anemia and leukemia in occupational settings. The association between chronic occupational exposure to benzene and the development of aplastic anemia and leukemia has been documented by several epidemiological studies. Numerous research works have investigated benzene emissions, determined benzene concentrations at different locations of refinery plants, and reported considerable health risks. The high cost of industrial control measures requires justification through lifetime health risk assessment of exposed workers and the public. In the present study, a Computational Fluid Dynamics (CFD) model is proposed to assess the exposure risk of an office building near a refinery due to its release of benzene. For the simulation, GAMBIT, FLUENT, and CFD-Post software were used as pre-processor, processor, and post-processor, and the model was validated by comparison with experimental measurements of benzene concentration and wind speed. The validation results showed close agreement, so the model can be used for health risk assessment. The simulation and risk assessment results showed that benzene can disperse to a nearby office building and that the exposure risk is unacceptable.
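The lifetime health risk assessment mentioned above is commonly done by converting a modelled concentration into an excess lifetime cancer risk via an inhalation unit risk (IUR). The sketch below is a generic screening calculation, not the study's method; the default IUR is EPA IRIS's upper-bound estimate for benzene (7.8e-6 per µg/m³), and the occupancy parameters and the 100 µg/m³ input are hypothetical.

```python
def lifetime_cancer_risk(conc_ugm3, iur_per_ugm3=7.8e-6,
                         hours_per_day=8, days_per_year=250,
                         exposure_years=25, lifetime_years=70):
    """Screening-level excess lifetime cancer risk from inhalation.

    Scales a workplace concentration to a continuous lifetime-average
    equivalent, then multiplies by the inhalation unit risk (IUR).
    """
    adjustment = (hours_per_day / 24) * (days_per_year / 365) \
                 * (exposure_years / lifetime_years)
    return conc_ugm3 * adjustment * iur_per_ugm3

# Hypothetical CFD result at the office facade: 100 µg/m^3 of benzene
risk = lifetime_cancer_risk(100.0)
acceptable = risk <= 1e-6   # a commonly used de-minimis risk threshold
```

With these illustrative numbers the computed risk exceeds the 1-in-a-million threshold, which is the sense in which an exposure is judged "unacceptable" in such assessments.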
According to the results of this study, a validated CFD model could be very useful for decision-makers in selecting control measures and could support emergency planning for probable accidents. The model can also be used to assess exposure in various types of accidents, as well as to other pollutants such as toluene, xylene, and ethylbenzene, under different atmospheric conditions.
Keywords: health risk assessment, office building, benzene, numerical simulation, CFD
Procedia PDF Downloads 130
286 Investigation of Heat Conduction through Particulate Filled Polymer Composite
Authors: Alok Agrawal, Alok Satapathy
Abstract:
In this paper, an attempt is made to determine the effective thermal conductivity (keff) of particulate filled polymer composites using the finite element method (FEM), a powerful computational technique. A commercially available finite element package, ANSYS, is used for this numerical analysis. Three-dimensional spheres-in-cube lattice array models are constructed to simulate the microstructures of micro-sized particulate filled polymer composites with filler content ranging from 2.35 to 26.8 vol %. Based on the temperature profiles across the composite body, the keff of each composition is estimated theoretically by FEM. Composites with similar filler contents are then fabricated by compression molding, reinforcing micro-sized aluminium oxide (Al2O3) in polypropylene (PP) resin. The thermal conductivities of these composite samples are measured according to ASTM standard E-1530 using the Unitherm™ Model 2022 tester, which operates on the double guarded heat flow principle. The experimentally measured conductivity values are compared with the numerical values and with those obtained from existing empirical models. This comparison reveals that the FEM-simulated values are in reasonably good agreement with the experimental data. Values obtained from the theoretical model proposed by the authors are in even closer agreement with the measured values within the percolation limit. Further, this study shows that the conductivity of PP resin is gradually enhanced with increasing filler percentage, thereby improving its heat conduction capability. With the addition of 26.8 vol % of filler, the keff of the composite increases to around 6.3 times that of neat PP. This study validates the proposed model for the PP-Al2O3 composite system and proves that finite element analysis can be an excellent methodology for such investigations.
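One of the classical empirical models the abstract compares against is Maxwell's model for spherical inclusions. The sketch below is not the authors' proposed model; it shows the Maxwell formula with illustrative handbook conductivities (roughly 0.22 W/m·K for polypropylene and 30 W/m·K for alumina), which are assumptions rather than the paper's measured data.

```python
def maxwell_keff(km, kf, phi):
    """Maxwell model for the effective thermal conductivity of a
    composite of non-interacting spherical fillers (volume fraction
    phi, conductivity kf) in a matrix of conductivity km."""
    num = kf + 2.0 * km + 2.0 * phi * (kf - km)
    den = kf + 2.0 * km - phi * (kf - km)
    return km * num / den

# Evaluate across the filler range studied (2.35 to 26.8 vol %)
for phi in (0.0235, 0.10, 0.268):
    print(f"phi = {phi:.4f}  keff = {maxwell_keff(0.22, 30.0, phi):.3f} W/m.K")
```

Maxwell's model is derived in the dilute limit and is known to underpredict keff at high loadings where particle interactions matter, which is consistent with the abstract's observation that the authors' own model tracks the measurements more closely near the percolation limit.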
With such improved heat conduction ability, these composites can find potential applications in micro-electronics, printed circuit boards, encapsulations, etc.
Keywords: analytical modelling, effective thermal conductivity, finite element method, polymer matrix composite
Procedia PDF Downloads 321
285 Impacts on Marine Ecosystems Using a Multilayer Network Approach
Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade
Abstract:
Bays, estuaries and coastal ecosystems are among the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil, modelling and simulating it through a multilayer network representing a DPSIR structure (Drivers, Pressures, States, Impacts, Responses) and considering the concept of Ecosystem-Based Management to support decision-making under the National/State/Municipal Coastal Management policy. This approach accounts for several kinds of interference and can represent a significant advance in several scientific respects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more "layers", which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers. For example, the dispersion of individuals between two patches affects the network structure of both patches. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may capture multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes among themselves. The edge set includes the familiar intralayer edges and interlayer ones, which connect state nodes between layers.
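The four-part definition above maps directly onto a simple data structure: a state node is a (physical node, layer) pair, and each weighted edge either stays within one layer or crosses two. The sketch below is a minimal illustration with hypothetical entity and layer names, not the paper's actual network.

```python
# Physical nodes and layers, following items (i) and (ii) of the definition
physical_nodes = {"plankton", "fisher", "cannery"}
layers = {"ecological", "social", "economic"}

# State nodes (iii) are (physical_node, layer) pairs; weighted edges (iv)
# connect state nodes, either within a layer or across layers.
edges = {
    (("plankton", "ecological"), ("fisher", "ecological")): 0.8,  # trophic link
    (("fisher", "social"), ("cannery", "social")): 0.5,           # social tie
    (("cannery", "economic"), ("fisher", "economic")): 0.3,       # economic flow
    (("fisher", "social"), ("fisher", "ecological")): 1.0,        # interlayer coupling
}

def is_interlayer(edge):
    """An edge is interlayer when its two state nodes sit in different layers."""
    (_, layer_a), (_, layer_b) = edge
    return layer_a != layer_b

interlayer = [e for e in edges if is_interlayer(e)]
```

The interlayer edges are what couple the ecological, social and economic networks into one model: removing them would leave three independent single-layer networks.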
The methodology, applied to an existing case, uses flow cytometry and models ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology makes it possible to consider the influence of different factors and to assess their contributions to the decision-making process.
Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management
Procedia PDF Downloads 113