Search results for: stochastic gradient boosting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1302

132 Risk Assessment of Flood Defences by Utilising Condition Grade Based Probabilistic Approach

Authors: M. Bahari Mehrabani, Hua-Peng Chen

Abstract:

Management and maintenance of coastal defence structures during the expected life cycle have become a real challenge for decision makers and engineers. Accurate evaluation of the current condition and future performance of flood defence structures is essential for effective practical maintenance strategies on the basis of available field inspection data. Moreover, as coastal defence structures age, it becomes more challenging to implement maintenance and management plans to avoid structural failure. Therefore, condition inspection data are essential for assessing damage and forecasting deterioration of ageing flood defence structures in order to keep the structures in an acceptable condition. The inspection data for flood defence structures are often collected using discrete visual condition rating schemes. In order to evaluate the future condition of the structure, a probabilistic deterioration model needs to be utilised. However, existing deterioration models may not provide a reliable prediction of performance deterioration over a long period due to uncertainties. To tackle this limitation, a time-dependent condition-based model associated with transition probabilities needs to be developed on the basis of the condition grade scheme for flood defences. This paper presents a probabilistic method for predicting future performance deterioration of coastal flood defence structures based on condition grading inspection data and deterioration curves estimated by expert judgement. In condition-based deterioration modelling, the main task is to estimate transition probability matrices. The deterioration process of the structure related to the transition states is modelled as a Markov chain process, and a reliability-based approach is used to estimate the probability of structural failure. Visual inspection data according to the United Kingdom Condition Assessment Manual are used to obtain the initial condition grade curve of the coastal flood defences. 
The initial curves are then modified to develop transition probabilities through non-linear regression-based optimisation algorithms. Monte Carlo simulations are then used to evaluate the future performance of the structure on the basis of the estimated transition probabilities. Finally, a case study is given to demonstrate the applicability of the proposed method under no-maintenance and medium-maintenance scenarios. Results show that the proposed method provides an effective predictive model for various situations in terms of available condition grading data. The proposed model also provides useful information on the time-dependent probability of failure of coastal flood defences.
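As a sketch of the general approach described above, condition-grade transition probabilities can be propagated analytically and cross-checked by Monte Carlo simulation. The five-grade matrix and all rates below are illustrative placeholders, not values derived from the UK Condition Assessment Manual data:

```python
import numpy as np

# Hypothetical annual transition matrix for five condition grades
# (grade 1 = as new, grade 5 = failed). All rates are illustrative,
# not fitted to the paper's inspection data.
P = np.array([
    [0.90, 0.10, 0.00, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00, 0.00],
    [0.00, 0.00, 0.80, 0.20, 0.00],
    [0.00, 0.00, 0.00, 0.75, 0.25],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # grade 5 is absorbing (failure)
])

def grade_distribution(p0, years):
    """Propagate the initial grade probabilities `years` steps ahead."""
    return p0 @ np.linalg.matrix_power(P, years)

p0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # structure starts in grade 1
p50 = grade_distribution(p0, 50)
print("P(failure) after 50 years:", round(float(p50[-1]), 3))

# Monte Carlo check of the same time-dependent probability of failure.
rng = np.random.default_rng(0)
n = 5000
failures = 0
for _ in range(n):
    g = 0
    for _ in range(50):
        g = rng.choice(5, p=P[g])
    failures += int(g == 4)
print("Monte Carlo estimate:", round(failures / n, 3))
```

The analytic propagation and the simulation estimate the same failure probability; in practice the matrix entries would come from the regression-based fit to the inspection curves.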

Keywords: condition grading, flood defence, performance assessment, stochastic deterioration modelling

Procedia PDF Downloads 211
131 Bioefficiency of Cinnamomum verum Loaded Niosomes and Its Microbicidal and Mosquito Larvicidal Activity against Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus

Authors: Aasaithambi Kalaiselvi, Michael Gabriel Paulraj, Ekambaram Nakkeeran

Abstract:

The emergence of mosquito vector-borne diseases is a perennial problem in tropical countries. The outbreak of several diseases such as chikungunya, zika virus infection and dengue fever has created a massive threat to the living population. Frequent usage of synthetic insecticides like dichlorodiphenyltrichloroethane (DDT) has had adverse effects on humans as well as the environment. Since no durable vaccines, preventives, treatments or drugs are available against these pathogenic vectors, the WHO is concerned with eradicating their breeding sites effectively, without side effects on humans and the environment, through plant-derived, eco-friendly bio-insecticides. The aim of this study is to investigate the larvicidal potency of Cinnamomum verum essential oil (CEO) loaded niosomes. Cholesterol and the surfactant variants Span 20, 60 and 80 were used in synthesizing CEO loaded niosomes by the transmembrane pH gradient method. The synthesized CEO loaded niosomes were characterized by zeta potential, particle size, Fourier Transform Infrared Spectroscopy (FT-IR), GC-MS and SEM analysis to evaluate charge, size, functional properties, the composition of secondary metabolites and morphology. The Z-average size of the formed niosomes was 1870.84 nm, and they had good stability with a zeta potential of -85.3 mV. The entrapment efficiency of the CEO loaded niosomes was determined by UV-Visible spectrophotometry. The bio-potency of CEO loaded niosomes was assessed against gram-positive (Bacillus subtilis) and gram-negative (Escherichia coli) bacteria and fungi (Aspergillus fumigatus and Candida albicans) at various concentrations. The larvicidal activity was evaluated against II to IV instar larvae of Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus at various concentrations for 24 h. LC₅₀ and LC₉₀ values were calculated from the mortality rates. 
The results showed that CEO loaded niosomes have high larvicidal efficiency. They suggest that niosomes could be used, with greater stability, in various biotechnology and drug delivery applications by altering the drug of interest.
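For reference, entrapment efficiency from a UV-Vis measurement is conventionally computed as the entrapped fraction of the total drug. A minimal sketch with hypothetical amounts (the function name and values are illustrative, not from the study):

```python
def entrapment_efficiency(total_drug, free_drug):
    """Percentage of drug entrapped in the niosomes.
    `free_drug` is the unentrapped amount measured in the supernatant,
    e.g. via a UV-Vis calibration curve after centrifugation."""
    return (total_drug - free_drug) / total_drug * 100.0

# Illustrative amounts (mg); not the paper's measured values.
print(entrapment_efficiency(10.0, 2.5))  # → 75.0
```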

Keywords: Cinnamomum verum, niosomes, entrapment efficiency, bactericidal and fungicidal, mosquito larvicidal activity

Procedia PDF Downloads 134
130 Effects of Temperature and the Use of Bacteriocins on Cross-Contamination from Animal Source Food Processing: A Mathematical Model

Authors: Benjamin Castillo, Luis Pastenes, Fernando Cerdova

Abstract:

The contamination of food by microbial agents is a common problem in the industry, especially in the production of animal source products. Incorrect handling of the machinery or of the raw materials can cause a decrease in production or an epidemiological outbreak due to intoxication. In order to improve food product quality, different methods have been used to reduce or, at least, slow down the growth of pathogens, especially spoilage, infectious or toxigenic bacteria. These methods usually combine low temperatures and short processing times (abiotic agents) with the application of antibacterial substances such as bacteriocins (biotic agents), in a controlled and efficient way that achieves bacterial control without damaging the final product. The objective of the present study is therefore to design a secondary mathematical model that allows prediction of the impact of both the biotic and abiotic factors associated with animal source food processing. To accomplish this objective, the authors propose a three-dimensional differential equation model whose components are: bacterial growth; release, production and artificial incorporation of bacteriocins; and changes in the pH level of the medium. All three dimensions are continuously influenced by the temperature of the medium. Secondly, this model is adapted to an idealized situation of cross-contamination during animal source food processing, the agents under study being both the animal product and the contact surface. Thirdly, stochastic simulations and a parametric sensitivity analysis are compared with reference data. The main result obtained from the analysis and simulations of the mathematical model is that, although bacterial growth can be stopped at low temperatures, even lower ones are needed to eradicate it. 
However, this can be not only expensive but also counterproductive in terms of the quality of the raw materials; on the other hand, higher temperatures accelerate bacterial growth. The use of bacteriocins is an effective alternative in the short and medium terms. Moreover, a low pH level is an indicator of bacterial growth, since many spoilage bacteria are lactic acid bacteria. Lastly, processing times are a secondary agent of concern once the aforementioned agents are under control. Our main conclusion is that adapting a mathematical model to the context of the industrial process can generate new tools that predict bacterial contamination, the impact of bacterial inhibition, and processing times. In addition, the proposed model provides a logistic framework of broad application, which can be replicated for non-meat food products, other pathogens, or even contamination by cross-contact of allergenic foods.
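A three-state model of the kind described above (bacterial density, bacteriocin level, and pH, all driven by temperature) can be sketched as a toy forward-Euler integration. Every parameter below is an illustrative assumption, not a value fitted by the study:

```python
import numpy as np

def simulate(T, hours=24.0, dt=0.01):
    """Forward-Euler integration of a toy 3-state model:
    N = bacterial density, B = bacteriocin concentration, pH of the medium.
    Growth rate peaks near an optimal temperature; bacteriocin inhibits
    growth; acid production by the bacteria lowers the pH.
    All constants are illustrative placeholders."""
    mu_max, T_opt = 0.6, 37.0      # max growth rate (1/h), optimal temperature (°C)
    K = 1e9                        # carrying capacity (CFU/ml)
    k_B, k_inh = 0.02, 0.5         # bacteriocin production / inhibition constants
    k_acid = 2e-10                 # pH drop per unit of bacterial growth
    N, B, pH = 1e3, 0.0, 6.8       # initial inoculum, no bacteriocin, neutral-ish pH
    for _ in range(int(hours / dt)):
        mu = mu_max * np.exp(-((T - T_opt) / 12.0) ** 2) / (1.0 + k_inh * B)
        growth = mu * N * (1.0 - N / K)
        N += dt * growth
        B += dt * k_B * N / K      # bacteriocin released as the population grows
        pH -= dt * k_acid * growth # acidification tracks growth
    return N, B, pH

for T in (4.0, 20.0, 37.0):
    N, B, pH = simulate(T)
    print(f"T={T:4.1f} °C  N={N:.2e} CFU/ml  pH={pH:.2f}")
```

The sketch reproduces the qualitative findings: growth is nearly halted at refrigeration temperatures but not eradicated, warm temperatures accelerate growth, and falling pH tracks bacterial proliferation.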

Keywords: bacteriocins, cross-contamination, mathematical model, temperature

Procedia PDF Downloads 119
129 Multi-Agent Searching Adaptation Using Levy Flight and Inferential Reasoning

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how to achieve knowledge understanding and prediction (Situation Awareness, SA) for multiple agents conducting a searching activity using Bayesian inferential reasoning and learning. A Bayesian Belief Network was used to monitor the agents' knowledge about their environment, and cases were recorded for network training using the expectation-maximisation or gradient descent algorithm. The trained network is then used for decision making and environmental situation prediction. Forest fire searching by multiple UAVs was the use case: UAVs are tasked to explore a forest and find a fire for urgent action by the fire wardens. The paper focuses on two problems: (i) an effective path planning strategy for the agents and (ii) knowledge understanding and prediction (SA). The path planning strategy, inspired by animal foraging and based on the Lévy distribution augmented with Bayesian reasoning, is fully described in this paper. Results show that the Lévy flight strategy performs better than previous fixed-pattern approaches (e.g., parallel sweeps) in terms of energy and time utilisation. We also introduce a waypoint assessment strategy called k-previous waypoints assessment. It improves the performance of the ordinary Lévy flight by saving agents' resources and mission time through redundant search avoidance. The agents (UAVs) report their mission knowledge to a central server for interpretation and prediction purposes. Bayesian reasoning and learning were used for the SA, and the results demonstrate effectiveness in different environment scenarios in terms of prediction and effective knowledge representation. The prediction accuracy was measured using learning error rate, logarithmic loss, and Brier score, and the results show that knowledge from a few agent missions can be used for prediction within the same or a different environment. Finally, we describe a situation-based knowledge visualization and prediction technique for heterogeneous multi-UAV missions. 
While this paper demonstrates the linkage of Bayesian reasoning and learning with SA and an effective searching strategy, future work will focus on simplifying the architecture.
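The two searching ideas above, heavy-tailed Lévy steps and the k-previous waypoints assessment, can be sketched as follows. The power-law sampling form, parameter names, and thresholds are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def levy_steps(n, alpha=1.5, l_min=1.0):
    """Sample step lengths from a power-law tail p(l) ~ l^(-1-alpha),
    the heavy-tailed distribution underlying Levy-flight foraging
    (inverse-CDF sampling of a Pareto distribution)."""
    u = rng.random(n)
    return l_min * u ** (-1.0 / alpha)

def levy_walk(n_steps):
    """2-D Levy flight: heavy-tailed step lengths, uniform headings."""
    lengths = levy_steps(n_steps)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    return np.cumsum(lengths * np.cos(angles)), np.cumsum(lengths * np.sin(angles))

def novel_waypoint(candidate, visited, k=5, radius=2.0):
    """k-previous waypoints assessment, as described in the abstract:
    reject a candidate waypoint that falls within `radius` of any of the
    last k visited waypoints, avoiding redundant re-search.
    The parameter values are hypothetical."""
    return all(np.hypot(candidate[0] - wx, candidate[1] - wy) > radius
               for wx, wy in visited[-k:])

x, y = levy_walk(10000)
print("net displacement after 10000 steps:", float(np.hypot(x[-1], y[-1])))
```

The occasional very long steps are what let a Lévy searcher cover a sparse target field more efficiently than a fixed-pattern sweep, while the waypoint filter discards moves that would re-search recently covered ground.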

Keywords: Levy flight, distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence

Procedia PDF Downloads 117
128 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River

Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko

Abstract:

Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River in New Jersey. The study, performed over a two-year period, included an in-depth field evaluation of both the groundwater and surface water systems, supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally-derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model, based upon a bathymetric and flow study of the river, was used to simulate the contaminant concentrations over space within the river. 
The modeling results helped demonstrate that because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
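As a back-of-envelope illustration of how an average daily groundwater discharge translates into a daily mass loading (the two quantities delineated in the study), a sketch with invented numbers; none of the values below are the study's:

```python
def darcy_discharge(K, i, A):
    """Darcy's law, Q = K * i * A:
    K = hydraulic conductivity (m/day), i = hydraulic gradient (dimensionless),
    A = cross-sectional discharge area (m^2); returns m^3/day."""
    return K * i * A

def daily_mass_loading(Q, c_mg_per_L):
    """Mass flux to the river (kg/day) from discharge Q (m^3/day) and a
    contaminant concentration (mg/L): 1 m^3 = 1000 L, 1e6 mg = 1 kg."""
    return Q * 1000.0 * c_mg_per_L / 1e6

# Illustrative shoreline-face numbers, not the Raritan River study's values.
Q = darcy_discharge(K=5.0, i=0.002, A=3000.0)
print("Q =", Q, "m^3/day; load =", daily_mass_loading(Q, 2.5), "kg/day")
```

In the study itself, K came from the tidally-corrected pumping test and the loading fed the MODFLOW/MT3DMS and river dispersion models.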

Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling

Procedia PDF Downloads 240
127 MB-SLAM: A SLAM Framework for Construction Monitoring

Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han

Abstract:

Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To use SLAM effectively for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM and BIM can provide essential insights for construction managers to identify construction deficiencies in real time and ultimately reduce rework. Registering SLAM to BIM in real time can also boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with the BIM in real time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real time. The framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera's image and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are then improved by a real-time gradient descent-based iteration method. Two case studies were presented to validate MB-SLAM. The validation process demonstrated promising results: it accurately registered SLAM to BIM and significantly improved the SLAM localization accuracy. 
Moreover, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate the workflows of past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework for both research and commercial use that monitors construction progress and performance in a unified framework. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment, further advancing SLAM toward practical use.
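The gradient descent-based pose refinement step can be illustrated with a toy 2-D analogue: synthetic model points stand in for BIM features, and numeric gradient descent refines a rough initial pose against their observed locations. This is a sketch of the idea, not the MB-SLAM implementation:

```python
import numpy as np

# Synthetic "BIM" corner points and a hidden ground-truth transform.
model = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0]])
true_pose = np.array([1.0, -0.5, 0.3])            # (tx, ty, theta), ground truth

def transform(points, pose):
    """Apply a 2-D rigid transform (rotation theta, then translation)."""
    tx, ty, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return points @ R.T + np.array([tx, ty])

observed = transform(model, true_pose)            # "keyframe" observations

def error(pose):
    """Sum of squared alignment errors between transformed model and observations."""
    return float(np.sum((transform(model, pose) - observed) ** 2))

pose = np.zeros(3)                                # rough initial pose from SLAM
lr, eps = 0.01, 1e-6
for _ in range(2000):                             # numeric gradient descent
    grad = np.array([(error(pose + eps * e) - error(pose - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
    pose -= lr * grad
print("refined pose:", np.round(pose, 3))         # approaches [1.0, -0.5, 0.3]
```

The real system minimizes a perspective-alignment error (vanishing points and lines) over a 6-DoF camera pose, but the iterative structure is the same.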

Keywords: perspective alignment, progress monitoring, SLAM, stereo matching

Procedia PDF Downloads 188
126 Effects of Different Load on Physiological, Hematological, Biochemical, Cytokines Indices of Zanskar Ponies at High Altitude

Authors: Prince Vivek, Vijay Kumar Bharti, Deepak Kumar, Rohit Kumar, Kapil Nehra, Dhananjay Singh, Om Prakash Chaurasia, Bhuvnesh Kumar

Abstract:

People native to high altitudes still rely heavily on animal transport for logistic support in the eastern and northern Himalayan regions. The prevalent mountainous and rugged terrain is not suitable for motorized vehicles in logistic transport, so people require high-endurance pack animals for load carrying and riding. To the best of our knowledge, no studies have been undertaken to evaluate the effect of loads on the physiology of ponies in high-altitude regions. We therefore evaluated variation in the physiological, hematological, biochemical, and cytokine indices of Zanskar ponies during load carrying at high altitude. A total of twelve (12) Zanskar pony mares, aged 4-6 years, were selected for this study. Feed was offered at 2% of body weight, with water ad libitum. Ponies were divided into three groups: group A (without load), group B (60 kg), and group C (80 kg backpack load). The track was narrow and slippery with gravel, uneven with a rocky surface, and had a steep gradient of 4 km uphill at altitudes of 3291 to 3500 m. Heart rate, pulse rate, and respiration rate were significantly higher in the 80 kg group than in the other two groups. Among the hematology parameters, hemoglobin significantly increased in the 80 kg group on the 1st day after load carrying, followed by the control and 60 kg groups, whereas PCV, lymphocyte and monocyte percentages significantly increased, and ESR and eosinophil percentage significantly decreased, in the 80 kg group on the 7th day after load carrying, again followed by the control and 60 kg groups. Among the biochemical parameters, LA, LDH, TP, hexokinase (HK), cortisol (CORT), T3, GPx, FRAP and IL-6 significantly increased in the 80 kg group on the 7th day after load carrying, followed by the control and 60 kg groups. 
ALT, ALB, GLB, UR, and UA significantly increased in the 80 kg group on the 7th day before and after load carrying, and CRT, AST, and CK-MB significantly increased in the 80 kg group on the 1st and 7th days after load carrying, followed by the control and 60 kg groups. It is concluded that heart rate, respiration rate, hematological indices such as PCV, lymphocytes, monocytes, Hb and ESR, and biochemical indices such as lactic acid, LDH, TP, HK, CORT, T3, ALT, AST, CRT, ALB, GLB, UR, UA, GPx, FRAP and IL-6 are important biomarkers for assessing the effect of load on animal physiology and endurance. Further, the results revealed a strong correlation between changes in biomarker levels and performance in ponies during load carrying. Hence, these parameters might be used to assess the endurance of Zanskar ponies in high mountain regions.

Keywords: biochemical, endurance, high altitude, load, ponies

Procedia PDF Downloads 254
125 Isotope Effects on Inhibitors Binding to HIV Reverse Transcriptase

Authors: Agnieszka Krzemińska, Katarzyna Świderek, Vicente Molinier, Piotr Paneth

Abstract:

In order to understand in detail the interactions between ligands and the enzyme, isotope effects were studied for clinically used drugs that bind in the active site of Human Immunodeficiency Virus Reverse Transcriptase (HIV-1 RT), as well as for a triazole-based inhibitor that binds in the allosteric pocket of this enzyme. The magnitudes and origins of the resulting binding isotope effects were analyzed. Subsequently, the binding isotope effects of the same triazole-based inhibitor bound in the active site were analyzed and compared. Together, these results show differences in binding origins between the two sites of the enzyme and allow analysis of the binding mode and binding site of newly synthesized inhibitors. A typical protocol is described below for the triazole ligand in the allosteric pocket. The triazole was docked into the allosteric cavity of HIV-1 RT with Glide using the extra-precision mode, as implemented in the Schroedinger software. The structure of HIV-1 RT was obtained from the Protein Data Bank (PDB ID 2RKI). The pKa of titratable amino acids was calculated using the PROPKA software, and in order to neutralize the system, 15 Cl⁻ ions were added using the tLEaP package implemented in AMBERTools ver. 1.5; the N- and C-termini were also built using tLEaP. The system was placed in a 144×160×144 Å³ orthorhombic box of water molecules using the NAMD program. Missing parameters for the triazole were obtained at the AM1 level using the Antechamber software implemented in AMBERTools. Energy minimizations were carried out by means of a conjugate gradient algorithm in NAMD. The system was then heated from 0 to 300 K with a temperature increment of 0.001 K. Subsequently, a 2 ns Langevin-Verlet (NVT) MM MD simulation with the AMBER force field implemented in NAMD was carried out. Periodic boundary conditions and cut-offs for the nonbonding interactions, with a switching range from 14.5 to 16 Å, were used. After 2 ns of relaxation, 200 ps of QM/MM MD at 300 K were simulated. 
The triazole was treated quantum mechanically at the AM1 level, the protein was described using AMBER, and water molecules were described using TIP3P, as implemented in the fDynamo library. Molecules more than 20 Å from the triazole were kept frozen, with cut-offs established over a range from 14.5 to 16 Å. To describe the interactions between the triazole and RT, the free energy of binding was computed using the Free Energy Perturbation method. The change in frequencies from the ligand in solution to the ligand bound in the enzyme was used to calculate the binding isotope effects.
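As an illustration of the final step, here is a zero-point-energy-only estimate of a binding isotope effect from harmonic frequency changes between the solution and bound states. The frequencies below are hypothetical, and the full Bigeleisen treatment includes excitation and mass-moment terms omitted in this sketch:

```python
import math

H = 6.62607015e-34    # Planck constant, J s
KB = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e10     # speed of light, cm/s

def zpe(freqs_cm1):
    """Harmonic zero-point energy (J) from frequencies given in cm^-1."""
    return sum(0.5 * H * C * nu for nu in freqs_cm1)

def binding_isotope_effect(light_free, light_bound, heavy_free, heavy_bound, T=300.0):
    """ZPE-only estimate of the binding isotope effect K_light/K_heavy from
    harmonic frequencies of the ligand in solution ('free') and in the
    enzyme ('bound') for the two isotopologues."""
    d_light = zpe(light_bound) - zpe(light_free)   # ZPE change on binding, light
    d_heavy = zpe(heavy_bound) - zpe(heavy_free)   # same, heavy isotopologue
    return math.exp(-(d_light - d_heavy) / (KB * T))

# Hypothetical C-H / C-D stretch that stiffens slightly on binding (cm^-1).
bie = binding_isotope_effect([2900.0], [2920.0], [2100.0], [2114.0], T=300.0)
print(f"BIE (K_H/K_D) = {bie:.4f}")
```

A mode that stiffens on binding gives an inverse effect (BIE below unity), which is the kind of site-dependent signature the analysis above exploits.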

Keywords: binding isotope effects, molecular dynamics, HIV, reverse transcriptase

Procedia PDF Downloads 409
124 Revealing Thermal Degradation Characteristics of Distinctive Oligo- and Polysaccharides of Prebiotic Relevance

Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár

Abstract:

As natural prebiotic (non-digestible) carbohydrates stimulate the growth of the colon microflora and contribute to maintaining the health of the host, analytical studies aimed at revealing the chemical behavior of these beneficial food components have come to the forefront of interest. Food processing (especially baking) may lead to a significant conversion of the parent compounds; hence it is of utmost importance to characterize the transformation patterns and the plausible decomposition products formed by thermal degradation. The relevance of this work is confirmed by the widespread use of these carbohydrates (fructo-oligosaccharides, cyclodextrins, raffinose and resistant starch) in the food industry, where more and more functional foodstuffs are being developed based on prebiotics as bioactive components. Twelve different types of oligosaccharides were investigated in order to reveal their thermal degradation characteristics. Different carbohydrate derivatives (D-fructose and D-glucose oligomers and polymers) were exposed to elevated temperatures (150 °C, 170 °C, 190 °C, 210 °C, and 220 °C) for 10 min. An advanced HPLC method was developed and used to identify the decomposition products of the carbohydrates formed as a consequence of the thermal treatment. Gradient elution was applied with a binary solvent system (acetonitrile, water) through an amine-based carbohydrate column. Evaporative light scattering (ELS) detection proved suitable for the reliable detection of the UV/VIS-inactive carbohydrate degradation products. These experimental conditions and advanced techniques made it possible to survey all the intermediates formed. A change in oligomer distribution was established for all studied prebiotics throughout the thermal treatments. The obtained results indicate an increased extent of chain degradation of the carbohydrate moiety at elevated temperatures. 
The prevalence of oligomers with shorter chain lengths, and even the formation of monomeric sugars (D-glucose and D-fructose), was observed at higher temperatures. Unique oligomer distributions, not described previously, were revealed for each studied carbohydrate, which might result in various prebiotic activities. Resistant starches exhibited high stability under thermal treatment. The degradation process was modeled by a plausible reaction mechanism in which proton-catalyzed degradation and chain cleavage take place.

Keywords: prebiotics, thermal degradation, fructo-oligosaccharide, HPLC, ELS detection

Procedia PDF Downloads 278
123 Tumor Cell Detection, Isolation and Monitoring Using Bi-Layer Magnetic Microfluidic Chip

Authors: Amir Seyfoori, Ehsan Samiei, Mohsen Akbari

Abstract:

The use of microtechnology for the detection and high-yield isolation of circulating tumor cells (CTCs) has shown enormous promise for clinical metastasis prognosis and cancer treatment monitoring. Immunomagnetic assays have also been coupled to microtechnology to improve the selectivity and efficiency of current methods of cancer biomarker isolation; the generation and configuration of the local high-gradient magnetic field play essential roles in such assays. Additionally, considering the intrinsic heterogeneity of cancer cells, real-time analysis of isolated cells is necessary to characterize their responses to therapy. Overall, on-chip isolation and monitoring of specific tumor cells is a pressing need on the way to tailored cancer therapy. To address these challenges, we have developed a bi-layer magnetic-based microfluidic chip for enhanced CTC detection and capture. Micromagnet arrays on the bottom layer of the chip were fabricated using a new method of magnetic nanoparticle paste deposition, so that they were arranged at the center of the chain microchannel in the zone of lowest fluid velocity. Breast cancer cells labelled with EpCAM-conjugated smart microgels were immobilized on the tips of the micromagnets, where the localized magnetic field is greatest and the cell-micromagnet interaction strongest. By varying the magnetic nano-powder used (MnFe₂O₄ and γ-Fe₂O₃) and the micromagnet shape (ellipsoidal and arrow), the capture efficiency of the system was tuned; the highest CTC capture efficiency, around 95.5%, was acquired for the MnFe₂O₄ arrow micromagnets. As a proof of concept of on-chip tumor cell monitoring, magnetic smart microgels made of a thermo-responsive poly(N-isopropylacrylamide-co-acrylic acid) (PNIPAM-AA) composition were used both for targeted cell capture and for cell monitoring, through simultaneous antibody conjugation and fluorescent dye loading. 
In this regard, the magnetic microgels were successfully used as cell trackers after the isolation process: by raising the temperature to 37 °C, they released the contained dye and stained the targeted cells just after capture. This microfluidic device thus provides a platform for the detection, isolation and efficient real-time analysis of specific CTCs in the liquid biopsy of breast cancer patients.

Keywords: circulating tumor cells, microfluidic, immunomagnetic, cell isolation

Procedia PDF Downloads 119
122 Investigation of Mechanical and Tribological Properties of Graphene Reinforced SS-316L Matrix Composite Prepared by Selective Laser Melting

Authors: Ajay Mandal, Jitendar Kumar Tiwari, N. Sathish, A. K. Srivastava

Abstract:

A fundamental investigation was performed on the development of a graphene (Gr) reinforced stainless steel 316L (SS 316L) metal matrix composite via selective laser melting (SLM), in order to improve the specific strength and wear resistance of SS 316L. First, SS 316L powder and graphene were mixed in a fixed ratio using low-energy planetary ball milling. The milled powder was then subjected to the SLM process to fabricate composite samples at a laser power of 320 W and an exposure time of 100 µs. The prepared composite was mechanically tested (hardness and tensile tests) at ambient temperature, and the results indicate that the properties of the composite increased significantly with the addition of 0.2 wt.% Gr. Increments of about 25% (from 194 to 242 HV) in hardness and 70% (from 502 to 850 MPa) in yield strength were obtained. Raman mapping and XRD were performed to examine the distribution of Gr in the matrix and its effect on carbide formation, respectively. The Raman mapping shows a uniform distribution of graphene inside the matrix. An electron backscatter diffraction (EBSD) map of the prepared composite was analyzed under FESEM in order to understand the microstructure and grain orientation. Due to the thermal gradient, elongated grains were observed along the build direction, and the grains became finer with the addition of Gr. Since most mechanical components are subjected to several types of wear, it is necessary to improve the wear properties of the component; hence, apart from strength and hardness, the tribological properties of the composite were also measured under dry sliding conditions. The solid lubrication provided by Gr plays an important role during sliding, reducing the wear rate of the composite by up to 58%. The surface roughness of the worn surface was also reduced by up to 70%, as measured by 3D surface profilometry. 
Finally, it can be concluded that SLM is an efficient method for fabricating cutting-edge metal matrix nanocomposites with reinforcements such as Gr, which are very difficult to fabricate through conventional manufacturing techniques. The prepared composite has superior mechanical and tribological properties and can be used for a wide variety of engineering applications. However, given the limited literature in this domain, more experimental work, such as thermal property analysis, needs to be performed and is part of an ongoing study.

Keywords: selective laser melting, graphene, composite, mechanical property, tribological property

Procedia PDF Downloads 110
121 Modeling and Analysis of Occupant Behavior on Heating and Air Conditioning Systems in a Higher Education and Vocational Training Building in a Mediterranean Climate

Authors: Abderrahmane Soufi

Abstract:

The building sector is the largest consumer of energy in France, accounting for 44% of French consumption. To reduce energy consumption and improve energy efficiency, France implemented an energy transition law targeting 40% energy savings by 2030 in the tertiary building sector. Building simulation tools are used to predict the energy performance of buildings but the reliability of these tools is hampered by discrepancies between the real and simulated energy performance of a building. This performance gap lies in the simplified assumptions of certain factors, such as the behavior of occupants on air conditioning and heating, which is considered deterministic when setting a fixed operating schedule and a fixed interior comfort temperature. However, the behavior of occupants on air conditioning and heating is stochastic, diverse, and complex because it can be affected by many factors. Probabilistic models are an alternative to deterministic models. These models are usually derived from statistical data and express occupant behavior by assuming a probabilistic relationship to one or more variables. In the literature, logistic regression has been used to model the behavior of occupants with regard to heating and air conditioning systems by considering univariate logistic models in residential buildings; however, few studies have developed multivariate models for higher education and vocational training buildings in a Mediterranean climate. Therefore, in this study, occupant behavior on heating and air conditioning systems was modeled using logistic regression. Occupant behavior related to the turn-on heating and air conditioning systems was studied through experimental measurements collected over a period of one year (June 2023–June 2024) in three classrooms occupied by several groups of students in engineering schools and professional training. Instrumentation was provided to collect indoor temperature and indoor relative humidity in 10-min intervals. 
Furthermore, the state of the heating/air conditioning system (off or on) and the set point were determined. The outdoor air temperature, relative humidity, and wind speed were collected as weather data. The number of occupants, age, and sex were also considered. Logistic regression was used to model an occupant turning on the heating and air conditioning systems. The results yielded a proposed model that can be used in building simulation tools to predict the energy performance of teaching buildings. Based on the first months (summer and early autumn) of the investigations, the results illustrate that occupant behavior with the air conditioning systems is affected by the indoor relative humidity and temperature in June, July, and August and by the indoor relative humidity, temperature, and number of occupants in September and October. Occupant behavior was analyzed monthly, and univariate and multivariate models were developed.
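The turn-on model described above can be sketched as a multivariate logistic regression; the coefficient values and predictor set below are purely illustrative, not the fitted values from the study:

```python
import math

def turn_on_probability(coeffs, intercept, predictors):
    """Multivariate logistic model: P(on) = 1 / (1 + exp(-(b0 + sum(bi * xi)))).
    Coefficients here are hypothetical placeholders, not the study's estimates."""
    z = intercept + sum(b * x for b, x in zip(coeffs, predictors))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical summer model: predictors are indoor temperature (degC)
# and indoor relative humidity (%).
p_cool = turn_on_probability([0.8, 0.05], -24.0, [28.0, 60.0])  # warm, humid room
p_mild = turn_on_probability([0.8, 0.05], -24.0, [24.0, 50.0])  # milder conditions
```

With these assumed coefficients, warmer and more humid indoor conditions yield a higher probability of the air conditioning being turned on, mirroring the monthly dependencies reported in the abstract.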

Keywords: occupant behavior, logistic regression, behavior model, Mediterranean climate, air conditioning, heating

Procedia PDF Downloads 39
120 Managing Shallow Gas for Offshore Platforms via Fit-For-Purpose Solutions: Case Study for Offshore Malaysia

Authors: Noorizal Huang, Christian Girsang, Mohamad Razi Mansoor

Abstract:

Shallow gas seepage was first spotted at a central processing platform offshore Malaysia in 2010, referred to as Platform T in this paper. Frequent monitoring of the gas seepage was performed through remotely operated vehicle (ROV) baseline surveys, and a comprehensive geophysical survey was conducted to understand the characteristics of the gas seepage and to ensure that the integrity of the foundation at Platform T was not compromised. At the time, the origin of the gas was unknown. A soil investigation campaign was performed in 2016 to study the origin of the gas seepage. Two boreholes were drilled: a composite borehole to 150 m below the seabed for the purpose of soil sampling and in-situ testing, and a pilot hole to 155 m below the seabed, which was later converted to a fit-for-purpose relief well serving as an alternate migration path for the gas. During the soil investigation campaign, dissipation tests were performed at several layers which were potentially the source or migration path for the gas. Five (5) soil samples were set aside for headspace testing to identify the gas type, which subsequently can be used to identify the origin of the gas. Dissipation tests performed at four depth intervals indicate excess pore water pressure of less than 20% of the effective vertical stress, which appeared set to continue decreasing had the tests not been stopped. It was concluded that a low to negligible amount of excess pore pressure exists in the clayey silt layers. Results from the headspace tests show the presence of methane corresponding to the clayey silt layers as reported in the boring logs. The gas most likely comes from biogenic sources, feeding on organic matter in situ over a large depth range. It is unlikely that there are large pockets of gas in the soil due to its homogeneous clayey nature and the lack of excess pore pressure in other permeable clayey silt layers encountered. 
Instead, it is more likely that when pore water at a certain depth encounters a more permeable path, such as a borehole, it rises up through this path due to the temperature gradient in the soil. As the water rises, the pressure decreases, which could cause gases dissolved in the water to come out of solution and form bubbles. As a result, the gas will have no impact on the integrity of the foundation at Platform T. The fit-for-purpose relief well design, as well as headspace testing, can be used to address the shallow gas issue at Platform T in a cost-effective and efficient manner.

Keywords: dissipation test, headspace test, excess pore pressure, relief well, shallow gas

Procedia PDF Downloads 249
119 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods

Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak

Abstract:

Replanting in oil palm cultivation is vital to enable the introduction of planting materials and provides an opportunity to improve the road, drainage, terrace design, and planting density. Oil palm replanting is necessary approximately every 25 years. The adoption of the digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least 6 months prior to the actual replanting process. Therefore, this study will help to plan and design the replanting blueprint with high-precision translation on the ground. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize the potential yield. A blueprint designed before replanting enhances management’s ability to optimize the planting program, address manpower issues, and even increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainages, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential to precision agriculture and ensures an accurate translation of the design to the ground by implementing high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role. LiDAR data was employed to derive a Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation purposes. Throughout the design process, Geographical Information System (GIS) tools were extensively utilized. 
To assess the design’s reliability on the ground compared with the current conventional method, high-precision GPS instruments like the EOS Arrow Gold and HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A Nearest Distance Analysis was generated to compare the design with actual planting on the ground. The analysis could not be applied to the roads due to discrepancies between the actual roads and the blueprint design. In contrast, the terraces closely adhered to the GPS markings, with minimal variance: the maximum deviation was less than 0.5 meters from the actual terraces constructed. Considering the required slope for terrace planting, which must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed at a 12-degree slope, while over 50% of the terracing was constructed at slopes exceeding the minimum. Utilizing blueprint replanting offers promising strategies for optimizing land utilization in agriculture. This approach harnesses technology and meticulous planning to yield advantages, including increased efficiency, enhanced sustainability, and cost reduction. From this study, practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices for upcoming blueprint replanting projects; this strategic progression aims to guarantee the precision of both the blueprint design stage and its subsequent implementation in the field. Looking ahead, automating digital blueprints will be necessary to reduce time, workforce, and costs in commercial production.
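The Nearest Distance Analysis above can be sketched as a brute-force point-matching routine; the coordinates and the 0.5 m tolerance check are illustrative, standing in for the study's GIS workflow:

```python
import math

def nearest_distances(design_points, actual_points):
    """For each as-built point, distance (m) to the nearest designed point --
    a minimal stand-in for a GIS Nearest Distance Analysis."""
    out = []
    for ax, ay in actual_points:
        out.append(min(math.hypot(ax - dx, ay - dy) for dx, dy in design_points))
    return out

# Hypothetical designed planting points and GPS-surveyed as-built positions (m).
design = [(0.0, 0.0), (9.0, 0.0), (18.0, 0.0)]
actual = [(0.3, 0.1), (9.4, -0.2), (17.8, 0.3)]
deviations = nearest_distances(design, actual)
within_tolerance = all(d < 0.5 for d in deviations)  # abstract reports < 0.5 m for terraces
```

In a production GIS, the same comparison would run over thousands of points with a spatial index rather than this O(n²) loop, but the deviation statistic is the same.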

Keywords: replanting, geospatial, precision agriculture, blueprint

Procedia PDF Downloads 53
118 Groundwater Numerical Modeling, an Application of Remote Sensing, and GIS Techniques in South Darb El Arbaieen, Western Desert, Egypt

Authors: Abdallah M. Fayed

Abstract:

The study area is located in south Darb El Arbaieen, western desert of Egypt. It occupies the area between latitudes 22° 00′ and 22° 30′ North and longitudes 29° 30′ and 30° 00′ East, from the southern border of Egypt to the area north of Bir Kuraiym and from the area east of East Owienat to the area west of the Tushka district, covering an area of about 2750 km². Its notable features include the southern part of the Darb El Arbaieen road, G. Baraqat El Scab El Qarra, Bir Dibis, Bir El Shab, and Bir Kuraiym. Interpretation of soil stratification shows layers that are related to the Quaternary and Upper-Lower Cretaceous eras. It is dissected by a series of NE-SW striking faults. The regional groundwater flow direction is SW-NE, with a hydraulic gradient of 1 m per 2 km. A mathematical modeling program has been applied to evaluate the groundwater potential of the main aquifer (the Nubian Sandstone) in the study area, and remote sensing is considered a powerful, accurate, and time-saving technique in this respect. These techniques are widely used for illustrating and analyzing different phenomena such as new development in the desert (land reclamation), residential development (new communities), urbanization, etc. A major issue concerns water development; one objective of this work is to determine the new development areas in the western desert of Egypt during the period from 2003 to 2015 using remote sensing techniques. The impacts of the present and future development have been evaluated by using the two-dimensional numerical groundwater flow simulation package Visual MODFLOW 4.2. The package was used to construct and calibrate a numerical model that can be used to simulate the response of the aquifer in the study area under different management alternatives, in the form of changes in piezometric levels and salinity. The total simulation period is 100 years. After steady-state calibration, two different scenarios are simulated for groundwater development. 
Twenty-one production wells were installed in the study area and used in the model, with total discharges for the two scenarios of 105,000 m³/d and 210,000 m³/d. The drawdown was 11.8 m and 23.7 m for the two scenarios at the end of the 100 years. Contour maps for water heads and drawdown and hydrographs for piezometric head are presented. The drawdown was less than half of the saturated thickness (the safe-yield case).
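The safe-yield criterion quoted above (drawdown below half the saturated thickness) reduces to a one-line check; the saturated-thickness value here is an assumption for illustration, as the abstract does not state it:

```python
def is_safe_yield(drawdown_m, saturated_thickness_m):
    """Safe-yield criterion from the abstract: drawdown must stay
    below half of the aquifer's saturated thickness."""
    return drawdown_m < 0.5 * saturated_thickness_m

# Scenario drawdowns reported after 100 years of simulation;
# the saturated thickness of 200 m is hypothetical.
saturated_thickness = 200.0  # m (assumed)
scenario_1_ok = is_safe_yield(11.8, saturated_thickness)
scenario_2_ok = is_safe_yield(23.7, saturated_thickness)
```

Under this assumed thickness, both pumping scenarios satisfy the criterion, consistent with the abstract's conclusion.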

Keywords: remote sensing, management of aquifer systems, simulation modeling, western desert, South Darb El Arbaieen

Procedia PDF Downloads 376
117 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows

Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman

Abstract:

The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are among the easiest-to-use detection methods and techniques. A micro analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, they are proving to be more effective and convenient than other solutions. A micro analyzer based on the concept of “Lab on a Chip” presents advantages over non-micro devices due to its smaller size and its better ratio of useful area to volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less dangerous way of controlling the flow in a microchannel-based analyzer is applying an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by no less than two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, when possible, experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows for Rayleigh numbers higher than the critical value have been characterized. The flow mixing enhancement was quantified in relation to the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows. 
Computational simulations were carried out using a finite element-based program while working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing, their trajectories and the folding and stretching values for the different flows were found. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
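The stretching quantification described above can be illustrated by advecting a pair of nearby massless tracers and measuring the growth of their separation; the simple linear shear field below is a stand-in for the study's finite-element electrokinetic velocity solution:

```python
import math

def advect(point, velocity, dt, steps):
    """Advect a massless tracer with explicit Euler time steps."""
    x, y = point
    for _ in range(steps):
        u, v = velocity(x, y)
        x += u * dt
        y += v * dt
    return x, y

def stretching(p0, q0, velocity, dt=0.01, steps=200):
    """Stretching factor: final separation of an initially close tracer
    pair divided by its initial separation."""
    d0 = math.hypot(q0[0] - p0[0], q0[1] - p0[1])
    p = advect(p0, velocity, dt, steps)
    q = advect(q0, velocity, dt, steps)
    return math.hypot(q[0] - p[0], q[1] - p[1]) / d0

# A plain shear flow u = y, v = 0 stands in for the electrokinetic field.
shear = lambda x, y: (y, 0.0)
s = stretching((0.0, 0.0), (0.0, 1e-3), shear)  # > 1: the pair is stretched
```

In a supercritical electrokinetic flow the field is chaotic rather than linear, so the same diagnostic would show exponential rather than algebraic growth of the separation.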

Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer

Procedia PDF Downloads 102
116 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design

Authors: Emiliano Matta

Abstract:

Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years, this investigation has inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, where damping originates from the variable tangential friction force which develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. With such an assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not need the installation of dampers. 
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems with amplitude-independent damping.
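The friction pattern described above (coefficient proportional to the modulus of the surface gradient) can be sketched for an illustrative paraboloidal surface; both the surface and the proportionality constant are assumptions, not the paper's optimized design:

```python
import math

def friction_coefficient(x, y, rx, ry, k):
    """Friction coefficient proportional to |grad z| for an illustrative
    paraboloid z = x**2 / (2*rx) + y**2 / (2*ry); gradient = (x/rx, y/ry).
    The paraboloid radii rx, ry and the constant k are assumed values."""
    gx, gy = x / rx, y / ry
    return k * math.hypot(gx, gy)

# The coefficient vanishes at the vertex (zero gradient) and grows
# with distance from it, which is what makes the dissipative model
# homogeneous in the small-displacement domain.
mu_center = friction_coefficient(0.0, 0.0, 2.0, 3.0, 0.5)
mu_edge = friction_coefficient(0.4, 0.6, 2.0, 3.0, 0.5)
```

Because both the restoring force and this friction force scale linearly with displacement near the vertex, their ratio (hence the equivalent damping ratio) is amplitude-independent, as the abstract states.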

Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers

Procedia PDF Downloads 120
115 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling

Authors: Zhenyu Zhang, Hsi-Hsien Wei

Abstract:

Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt the recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to recover and return to its pre-disaster level of functionality. In practice, highways will be temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. The failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important to precisely quantify highway network resilience and to generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. 
A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Second, this study can improve highway network resilience along the organizational dimension by providing bridge managers with optimal LBR strategies.
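The downtime effect described above can be illustrated with a toy resilience metric: the normalized area under a functionality curve, with and without the temporary functionality drop caused by lane closures during repair. The curves and numbers are invented for illustration and do not reproduce the paper's network model:

```python
def resilience(functionality, horizon):
    """Resilience as the normalized area under a functionality curve Q(t),
    sampled once per time unit over the recovery horizon."""
    return sum(functionality(t) for t in range(horizon)) / horizon

def q_no_downtime(t):
    # Functionality recovers linearly from 0.6 to 1.0 over 20 time units.
    return min(1.0, 0.6 + 0.02 * t)

def q_with_downtime(t):
    # Same recovery, but the bridge under repair closes its highway link,
    # knocking a further 0.1 off functionality while the works last (t < 20).
    q = q_no_downtime(t)
    return q - 0.1 if t < 20 else q

horizon = 40
r_ideal = resilience(q_no_downtime, horizon)    # downtime ignored
r_real = resilience(q_with_downtime, horizon)   # downtime included
overestimate = (r_ideal - r_real) / r_real
```

Even in this toy setting, ignoring downtime inflates the resilience estimate by several percent, which is the qualitative effect the Wenchuan case study quantifies at roughly 15%.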

Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime

Procedia PDF Downloads 124
114 Possibilities to Evaluate the Climatic and Meteorological Potential for Viticulture in Poland: The Case Study of the Jagiellonian University Vineyard

Authors: Oskar Sekowski

Abstract:

Current global warming causes changes in the traditional zones of viticulture worldwide. During the 20th century, the average global air temperature increased by 0.89˚C. Models of climate change indicate that viticulture, currently concentrated in narrow geographic niches, may move towards the poles, to higher geographic latitudes. Therefore, there is a need to estimate the climatic conditions and climate change in areas that are not traditionally associated with viticulture, e.g., Poland. The primary objective of this paper is to prepare a methodology to evaluate the climatic and meteorological potential for viticulture in Poland based on a case study. Moreover, an additional aim is to evaluate the climatic potential of the mesoregion where a university vineyard is located. Daily data of temperature, precipitation, insolation, and wind speed (1988-2018) from the meteorological station located in Łazy, southern Poland, were used to evaluate 15 climatological parameters and indices connected with viticulture. The next steps of the methodology are based on Geographic Information System methods. Topographical factors such as slope gradient and slope exposure were derived using Digital Elevation Models. The spatial distribution of climatological elements was interpolated by ordinary kriging. The values of each factor and index were also ranked and classified. The viticultural potential was determined by integrating two suitability maps, i.e., the topographical and climatic ones, and by calculating the average for each pixel. Data analysis shows significant changes in heat accumulation indices that are driven by increases in maximum temperature, mostly an increasing number of days with Tmax > 30˚C. The climatic conditions of this mesoregion are sufficient for Vitis vinifera viticulture. 
The values of the indicators and insolation are similar to those in known wine regions located at similar geographical latitudes in Europe. The smallest threat to viticulture in the study area is the occurrence of hail, and the greatest is the occurrence of frost in winter. This research provides the basis for evaluating the general suitability and climatologic potential for viticulture in Poland. To characterize the climatic potential for viticulture, it is necessary to assess the suitability of all climatological and topographical factors that can influence viticulture. The methodology used in this case study identifies places where vineyards could be established. It may also help wine-makers select grape varieties.
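One of the standard heat-accumulation indices used in viticulture climatology can be sketched as growing degree days with a 10˚C base (Winkler-style); the abstract does not list its exact 15 indices, so this is a representative example, and the week of temperatures is invented:

```python
def growing_degree_days(daily_mean_temps, base=10.0):
    """Heat accumulation: sum of daily mean temperature excess over a
    base temperature (10 degC is the conventional viticulture base)."""
    return sum(max(0.0, t - base) for t in daily_mean_temps)

# Hypothetical week of daily mean temperatures (degC):
week = [12.5, 15.0, 9.0, 18.2, 21.0, 17.5, 8.0]
gdd = growing_degree_days(week)
```

Accumulated over the whole April-October season and interpolated spatially (e.g., by ordinary kriging as in the study), such an index is one input to the climatic suitability map.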

Keywords: climatologic potential, climatic classification, Poland, viticulture

Procedia PDF Downloads 81
113 Stochastic Nuisance Flood Risk for Coastal Areas

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

The U.S. Federal Emergency Management Agency (FEMA) developed flood maps based on experts’ experience and estimates of the probability of flooding. Current flood-risk models evaluate flood risk with regional and subjective measures, without accounting for the impact of torrential rain and nuisance flooding at the neighborhood level. Nuisance flooding occurs in small areas in the community, where a few streets or blocks are routinely impacted. This type of flooding event occurs when a torrential rainstorm combined with high tide and sea-level rise temporarily exceeds a given threshold. In South Florida, this threshold is 1.7 ft above Mean Higher High Water (MHHW). The National Weather Service defines torrential rain as rain deposition at a rate greater than 0.3 inches per hour or three inches in a single day. Data from the Florida Climate Center, 1970 to 2020, show 371 events with more than 3 inches of rain in a day over 612 months. The purpose of this research is to develop a data-driven method to determine comprehensive analytical damage-avoidance criteria that account for nuisance flood events at the single-family home level. The method developed uses the Failure Mode and Effect Analysis (FMEA) method from the American Society for Quality (ASQ) to estimate the Damage Avoidance (DA) preparation for a 1-day 100-year storm. The Consequence of Nuisance Flooding (CoNF) is estimated from community mitigation efforts to prevent nuisance flooding damage. The Probability of Nuisance Flooding (PoNF) is derived from the frequency and duration of torrential rainfall causing delays and community disruptions to daily transportation, human illnesses, and property damage. Urbanization and population changes are related to the U.S. Census Bureau's annual population estimates. 
Data collected by the United States Department of Agriculture (USDA) Natural Resources Conservation Service’s National Resources Inventory (NRI) and locally by the South Florida Water Management District (SFWMD) track the development and land use/land cover changes over time. The intent is to include temporal trends in population density growth and their impact on land development. Results from this investigation provide the risk of nuisance flooding as a function of CoNF and PoNF for coastal areas of South Florida. The data-based criterion raises local municipalities' awareness of their flood-risk assessments and gives insight into flood management actions and watershed development.
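The risk-as-a-function-of-CoNF-and-PoNF idea can be sketched in FMEA style as a product of ratings; the 1-10 rating scales are an assumption borrowed from common FMEA practice, not a scale stated in the abstract, while the 371-events-in-612-months frequency comes directly from the text:

```python
def nuisance_flood_risk(ponf, conf):
    """FMEA-style risk score: Probability of Nuisance Flooding (PoNF)
    rating times Consequence of Nuisance Flooding (CoNF) rating.
    The 1-10 scales are an assumed convention from FMEA practice."""
    if not (1 <= ponf <= 10 and 1 <= conf <= 10):
        raise ValueError("PoNF and CoNF are rated on a 1-10 scale")
    return ponf * conf

# Empirical frequency reported in the abstract: 371 torrential-rain events
# (>3 inches/day) over 612 months of record.
monthly_event_rate = 371 / 612  # roughly 0.61 events per month
risk = nuisance_flood_risk(6, 4)  # hypothetical ratings for one neighborhood
```

Neighborhoods with frequent flooding but strong mitigation (high PoNF, low CoNF) and the reverse case can then be ranked on a common scale.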

Keywords: flood risk, nuisance flooding, urban flooding, FMEA

Procedia PDF Downloads 67
112 Analysis of the Homogeneous Turbulence Structure in Uniformly Sheared Bubbly Flow Using First and Second Order Turbulence Closures

Authors: Hela Ayeb Mrabtini, Ghazi Bellakhal, Jamel Chahed

Abstract:

The presence of the dispersed phase in gas-liquid bubbly flow considerably alters the liquid turbulence. The bubbles induce turbulent fluctuations that enhance the global liquid turbulence level and alter the mechanisms of turbulence. RANS modeling of uniformly sheared flows on an isolated sphere centered in a control volume is performed using first- and second-order turbulence closures. The sphere is placed in the production-dissipation equilibrium zone, where the liquid velocity is set equal to the relative velocity of the bubbles. The void fraction is determined by the ratio between the sphere volume and the control volume. The analysis of the turbulence statistics on the control volume provides numerical results that are interpreted with regard to the effect of the bubble wakes on the turbulence structure in uniformly sheared bubbly flow. We assumed for this purpose that, at low void fraction where there is no hydrodynamic interaction between the bubbles, the single-phase flow simulation on an isolated sphere is representative, on statistical average, of a sphere network. The numerical simulations were first validated against the experimental data of bubbly homogeneous turbulence with constant shear and then extended to produce numerical results for a wide range of shear rates from 0 to 10 s⁻¹. These results are compared with our turbulence closure proposed for gas-liquid bubbly flows. In this closure, the turbulent stress tensor in the liquid is split into a turbulent dissipative part, produced by the gradient of the mean velocity, which also contains the turbulence generated in the bubble wakes, and a pseudo-turbulent non-dissipative part induced by the bubbles' displacements. Each part is determined by a specific transport equation. 
The simulations of uniformly sheared flows on an isolated sphere reproduce the mechanisms related to the turbulent part, and the numerical results are in perfect accordance with the modeling of the transport equation of the turbulent part. The reduction of the second-order turbulence closure provides a description of the modification of the turbulence structure by the bubbles' presence using a dimensionless number expressed in terms of two time scales, characterizing the turbulence induced by the shear and that induced by the bubbles' displacements. The numerical simulations, carried out in the framework of a comprehensive analysis, reproduce in particular the attenuation of the turbulent friction shown in the experimental results of bubbly homogeneous turbulence subjected to a constant shear.
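The stress split at the heart of the closure can be written compactly; the symbols below are assumed notation chosen to match the verbal description, not taken verbatim from the paper:

```latex
% Split of the liquid turbulent (Reynolds) stress tensor:
%   R^{t}_{ij}  : dissipative part, produced by the mean-velocity gradient,
%                 also containing the turbulence generated in bubble wakes;
%   R^{pt}_{ij} : non-dissipative pseudo-turbulent part induced by
%                 bubble displacements.
% Each part is governed by its own transport equation.
\overline{u'_i u'_j} \;=\; R^{t}_{ij} \;+\; R^{pt}_{ij}
```

The dimensionless number mentioned in the abstract then compares the shear time scale with the time scale of the bubble-induced agitation to measure which contribution dominates.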

Keywords: gas-liquid bubbly flows, homogeneous turbulence, turbulence closure, uniform shear

Procedia PDF Downloads 439
111 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), in Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, with the remaining 30% (12 landslide incidences) used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUC values for the success rates are 0.7055, 0.7221, and 0.7368, while those for the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. 
Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the proportion of construction and verification landslide incidences in the high and very high landslide susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
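The AUC comparison above can be reproduced on any success or prediction rate curve with the trapezoidal rule; the curve points below are illustrative, not the study's data:

```python
def auc(xs, ys):
    """Area under a rate curve (ROC-style) via the trapezoidal rule.
    Points must be ordered by increasing x."""
    area = 0.0
    for i in range(1, len(xs)):
        area += (xs[i] - xs[i - 1]) * (ys[i] + ys[i - 1]) / 2.0
    return area

# Illustrative success-rate curve: cumulative area proportion (x) vs
# cumulative proportion of landslides captured (y).
x = [0.0, 0.2, 0.5, 1.0]
y = [0.0, 0.45, 0.75, 1.0]
success_auc = auc(x, y)
```

A model with no discrimination traces the diagonal (AUC = 0.5); the study's values of roughly 0.70-0.74 indicate moderate discrimination.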

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 107
110 An Adjoint-Based Method to Compute Derivatives with Respect to Bed Boundary Positions in Resistivity Measurements

Authors: Mostafa Shahriari, Theophile Chaumont-Frelet, David Pardo

Abstract:

Resistivity measurements are used to characterize the Earth’s subsurface. They are categorized into two different groups: (a) those acquired on the Earth’s surface, for instance, controlled source electromagnetic (CSEM) and Magnetotellurics (MT), and (b) those recorded with borehole logging instruments such as Logging-While-Drilling (LWD) devices. LWD instruments are mostly used for geo-steering purposes, i.e., to adjust dip and azimuthal angles of a well trajectory to drill along a particular geological target. Modern LWD tools measure all nine components of the magnetic field corresponding to three orthogonal transmitter and receiver orientations. In order to map the Earth’s subsurface and perform geo-steering, we invert measurements using a gradient-based method that utilizes the derivatives of the recorded measurements with respect to the inversion variables. For resistivity measurements, these inversion variables are usually the constant resistivity value of each layer and the bed boundary positions. It is well-known how to compute derivatives with respect to the constant resistivity value of each layer using semi-analytic or numerical methods. However, similar formulas for computing the derivatives with respect to bed boundary positions are unavailable. The main contribution of this work is to provide an adjoint-based formulation for computing derivatives with respect to the bed boundary positions. The key idea to obtain the aforementioned adjoint state formulations for the derivatives is to separate the tangential and normal components of the field and treat them differently. This formulation allows us to compute the derivatives faster and more accurately than with traditional finite difference approximations. In the presentation, we shall first derive a formula for computing the derivatives with respect to the bed boundary positions for the potential equation. Then, we shall extend our formulation to 3D Maxwell’s equations. 
Finally, by considering a 1D domain and reducing the dimensionality of the problem, which is a common practice in the inversion of resistivity measurements, we shall derive a formulation to compute the derivatives of the measurements with respect to the bed boundary positions using a 1.5D variational formulation. Then, we shall illustrate the accuracy and convergence properties of our formulations by comparing numerical results with the analytical derivatives for the potential equation. For the 1.5D Maxwell’s system, we shall compare our numerical results based on the proposed adjoint-based formulation versus those obtained with a traditional finite-difference approach. Numerical results shall show that our proposed adjoint-based technique produces more accurate solutions at negligible cost, as opposed to the finite-difference approach, which requires the solution of one additional problem per derivative.
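The cost argument above can be sketched with a toy example. The model below is a deliberately simplified, hypothetical one-boundary layered "measurement" (a thickness-weighted average resistivity over a tool window), not the authors' 1.5D formulation; it only illustrates that each finite-difference derivative with respect to a boundary position costs extra forward solves, which is what the adjoint approach avoids.

```python
# Toy illustration (not the authors' formulation): cost of finite-difference
# derivatives with respect to a bed boundary position in a simple 1D model.
# The "measurement" is a hypothetical thickness-weighted average resistivity
# over a tool window; real LWD responses are far more complex.

def measurement(boundary_z, rho_upper=1.0, rho_lower=10.0, window=(0.0, 2.0)):
    """Hypothetical tool response over the window [z0, z1] with one bed
    boundary at boundary_z separating two constant-resistivity layers."""
    z0, z1 = window
    zb = min(max(boundary_z, z0), z1)   # clamp the boundary to the window
    t_upper = zb - z0                    # thickness of the upper layer
    t_lower = z1 - zb                    # thickness of the lower layer
    return (rho_upper * t_upper + rho_lower * t_lower) / (z1 - z0)

def fd_derivative(f, x, h=1e-6):
    """Central finite difference: two extra forward solves per derivative,
    which is precisely the cost an adjoint formulation avoids."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

dm_dz = fd_derivative(measurement, 1.0)
# For this toy model the analytic derivative is
# (rho_upper - rho_lower) / (z1 - z0) = (1 - 10) / 2 = -4.5
```

For a realistic inversion with many boundaries, the finite-difference route repeats the forward solve per unknown, while the adjoint route obtains all derivatives from one additional (adjoint) solve.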

Keywords: inverse problem, bed boundary positions, electromagnetism, potential equation

Procedia PDF Downloads 159
109 Angiomotin Regulates Integrin Beta 1-Mediated Endothelial Cell Migration and Angiogenesis

Authors: Yuanyuan Zhang, Yujuan Zheng, Giuseppina Barutello, Sumako Kameishi, Kungchun Chiu, Katharina Hennig, Martial Balland, Federica Cavallo, Lars Holmgren

Abstract:

Angiogenesis is the process by which new blood vessels sprout from pre-existing ones, form lumenized 3D structures, and undergo remodeling. During directional migration toward the gradient of pro-angiogenic factors, endothelial cells, especially the tip cells, need filopodia to sense the environment and exert pulling forces. Of particular interest are the integrin proteins, which play an essential role in the focal adhesions connecting migrating cells to the extracellular matrix (ECM). Understanding how these biomechanical complexes orchestrate intrinsic and extrinsic forces is important for understanding the mechanisms driving angiogenesis. We have previously identified Angiomotin (Amot), a member of the Amot scaffold protein family, as a promoter of endothelial cell migration in vitro and in zebrafish models. Hence, we established inducible endothelial-specific Amot knockout mice to study normal retinal angiogenesis as well as tumor angiogenesis. We found that the migration ratio of the blood vessel network to the retinal edge was significantly decreased in Amotec- retinas at postnatal day 6 (P6), and almost all Amot-deficient tip cells had lost their migration advantage by P7. Consistent with the dramatic morphological defect of tip cells, there was a non-autonomous defect in astrocytes, along with a correspondingly disorganized fibronectin expression pattern at the migration front. Furthermore, the growth of transplanted LLC tumors was inhibited in Amot knockout mice owing to reduced vascularization. In the MMTV-PyMT transgenic mouse model, tumors arose significantly later when Amot was specifically knocked out in blood vessels. In vitro evidence showed that Amot bound to major focal adhesion molecules including beta-actin, Integrin beta 1 (ITGB1), Fibronectin, FAK, and Vinculin, and that ITGB1 and stress fibers were distinctly induced by Amot transfection. 
Using traction force microscopy, the total energy (a force indicator) was found to be significantly decreased in Amot knockdown cells. Taken together, we propose that Amot is a novel partner of the ITGB1/Fibronectin protein complex at focal adhesions and is required for force transmission between endothelial cells and the extracellular matrix.

Keywords: angiogenesis, angiomotin, endothelial cell migration, focal adhesion, integrin beta 1

Procedia PDF Downloads 209
108 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison

Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: The increased number of 'Computed Tomography (CT)' examinations performed raises public concerns regarding the associated stochastic risk to patients. In its Publication 102, the 'International Commission on Radiological Protection (ICRP)' emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named 'VirtualDose'. Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation settings (kVp, field collimation, mAs, pitch, etc.); (2) detailed geometric and compositional information for dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. The simulated data were verified by clinical measurement. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). 
The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the 'Head' protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (at least 80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing Computed Tomography exams.
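As a hedged illustration of the comparison metric described above, the sketch below computes per-organ percentage dose differences, restricted to organs receiving at least 80% of the maximum organ dose as in the abstract; the organ names and dose values are fabricated, not results from either software package.

```python
# Illustrative sketch (fabricated data): per-organ percentage dose difference
# between two dose-calculation packages, keeping only organs that receive a
# "significant" dose (>= 80% of the maximum organ dose in the exam).

def significant_organs(doses, threshold=0.8):
    """Keep organs whose dose is at least `threshold` of the maximum dose."""
    d_max = max(doses.values())
    return {organ: d for organ, d in doses.items() if d >= threshold * d_max}

def percent_difference(dose_a, dose_b):
    """Percentage difference relative to package A, for organs both report."""
    return {organ: 100.0 * abs(dose_a[organ] - dose_b[organ]) / dose_a[organ]
            for organ in dose_a.keys() & dose_b.keys()}

# Fabricated organ doses in mGy for a hypothetical chest protocol:
ours = {"lungs": 12.0, "breast": 11.0, "thyroid": 2.0}
virtualdose = {"lungs": 14.0, "breast": 9.5, "thyroid": 2.5}
diff = percent_difference(significant_organs(ours),
                          significant_organs(virtualdose))
```

Applying the 80% filter to each package separately and intersecting the results mirrors the idea that only organs well covered by the scan range give a stable comparison.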

Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison

Procedia PDF Downloads 134
107 The Influence of the Salt Body of J. Ech Cheid on the Maturity History of the Cenomanian-Turonian Source Rock

Authors: Mohamed Malek Khenissi, Mohamed Montassar Ben Slama, Anis Belhaj Mohamed, Moncef Saidi

Abstract:

Northern Tunisia is well known for its varied and complex structural and geological zones, the result of a geodynamic history extending from the early Mesozoic era to the present. One of these zones is the salt province, where the halokinesis process is manifested by a number of NE-SW salt structures such as Jebel Ech Cheid, which consists of masses of material characterized by high plasticity and low density. These salt extrusions developed in response to an extensional regime that lasted from the late Triassic to the late Cretaceous. The evolution of salt bodies within sedimentary basins has not only modified the architecture of the basin but has also had geochemical effects, mainly on the source rocks that surround them. It has been demonstrated that the presence of salt structures within sedimentary basins can influence the basin's temperature distribution and thermal history, creating heat flux anomalies that may affect the maturity of organic matter and the timing of hydrocarbon generation. Field samples of the Bahloul source rock (Cenomanian-Turonian) were collected at different sites all around the Ech Cheid salt structure and evaluated using Rock-Eval pyrolysis and GC/MS techniques in order to assess the degree of maturity evolution and the heat flux anomalies in the different zones analyzed. The total organic carbon (TOC) values range between 1 and 9%, and Tmax ranges between 424 and 445°C. The distribution of the source rock biomarkers, both saturated and aromatic, changes in a regular fashion with increasing maturity, as shown in the chromatography results: the Ts/(Ts+Tm) ratios, the 22S/(22S+22R) values for C31 homohopanes, and the ββ/(ββ+αα)20R and 20S/(20S+20R) ratios for C29 steranes give consistent maturity indications for the field samples. 
These analyses are carried out to interpret the maturity evolution and the heat flux around the Ech Cheid salt structure through geological history. They also aim to demonstrate that the salt structure can have a direct effect on the geothermal gradient of the basin and on the maturity of the Bahloul Formation source rock. The organic matter has reached different stages of thermal maturity but delineates a general increasing maturity trend. Our study confirms that the J. Ech Cheid salt body has, on the one hand, a strong influence on the local distribution of anoxic depocentres, at least within Cenomanian-Turonian time; on the other hand, the thermal anomaly near the salt mass has affected the maturity of the Bahloul Formation.
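The biomarker maturity ratios cited above all share the form a/(a+b); as a brief illustration with fabricated GC/MS peak areas (not the authors' data), they might be computed as:

```python
# Illustrative computation of the maturity ratios named in the abstract,
# using fabricated integrated peak areas from GC/MS chromatograms.

def ratio(a, b):
    """Generic maturity ratio of the form a / (a + b)."""
    return a / (a + b)

# Hypothetical integrated peak areas (arbitrary units):
ts, tm = 320.0, 180.0     # C27 trisnorneohopane (Ts) vs. trisnorhopane (Tm)
s22, r22 = 410.0, 290.0   # C31 homohopane 22S vs. 22R epimers
bb, aa = 150.0, 210.0     # C29 sterane betabeta vs. alphaalpha isomers

ts_ratio = ratio(ts, tm)          # Ts/(Ts+Tm), increases with maturity
hopane_ratio = ratio(s22, r22)    # 22S/(22S+22R), equilibrates near ~0.6
sterane_ratio = ratio(bb, aa)     # betabeta/(betabeta+alphaalpha)
```

Because each ratio tends toward a known equilibrium value with thermal stress, computing them for samples at different distances from the salt body is what allows the maturity trend around the structure to be delineated.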

Keywords: Bahloul formation, depocentre, GC/MS, rock-eval

Procedia PDF Downloads 219
106 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies take intelligent decisions. Integrating this enormous amount of data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake periodically or aperiodically depending on the traffic load, in order to reduce energy consumption. Sometimes these devices become disconnected due to battery depletion. If a node is not available in the network, the IoT network delivers incomplete, missing, or inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than allowed, the connection is lost, and other devices join the network to replace the broken-down or departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data; because of this dynamic behaviour, the actual cause of abnormal data is often unknown. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before putting it to use in IoT applications. In the past, many researchers have tried to estimate data quality and have provided several machine learning (ML), stochastic, and statistical methods for analyzing stored data in the data-processing layer, without focusing on the challenges that arise from the dynamic nature of IoT devices and how they impact data quality. 
This research provides a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality; DBSCAN clustering and weather sensors are used to build it. An extensive study was conducted on the relationship between the data from weather sensors and from sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis of the correlation between the independent data streams of the two sets of sensors is presented. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of cluster positions. Finally, a statistical analysis is performed on the clusters produced by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
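A minimal sketch of the two computational ingredients named above, DBSCAN-style outlier removal and the coefficient of variation, is given below. It assumes illustrative 1-D sensor readings and parameter values (the authors' eps/minPts settings and multi-dimensional feature set are not given in the abstract), so it should be read as a toy version of the pipeline, not the authors' model.

```python
# Minimal 1-D DBSCAN plus the coefficient-of-variation consistency check
# described in the abstract. The readings and parameters are illustrative.
import statistics

def dbscan_1d(values, eps=0.5, min_pts=3):
    """Label each reading with a cluster id, or -1 for noise (outlier)."""
    labels = [None] * len(values)
    cluster = -1
    for i, v in enumerate(values):
        if labels[i] is not None:
            continue
        neighbors = [j for j, w in enumerate(values) if abs(w - v) <= eps]
        if len(neighbors) < min_pts:
            labels[i] = -1                # noise: too few nearby readings
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in neighbors if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster       # border point reached from a core
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs = [k for k, w in enumerate(values) if abs(w - values[j]) <= eps]
            if len(nbrs) >= min_pts:      # expand only from core points
                queue.extend(nbrs)
    return labels

def coefficient_of_variation(values):
    """CoV = std / mean, used here as the consistency dimension."""
    return statistics.pstdev(values) / statistics.mean(values)

# Fabricated water-temperature readings with one faulty spike:
readings = [14.1, 14.3, 14.2, 14.0, 25.0, 14.4, 14.2]
labels = dbscan_1d(readings)
clean = [v for v, lab in zip(readings, labels) if lab != -1]
```

After the noise point is dropped, the CoV of the remaining cluster falls sharply, which is the sense in which cluster membership doubles as an accuracy and consistency check.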

Keywords: clustering, data quality, DBSCAN, and Internet of things (IoT)

Procedia PDF Downloads 112
105 Simulation of Wet Scrubbers for Flue Gas Desulfurization

Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra

Abstract:

Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets flow freely inside the scrubber and run down the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model in which all three contributions are taken into account and resolved, using OpenFOAM for the continuous gas phase and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase comprises five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy, and momentum, and these affect the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer, and wall film interactions; numerous sub-models have been implemented and coupled to realise these phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have likewise been resolved using numerous sub-models. The continuous gas phase has been coupled with the liquid phases through source terms, with the two software packages coupled via a link structure. The complete CFD model has been verified against 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm was used to tune numerous model parameters to match the experimental results. 
The CFD model had to be fast to evaluate in order to apply this optimization routine, which required approximately 1000 simulations. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model, and the optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, with the data and source-term exchange increasing the computational requirements by only approximately 5%. This allows the benefits of both software packages to be exploited.
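The abstract does not specify the pattern search variant or the objective used for tuning, so the sketch below is a generic compass (pattern) search loop over an illustrative least-squares mismatch; all names, the step schedule, and the "experimental" target values are assumptions made for illustration.

```python
# Generic compass (pattern) search sketch for tuning model parameters to
# match experimental data. The objective, targets, and step sizes below are
# illustrative, not the authors' actual tuning setup.

def compass_search(objective, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Minimize `objective` by polling +/- step along each coordinate,
    halving the step whenever no poll point improves the best value."""
    x = list(x0)
    fx = objective(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for direction in (+step, -step):
                trial = list(x)
                trial[i] += direction
                ft = objective(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink                # no poll point helped: refine mesh
            if step < tol:
                break
    return x, fx

# Illustrative objective: squared mismatch to fabricated "experimental" data.
target = [2.0, -1.0]
mismatch = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best, err = compass_search(mismatch, [0.0, 0.0])
```

Each iteration needs one objective evaluation per poll direction, which is why a fast-to-evaluate surrogate of the full CFD model matters when roughly a thousand evaluations are required.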

Keywords: desulfurization, discrete phase, scrubber, wall film

Procedia PDF Downloads 232
104 NK Cells Expansion Model from PBMC Led to a Decrease of CD4+ and an Increase of CD8+ and CD25+CD127- T-Reg Lymphocytes in Patients with Ovarian Neoplasia

Authors: Rodrigo Fernandes da Silva, Daniela Maira Cardozo, Paulo Cesar Martins Alves, Sophie Françoise Derchain, Fernando Guimarães

Abstract:

T-reg lymphocytes are important for the control of peripheral tolerance. They regulate the adaptive immune system and prevent autoimmunity through their suppressive action on CD4+ and CD8+ lymphocytes. This suppressive action also extends to B lymphocytes, dendritic cells, and monocytes/macrophages, and recent studies have shown that T-regs are also able to inhibit NK cells; they therefore exert control over the immune response from the innate to the adaptive arm. Most tumors express self-ligands, so it is believed that T-reg cells induce tolerance of the immune system, hindering the development of successful immunotherapies. T-reg cells have been linked to the suppression of the immune response against tumors, including ovarian cancer. The goal of this study was to characterize the sub-populations of the expanded CD3+ lymphocytes reported by previous studies, using the long-term culture model designed by Carlens et al. (2001) to generate effector cell suspensions enriched with cytotoxic CD3-CD56+ NK cells from the PBMC of ovarian neoplasia patients. Methods and Results: Blood was collected from 12 patients with ovarian neoplasia after signed consent: 7 benign (Bgn) and 5 malignant (Mlg). Mononuclear cells were separated on a Ficoll-Paque gradient. Long-term culture was conducted as a 21-day culturing process with SCGM CellGro medium supplemented with anti-CD3 (10 ng/ml, first 5 days), IL-2 (1000 IU/ml), and FBS (10%). After 21 days of expansion, there was an increase in the population of CD3+ lymphocytes in both the benign and malignant groups. Within the CD3+ population, there was a significant decrease in the population of CD4+ lymphocytes in the benign (median Bgn D-0=73.68%, D-21=21.05%) (p<0.05) and malignant (median Mlg D-0=64.00%, D-21=11.97%) (p<0.01) groups. 
Inversely, after 21 days of expansion, there was an increase in the population of CD8+ lymphocytes within the CD3+ population in the benign (median Bgn D-0=16.80%, D-21=38.56%) and malignant (median Mlg D-0=27.12%, D-21=72.58%) groups; however, this increase was significant only in the malignant group (p<0.01). Within the CD3+CD4+ population, there was a significant increase (p<0.05) in the population of T-reg lymphocytes in the benign (median Bgn D-0=9.84%, D-21=39.47%) and malignant (median Mlg D-0=3.56%, D-21=16.18%) groups. Statistical analysis between groups was performed with the Kruskal-Wallis test and within groups with the Mann-Whitney test. Conclusion: The CD4+ and CD8+ sub-populations of CD3+ lymphocytes shift during the culturing process, possibly reflecting the immune system's drive to produce a cytotoxic response. At the same time, T-reg lymphocytes increased within the CD4+ population, suggesting a modulation of the immune response towards cells of the immune system. The expansion of the T-reg population can hinder an immune response against cancer; therefore, an immunotherapy using this expansion procedure should aim to halt T-reg expansion or block their immunosuppressive capability.
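The within-group comparisons rely on the Mann-Whitney test; a minimal pure-Python sketch of the U statistic is shown below with fabricated group values (the authors' raw per-patient percentages are not given). A full test would additionally convert U to a p-value, e.g. via the normal approximation or exact tables.

```python
# Mann-Whitney U statistic (rank-sum test core), pure Python.
# The day-0 vs day-21 values below are fabricated for illustration only.

def mann_whitney_u(group_a, group_b):
    """U for group_a: count pairs where a > b; ties contribute 1/2."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical CD8+ percentages before and after the 21-day expansion:
day0 = [16.8, 20.1, 14.9, 22.3]
day21 = [38.6, 45.0, 33.2, 50.7]
u1 = mann_whitney_u(day0, day21)   # 0.0 here: every day-21 value is larger
u2 = mann_whitney_u(day21, day0)   # complements u1: u1 + u2 = n1 * n2
```

An extreme U (near 0 or near n1*n2, as in this fabricated example) corresponds to a strong separation between the two time points, matching the significant shifts the abstract reports.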

Keywords: regulatory T cells, CD8+ T cells, CD4+ T cells, NK cell expansion

Procedia PDF Downloads 426
103 Resonant Tunnelling Diode Output Characteristics Dependence on Structural Parameters: Simulations Based on Non-Equilibrium Green Functions

Authors: Saif Alomari

Abstract:

The paper aims at giving physical and mathematical descriptions of how the structural parameters of a resonant tunnelling diode (RTD) affect its output characteristics, specifically the values of the peak voltage, peak current, peak-to-valley current ratio (PVCR), and the differences between peak and valley voltages and currents, ΔV and ΔI. A simulation-based approach using the Non-Equilibrium Green Function (NEGF) formalism in the Silvaco ATLAS simulator is employed to conduct a series of designed experiments. These experiments show how the doping concentrations in the emitter and collector layers, their thicknesses, and the widths of the barriers and the quantum well influence the above-mentioned output characteristics. Each of these parameters was systematically varied while the others were held fixed in each set of experiments; factorial experiments are outside the scope of this work and will be investigated in the future. The physics involved in the operation of the device is thoroughly explained, and mathematical models based on curve fitting and the underlying physical principles are deduced. These models can be used to design devices with predictable output characteristics; no comparable models were found in the literature surveyed by the author. Results show that the doping concentration in each region affects the value of the peak voltage: increasing the carrier concentration in the collector region shifts the peak to lower values, whereas increasing it in the emitter shifts the peak to higher values. In the collector's case, the shift is controlled either by the built-in potential resulting from the concentration gradient or by the conductivity enhancement in the collector; the shift to higher voltages is also related to the location of the Fermi level. The thicknesses of these layers play a role in the location of the peak as well. 
It was found that increasing the thickness of each region shifts the peak to higher values up to a specific characteristic length, beyond which the peak becomes independent of the thickness. Finally, it is shown that the thickness of the barriers can be optimized for a particular well width to produce the highest PVCR or the highest ΔV and ΔI. The location of the peak voltage is important in optoelectronic applications of RTDs, where the operating point of the device is usually the peak voltage point. Furthermore, the PVCR, ΔV, and ΔI are of great importance for building RTD-based oscillators, as they affect the frequency response and output power of the oscillator.
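The figures of merit discussed here (peak voltage, PVCR, ΔV, ΔI) can be extracted mechanically from a sampled I-V curve. The sketch below uses a fabricated RTD-like characteristic with a single negative-differential-resistance region, not NEGF simulation output, and assumes the peak precedes the valley along the voltage sweep.

```python
# Hypothetical extraction of RTD figures of merit from a sampled I-V curve.
# The voltage/current samples are fabricated to show one NDR region.

def rtd_figures_of_merit(voltages, currents):
    """Locate the current peak and the following valley, then report the
    peak voltage, PVCR = I_peak / I_valley, delta-V, and delta-I."""
    interior = range(1, len(currents) - 1)           # exclude the endpoints
    i_peak = max(interior, key=lambda i: currents[i])
    valley_region = range(i_peak + 1, len(currents)) # valley follows the peak
    i_valley = min(valley_region, key=lambda i: currents[i])
    v_p, i_p = voltages[i_peak], currents[i_peak]
    v_v, i_v = voltages[i_valley], currents[i_valley]
    return {"peak_voltage": v_p, "pvcr": i_p / i_v,
            "delta_v": v_v - v_p, "delta_i": i_p - i_v}

# Fabricated I-V samples (V in volts, I in mA):
v = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
i = [0.0, 0.8, 1.6, 2.4, 1.2, 0.6, 0.4, 0.9, 1.8]
fom = rtd_figures_of_merit(v, i)
```

For this fabricated curve the peak sits at 0.3 V with PVCR = 6, ΔV = 0.3 V, and ΔI = 2.0 mA; on real simulated characteristics the same extraction would feed the curve-fitted design models described above.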

Keywords: peak to valley ratio, peak voltage shift, resonant tunneling diodes, structural parameters

Procedia PDF Downloads 115