Search results for: reduced modeling
6133 Determination of Stress-Strain Curve of Duplex Stainless Steel Welds
Authors: Carolina Payares-Asprino
Abstract:
Dual-phase duplex stainless steel comprised of ferrite and austenite has shown high strength and corrosion resistance in many aggressive environments. Joining duplex alloys is challenging due to several embrittling precipitates and metallurgical changes during the welding process. The welding parameters strongly influence the quality of a weld joint. Therefore, it is necessary to quantify the weld bead’s integral properties as a function of welding parameters, especially when part of the weld bead is removed through a machining process due to aesthetic reasons or to couple the elements in the in-service structure. The present study uses existing stress-strain models to predict the stress-strain curves for duplex stainless-steel welds under different welding conditions. Having mathematical expressions that predict the shape of the stress-strain curve is advantageous since it reduces the experimental work of tensile testing. In analysis and design, such stress-strain modeling saves time by being integrated into calculation tools, such as finite element program codes. The elastic zone and the plastic zone of the curve can be defined by specific parameters, generating expressions that simulate the curve with great precision. There are empirical equations that describe the stress-strain curves. However, they only describe the stress-strain curve of the stainless steel base material, not of material that has undergone the welding process. This is a significant contribution to the applications of duplex stainless steel welds. For this study, a 3x3 matrix with a low, medium, and high level for each of the welding parameters was applied, giving a total of 27 weld bead plates. Two tensile specimens were manufactured from each welded plate, resulting in 54 tensile specimens for testing. When evaluating the four models used to predict the stress-strain curve in the welded specimens, only one model (Rasmussen) presented a good correlation in predicting the stress-strain curve. Keywords: duplex stainless steels, modeling, stress-strain curve, tensile test, welding
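To make the kind of closed-form expression referred to above concrete, the sketch below implements a two-stage Ramberg–Osgood-type curve of the form proposed by Rasmussen for stainless steel; the material parameters (E0, σ0.2, σu, n, εu) are illustrative placeholders rather than measured weld properties, and the calibration used in the study is not reproduced here.

```python
import numpy as np

def rasmussen_curve(sigma, E0, s02, su, n, eu):
    """Two-stage full-range stress-strain curve (Rasmussen-type).

    sigma : stress values [MPa];  E0 : initial modulus [MPa]
    s02   : 0.2% proof stress [MPa];  su : ultimate stress [MPa]
    n     : Ramberg-Osgood exponent;  eu : ultimate strain (illustrative)
    """
    sigma = np.asarray(sigma, dtype=float)
    # Stage 1: Ramberg-Osgood expression up to the 0.2% proof stress
    eps1 = sigma / E0 + 0.002 * (sigma / s02) ** n
    # Tangent modulus and total strain at the proof stress
    E02 = E0 / (1.0 + 0.002 * n * E0 / s02)
    e02 = s02 / E0 + 0.002
    # Stage 2: shifted expression between the proof stress and the ultimate stress
    m = 1.0 + 3.5 * s02 / su
    d = np.maximum(sigma - s02, 0.0)
    eps2 = d / E02 + eu * (d / (su - s02)) ** m + e02
    return np.where(sigma <= s02, eps1, eps2)

if __name__ == "__main__":
    # Illustrative parameter values only (not measured weld properties)
    E0, s02, su, n, eu = 200e3, 550.0, 750.0, 6.0, 0.3
    stress = np.linspace(0.0, su, 51)
    strain = rasmussen_curve(stress, E0, s02, su, n, eu)
    for s, e in zip(stress[::10], strain[::10]):
        print(f"sigma = {s:6.1f} MPa  ->  strain = {e:.4f}")
```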
Procedia PDF Downloads 167
6132 An eHealth Intervention Using Accelerometer-Smart Phone-App Technology to Promote Physical Activity and Health among Employees in a Military Setting
Authors: Emilia Pietiläinen, Heikki Kyröläinen, Tommi Vasankari, Matti Santtila, Tiina Luukkaala, Kai Parkkola
Abstract:
Working in the military sets special demands on physical fitness, however, reduced physical activity levels among employees in the Finnish Defence Forces (FDF), a trend also being seen among the working-age population in Finland, is leading to reduced physical fitness levels and increased risk of cardiovascular and metabolic diseases, something which also increases human resource costs. Therefore, the aim of the present study was to develop an eHealth intervention using accelerometer- smartphone app feedback technique, telephone counseling and physical activity recordings to increase physical activity of the personnel and thereby improve their health. Specific aims were to reduce stress, improve quality of sleep and mental and physical performance, ability to work and reduce sick leave absences. Employees from six military brigades around Finland were invited to participate in the study, and finally, 260 voluntary participants were included (66 women, 194 men). The participants were randomized into intervention (156) and control groups (104). The eHealth intervention group used accelerometers measuring daily physical activity and duration and quality of sleep for six months. The accelerometers transmitted the data to smartphone apps while giving feedback about daily physical activity and sleep. The intervention group participants were also encouraged to exercise for two hours a week during working hours, a benefit that was already offered to employees following existing FDF guidelines. To separate the exercise done during working hours from the accelerometer data, the intervention group marked this exercise into an exercise diary. The intervention group also participated in telephone counseling about their physical activity. On the other hand, the control group participants continued with their normal exercise routine without the accelerometer and feedback. They could utilize the benefit of being able to exercise during working hours, but they were not separately encouraged for it, nor was the exercise diary used. The participants were measured at baseline, after the entire intervention period, and six months after the end of the entire intervention. The measurements included accelerometer recordings, biochemical laboratory tests, body composition measurements, physical fitness tests, and a wide questionnaire focusing on sociodemographic factors, physical activity and health. In terms of results, the primary indicators of effectiveness are increased physical activity and fitness, improved health status, and reduced sick leave absences. The evaluation of the present scientific reach is based on the data collected during the baseline measurements. Maintenance of the studied outcomes is assessed by comparing the results of the control group measured at the baseline and a year follow-up. Results of the study are not yet available but will be presented at the conference. The present findings will help to develop an easy and cost-effective model to support the health and working capability of employees in the military and other workplaces.Keywords: accelerometer, health, mobile applications, physical activity, physical performance
Procedia PDF Downloads 196
6131 Development of Ketorolac Tromethamine Encapsulated Stealth Liposomes: Pharmacokinetics and Bio Distribution
Authors: Yasmin Begum Mohammed
Abstract:
Ketorolac tromethamine (KTM) is a non-steroidal anti-inflammatory drug with potent analgesic and anti-inflammatory activity due to the prostaglandin-related inhibitory effect of the drug. It is a non-selective cyclo-oxygenase inhibitor. The drug is currently used orally and intramuscularly in multiple divided doses, clinically for the management of arthritis, cancer pain, post-surgical pain, and in the treatment of migraine pain. KTM has a short biological half-life of 4 to 6 hours, which necessitates frequent dosing to retain the action. The frequent occurrence of gastrointestinal bleeding, perforation, peptic ulceration, and renal failure has led to the development of other drug delivery strategies for the appropriate delivery of KTM. The ideal solution would be to target the drug only to the cells or tissues affected by the disease. Drug targeting could be achieved effectively by liposomes that are biocompatible and biodegradable. The aim of the study was to develop a parenteral liposome formulation of KTM with improved efficacy while reducing side effects by targeting the inflammation due to arthritis. PEG-anchored (stealth) and non-PEG-anchored liposomes were prepared by the thin film hydration technique followed by an extrusion cycle and characterized in vitro and in vivo. Stealth liposomes (SLs) exhibited a high encapsulation efficiency (94%) and 52% drug retention during release studies over 24 h, with good stability for a period of 1 month at -20°C and 4°C. SLs showed a maximum of about 55% edema inhibition with a significant analgesic effect. SLs produced marked differences from non-SL formulations, with an increase in the area under the plasma concentration-time curve, t₁/₂, and mean residence time, and reduced clearance. 0.3% of the drug was detected in the arthritis-induced paw, with significantly reduced drug localization in the liver, spleen, and kidney for SLs when compared to other conventional liposomes. Thus, SLs help to increase the therapeutic efficacy of KTM by increasing the targeting potential at the inflammatory region. Keywords: biodistribution, ketorolac tromethamine, stealth liposomes, thin film hydration technique
Procedia PDF Downloads 295
6130 Element Distribution and REE Dispersal in Sandstone-Hosted Copper Mineralization within Oligo-Miocene Strata, NE Iran: Insights from Lithostratigraphy and Mineralogy
Authors: Mostafa Feiz, Mohammad Safari, Hossein Hadizadeh
Abstract:
The Chalpo copper area is located in northeastern Iran, which is part of the structural zone of central Iran and the back-arc basin of Sabzevar. This sedimentary basin, which accumulated Oligo-Miocene sediments, is named the Nasr-Chalpo-Sangerd (NCS) basin. The sedimentary layers in this basin originated mainly from Upper Cretaceous ophiolitic rocks and intermediate to mafic post-ophiolitic volcanic rocks, deposited over a nonconformity. The mineralized sandstone layers in the Chalpo area include leached zones (with a thickness of 5 to 8 meters) and mineralized lenses with a thickness of 0.5 to 0.7 meters. Ore minerals include primary sulfide minerals, such as chalcocite, chalcopyrite, and pyrite, as well as secondary minerals, such as covellite, digenite, malachite, and azurite, formed in three stages comprising a primary stage, a simultaneous (syn-depositional) stage, and a supergene stage. The main agents that control the mineralization in this area include the permeability of the host rocks, the presence of fault zones as conduits for copper oxide solutions, and significant amounts of plant fossils, which create a reducing environment for the deposition of mineralized layers. The calculations of mass changes on copper-bearing layers and primary sandstone layers indicate that Pb, As, Cd, Te, and Mo are enriched in the mineralized zones, whereas SiO₂, TiO₂, Fe₂O₃, V, Sr, and Ba are depleted. The combination of geological, stratigraphic, and geochemical studies suggests that the origin of copper may have been the underlying red strata that contained hornblende, plagioclase, biotite, alkaline feldspar, and labile minerals. Dehydration and hydrolysis of these minerals during the diagenetic process caused the leaching of copper and associated elements by circulating fluids, which formed an oxidant-hydrothermal solution. Copper and silver in this oxidant solution might have moved upwards through the basin-fault zones and been deposited in the reducing environments in the sandstone layers that contained abundant organic matter. Copper in these solutions was probably carried by chloride complexes. The mixing of oxidant and reduced solutions caused the deposition of Cu and Ag, whereas some elements that are stable in oxidant environments (e.g., Fe₂O₃, TiO₂, SiO₂, REEs) become unstable under reduced conditions. Therefore, the copper-bearing sandstones in the study area are depleted in these elements as a result of the leaching process. The results indicate that during the mineralization stage, LREEs and MREEs were depleted, but Cu, Ag, and S were enriched. Based on field evidence, the circulation of connate fluids in the red-bed strata, produced by diagenetic processes, and their encounter with reduced facies, which formed earlier through abundant fossil-plant debris in the sandstones, appears to be the best model for precipitating copper sulfide minerals. Keywords: Chalpo, Oligo-Miocene red beds, sandstone-hosted copper mineralization, mass change, LREEs and MREEs
Procedia PDF Downloads 27
6129 Modeling of Leaks Effects on Transient Dispersed Bubbly Flow
Authors: Mohand Kessal, Rachid Boucetta, Mourad Tikobaini, Mohammed Zamoum
Abstract:
The leakage problem of two-component fluid flow is modeled for a transient one-dimensional homogeneous bubbly flow, and the model is developed by taking into account the effect of a leak located at the midpoint of the pipeline. The corresponding three conservation equations are solved numerically by an improved method of characteristics. The obtained results are explained and discussed in terms of their physical impact on the flow parameters. Keywords: fluid transients, pipelines leaks, method of characteristics, leakage problem
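As a rough illustration of how a midpoint leak enters the method of characteristics as an internal boundary condition, the sketch below uses a simplified single-phase water-hammer analogue rather than the three-equation homogeneous bubbly-flow model of the paper; the pipe data, the orifice relation Q_leak = CdA·sqrt(2gH), and the use of a single nodal discharge at the leak node are all assumptions made for illustration.

```python
import math

# Illustrative pipe/flow data (assumed values, not from the study)
L, a, D, f = 1000.0, 1000.0, 0.3, 0.02     # length [m], wave speed [m/s], diameter [m], friction factor
g, H_res, Q0 = 9.81, 50.0, 0.05            # gravity, reservoir head [m], initial flow [m3/s]
N = 10                                     # number of reaches
dx, dt = L / N, (L / N) / a                # MOC grid with dx = a * dt
A = math.pi * D ** 2 / 4.0
B = a / (g * A)                            # characteristic impedance
R = f * dx / (2.0 * g * D * A ** 2)        # friction resistance coefficient per reach
Cd_A = 1e-4                                # leak discharge coefficient * area at the mid node (assumed)

H = [H_res - R * i * Q0 * abs(Q0) for i in range(N + 1)]   # steady-state head profile
Q = [Q0] * (N + 1)
mid = N // 2

for step in range(1, 201):                 # march in time after sudden downstream valve closure
    Hn, Qn = H[:], Q[:]
    for i in range(1, N):
        Cp = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])   # C+ characteristic
        Cm = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])   # C- characteristic
        if i == mid:
            # Internal leak boundary: nodal continuity with an orifice outflow.
            # (A single nodal discharge, taken on the downstream side, is carried for simplicity.)
            H_guess = 0.5 * (Cp + Cm)
            Q_leak = Cd_A * math.sqrt(2 * g * max(H_guess, 0.0))
            Hn[i] = 0.5 * (Cp + Cm - B * Q_leak)
            Qn[i] = (Hn[i] - Cm) / B
        else:
            Hn[i] = 0.5 * (Cp + Cm)
            Qn[i] = (Cp - Hn[i]) / B
    # Upstream reservoir (constant head) and downstream instantaneous valve closure
    Hn[0] = H_res
    Qn[0] = (Hn[0] - (H[1] - B * Q[1] + R * Q[1] * abs(Q[1]))) / B
    Qn[N] = 0.0
    Hn[N] = H[N - 1] + B * Q[N - 1] - R * Q[N - 1] * abs(Q[N - 1])
    H, Q = Hn, Qn
    if step % 50 == 0:
        print(f"t = {step * dt:6.2f} s   head at leak node = {H[mid]:7.2f} m")
```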
Procedia PDF Downloads 479
6128 Environmental Performance Measurement for Network-Level Pavement Management
Authors: Jessica Achebe, Susan Tighe
Abstract:
The recent Canadian infrastructure report card reveals the unhealthy state of municipal infrastructure and the intensified challenges faced by municipalities in maintaining adequate infrastructure performance thresholds and meeting users’ required service levels. For a road agency, the huge funding gap is inflated by growing concerns about the environmental repercussions of road construction, operation and maintenance activities. Reducing material consumption and greenhouse gas emissions when maintaining and rehabilitating road networks can achieve added benefits, including improved life cycle performance of pavements, reduced climate change impacts and human health effects due to less air pollution, improved productivity due to optimal allocation of resources, and reduced road user costs. Incorporating environmental sustainability measures into pavement management is a widely cited and studied solution. However, measuring the environmental performance of a road network is still far from common practice in road network management, and an explicit agency-wide environmental sustainability or sustainable maintenance specification is missing. To address this challenge, the present research focuses on the environmental sustainability performance of network-level pavement management. The ultimate goal is to develop a framework to incorporate environmental sustainability in pavement management systems for network-level maintenance programming. In order to achieve this goal, this study reviewed previous studies that employed environmental performance measures, as well as the suitability of environmental performance indicators for the evaluation of the sustainability of network-level pavement maintenance strategies. Through an industry practice survey, this paper provides a brief overview of pavement managers’ motivations and barriers to making more sustainable decisions, and of the data needed to support network-level environmental sustainability. The trends in network-level sustainable pavement management are also presented, existing gaps are highlighted, and ideas are proposed for sustainable network-level pavement management. Keywords: pavement management, sustainability, network-level evaluation, environment measures
Procedia PDF Downloads 211
6127 Source Identification Model Based on Label Propagation and Graph Ordinary Differential Equations
Authors: Fuyuan Ma, Yuhan Wang, Junhe Zhang, Ying Wang
Abstract:
Identifying the sources of information dissemination is a pivotal task in the study of collective behaviors in networks, enabling us to discern and intercept the critical pathways through which information propagates from its origins. This allows for the control of the information’s dissemination impact in its early stages. Numerous methods for source detection rely on pre-existing, underlying propagation models as prior knowledge. Current models that eschew prior knowledge attempt to harness label propagation algorithms to model the statistical characteristics of propagation states or employ Graph Neural Networks (GNNs) for deep reverse modeling of the diffusion process. These approaches are either deficient in modeling the propagation patterns of information or are constrained by the over-smoothing problem inherent in GNNs, which limits the stacking of sufficient model depth to excavate global propagation patterns. Consequently, we introduce the ODESI model. Initially, the model employs a label propagation algorithm to delineate the distribution density of infected states within a graph structure and extends the representation of infected states from integers to state vectors, which serve as the initial states of nodes. Subsequently, the model constructs a deep architecture based on GNNs-coupled Ordinary Differential Equations (ODEs) to model the global propagation patterns of continuous propagation processes. Addressing the challenges associated with solving ODEs on graphs, we approximate the analytical solutions to reduce computational costs. Finally, we conduct simulation experiments on two real-world social network datasets, and the results affirm the efficacy of our proposed ODESI model in source identification tasks.Keywords: source identification, ordinary differential equations, label propagation, complex networks
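The sketch below illustrates only the first ingredient mentioned above — propagating observed infection labels over the graph to obtain a density of infected states per node and ranking candidate sources by that density; the toy graph, the propagation weight, and the scoring rule are illustrative assumptions, and the GNN-coupled ODE stage of ODESI is not reproduced.

```python
import numpy as np

def propagate_labels(adj, infected, alpha=0.5, n_iter=30):
    """Diffuse binary infection labels over the graph into a smooth density.

    adj      : (n, n) symmetric adjacency matrix
    infected : iterable of observed infected node indices
    alpha    : propagation weight vs. clamping to the observations (assumed value)
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0
    P = adj / deg[:, None]                # row-normalized transition matrix
    y = np.zeros(n)
    y[list(infected)] = 1.0               # clamp observed infected states
    f = y.copy()
    for _ in range(n_iter):
        f = alpha * P.T @ f + (1 - alpha) * y
    return f

if __name__ == "__main__":
    # Tiny illustrative graph: a path 0-1-2-3-4 plus a branch 2-5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)]
    n = 6
    adj = np.zeros((n, n))
    for u, v in edges:
        adj[u, v] = adj[v, u] = 1.0
    density = propagate_labels(adj, infected=[1, 2, 3, 5])
    source_guess = int(np.argmax(density))   # crude score: highest propagated density
    print("propagated densities:", np.round(density, 3))
    print("highest-density candidate source:", source_guess)
```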
Procedia PDF Downloads 20
6126 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks
Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method which consists of a hybrid dual optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified such that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully-connected neural network, resulting in regression models used to characterize the process of obtaining bricks using silicon-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, to minimize the emissions represented by CO and CH₄. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error. Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions
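A much-simplified sketch of the dual-optimizer idea is given below: a population-based global phase loosely patterned on the imperialist competitive scheme (assimilation toward the best solutions, occasional random "revolutions", and pruning of the weakest candidates), followed by local refinement with numerical gradient descent, applied to a toy objective. It is neither the authors' modified ICA nor their neural-network training problem; all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Toy regression-style cost surface (stand-in for the network training error)
    return np.sum((x - 1.5) ** 2) + 0.3 * np.sum(np.sin(5 * x) ** 2)

def global_phase(dim=4, pop=40, n_empires=5, iters=60, bounds=(-5, 5)):
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(pop, dim))        # initial "countries"
    for _ in range(iters):
        cost = np.apply_along_axis(objective, 1, X)
        X = X[np.argsort(cost)]
        imperialists = X[:n_empires]                # best solutions lead empires
        # Assimilation: move colonies toward a randomly chosen imperialist
        for i in range(n_empires, pop):
            target = imperialists[rng.integers(n_empires)]
            X[i] += rng.uniform(0, 1.5) * (target - X[i])
            if rng.random() < 0.1:                  # revolution: random restart of a colony
                X[i] = rng.uniform(lo, hi, size=dim)
        # Competition/pruning: replace the worst colonies with perturbed imperialists
        n_prune = pop // 10
        X[-n_prune:] = imperialists[rng.integers(n_empires, size=n_prune)] \
                       + rng.normal(0, 0.1, size=(n_prune, dim))
    cost = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(cost)]

def local_phase(x0, lr=0.02, iters=200, h=1e-5):
    x = x0.copy()
    for _ in range(iters):
        grad = np.array([(objective(x + h * e) - objective(x - h * e)) / (2 * h)
                         for e in np.eye(len(x))])  # central-difference gradient
        x -= lr * grad
    return x

if __name__ == "__main__":
    x_global = global_phase()
    x_final = local_phase(x_global)
    print("after global phase :", np.round(x_global, 3), "cost =", round(objective(x_global), 4))
    print("after local descent:", np.round(x_final, 3), "cost =", round(objective(x_final), 4))
```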
Procedia PDF Downloads 82
6125 A Temporal QoS Ontology For ERTMS/ETCS
Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien
Abstract:
Ontologies offer a means for representing and sharing information in many domains, particularly in complex domains. For example, it can be used for representing and sharing information of System Requirement Specification (SRS) of complex systems like the SRS of ERTMS/ETCS written in natural language. Since this system is a real-time and critical system, generic ontologies, such as OWL and generic ERTMS ontologies provide minimal support for modeling temporal information omnipresent in these SRS documents. To support the modeling of temporal information, one of the challenges is to enable representation of dynamic features evolving in time within a generic ontology with a minimal redesign of it. The separation of temporal information from other information can help to predict system runtime operation and to properly design and implement them. In addition, it is helpful to provide a reasoning and querying techniques to reason and query temporal information represented in the ontology in order to detect potential temporal inconsistencies. Indeed, a user operation, such as adding a new constraint on existing planning constraints can cause temporal inconsistencies, which can lead to system failures. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning and querying over temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separated layers can clarify the distinction between the non QoS entities and the QoS entities in an ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, specially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS Characteristics, such as temporal or integrity characteristics. In this paper, we focus on temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, are given.Keywords: system requirement specification, ERTMS/ETCS, temporal ontologies, domain ontologies
Procedia PDF Downloads 422
6124 Free and Open Source Software for BIM Workflow of Steel Structure Design
Authors: Danilo Di Donato
Abstract:
The continuous new releases of free and open source software (FOSS) and the high costs of proprietary software -whose monopoly is characterized by closed codes and the low level of implementation and customization of software by end-users- impose a reflection on possible tools that can be chosen and adopted for the design and the representation of new steel constructions. The paper aims to show experimentation carried out to verify the actual potential and the effective applicability of FOSS supports to the BIM modeling of steel structures, particularly considering the goal of a possible workflow in order to achieve high level of development (LOD); allow effective interchange methods between different software. To this end, the examined software packages are those with open source or freeware licenses, in order to evaluate their use in architectural praxis. The test has primarily involved the experimentation of Freecad -the only Open Source software that allows a complete and integrated BIM workflow- and then the results have been compared with those of two proprietary software, Sketchup and TeklaBim Sight, which are released with a free version, but not usable for commercial purposes. The experiments carried out on Open Source, and freeware software was then compared with the outcomes that are obtained by two proprietary software, Sketchup Pro and Tekla Structure which has special modules particularly addressed to the design of steel structures. This evaluation has concerned different comparative criteria, that have been defined on the basis of categories related to the reliability, the efficiency, the potentiality, achievable LOD and user-friendliness of the analyzed software packages. In order to verify the actual outcomes of FOSS BIM for the steel structure projects, these results have been compared with a simulation related to a real case study and carried out with a proprietary software BIM modeling. Therefore, the same design theme, the project of a shelter of public space, has been developed using different software. Therefore the purpose of the contribution is to assess what are the developments and potentialities inherent in FOSS BIM, in order to estimate their effective applicability to professional practice, their limits and new fields of research they propose.Keywords: BIM, steel buildings, FOSS, LOD
Procedia PDF Downloads 174
6123 In situ Immobilization of Mercury in a Contaminated Calcareous Soil Using Water Treatment Residual Nanoparticles
Authors: Elsayed A. Elkhatib, Ahmed M. Mahdy, Mohamed L. Moharem, Mohamed O. Mesalem
Abstract:
Mercury (Hg) is one of the most toxic and bio-accumulative heavy metals in the environment. However, cheap and effective in situ remediation technology is lacking. In this study, the effects of water treatment residual nanoparticles (nWTR) on the mobility, fractionation and speciation of mercury in an arid zone soil from Egypt were evaluated. Water treatment residual nanoparticles with a high surface area (129 m² g⁻¹) were prepared using a Fritsch planetary mono mill. Scanning and transmission electron microscopy revealed that the WTR nanoparticles are spherical in shape, with single particle sizes in the range of 45 to 96 nm. The X-ray diffraction (XRD) results ascertained that amorphous iron and aluminum (hydr)oxides and silicon oxide dominate the nWTR, with no apparent crystalline iron–Al (hydr)oxides. Addition of nWTR greatly increased the Hg sorption capacities of the studied soils and greatly reduced the cumulative Hg released from the soils. Application of nWTR at 0.10 and 0.30% rates reduced the released Hg from the soil by 50 and 85%, respectively. The power function and first-order kinetics models described the desorption process from soils and nWTR-amended soils well, as evidenced by high coefficients of determination (R²) and low SE values. Application of nWTR greatly increased the association of Hg with the residual fraction. Meanwhile, application of nWTR at a rate of 0.3% greatly increased the association of Hg with the residual fraction (>93%) and significantly increased the most stable Hg species (Hg(OH)₂ amor), which in turn enhanced Hg immobilization in the studied soils. Fourier transform infrared spectroscopy analysis indicated the involvement of nWTR in the retention of Hg(II) through OH groups, which suggests inner-sphere adsorption of Hg ions to surface functional groups on nWTR. These results demonstrated the feasibility of using low-cost nWTR as a best management practice to immobilize excess Hg in contaminated soils. Keywords: release kinetics, Fourier transform infrared spectroscopy, Hg fractionation, Hg species
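For readers unfamiliar with the two kinetic forms mentioned above, the sketch below fits the standard power-function (q = a·t^b) and first-order (q = q_e·(1 − e^(−kt))) models to made-up cumulative-release data and reports R² and SE; the data points and starting parameters are placeholders, not the measured Hg release values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(t, a, b):
    return a * np.power(t, b)                    # q = a * t^b

def first_order_model(t, qe, k):
    return qe * (1.0 - np.exp(-k * t))           # q = qe * (1 - exp(-k t))

# Illustrative (synthetic) cumulative Hg release data: time [h] vs. released Hg [mg/kg]
t = np.array([0.5, 1, 2, 4, 8, 16, 24, 48])
q = np.array([1.2, 1.9, 2.8, 3.9, 5.1, 6.0, 6.4, 6.9])

for name, model, p0 in [("power function", power_model, (1.0, 0.5)),
                        ("first order", first_order_model, (7.0, 0.1))]:
    popt, _ = curve_fit(model, t, q, p0=p0)
    pred = model(t, *popt)
    ss_res = np.sum((q - pred) ** 2)
    r2 = 1.0 - ss_res / np.sum((q - q.mean()) ** 2)
    se = np.sqrt(ss_res / (len(q) - len(popt)))  # standard error of the fit
    print(f"{name:15s} params = {np.round(popt, 3)}  R^2 = {r2:.3f}  SE = {se:.3f}")
```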
Procedia PDF Downloads 234
6122 Mathematical Modeling of the AMCs Cross-Contamination Removal in the FOUPs: Finite Element Formulation and Application in FOUP’s Decontamination
Authors: N. Santatriniaina, J. Deseure, T. Q. Nguyen, H. Fontaine, C. Beitia, L. Rakotomanana
Abstract:
Nowadays, with the increasing wafer size and the decreasing critical dimensions of integrated circuit manufacturing in modern high-tech, the microelectronics industry needs to pay maximum attention to the challenge of contamination control. The move to 300 mm wafers is accompanied by the use of Front Opening Unified Pods (FOUPs) for wafer handling and storage. In these pods, airborne cross-contamination may occur between the wafers and the pods. A predictive approach using modeling and computational methods is a very powerful way to understand and qualify the AMCs cross-contamination processes. This work investigates the numerical tools required in order to study the AMCs cross-contamination transfer phenomena between wafers and FOUPs. Numerical optimization and a finite element formulation in transient analysis were established. An analytical solution of the one-dimensional problem was developed, and the calibration of physical constants was performed. The least-squares distance between the model (analytical 1D solution) and the experimental data is minimized. The behavior of the AMCs in transient analysis was determined. The model framework preserves the classical forms of the diffusion and convection-diffusion equations and yields a consistent form of Fick's law. The adsorption process and the surface roughness effect were also expressed as boundary conditions using the Dirichlet-to-Neumann switch condition and the interface condition. The methodology is applied, first using the optimization methods with the analytical solution to define the physical constants, and second using the finite element method, including the adsorption kinetics and the switch from the Dirichlet to the Neumann condition. Keywords: AMCs, FOUP, cross-contamination, adsorption, diffusion, numerical analysis, wafers, Dirichlet to Neumann, finite element methods, Fick’s law, optimization
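A minimal sketch of the transport idea described above: one-dimensional Fickian diffusion of an AMC species across the air gap between a wafer and the FOUP wall, with the wafer side treated first as a Dirichlet (fixed-concentration) source and then switched to a Neumann (no-flux) condition, and the wall treated as an adsorbing flux boundary. The geometry, diffusion coefficient, adsorption rate constant, and switching time are assumed values, not those of the study, and the explicit scheme here is simpler than the implicit finite element formulation the paper uses.

```python
import numpy as np

# Illustrative parameters (assumed, not from the study)
L = 0.01          # air gap between wafer and FOUP wall [m]
D = 1e-5          # diffusion coefficient of the AMC in air [m^2/s]
k_ads = 1e-3      # first-order adsorption rate constant at the wall [m/s]
C0 = 1.0          # normalized AMC concentration at the wafer surface
nx = 51
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D            # explicit stability limit: dt <= dx^2 / (2 D)
t_switch = 2.0                    # time [s] at which the wafer source is "switched off"

C = np.zeros(nx)
t = 0.0
while t < 10.0:
    Cn = C.copy()
    # Interior nodes: explicit Fickian diffusion
    Cn[1:-1] = C[1:-1] + D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    # Wafer side (x = 0): Dirichlet source first, then a zero-flux Neumann condition
    if t < t_switch:
        Cn[0] = C0                              # outgassing wafer holds the surface concentration
    else:
        Cn[0] = Cn[1]                           # source exhausted: no-flux (Neumann) condition
    # FOUP wall (x = L): adsorbing surface, -D dC/dx = k_ads * C (flux/Robin condition)
    Cn[-1] = Cn[-2] / (1.0 + k_ads * dx / D)
    C, t = Cn, t + dt

print("concentration profile (wafer -> wall):")
print(np.round(C[::10], 4))
```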
Procedia PDF Downloads 507
6121 Integration of LCA and BIM for Sustainable Construction
Authors: Laura Álvarez Antón, Joaquín Díaz
Abstract:
The construction industry is turning towards sustainability. It is a well-known fact that sustainability is based on a balance between environmental, social and economic aspects. In order to achieve sustainability efficiently, these three criteria should be taken into account in the initial project phases, since that is when a project can be influenced most effectively. Thus the aim must be to integrate important tools like BIM and LCA at an early stage in order to make full use of their potential. With the synergies resulting from the integration of BIM and LCA, a wider approach to sustainability becomes possible, covering the three pillars of sustainability.Keywords: building information modeling (BIM), construction industry, design phase, life cycle assessment (LCA), sustainability
Procedia PDF Downloads 451
6120 Effectiveness of Control Measures for Ambient Fine Particulate Matters Concentration Improvement in Taiwan
Authors: Jiun-Horng Tsai, Shi-Jie, Nieh
Abstract:
Fine particulate matter (PM₂.₅) has become an important issue all over the world over the last decade. The annual mean PM₂.₅ concentration has been over the ambient air quality standard for PM₂.₅ (annual average concentration of 15 μg/m³) adopted by the Taiwan Environmental Protection Administration (TEPA). TEPA, therefore, has developed a number of air pollution control measures to improve the ambient concentration by reducing the emissions of primary fine particulate matter and the precursors of secondary PM₂.₅. This study investigated the potential improvement of ambient PM₂.₅ concentration by the TEPA program and by further emission reduction scenarios on various sources. Four scenarios were evaluated in this study, including a basic case and three reduction scenarios (A to C). The ambient PM₂.₅ concentration was evaluated by the Community Multi-scale Air Quality modelling system (CMAQ) ver. 4.7.1 along with the Weather Research and Forecasting Model (WRF) ver. 3.4.1. The grid resolutions in the modelling work are 81 km × 81 km for domain 1 (covering East Asia), 27 km × 27 km for domain 2 (covering Southeast China and Taiwan), and 9 km × 9 km for domain 3 (covering Taiwan). The results of the PM₂.₅ concentration simulation in different regions of Taiwan show that the annual average concentration for the basic case is 24.9 μg/m³, and 22.6, 18.8, and 11.3 μg/m³ for scenarios A to C, respectively. The annual average concentration of PM₂.₅ would be reduced by 9-55% for those control scenarios. Scenario C (in which precursor emissions are reduced to allowance levels) could effectively improve the airborne PM₂.₅ concentration to attain the air quality standard. According to the results of unit precursor reduction contribution, the allowance emissions of PM₂.₅, SOₓ, and NOₓ are 16.8, 39, and 62 thousand tons per year, respectively. In the Kao-Ping air basin, the priority for reducing precursor emissions is PM₂.₅ > NOₓ > SOₓ, whereas in other areas the priority is PM₂.₅ > SOₓ > NOₓ. The results indicate that the target pollutants that need to be reduced differ between air basins, and the control measures need to be adapted to local conditions. Keywords: airborne PM₂.₅, community multi-scale air quality modelling system, control measures, weather research and forecasting model
Procedia PDF Downloads 139
6119 River Habitat Modeling for the Entire Macroinvertebrate Community
Authors: Pinna Beatrice, Laini Alex, Negro Giovanni, Burgazzi Gemma, Viaroli Pierluigi, Vezza Paolo
Abstract:
Habitat models rarely consider macroinvertebrates as ecological targets in rivers. Available approaches mainly focus on single macroinvertebrate species, not addressing the ecological needs and functionality of the entire community. This research aimed to provide an approach to model the habitat of the macroinvertebrate community. The approach is based on the recently developed Flow-T index, together with a Random Forest (RF) regression, which is employed to apply the Flow-T index at the meso-habitat scale. Using different datasets gathered from both field data collection and 2D hydrodynamic simulations, the model has been calibrated in the Trebbia river (2019 campaign), and then validated in the Trebbia, Taro, and Enza rivers (2020 campaign). The three rivers are characterized by a braiding morphology, gravel riverbeds, and summer low flows. The RF model selected 12 mesohabitat descriptors as important for the macroinvertebrate community. These descriptors belong to different frequency classes of water depth, flow velocity, substrate grain size, and connectivity to the main river channel. The cross-validation R² coefficient (R²𝒸ᵥ) of the training dataset is 0.71 for the Trebbia River (2019), whereas the R² coefficient for the validation datasets (Trebbia, Taro, and Enza Rivers 2020) is 0.63. The agreement between the simulated results and the experimental data shows sufficient accuracy and reliability. The outcomes of the study reveal that the model can identify the ecological response of the macroinvertebrate community to possible flow regime alterations and to possible river morphological modifications. Lastly, the proposed approach allows extending the MesoHABSIM methodology, widely used for the fish habitat assessment, to a different ecological target community. Further applications of the approach can be related to flow design in both perennial and non-perennial rivers, including river reaches in which fish fauna is absent.Keywords: ecological flows, macroinvertebrate community, mesohabitat, river habitat modeling
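The sketch below illustrates the regression step described above — a Random Forest trained on mesohabitat descriptors (frequency classes of depth, velocity, substrate size, and connectivity) and evaluated with cross-validated R² and feature importances; the data are randomly generated placeholders and the Flow-T index computation is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

# Placeholder dataset: 120 mesohabitats x 12 descriptors (frequency classes of
# water depth, flow velocity, substrate grain size, and channel connectivity)
n_meso, n_descr = 120, 12
X = rng.random((n_meso, n_descr))
# Synthetic habitat-suitability target loosely driven by a few descriptors
y = 0.5 * X[:, 0] + 0.3 * X[:, 3] - 0.2 * X[:, 7] + 0.05 * rng.standard_normal(n_meso)

model = RandomForestRegressor(n_estimators=500, random_state=0)
r2_cv = cross_val_score(model, X, y, cv=5, scoring="r2")      # cross-validated R^2
print(f"mean cross-validated R^2: {r2_cv.mean():.2f}")

model.fit(X, y)
ranking = np.argsort(model.feature_importances_)[::-1]
print("descriptor importance ranking (indices):", ranking[:5], "...")
```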
Procedia PDF Downloads 94
6118 The Anti-Angiogenic Effect of Tectorigenin in a Mouse Model of Retinopathy of Prematurity
Authors: KuiDong Kang, Hye Bin Yim, Su Ah Kim
Abstract:
Purpose: Tectorigenin is an isoflavone derived from the rhizome of Belamcanda chinensis. In this study, oxygen-induced retinopathy was used to characterize the anti-angiogenic properties of tectorigenin in mice. Methods: ICR neonatal mice were exposed to 75% oxygen from postnatal day P7 until P12 and returned to room air (21% oxygen) for five days (P12 to P17). Mice were subjected to daily intraperitoneal injection of tectorigenin (1 mg/kg, 10 mg/kg) or vehicle from P12 to P17. Retro-orbital injection of FITC-dextran was performed, and retinal flat mounts were viewed by fluorescence microscopy. The central avascular area was quantified from the digital images in a masked fashion using image analysis software (NIH ImageJ). Neovascular tufts were quantified by using SWIFT_NV, and neovascular lumens were quantified from a histologic section in a masked fashion. Immunohistochemistry and Western blot analysis were also performed to demonstrate the anti-angiogenic activity of this compound in vivo. Results: In the retinas of tectorigenin-injected mice (10 mg/kg), the central non-perfusion area was significantly decreased compared to the vehicle-injected group (1.76±0.5 mm² vs 2.85±0.6 mm², P<0.05). In the vehicle-injected group, 33.45±5.51% of the total retinal area was avascular, whereas the retinas of pups treated with high-dose (10 mg/kg) tectorigenin showed avascular retinal areas of 21.25±4.34% (P<0.05). A high dose of tectorigenin also significantly reduced the number of vascular lumens in the histologic section. Tectorigenin (10 mg/kg) significantly reduced the expression of vascular endothelial growth factor (VEGF), matrix metalloproteinase-2 (MMP-2), MMP-9, and angiotensin II compared to the vehicle-injected group. Tectorigenin did not affect CD31 abundance at any tested dose. Conclusions: Our results show that tectorigenin possesses powerful anti-angiogenic properties and can attenuate new vessel formation in the retina after systemic administration. These results imply that this compound can be considered as a candidate substance for therapeutic inhibition of retinal angiogenesis. Keywords: tectorigenin, anti-angiogenic, retinopathy, Belamcanda chinensis
Procedia PDF Downloads 267
6117 Differential Impacts of Whole-Growth-Duration Warming on the Grain Yield and Quality between Early and Late Rice
Authors: Shan Huang, Guanjun Huang, Yongjun Zeng, Haiyuan Wang
Abstract:
The impacts of whole-growth warming on grain yield and quality in double rice cropping systems still remain largely unknown. In this study, a two-year field whole-growth warming experiment was conducted with two inbred indica rice cultivars (Zhongjiazao 17 and Xiangzaoxian 45) for early season and two hybrid indica rice cultivars (Wanxiangyouhuazhan and Tianyouhuazhan) for late season. The results showed that whole-growth warming did not affect early rice yield but significantly decreased late rice yield, which was caused by the decreased grain weight that may be related to the increased plant respiration and reduced translocation of dry matter accumulated during the pre-heading phase under warming. Whole-growth warming improved the milling quality of late rice but decreased that of early rice; however, the chalky rice rate and chalkiness degree were increased by 20.7% and 33.9% for early rice and 37.6 % and 51.6% for late rice under warming, respectively. We found that the crude protein content of milled rice was significantly increased by warming in both early and late rice, which would result in deterioration of eating quality. Besides, compared with the control treatment, the setback of late rice was significantly reduced by 17.8 % under warming, while that of early rice was not significantly affected by warming. These results suggest that the negative impacts of whole-growth warming on grain quality may be more severe in early rice than in late rice. Therefore, adaptation in both rice breeding and agronomic practices is needed to alleviate climate warming on the production of a double rice cropping system. Climate-smart agricultural practices ought to be implemented to mitigate the detrimental effects of warming on rice grain quality. For instance, fine-tuning the application rate and timing of inorganic nitrogen fertilizers, along with the introduction of organic amendments and the cultivation of heat-tolerant rice varieties, can help reduce the negative impact of rising temperatures on rice quality. Furthermore, to comprehensively understand the influence of climate warming on rice grain quality, future research should encompass a wider range of rice cultivars and experimental sites.Keywords: climate warming, double rice cropping, dry matter, grain quality, grain yield
Procedia PDF Downloads 42
6116 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna
Abstract:
Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of conidiospores of Trichoderma harzianum. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). In order to gather data on this process, 30 experiments were carried out taking into account the number of hours of culture (10 distributed values from 48 to 136 hours) and the culture humidity (70, 75 and 80 percent), with the number of conidiospores per gram of dry mass obtained as the response. The experimental results were used to develop an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers and 1 to 10 neurons in every hidden layer. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The ANN with the best performance was chosen in order to simulate the process and to maximize the conidiospore production. The ANN with the highest performance has 2 inputs, 1 output, and three hidden layers with 3, 10 and 10 neurons, respectively. The ANN performance shows an R² value of 0.9900, and the Root Mean Squared Error is 1.2020. This ANN predicted a maximum of 644,175,467 conidiospores per gram of dry mass, obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum because the R² value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass. Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network
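A sketch of the architecture-search loop described above is given below (two inputs, one output, one to three hidden layers with 1–10 neurons each, 1,110 candidate networks). scikit-learn's MLPRegressor with the 'lbfgs' solver is used as a stand-in because the Levenberg–Marquardt training used in the study is not available there, and the culture data are synthetic placeholders.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)

# Placeholder data: culture time [h], humidity [%] -> conidiospores per gram (log10 scale)
hours = rng.uniform(48, 136, 90)
humidity = rng.choice([70, 75, 80], 90)
X = np.column_stack([hours, humidity])
y = np.log10(1e7 + 5e6 * humidity * np.sin((hours - 48) / 30.0) ** 2)   # synthetic response

best = None
for n_layers in (1, 2, 3):
    for sizes in itertools.product(range(1, 11), repeat=n_layers):
        net = MLPRegressor(hidden_layer_sizes=sizes, solver="lbfgs",
                           max_iter=1000, random_state=0)
        net.fit(X, y)
        r2 = r2_score(y, net.predict(X))
        if best is None or r2 > best[0]:
            best = (r2, sizes, net)

r2, sizes, net = best
rmse = mean_squared_error(y, net.predict(X)) ** 0.5
print(f"best architecture: {sizes}, R^2 = {r2:.4f}, RMSE = {rmse:.4f}")

# Coarse grid search for the operating point that maximizes the predicted response
grid_h, grid_rh = np.meshgrid(np.linspace(48, 136, 89), [70, 75, 80])
grid = np.column_stack([grid_h.ravel(), grid_rh.ravel()])
pred = net.predict(grid)
print("predicted optimum (hours, humidity):", grid[np.argmax(pred)])
```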
Procedia PDF Downloads 160
6115 Conceptualizing Psycho-Social Intervention with Juvenile Offenders as Attachment Therapy: A Practical Approach
Authors: Genziana Lay
Abstract:
A wide majority of older children and adolescents who enter the juvenile court system present with an array of problematic symptoms and behaviors including anxiety, depression, aggressive acting out, detachment, and substance abuse. Attachment theory offers a framework for understanding normative and pathological functioning, which during development is influenced by emotional, social and cognitive elements. There is clear evidence that children and adolescents with the highest risk of developing adaptation problems present an insecure attachment profile. Most offending minors have experienced dysfunctional family relationships as well as social and/or economic deprivation. Their maladaptive attachment develops not only through their relationship with caregivers but with the environment at large. Activation of their faulty attachment system leads them to feel emotionally overwhelmed and engage in destructive behaviors and decision-making. A psycho-social intervention with this population conceptualized as attachment therapy is a multi-faceted, practical approach that has shown excellent results in terms of increased psychological well-being and drastically reduced rates of re-offense/ destructive behavior. Through several; components including psychotherapy, monitoring, volunteering, meditation and socialization, the program focuses on seven dimensions: self-efficacy, responsibility, empathy/reparation, autonomy/security, containment/structure, insight building, and relational health. This paper presents the program and illustrates how the framework of attachment theory practically applied to psycho-social intervention has great therapeutic and social reparation potential. Preliminary evidence drawn from the Sassari Juvenile Court is very promising; this paper will illustrate these results and propose an even more comprehensive, applicable approach to psycho-social reparative intervention that leads to greater psychological health and reduced recidivism in the child and adolescent population.Keywords: attachment, child, adolescent, crime, juvenile, psychosocial
Procedia PDF Downloads 172
6114 Studying the Simultaneous Effect of Petroleum and DDT Pollution on the Geotechnical Characteristics of Sands
Authors: Sara Seyfi
Abstract:
DDT and petroleum contamination in coastal sand alters the physical and mechanical properties of contaminated soils. This article aims to understand the effects of DDT pollution on the geotechnical characteristics of sand groups, including sand, silty sand, and clay sand. First, the studies conducted on the topic of the article are reviewed. In the initial stage of the tests, the sands used (sand, silty sand, clay sand) are identified by FTIR, µ-XRF and SEM methods. Then, the geotechnical characteristics of these sand groups, including density, permeability, shear strength, compaction, and plasticity, are investigated using a sand cone, head permeability test, vane shear test, strain gauge penetrometer, and plastic limit test. Sand groups are artificially contaminated with petroleum substances at 1, 2, 4, 8, 10, and 12% by weight. In a separate experiment, amounts of 2, 4, 8, 12, 16, and 20 mg/liter of DDT were added to the sand groups. Geotechnical characteristics and identification analyses are performed on the contaminated samples. In the final tests, the mentioned amounts of oil pollution and DDT are simultaneously added to the sand groups, and the identification and measurement processes are carried out. The test results showed that petroleum contamination reduced the optimal moisture content, permeability, and plasticity of all samples, except for the plasticity of silty sand, which petroleum increased at contents of 1-4% and decreased at 8-12%. The dry density of sand and clay sand increased, but that of silty sand decreased. Also, the shear strength of sand and silty sand increased, but that of clay sand decreased. DDT contamination increased the maximum dry density and decreased the permeability of all samples. It also reduced the optimum moisture content of the sand. The shear strength of silty sand and clayey sand decreased, while the plasticity of clayey sand increased and that of silty sand decreased. The simultaneous effect of petroleum and DDT pollution on the maximum dry density of sand and clayey sand was synergistic, whereas the effect on the plasticity of clayey sand and silty sand was antagonistic. Antagonism was also observed for the optimal moisture content of sand and the shear strength of silty sand and clay sand. In other cases, no synergy or antagonism was observed. Keywords: DDT contamination, geotechnical characteristics, petroleum contamination, sand
Procedia PDF Downloads 49
6113 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms
Authors: Naina Mahajan, Bikram Pal Kaur
Abstract:
The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but by suitable traffic engineering and management the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool. These techniques are applied to the NH (National Highway) dataset. The C4.5 and ID3 techniques give the best results, with high accuracy, low computation time, and a low error rate. Keywords: C4.5, ID3, NH (National Highway), WEKA data mining tool
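As a concrete analogue of the analysis described above, the sketch below trains a decision tree with the entropy (information-gain) criterion — the same family as ID3/C4.5, here via scikit-learn rather than WEKA's J48 — on a made-up accident dataset; the features, labels, and tree settings are illustrative assumptions, since the NH dataset is not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)

# Placeholder accident records: [hour of day, weather (0=clear,1=rain,2=fog),
# road surface (0=dry,1=wet), speed limit] -> severity (0=minor, 1=severe)
n = 400
X = np.column_stack([rng.integers(0, 24, n), rng.integers(0, 3, n),
                     rng.integers(0, 2, n), rng.choice([40, 60, 80, 100], n)])
y = ((X[:, 1] > 0) & (X[:, 2] == 1) & (X[:, 3] >= 80)).astype(int)   # synthetic severity rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=0)
tree.fit(X_tr, y_tr)

print("test accuracy:", round(accuracy_score(y_te, tree.predict(X_te)), 3))
print(export_text(tree, feature_names=["hour", "weather", "surface", "speed_limit"]))
```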
Procedia PDF Downloads 338
6112 Augmented and Virtual Reality Experiences in Plant and Agriculture Science Education
Authors: Sandra Arango-Caro, Kristine Callis-Duehl
Abstract:
The Education Research and Outreach Lab at the Donald Danforth Plant Science Center established the Plant and Agriculture Augmented and Virtual Reality Learning Laboratory (PAVRLL) to promote science education through professional development, school programs, internships, and outreach events. Professional development is offered to high school and college science and agriculture educators on the use and applications of zSpace and Oculus platforms. Educators learn to use, edit, or create lesson plans in the zSpace platform that are aligned with the Next Generation Science Standards. They also learn to use virtual reality experiences created by the PAVRLL available in Oculus (e.g. The Soybean Saga). Using a cost-free loan rotation system, educators can bring the AVR units to the classroom and offer AVR activities to their students. Each activity has user guides and activity protocols for both teachers and students. The PAVRLL also offers activities for 3D plant modeling. High school students work in teams of art-, science-, and technology-oriented students to design and create 3D models of plant species that are under research at the Danforth Center and present their projects at scientific events. Those 3D models are open access through the zSpace platform and are used by PAVRLL for professional development and the creation of VR activities. Both teachers and students acquire knowledge of plant and agriculture content and real-world problems, gain skills in AVR technology, 3D modeling, and science communication, and become more aware and interested in plant science. Students that participate in the PAVRLL activities complete pre- and post-surveys and reflection questions that evaluate interests in STEM and STEM careers, students’ perceptions of three design features of biology lab courses (collaboration, discovery/relevance, and iteration/productive failure), plant awareness, and engagement and learning in AVR environments. The PAVRLL was established in the fall of 2019, and since then, it has trained 15 educators, three of which will implement the AVR programs in the fall of 2021. Seven students have worked in the 3D plant modeling activity through a virtual internship. Due to the COVID-19 pandemic, the number of teachers trained, and classroom implementations have been very limited. It is expected that in the fall of 2021, students will come back to the schools in person, and by the spring of 2022, the PAVRLL activities will be fully implemented. This will allow the collection of enough data on student assessments that will provide insights on benefits and best practices for the use of AVR technologies in the classrooms. The PAVRLL uses cutting-edge educational technologies to promote science education and assess their benefits and will continue its expansion. Currently, the PAVRLL is applying for grants to create its own virtual labs where students can experience authentic research experiences using real Danforth research data based on programs the Education Lab already used in classrooms.Keywords: assessment, augmented reality, education, plant science, virtual reality
Procedia PDF Downloads 172
6111 Modeling Salam Contract for Profit and Loss Sharing
Authors: Dchieche Amina, Aboulaich Rajae
Abstract:
Profit and loss sharing suggests an equitable sharing of risks and profits between the parties involved in a financial transaction. Salam is a contract in which advance payment is made for goods to be delivered at a future date. The purpose of this work is to price a new contract for profit and loss sharing based on the Salam contract, using Khiyar Al Ghabn, which is an agreement of choice in the case of misrepresented facts. Keywords: Islamic finance, shariah compliance, profit and loss sharing, derivatives, risks, hedging, salam contract
Procedia PDF Downloads 332
6110 Management of Acute Appendicitis with Preference on Delayed Primary Suturing of Surgical Incision
Authors: N. A. D. P. Niwunhella, W. G. R. C. K. Sirisena
Abstract:
Appendicitis is one of the most frequently encountered abdominal emergencies worldwide. Proper clinical diagnosis and appendicectomy with minimal post-operative complications are therefore priorities. The aim of this study was to ascertain the overall management of acute appendicitis in Sri Lanka, with special reference to delayed primary suturing of the surgical site, in comparison with other local and international treatment outcomes. Data were collected prospectively from 155 patients who underwent appendicectomy following clinical and radiological diagnosis with ultrasonography. Histological assessment was done for all the specimens. All perforated appendices were managed with delayed primary closure. Patients were followed up for 28 days to assess complications. The mean age at presentation was 27 years; the mean pre-operative waiting time following admission was 24 hours; the average hospital stay was 72 hours; the accuracy of clinical diagnosis of appendicitis, as confirmed by histology, was 87.1%; the post-operative wound infection rate was 8.3%, of which 5% had perforated appendices; 4 patients had post-operative complications managed without re-opening. There was no fistula formation or mortality reported. The current study was compared with previously published data: a comparison of the management of acute appendicitis in Sri Lanka vs. the United Kingdom (UK). The diagnostic accuracy of the current study was comparable, but post-operative complications were significantly reduced (current study 9.6%, compared Sri Lankan study 16.4%, compared UK study 14.1%). In recent years, there has been an exponential rise in the use of Computerised Tomography (CT) imaging in the assessment of patients with acute appendicitis. Even without using CT, the diagnostic accuracy and treatment outcomes of acute appendicitis in this study match those of other local studies as well as the UK data. Therefore, CT usage has not increased the diagnostic accuracy of acute appendicitis significantly. In particular, delayed primary closure may have reduced the post-operative wound infection rate for ruptured appendices; we therefore suggest this approach for further evaluation as a safe and effective practice in other hospitals worldwide as well. Keywords: acute appendicitis, computerised tomography, diagnostic accuracy, delayed primary closure
Procedia PDF Downloads 167
6109 Prediction of Ionic Liquid Densities Using a Corresponding State Correlation
Authors: Khashayar Nasrifar
Abstract:
Ionic liquids (ILs) exhibit particular properties exemplified by extremely low vapor pressure and high thermal stability. The properties of ILs can be tailored by proper selection of cations and anions. As such, ILs are appealing as potential solvents to substitute traditional solvents with high vapor pressure. One of the IL properties required in chemical and process design is density. In developing corresponding state liquid density correlations, the scaling hypothesis is often used. The hypothesis expresses the temperature dependence of saturated liquid densities near the vapor-liquid critical point as a function of reduced temperature. Extending the temperature dependence, several successful correlations were developed to accurately correlate the densities of normal liquids from the triple point to the critical point. Applying mixing rules, the liquid density correlations are extended to liquid mixtures as well. ILs are not molecular liquids, and they are not classified among normal liquids either. Also, ILs are often used where conditions are far from equilibrium. Nevertheless, in calculating the properties of ILs, the use of corresponding state correlations would be useful if no experimental data were available. With well-known generalized saturated liquid density correlations, the accuracy in predicting the density of ILs is not that good. An average error of 4-5% should be expected. In this work, a data bank was compiled. A simplified and concise corresponding state saturated liquid density correlation is proposed by phenomenologically modifying the reduced temperature using the temperature dependence of the interaction parameter of the Soave-Redlich-Kwong equation of state. This modification improves the temperature dependence of the developed correlation. Parametrization was next performed to optimize the three global parameters of the correlation. The correlation was then applied to the ILs in our data bank with satisfactory predictions. The correlation was applied to IL densities at 0.1 MPa and tested, with an average uncertainty of around 2%. No adjustable parameter was used. The critical temperature, critical volume, and acentric factor were all required. Methods to extend the predictions to higher pressures (200 MPa) were also devised. Compared to other methods, this correlation was found to be more accurate. This work also presents the chronological development of such correlations for ILs. The pros and cons are also discussed. Keywords: correlation, corresponding state principle, ionic liquid, density
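The correlation itself is not given in the abstract, so the sketch below only illustrates the generic workflow of a corresponding-states saturated-density correlation: a reduced density written as a function of reduced temperature (and acentric factor) with a few global parameters fitted once and then applied to a new fluid from its critical constants. The Guggenheim-like functional form, the "training" points, and the critical constants of the hypothetical ionic liquid are all invented for illustration and are not the paper's correlation or data.

```python
import numpy as np
from scipy.optimize import curve_fit

def reduced_density(X, a, b, c):
    """Generic corresponding-states form (illustrative only):
       rho_s / rho_c = 1 + (a + c*omega) * (1 - Tr)**(1/3) + b * (1 - Tr)
    """
    Tr, omega = X
    tau = 1.0 - Tr
    return 1.0 + (a + c * omega) * tau ** (1.0 / 3.0) + b * tau

# Made-up "training" points: (Tr, acentric factor) -> reduced saturated density
Tr = np.array([0.45, 0.55, 0.65, 0.75, 0.85, 0.50, 0.60, 0.70, 0.80, 0.90])
omega = np.array([0.3] * 5 + [0.6] * 5)
rho_r = 1.0 + 1.8 * (1 - Tr) ** (1 / 3) + 0.8 * (1 - Tr) + 0.2 * omega * (1 - Tr) ** (1 / 3)

popt, _ = curve_fit(reduced_density, (Tr, omega), rho_r, p0=(1.7, 0.7, 0.1))
a, b, c = popt
print("fitted global parameters:", np.round(popt, 3))

# Prediction for a hypothetical ionic liquid: needs Tc, rho_c and omega (assumed values)
Tc, rho_c, w = 1200.0, 350.0, 0.8          # critical temperature [K], critical density [kg/m3], acentric factor
T = 298.15
rho = rho_c * reduced_density((np.array([T / Tc]), np.array([w])), a, b, c)
print(f"predicted density at {T} K: {float(rho[0]):.1f} kg/m3")
```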
Procedia PDF Downloads 127
6108 Modeling Driving Distraction Considering Psychological-Physical Constraints
Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang
Abstract:
Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, the model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP). This study utilizes the queuing structure of the model to perform task invocation and switching for distracted operation and control of the vehicle under driver distraction. Based on the assumption of the QN-MHP model about the cognitive sub-network, server F is a structural bottleneck. Later information must wait for the previous information to leave server F before it can be processed in server F. Therefore, the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory information and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task need to go through the visual perception sub-network, and the stimuli of the two are asynchronous, which is called stimulus onset asynchrony (SOA), so it is necessary to take it into account when calculating the waiting time for task switching. In the case of auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be synchronized without considering the time difference in receiving the stimuli. According to the Theory of Planned Behavior for drivers (TPB), this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model is used with risk entropy as the independent variable to determine whether the driver performs a distraction task, to explain the relationship between perceived risk and distraction. Furthermore, to model a driver’s perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then drives the classical Intelligent Driver Model. The proposed driving distraction model integrates the psychological cognitive process of a driver with the physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following in Shanghai Naturalistic Driving Study data (SH-NDS) to classify the patterns of distracted behavior on different road facilities and obtains three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate the distracted car-following behavior of different patterns on various roadway facilities, and its performance is better than the traditional IDM model with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual.
Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints
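Since the proposed model builds on the classical Intelligent Driver Model, a minimal, self-contained sketch of the standard IDM car-following update is given below for reference. The parameter values are common textbook defaults, not the values calibrated in this study, and the distraction-related task-switching delays described above are not included.

```python
# Standard IDM acceleration; parameter defaults are illustrative, not the
# calibrated values from the study.
import math

def idm_acceleration(v: float, dv: float, s: float,
                     v0: float = 33.3, T: float = 1.5, a_max: float = 1.4,
                     b: float = 2.0, s0: float = 2.0, delta: float = 4.0) -> float:
    """Classical Intelligent Driver Model (IDM) acceleration.

    v  : speed of the following vehicle (m/s)
    dv : approach rate, v - v_leader (m/s)
    s  : bumper-to-bumper gap to the leader (m)
    Remaining arguments are the usual IDM parameters: desired speed, time
    headway, maximum acceleration, comfortable deceleration, jam gap, and
    acceleration exponent.
    """
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * math.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

# One Euler step of car following (illustrative usage):
if __name__ == "__main__":
    v, s, v_lead, dt = 25.0, 40.0, 22.0, 0.1
    a = idm_acceleration(v, v - v_lead, s)
    v_next = max(0.0, v + a * dt)
    s_next = s + (v_lead - v) * dt
    print(a, v_next, s_next)
```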
Procedia PDF Downloads 916107 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients
Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid
Abstract:
Background: Inter-individual variation in response to metformin, which is considered a first-line therapy for T2DM treatment, is considerable. The current study aimed to investigate the impact of two genetic variants, Leu125Phe (rs77474263) and Gly64Asp (rs77630697) in the gene SLC47A1, on the clinical efficacy of metformin in T2DM Pakistani patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. Genotypes were determined by allele-specific polymerase chain reaction. In silico analysis was performed to confirm the effect of the two SNPs on the structure of the genes. Association was statistically assessed using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12, respectively. For SLC47A1 rs77474263, heterozygous carriers of the mutant 'T' allele (CT) were fewer among metformin responders than among non-responders (29.2% vs. 35.5%). Likewise, efficacy was further reduced (7.2% vs. 4.0%) in homozygous carriers of the 'T' allele (TT). Remarkably, T2DM cases with two copies of the 'C' allele (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, heterozygous carriers of the mutant 'A' allele (GA) were fewer among metformin responders than among non-responders (33.5% vs. 43.0%). Likewise, efficacy was further reduced (8.5% vs. 4.5%) in homozygous carriers of the 'A' allele (AA). Remarkably, T2DM cases with two copies of the 'G' allele (GG) were 2.41 times more likely to respond to metformin monotherapy. In silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan.Keywords: diabetes, T2DM, SLC47A1, Pakistan, polymorphism
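For readers unfamiliar with how such genotype-level effect sizes are obtained, the sketch below shows one standard way an odds ratio like the reported 2.11 (CC versus T-allele carriers) can be computed from a 2x2 table of responders and non-responders. The genotype counts are hypothetical placeholders chosen only to yield an odds ratio of roughly 2.1; the study itself reports analyses performed in SPSS.

```python
# Hypothetical counts; only the calculation method is standard.
import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """2x2 table: a, b = responders / non-responders with the genotype of
    interest (e.g., CC); c, d = responders / non-responders without it
    (T-allele carriers). Returns (OR, 95% CI lower, 95% CI upper)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical split of the 400 responders and 400 non-responders:
# 275 / 205 carry CC, 125 / 195 carry at least one T allele.
print(odds_ratio_with_ci(a=275, b=205, c=125, d=195))
```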
Procedia PDF Downloads 1596106 Comparative Evaluation of Root Uptake Models for Developing Moisture Uptake Based Irrigation Schedules for Crops
Authors: Vijay Shankar
Abstract:
In the era of water scarcity, effective use of water via irrigation requires good methods for determining crop water needs. Implementation of irrigation scheduling programs requires an accurate estimate of water use by the crop. Moisture depletion from the root zone represents the consequent crop evapotranspiration (ET). A numerical model for simulating soil water depletion in the root zone has been developed, taking into consideration soil physical properties and crop and climatic parameters. The governing differential equation for unsaturated flow of water in the soil is solved numerically using the fully implicit finite difference technique. Water uptake by plants is simulated using three different sink functions. The non-linear model predictions are in good agreement with field data, making it possible to schedule irrigations more effectively. The present paper describes irrigation scheduling based on moisture depletion from the different layers of the root zone, obtained using the different sink functions, for three cash, oil, and forage crops: cotton, safflower, and barley, respectively. The soil is assumed to be at field capacity prior to planting. Two soil moisture regimes are then imposed for the irrigated treatment: in one, irrigation is applied whenever soil moisture content is reduced to 50% of available soil water; in the other, whenever it is reduced to 75% of available soil water. For both soil moisture regimes, the model incorporating the non-linear sink function, which gives the best agreement between computed root zone moisture depletion and field data, is most effective in scheduling irrigations. Simulation runs with this moisture uptake function save 27.3 to 45.5% & 18.7 to 37.5%, 12.5 to 25% & 16.7 to 33.3%, and 16.7 to 33.3% & 20 to 40% of irrigation water for cotton, safflower, and barley, respectively, under the 50% & 75% moisture depletion regimes, relative to the other moisture uptake functions considered in the study. The simulation developed can be used for optimized irrigation planning for different crops, choosing a suitable soil moisture regime depending upon irrigation water availability and crop requirements.Keywords: irrigation water, evapotranspiration, root uptake models, water scarcity
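The depletion-triggered scheduling logic described above can be illustrated with a minimal root-zone water balance: available soil water is depleted by daily crop ET (the sink), and irrigation back to field capacity is triggered once depletion reaches the chosen fraction of available water. This sketch is a simple bookkeeping loop, not the paper's implicit finite-difference model of unsaturated flow, and the soil constants and ET series are illustrative.

```python
# Illustrative bookkeeping sketch; not the paper's numerical flow model.
def schedule_irrigation(daily_et_mm, field_capacity_mm=180.0,
                        wilting_point_mm=80.0, depletion_fraction=0.50):
    """Return a list of (day, irrigation_mm) events for a daily ET series."""
    available_capacity = field_capacity_mm - wilting_point_mm
    storage = field_capacity_mm              # soil starts at field capacity
    events = []
    for day, et in enumerate(daily_et_mm, start=1):
        storage = max(wilting_point_mm, storage - et)   # ET acts as the sink
        depleted = field_capacity_mm - storage
        if depleted >= depletion_fraction * available_capacity:
            events.append((day, depleted))              # refill to field capacity
            storage = field_capacity_mm
    return events

# Example: 60 days with a constant 5 mm/day ET demand, 50% depletion regime.
print(schedule_irrigation([5.0] * 60, depletion_fraction=0.50))
```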
Procedia PDF Downloads 3316105 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close the Upper Limit of Its Growth
Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Woody vegetation at the upper limit of its habitat is a sensitive indicator of the biota's reaction to regional climate change. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for new modeling approaches based on ground-level measurements and ultra-high-resolution aerial photography. Statistical models were developed for the study area, located in the Polar Urals. These models yield probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40, and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by interpreting aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as 16980, 51740, and 19889 trees. The results demonstrate that two processes have been at work over recent decades: first, intensive forestation of previously barren or lightly wooded fragments of the study area within the patches of wood, woodland, and sparse stands, and second, expansion into mountain tundra. The current expansion of Siberian larch in the region has replaced the decline that occurred during the Little Ice Age from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using field measurements of Siberian larch biometric parameters (including height, diameter at the root collar and at 1.3 meters, and maximum crown projection in two orthogonal directions) together with tree ages obtained at nine circular test sites, the authors developed an artificial neural network model with two layers of three and two neurons, respectively. The model allows quantitative assessment of a specimen's age from its height and maximum crown projection. Tree height and crown diameters can in turn be quantified from aerial photographs and lidar scans, so the resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundary in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.Keywords: treeline, dynamic, climate, modeling
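As a rough illustration of the described architecture, the sketch below fits a small multilayer perceptron with two hidden layers of three and two neurons that maps tree height and maximum crown projection to an age estimate. The training data are synthetic placeholders, not the field measurements from the nine circular test sites, and the scikit-learn setup is only one plausible implementation.

```python
# Synthetic data and an assumed scikit-learn setup; only the (3, 2) hidden
# layer structure follows the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 200
height_m = rng.uniform(0.3, 8.0, n)                                        # tree height, m
crown_m = np.clip(0.25 * height_m + rng.normal(0.0, 0.2, n), 0.05, None)   # max crown projection, m
age_yr = 6.0 * height_m + 10.0 * crown_m + rng.normal(0.0, 3.0, n)         # synthetic ages, years

X = np.column_stack([height_m, crown_m])
model = MLPRegressor(hidden_layer_sizes=(3, 2), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(X, age_yr)

# Age estimate for a hypothetical tree 2.5 m tall with a 0.7 m crown projection:
print(model.predict([[2.5, 0.7]]))
```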
Procedia PDF Downloads 836104 Hepatocyte-Intrinsic NF-κB Signaling Is Essential to Control a Systemic Viral Infection
Authors: Sukumar Namineni, Tracy O'Connor, Ulrich Kalinke, Percy Knolle, Mathias Heikenwaelder
Abstract:
The liver is one of the pivotal organs of vertebrate animals, serving a multitude of functions such as metabolism, detoxification, and protein synthesis, as well as a prominent role in innate immunity. The innate immune mechanisms by which the liver controls viral infections have largely been attributed to Kupffer cells, the locally resident macrophages. However, all liver cell types, including in particular hepatocytes, are equipped with innate immune functions. Hence, our aim in this study was to elucidate the innate immune contribution of hepatocytes to viral clearance using mice lacking Ikkβ specifically in hepatocytes, termed IkkβΔᴴᵉᵖ mice. Blockade of Ikkβ activation in IkkβΔᴴᵉᵖ mice impairs canonical NF-κB signaling by preventing the nuclear translocation of NF-κB, an important step required for the initiation of innate immune responses. Interestingly, infection of IkkβΔᴴᵉᵖ mice with lymphocytic choriomeningitis virus (LCMV) led to strongly increased hepatic viral titers, mainly confined to clusters of infected hepatocytes. This was due to reduced interferon-stimulated gene (ISG) expression during the onset of infection and a reduced CD8+ T-cell-mediated response. Decreased ISG production correlated with increased LCMV protein in the liver and increased LCMV levels in hepatocytes isolated from IkkβΔᴴᵉᵖ mice. A similar phenotype was found in LCMV-infected mice lacking interferon signaling in hepatocytes (IFNARΔᴴᵉᵖ), suggesting a link between NF-κB and interferon signaling in hepatocytes. We also observed a failure of interferon-mediated inhibition of HBV replication in HepaRG cells treated with NF-κB inhibitors, corroborating our initial findings with LCMV infections. Collectively, these results highlight a previously unknown and influential role of hepatocytes in the induction of innate immune responses leading to viral clearance during a systemic viral infection with LCMV-WE.Keywords: CD8+ T cell responses, innate immune mechanisms in the liver, interferon signaling, interferon stimulated genes, NF-kB signaling, viral clearance
Procedia PDF Downloads 191