Search results for: de-noising techniques
1525 Geophysical Mapping of Anomalies Associated with Sediments of Gwandu Formation Around Argungu and Its Environs NW, Nigeria
Authors: Adamu Abubakar, Abdulganiyu Yunusa, Likkason Othniel Kamfani, Abdulrahman Idris Augie
Abstract:
This research was carried out in the context of the Gwandu Formation's potential for exploration activities in the inland basin of northwest Nigeria. The present research aims to identify and characterize subsurface anomalies within the Gwandu Formation using electrical resistivity tomography (ERT) and magnetic surveys, providing valuable insights for mineral exploration. The study utilizes various data enhancement techniques, such as derivatives, upward continuation, and spectral analysis, alongside 2D modeling of electrical imaging profiles to analyze subsurface structures and anomalies. Data were collected through ERT and magnetic surveys, with subsequent processing including derivatives, spectral analysis, and 2D modeling. The results indicate significant subsurface structures such as faults, folds, and sedimentary layers. The study area's geoelectric and magnetic sections illustrate the depth and distribution of sedimentary formations, enhancing understanding of the geological framework. They also show that the entire Eocene sedimentary formations of Gwandu are overprinted by the study area's Tertiary strata. The NE-SW and E-W cross-profiles for the pseudo geoelectric sections beneath the study area were generated using two-dimensional (2D) electrical resistivity imaging. 2D magnetic modelling, upward continuation, and derivative analysis are used to delineate the signatures of subsurface magnetic anomalies. The results also revealed that the sediment thickness by surface depth ranges from ∼4.06 km to ∼23.31 km. The Moho interface, the boundary between the lower and upper mantle crusts, and the magnetic crust are all located at depths of around ∼10.23 km. The vertical distance between the local models of the foundation rocks to the north and south of the Sokoto Group was approximately ∼6 to ∼8 km and ∼4.5 km, respectively.
Keywords: high-resolution aeromagnetic data, electrical resistivity imaging, subsurface anomalies, 2D forward modeling
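A minimal sketch of one of the enhancement filters named above, upward continuation of a gridded magnetic anomaly in the wavenumber domain; the grid, spacing and continuation height are illustrative placeholders, not values from the study.

```python
import numpy as np

def upward_continue(grid, dx, dy, height):
    """Upward-continue a gridded anomaly by `height` metres (FFT-based filter)."""
    ny, nx = grid.shape
    # Angular wavenumbers (rad/m) along each axis
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    k = np.sqrt(kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2)
    # Upward continuation acts as a low-pass filter: multiply the spectrum by exp(-|k| h)
    spectrum = np.fft.fft2(grid)
    return np.real(np.fft.ifft2(spectrum * np.exp(-k * height)))

# Example: attenuate short-wavelength (shallow) signal in a synthetic anomaly grid
anomaly = np.random.default_rng(0).normal(size=(128, 128))
smooth = upward_continue(anomaly, dx=100.0, dy=100.0, height=500.0)
```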
Procedia PDF Downloads 14
1524 An Efficient Emitting Supramolecular Material Derived from Calixarene: Synthesis, Optical and Electrochemical Features
Authors: Serkan Sayin, Songul F. Varol
Abstract:
High attention has been paid to organic light-emitting diodes since their useful properties in flat panel displays and solid-state lighting were realized. Because of their highly efficient electroluminescence and brightness, and because they provide an excellent emission range, organic light-emitting diodes have been preferred over other materials, such as those based on liquid crystals. Calixarenes, obtained from the reaction of p-tert-butyl phenol and formaldehyde in a suitable base, have been widely used in various research areas such as catalysis, enzyme immobilization and applications, ion carriers, sensors, nanoscience, etc. In addition, their versatile frameworks, as well as their easy functionalization, make them effective candidates in applied chemistry. Herein, a calix[4]arene derivative has been synthesized, and its structure has been fully characterized using Fourier Transform Infrared Spectroscopy (FTIR), proton nuclear magnetic resonance (¹H-NMR), carbon-13 nuclear magnetic resonance (¹³C-NMR), liquid chromatography-mass spectrometry (LC-MS), and elemental analysis techniques. The calixarene derivative has been employed as an emitting layer in the fabrication of organic light-emitting diodes. The optical and electrochemical features of the calixarene-containing organic light-emitting diodes (Clx-OLED) have also been examined. The results showed that the Clx-OLED exhibited blue emission and high external quantum efficiency. In conclusion, the obtained results indicate that the synthesized calixarene derivative is a promising chromophore with an efficient fluorescent quantum yield, which makes it an attractive candidate for fabricating effective materials for fluorescent probes and labeling studies. This study was financially supported by the Scientific and Technological Research Council of Turkey (TUBITAK Grant no. 117Z402).
Keywords: calixarene, OLED, supramolecular chemistry, synthesis
Procedia PDF Downloads 253
1523 Joint Training Offer Selection and Course Timetabling Problems: Models and Algorithms
Authors: Gianpaolo Ghiani, Emanuela Guerriero, Emanuele Manni, Alessandro Romano
Abstract:
In this article, we deal with a variant of the classical course timetabling problem that has practical applications in many areas of education. In particular, in this paper we are interested in high school remedial courses. The purpose of such courses is to provide under-prepared students with the skills necessary to succeed in their studies. In particular, a student might be under-prepared in an entire course, or only in a part of it. The limited availability of funds, as well as the limited amount of time and teachers at their disposal, often requires schools to choose which courses and/or which teaching units to activate. Thus, schools need to model the training offer and the related timetabling, with the goal of ensuring the highest possible teaching quality while meeting the above-mentioned financial, time and resource constraints. Moreover, there are some prerequisites between the teaching units that must be satisfied. We first present a Mixed-Integer Programming (MIP) model to solve this problem to optimality. However, the presence of many peculiar constraints inevitably increases the complexity of the mathematical model. Thus, solving it with a general-purpose solver is feasible for small instances only, while solving real-life-sized instances of the model requires specific techniques or heuristic approaches. For this purpose, we also propose a heuristic approach, in which we make use of a fast constructive procedure to obtain a feasible solution. To assess our exact and heuristic approaches we perform extensive computational experiments on both real-life instances (obtained from a high school in Lecce, Italy) and randomly generated instances. Our tests show that the MIP model is never solved to optimality, with an average optimality gap of 57%. On the other hand, the heuristic algorithm is much faster (in about 50% of the considered instances it converges in approximately half of the time limit) and in many cases allows achieving an improvement on the objective function value obtained by the MIP model. Such an improvement ranges between 18% and 66%.
Keywords: heuristic, MIP model, remedial course, school, timetabling
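A toy sketch of the training-offer selection side of such a model, assuming illustrative unit names, hours, quality weights and a single resource budget; the paper's full MIP also covers the timetabling, teacher and room constraints, which are omitted here.

```python
import pulp

units = ["algebra_basics", "algebra_advanced", "grammar"]
hours = {"algebra_basics": 10, "algebra_advanced": 15, "grammar": 8}
quality = {"algebra_basics": 30, "algebra_advanced": 45, "grammar": 20}
prereq = [("algebra_basics", "algebra_advanced")]   # (required unit, dependent unit)
budget_hours = 25

prob = pulp.LpProblem("training_offer_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("activate", units, cat="Binary")

prob += pulp.lpSum(quality[u] * x[u] for u in units)                 # teaching quality
prob += pulp.lpSum(hours[u] * x[u] for u in units) <= budget_hours   # resource budget
for required, dependent in prereq:
    prob += x[dependent] <= x[required]                              # prerequisite link

prob.solve()
print({u: int(x[u].value()) for u in units}, pulp.value(prob.objective))
```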
Procedia PDF Downloads 605
1522 The Effect of Artificial Intelligence on Digital Factory
Authors: Sherif Fayez Lewis Ghaly
Abstract:
Factory planning has the mission of designing products, plants, processes, organization, areas, and the construction of a factory. The requirements for factory planning and the building of a factory have changed in recent years. Regular restructuring is becoming more essential to maintain the competitiveness of a factory. Restrictions in new areas, shorter life cycles of products and production technology, as well as a VUCA world (Volatility, Uncertainty, Complexity & Ambiguity), lead to more frequent restructuring measures within a factory. A digital factory model is the planning basis for rebuilding measures and is an essential tool. Short-term rescheduling can no longer be handled by on-site inspections and manual measurements; the tight time schedules require up-to-date planning models. Due to the high variation rate of factories described above, a method for rescheduling factories on the basis of a current digital factory twin is conceived and designed for practical application in factory restructuring projects. The focus is on rebuild processes. The purpose is to preserve the planning basis (digital factory model) for conversions within a factory. This calls for the application of a methodology that reduces the deficits of existing techniques. The goal is to show how a digital factory model can be kept up to date during ongoing factory operation. A method based on photogrammetry technology is presented. The focus is on developing a simple and cost-effective way to track the numerous changes that arise in a factory building in the course of operation. The method is preceded by a hardware and software assessment to identify the most cost-effective and fastest variant.
Keywords: building information modeling, digital factory model, factory planning, maintenance digital factory model, photogrammetry, restructuring
Procedia PDF Downloads 28
1521 Assessment of Land Suitability for Tea Cultivation Using Geoinformatics in the Mansehra and Abbottabad District, Pakistan
Authors: Nasir Ashraf, Sajid Rahid Ahmad, Adeel Ahmad
Abstract:
Pakistan is a major tea-consuming country and is ranked as the third largest importer of tea worldwide. Of all the beverages consumed in Pakistan, tea is the one in greatest demand, for which imports are currently inevitable. Being an agrarian country, Pakistan should cultivate its own tea and save the millions of dollars spent on tea imports. The need, therefore, is to identify the most suitable areas with favorable weather conditions and suitable soils where tea can be planted. This research was conducted over District Mansehra and District Abbottabad in the Khyber Pakhtunkhwa Province of Pakistan, where the most favorable conditions for tea cultivation already exist and the National Tea Research Institute has carried out successful experiments to cultivate high-quality tea. A high-tech approach was adopted to meet the objectives of this research by using remotely sensed data, i.e., the ASTER DEM and Landsat 8 imagery. The remote sensing data were processed in ERDAS Imagine and ENVI and further analyzed in ESRI ArcGIS Spatial Analyst for the final results and the representation of the result data in map layouts. Integration of remote sensing data with GIS provided the required suitability analysis. The results showed that, of the whole study area, 13.4% is highly suitable while 33.44% is suitable for tea plantation. The outcome of this research is a GIS-based, structured data set for agriculture planners and tea growers. Identification of suitable tea-growing areas using remotely sensed data and GIS techniques is a pressing need for the country. The analysis allows planners to address a variety of action plans in an economical and scientific manner, which can help tea production in Pakistan meet demand. This geomatics-based model and approach may be used to identify more areas for tea cultivation, so that the country can reduce imports by planting its own tea and become self-sufficient in tea production.
Keywords: agrarian country, GIS, geoinformatics, suitability analysis, remote sensing
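A minimal sketch of the raster suitability overlay described above, assuming the criteria layers have already been derived (slope from the ASTER DEM, LULC and capability classes from the imagery and soil maps); the thresholds and class codes are illustrative, not those used in the study.

```python
import numpy as np

slope = np.array([[3.0, 12.0], [25.0, 7.0]])     # percent slope from the DEM
lulc = np.array([[1, 1], [3, 2]])                 # 1=cropland, 2=orchard, 3=forest
capability = np.array([[2, 1], [4, 2]])           # land-use capability class

suitable = (
    (slope <= 15.0)        # favourable terrain
    & (lulc == 1)          # exclude forests and orchards
    & (capability <= 3)    # acceptable capability class
)
print(suitable)                                   # Boolean suitability mask
print(100 * suitable.mean(), "% of pixels suitable")
```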
Procedia PDF Downloads 389
1520 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to be able to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
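A minimal sketch of building a low-dimensional basis from offline FE snapshots via snapshot SVD (a POD-style construction; the abstract does not detail how its reduced basis is assembled, so this construction and the random snapshot data are assumptions for illustration).

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snapshots = 200, 40
snapshots = rng.normal(size=(n_dof, n_snapshots))   # offline FE solutions, one per column

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1          # keep 99.9% of the snapshot energy
basis = U[:, :r]                                      # reduced basis, shape (n_dof, r)

# Online stage: project a new full-order state onto the basis and reconstruct it
u_full = rng.normal(size=n_dof)
u_reduced = basis.T @ u_full
u_approx = basis @ u_reduced
print(r, np.linalg.norm(u_full - u_approx) / np.linalg.norm(u_full))
```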
Procedia PDF Downloads 99
1519 Carbon-Based Electrodes for Parabens Detection
Authors: Aniela Pop, Ianina Birsan, Corina Orha, Rodica Pode, Florica Manea
Abstract:
A carbon nanofiber-epoxy composite electrode has been investigated using voltammetric and amperometric techniques in order to detect parabens in aqueous solutions. The occurrence of these preservative compounds in the environment as emerging pollutants has been extensively studied in the last decades, and consequently, a rapid and reliable method for their quantitative determination is required. In this study, methylparaben (MP) and propylparaben (PP) were chosen as representatives of the paraben class. The individual electrochemical detection of each paraben was successfully performed. Their electrochemical oxidation occurred at the same potential value. Their simultaneous quantification should therefore be assessed electrochemically only as a general index of the paraben class, i.e., as a cumulative signal corresponding to both MP and PP in solution. The influence of pH on the electrochemical signal was studied. Varying the pH between 1.3 and 9.0 allowed the detection potential to be shifted to a smaller value, which is highly desirable for electroanalysis. Also, the signal is better defined and higher sensitivity is achieved. Differential-pulsed voltammetry and square-wave voltammetry were exploited under the optimum pH conditions to improve the electroanalytical performance for paraben detection. The operating conditions, i.e., the step potential, modulation amplitude and frequency, were also selected. Chronoamperometry, as the simplest electrochemical detection method, led to worse sensitivity, probably due to a possible fouling effect on the electrode surface. The best electroanalytical performance was achieved by the pulsed voltammetric technique, but the selection of the electrochemical technique depends on the specific practical application. Good reproducibility of the voltammetric-based method using the carbon nanofiber-epoxy composite electrode was determined, and no interference effect was found for the cation and anion species that are common in the water matrix. Besides these characteristics, the long lifetime of the electrode gives the carbon nanofiber-epoxy composite electrode great potential for practical applications.
Keywords: carbon nanofiber-epoxy composite electrode, electroanalysis, methylparaben, propylparaben
Procedia PDF Downloads 225
1518 Comparison of the Postoperative Analgesic Effects of Morphine, Paracetamol, and Ketorolac in Patient-Controlled Analgesia in the Patients Undergoing Open Cholecystectomy
Authors: Siamak Yaghoubi, Vahideh Rashtchi, Marzieh Khezri, Hamid Kayalha, Monadi Hamidfar
Abstract:
Background and objectives: Effective postoperative pain management in abdominal surgeries, which are painful procedures, plays an important role in reducing postoperative complications and increasing patient satisfaction. There are many techniques for pain control, one of which is Patient-Controlled Analgesia (PCA). The aim of this study was to compare the analgesic effects of morphine, paracetamol and ketorolac in patients undergoing open cholecystectomy, using the PCA method. Material and Methods: This randomized controlled trial was performed on 330 ASA (American Society of Anesthesiologists) I-II patients (three equal groups, n=110) who were scheduled for elective open cholecystectomy in Shahid Rjaee hospital of Qazvin, Iran, from August 2013 until September 2015. All patients were managed under general anesthesia with the TIVA (Total Intravenous Anesthesia) technique. The control group received morphine with a maximum dose of 0.02 mg/kg/h, the paracetamol group received paracetamol with a maximum dose of 1 mg/kg/h, and the ketorolac group received ketorolac with a maximum daily dose of 60 mg, all using the IV-PCA method. Pain score, nausea, hemodynamic variables (BP and HR), pruritus, arterial oxygen desaturation and patient satisfaction were measured every two hours for 8 hours following the operation in all groups. Results: There were no significant differences in demographic data between the three groups. There was a statistically significant difference in the mean pain score at all times between the morphine and paracetamol, morphine and ketorolac, and paracetamol and ketorolac groups (P<0.001). Results indicated a reduction with time in the mean level of postoperative pain in all three groups. At all times, the mean level of pain in the ketorolac group was less than that in the other two groups (p<0.001). Conclusion: According to the results of this study, ketorolac is more effective than morphine and paracetamol for postoperative pain control in patients undergoing open cholecystectomy using the PCA method.
Keywords: analgesia, cholecystectomy, ketorolac, morphine, paracetamol
Procedia PDF Downloads 197
1517 Integrations of Students' Learning Achievements and Their Analytical Thinking Abilities with the Problem-Based Learning and the Concept Mapping Instructional Methods on Gene and Chromosome Issue at the 12th Grade Level
Authors: Waraporn Thaimit, Yuwadee Insamran, Natchanok Jansawang
Abstract:
Analytical thinking and learning achievement are critical components of learning: analytical thinking gives one the ability to solve problems quickly and effectively by breaking complex problems into components, while learning achievement is the result acquired by students, reflected in changes within the individual brought about by learning activity. The aim of this study was to compare students' analytical thinking abilities and their learning achievements. The sample consisted of 80 students at the 12th grade level in 2 classes from Chaturaphak Phiman Ratchadaphisek School: a 40-student experimental group taught with the Problem-Based Learning (PBL) method and a 40-student control group taught with the Concept Mapping Instructional (CMI) method. The research instruments consisted of 5-lesson instructional plans, assessed with pretest and posttest techniques for each instructional method. Students' analytical thinking abilities were assessed with Analytical Thinking Tests, and students' learning achievements were tested with Learning Achievement Tests. Statistically significant differences between the post- and pre-tests of all students in the two chemistry classes were found with the paired t-test and F-test (two-way MANCOVA). Associations between student learning outcomes in each instructional method and between their analytical thinking abilities and their learning achievements were also found (p < .05). The use of the two instructional methods revealed that students' learning achievement in the chemistry classes was higher in the PBL group than in the CMI group. It is suggested that analytical thinking ability involves the process of gathering relevant information and identifying the key issues related to learning achievement.
Keywords: comparisons, students learning achievements, analytical thinking abilities, the problem-based learning method, the concept mapping instructional method, gene and chromosome issue, chemistry classes
Procedia PDF Downloads 262
1516 Objective Assessment of the Evolution of Microplastic Contamination in Sediments from a Vast Coastal Area
Authors: Vanessa Morgado, Ricardo Bettencourt da Silva, Carla Palma
Abstract:
Environmental pollution by microplastics is well recognized. Microplastics have already been detected in various matrices from distinct environmental compartments worldwide, some from remote areas. Various methodologies and techniques have been used to determine microplastics in such matrices, for instance, in sediment samples from the ocean bottom. In order to determine microplastics in a sediment matrix, the sample is typically sieved through a 5 mm mesh, digested to remove the organic matter, and density-separated to isolate microplastics from the denser part of the sediment. The physical analysis of microplastics consists of visual analysis under a stereomicroscope to determine particle size, colour, and shape. The chemical analysis is performed with an infrared spectrometer coupled to a microscope (micro-FTIR), allowing the identification of the chemical composition of the microplastic, i.e., the type of polymer. Creating legislation and policies to control and manage (micro)plastic pollution is essential to protect the environment, namely the coastal areas. Regulation is defined from the known relevance and trends of this type of pollution. This work discusses the assessment of contamination trends in a 700 km² oceanic area, taking into account contamination heterogeneity, sampling representativeness, and the uncertainty of the analysis of collected samples. The methodology developed consists of objectively identifying meaningful variations of microplastic contamination by Monte Carlo simulation of all uncertainty sources. This work allowed us to conclude unequivocally that the contamination level of the studied area did not vary significantly between two consecutive years (2018 and 2019) and that PET microplastics are the major type of polymer. The comparison of contamination levels was performed for a 99% confidence level. The developed know-how is crucial for the objective and binding determination of microplastic contamination in relevant environmental compartments.
Keywords: measurement uncertainty, micro-ATR-FTIR, microplastics, ocean contamination, sampling uncertainty
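A minimal sketch of a Monte Carlo comparison of two annual contamination levels at the 99% confidence level, assuming illustrative values for the mean counts and for the standard uncertainties due to heterogeneity, sampling and sample analysis (the study's actual uncertainty budget is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

def simulate(mean, u_heterogeneity, u_sampling, u_analysis):
    # Combine the uncertainty sources by sampling each one independently
    return (mean
            + rng.normal(0.0, u_heterogeneity, n)
            + rng.normal(0.0, u_sampling, n)
            + rng.normal(0.0, u_analysis, n))

items_2018 = simulate(120.0, 15.0, 10.0, 5.0)   # microplastic items per kg of sediment
items_2019 = simulate(128.0, 15.0, 10.0, 5.0)

diff = items_2019 - items_2018
lo, hi = np.percentile(diff, [0.5, 99.5])        # 99% interval of the difference
print(f"99% CI of the difference: [{lo:.1f}, {hi:.1f}]")
print("significant change" if (lo > 0 or hi < 0) else "no meaningful variation")
```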
Procedia PDF Downloads 89
1515 DNA-Polycation Condensation by Coarse-Grained Molecular Dynamics
Authors: Titus A. Beu
Abstract:
Many modern gene-delivery protocols rely on condensed complexes of DNA with polycations to introduce the genetic payload into cells by endocytosis. In particular, polyethyleneimine (PEI) stands out due to its high buffering capacity (enabling the efficient condensation of DNA) and relatively simple fabrication. Realistic computational studies can offer essential insights into the formation process of DNA-PEI polyplexes, providing hints on efficient designs and engineering routes. We present comprehensive computational investigations of solvated PEI and DNA-PEI polyplexes involving calculations at three levels: ab initio, all-atom (AA), and coarse-grained (CG) molecular mechanics. In the first stage, we developed a rigorous AA CHARMM (Chemistry at Harvard Macromolecular Mechanics) force field (FF) for PEI on the basis of accurate ab initio calculations on protonated model pentamers. We validated this atomistic FF by matching the results of extensive molecular dynamics (MD) simulations of structural and dynamical properties of PEI with experimental data. In a second stage, we developed a CG MARTINI FF for PEI by Boltzmann inversion techniques from bead-based probability distributions obtained from AA simulations, ensuring an optimal match between the AA and CG structural and dynamical properties. In a third stage, we combined the developed CG FF for PEI with the standard MARTINI FF for DNA and performed comprehensive CG simulations of DNA-PEI complex formation and condensation. Various technical aspects which are crucial for the realistic modeling of DNA-PEI polyplexes, such as options for treating electrostatics and the relevance of polarizable water models, are discussed in detail. Massive CG simulations (with up to 500,000 beads) shed light on the mechanism and provide time scales for DNA polyplex formation in dependence on PEI chain size and protonation pattern. The DNA-PEI condensation mechanism is shown to rely primarily on the formation of DNA bundles, rather than on changes of the DNA-strand curvature. The gained insights are expected to be of significant help for designing effective gene-delivery applications.
Keywords: DNA condensation, gene-delivery, polyethylene-imine, molecular dynamics
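A minimal sketch of direct Boltzmann inversion of a single CG bond-length distribution, the basic step behind the parameterisation mentioned above; the synthetic bond lengths, temperature and bin count are illustrative, and the real MARTINI-style fit treats many bonded and non-bonded terms iteratively (often with Jacobian corrections omitted here).

```python
import numpy as np

kB_T = 2.494  # kJ/mol at roughly 300 K

# Bond lengths (nm) between two mapped PEI beads, e.g. measured from AA frames
bond_lengths = np.random.default_rng(3).normal(0.35, 0.02, 50_000)

hist, edges = np.histogram(bond_lengths, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0

# U(r) = -kT ln P(r), shifted so the minimum of the potential is zero
potential = -kB_T * np.log(hist[mask])
potential -= potential.min()

for r, u in list(zip(centers[mask], potential))[:5]:
    print(f"r = {r:.3f} nm  U = {u:.2f} kJ/mol")
```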
Procedia PDF Downloads 120
1514 Relocation of Livestocks in Rural of Canakkale Province Using Remote Sensing and GIS
Authors: Melis Inalpulat, Tugce Civelek, Unal Kizil, Levent Genc
Abstract:
Livestock production is one of the most important components of the rural economy. Due to urban expansion, rural areas close to expanding cities transform into urban districts over time. However, legislation places some restrictions on livestock farming in such administrative units, since it tends to create environmental concerns such as odor problems resulting from excessive manure production. Therefore, existing animal operations should be moved away from settlement areas. This paper focuses on the determination of suitable lands for livestock production in Canakkale province of Turkey using remote sensing (RS) data and GIS techniques. To achieve this goal, Formosat 2 and Landsat 8 imagery, the ASTER DEM, 1:25,000-scale soil maps, village boundaries, and village livestock inventory records were used. The study was conducted using a suitability analysis, which evaluates the land in terms of limitations and potentials, and the suitability range was categorized as Suitable (S) and Non-Suitable (NS). Limitations included the distances from main roads and crossroads, water resources and settlements, while potentials were appropriate values for slope, land use capability and land use land cover (LULC) status. Village-based S land distribution results were presented and compared with livestock inventories. Results showed that approximately 44,230 ha are inappropriate because of the distance limitations for roads, etc. (NS). Moreover, according to the LULC map, 71,052 ha consist of forests, olive and other orchards, and thus may not be suitable for building such structures (NS). In comparison, it was found that there are a total of 1,228 ha of S lands within the study area. The village-based findings indicated that, in some villages, livestock production continues in NS areas. Finally, it was suggested that organized livestock zones may be constructed to serve more than one village, after a detailed analysis complemented by political decisions, the opinions of local people, etc.
Keywords: GIS, livestock, LULC, remote sensing, suitable lands
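A minimal sketch of one distance limitation from this kind of suitability analysis: excluding cells closer than a minimum distance to settlements, assuming a rasterised settlement mask; the cell size, buffer distance and grid are illustrative, not the values used in the study.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

cell_size = 30.0        # metres per pixel
min_distance = 500.0    # required buffer around settlements, in metres

settlements = np.zeros((50, 50), dtype=bool)
settlements[10, 12] = True
settlements[40, 35] = True

# Distance (in metres) from every cell to the nearest settlement cell
distance = distance_transform_edt(~settlements) * cell_size

# Cells passing this limitation; combine with slope, LULC and other criteria for "S"
far_enough = distance >= min_distance
print(far_enough.mean() * 100, "% of cells outside the settlement buffer")
```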
Procedia PDF Downloads 298
1513 Phylogenetic Analysis Based On the Internal Transcribed Spacer-2 (ITS2) Sequences of Diadegma semiclausum (Hymenoptera: Ichneumonidae) Populations Reveals Significant Adaptive Evolution
Authors: Ebraheem Al-Jouri, Youssef Abu-Ahmad, Ramasamy Srinivasan
Abstract:
The parasitoid Diadegma semiclausum (Hymenoptera: Ichneumonidae) is one of the most effective exotic parasitoids of the diamondback moth (DBM), Plutella xylostella, in the lowland areas of Homs, Syria. Molecular evolution studies are useful tools to shed light on the molecular bases of insect geographical spread and adaptation to new hosts and environments, and for designing better control strategies. In this study, a molecular evolution analysis was performed based on 42 nuclear internal transcribed spacer-2 (ITS2) sequences representing D. semiclausum and eight other Diadegma spp. from Syria and worldwide. Possible recombination events were identified with the RDP4 program. Four potential recombinants of the American D. insulare and D. fenestrale (Jeju) were detected. After detecting and removing recombinant sequences, the ratio of non-synonymous (dN) to synonymous (dS) substitutions per site (dN/dS = ω) was used to identify codon positions involved in adaptive processes. Bayesian techniques were applied to detect selective pressures at the codon level using five different approaches: fixed effects likelihood (FEL), internal fixed effects likelihood (IFEL), random effects likelihood (REL), the mixed effects model of evolution (MEME) and Phylogenetic Analysis by Maximum Likelihood (PAML). Among the 40 positively selected amino acids (aa) that differed significantly between clades of Diadegma species, three aa under positive selection were identified only in D. semiclausum. Additionally, all branches of the D. semiclausum tree were found to be under episodic diversifying selection (EDS) at p ≤ 0.05. Our study provides evidence that both recombination and positive selection have contributed to the molecular diversity of Diadegma spp. and highlights the significant contribution of D. semiclausum to adaptive evolution and fitness in this DBM parasitoid.
Keywords: Diadegma sp, DBM, ITS2, phylogeny, recombination, dN/dS, evolution, positive selection
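A minimal sketch of the dN/dS (ω) statistic underlying the selection analyses above, assuming the counts of (non)synonymous sites and pairwise differences have already been obtained by Nei-Gojobori-style counting; the numbers are illustrative, and the cited FEL/IFEL/REL/MEME/PAML methods estimate ω per site or branch in a likelihood framework rather than with this simple pairwise formula.

```python
import math

def jukes_cantor(p):
    """Correct an observed proportion of differences for multiple substitutions."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

nonsyn_sites, syn_sites = 412.3, 137.7    # illustrative site counts
nonsyn_diffs, syn_diffs = 18.0, 3.0       # illustrative observed differences

pN = nonsyn_diffs / nonsyn_sites
pS = syn_diffs / syn_sites
dN, dS = jukes_cantor(pN), jukes_cantor(pS)
omega = dN / dS
print(f"dN = {dN:.4f}, dS = {dS:.4f}, omega = {omega:.2f}")
print("suggests positive selection" if omega > 1 else "purifying/neutral")
```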
Procedia PDF Downloads 416
1512 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
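A minimal sketch of the framework's core idea: train several classifiers and average their class probabilities. The synthetic features stand in for the timing, weather-forecast and past-measurement inputs described above, and the model settings are illustrative, not those tuned in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
]
for m in models:
    m.fit(X_tr, y_tr)

# Average the predicted class probabilities of the individual models
proba = np.mean([m.predict_proba(X_te) for m in models], axis=0)
ensemble_pred = proba.argmax(axis=1)
print("ensemble accuracy:", (ensemble_pred == y_te).mean())
```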
Procedia PDF Downloads 127
1511 Grassroots Innovation for Greening Bangladesh's Urban Slums: The Role of Local Agencies
Authors: Razia Sultana
Abstract:
The chapter investigates the roles of local Non-Governmental Organisations (NGOs) and Community-Based Organisations (CBOs) in climate change adaptation through grassroots innovation in urban slums in Dhaka, Bangladesh. The chapter highlights green infrastructure as an innovative process to mitigate the challenges emanating from climate change at the bottom of the pyramid. The research draws on semi-structured in-depth interviews with 11 NGOs and 2 CBOs working in various slums in Dhaka. The study explores the activities of local agencies relating to urban green infrastructure (UGI) and its possible mitigation of a range of climate change impacts: thermal discomfort, heat stress, flooding and the urban heat island. The main argument of the chapter is that, unlike in the Global North, stakeholders' activities relating to UGI in cities of the Global South have not been expanded on a large scale. Moreover, UGI as a risk management strategy is underutilised in developing countries. The study finds that, in the context of Bangladesh, climate change adaptation through green infrastructure in cities is still nascent for local NGOs and CBOs. Their activities are mostly limited to addressing the basic needs of slum communities, such as water and sanitation. Hence, urban slum dwellers have been one of the most vulnerable groups, in that they are deprived of the city's basic ecological services. NGOs are utilizing UGI in an innovative way despite various problems in slums. For instance, land scarcity and land insecurity in slums are two key areas where UGI faces resistance. There are limited instances of NGOs using local and indigenous techniques to encourage slum dwellers to adopt UGI for creating sustainable environments. It is in this context that the chapter attempts to showcase some of the grassroots innovations that NGOs are currently adopting in slums. Some challenges and opportunities for UGI as a strategy for climate change adaptation in slums are also discussed.
Keywords: climate change adaptation, green infrastructure, Dhaka, slums, NGOs
Procedia PDF Downloads 153
1510 Exploring the Design of Prospective Human Immunodeficiency Virus Type 1 Reverse Transcriptase Inhibitors through a Comprehensive Approach of Quantitative Structure Activity Relationship Study, Molecular Docking, and Molecular Dynamics Simulations
Authors: Mouna Baassi, Mohamed Moussaoui, Sanchaita Rajkhowa, Hatim Soufi, Said Belaaouad
Abstract:
The objective of this paper is to address the challenging task of targeting Human Immunodeficiency Virus type 1 Reverse Transcriptase (HIV-1 RT) in the treatment of AIDS. Reverse Transcriptase inhibitors (RTIs) have limitations due to the development of Reverse Transcriptase mutations that lead to treatment resistance. In this study, a combination of statistical analysis and bioinformatics tools was adopted to develop a mathematical model that relates the structure of compounds to their inhibitory activities against HIV-1 Reverse Transcriptase. Our approach was based on a series of compounds recognized for their HIV-1 RT enzymatic inhibitory activities. These compounds were designed via software, with their descriptors computed using multiple tools. The most statistically promising model was chosen, and its applicability domain was ascertained. Furthermore, compounds exhibiting biological activity comparable to existing drugs were identified as potential inhibitors of HIV-1 RT. The compounds were evaluated based on their chemical absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and their adherence to Lipinski's rule. Molecular docking techniques were employed to examine the interaction between the Reverse Transcriptase (Wild Type and Mutant Type) and the ligands, including a known drug available on the market. Molecular dynamics simulations were also conducted to assess the stability of the RT-ligand complexes. Our results reveal some of the new compounds as promising candidates for effectively inhibiting HIV-1 Reverse Transcriptase, matching the potency of the established drug; this necessitates further experimental validation. Beyond its immediate results, this study provides a methodological foundation for future endeavors aiming to discover and design new inhibitors targeting HIV-1 Reverse Transcriptase.
Keywords: QSAR, ADMET properties, molecular docking, molecular dynamics simulation, reverse transcriptase inhibitors, HIV type 1
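A minimal sketch of the Lipinski rule-of-five screen mentioned above, using RDKit; the SMILES string is an arbitrary placeholder molecule, not one of the candidate inhibitors from the study, and the descriptor choices follow the standard rule rather than the authors' exact workflow.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

smiles = "CC(=O)Oc1ccccc1C(=O)O"   # aspirin, as a placeholder molecule
mol = Chem.MolFromSmiles(smiles)

violations = sum([
    Descriptors.MolWt(mol) > 500,        # molecular weight
    Descriptors.MolLogP(mol) > 5,        # lipophilicity
    Lipinski.NumHDonors(mol) > 5,        # hydrogen-bond donors
    Lipinski.NumHAcceptors(mol) > 10,    # hydrogen-bond acceptors
])
print("Lipinski violations:", violations, "- passes" if violations <= 1 else "- fails")
```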
Procedia PDF Downloads 92
1509 Method for Auto-Calibrate Projector and Color-Depth Systems for Spatial Augmented Reality Applications
Authors: R. Estrada, A. Henriquez, R. Becerra, C. Laguna
Abstract:
Spatial Augmented Reality is a variation of Augmented Reality in which a Head-Mounted Display is not required. This variation of Augmented Reality is useful in cases where the need for a Head-Mounted Display is itself a limitation. To achieve this, Spatial Augmented Reality techniques substitute the technological elements of Augmented Reality: the virtual world is projected onto a physical surface. To create an interactive spatial augmented experience, the application must be aware of the spatial relations that exist between its core elements. In this case, the core elements are referred to as a projection system and an input system, and the process to achieve this spatial awareness is called system calibration. The Spatial Augmented Reality system is considered calibrated if the projected virtual world scale is similar to the real-world scale, meaning that a virtual object will maintain its perceived dimensions when projected onto the real world. Also, the input system is calibrated if the application knows the relative position of a point in the projection plane with respect to the RGB-depth sensor origin point. Any kind of projection technology can be used (light-based projectors, close-range projectors, and screens) as long as it complies with the defined constraints; the method was tested on different configurations. The proposed procedure does not rely on a physical marker, minimizing human intervention in the process. The tests were made using a Kinect V2 as the input sensor and several projection devices. In order to test the method, the defined constraints were applied to a variety of physical configurations; once the method was executed, several variables were obtained to measure the method's performance. It was demonstrated that the method can handle different arrangements, giving the user a wide range of setup possibilities.
Keywords: color depth sensor, human computer interface, interactive surface, spatial augmented reality
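An illustrative sketch (not the paper's exact procedure) of one common way to relate projector pixel coordinates to points observed by the RGB-depth sensor on the projection plane, via a homography; the point correspondences below are made up.

```python
import cv2
import numpy as np

# Pixel positions of four projected reference points in projector space ...
projector_pts = np.array([[0, 0], [1920, 0], [1920, 1080], [0, 1080]], dtype=np.float32)
# ... and where the sensor's colour camera sees them on the physical surface
sensor_pts = np.array([[112, 87], [1480, 95], [1465, 930], [120, 918]], dtype=np.float32)

H, _ = cv2.findHomography(projector_pts, sensor_pts)

# Map an arbitrary projector pixel into sensor image coordinates
point = np.array([[[960.0, 540.0]]], dtype=np.float32)
mapped = cv2.perspectiveTransform(point, H)
print(mapped)   # the depth value at this pixel then gives its 3D position relative to the sensor
```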
Procedia PDF Downloads 124
1508 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator
Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty
Abstract:
Verification and Validation of the Simulated Process Model is the most important phase of the simulator life cycle. Evaluation of simulated process models based on Verification and Validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady-state and transient conditions. The process of Verification and Validation helps in qualifying the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparison of simulated component characteristics with the original requirements, to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A Full Scope Replica Operator Training Simulator for PFBR (Prototype Fast Breeder Reactor), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers/experts from the Modeling Team and the Process Design and Instrumentation and Control design teams. This paper discusses the Verification and Validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, the reference documents and standards used, etc. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on experts' comments, final qualification of the simulator for the intended purpose, and the difficulties faced while coordinating the various activities.
Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state
Procedia PDF Downloads 266
1507 Rapid and Cheap Test for Detection of Streptococcus pyogenes and Streptococcus pneumoniae with Antibiotic Resistance Identification
Authors: Marta Skwarecka, Patrycja Bloch, Rafal Walkusz, Oliwia Urbanowicz, Grzegorz Zielinski, Sabina Zoledowska, Dawid Nidzworski
Abstract:
Upper respiratory tract infections are one of the most common reasons for visiting a general practitioner. Streptococci are the most common bacterial etiological factors in these infections. There are many different types of Streptococci, and infections vary in severity from mild throat infections to pneumonia. For example, S. pyogenes mainly contributes to acute pharyngitis, tonsillitis and scarlet fever, whereas Streptococcus pneumoniae is responsible for several invasive diseases like sepsis, meningitis or pneumonia, with high mortality and dangerous complications. There are only a few diagnostic tests designed for detecting Streptococci from the infected throat of patients. However, they are mostly based on lateral flow techniques, and they are not used as a standard due to their low sensitivity. The diagnostic standard is to culture the patient's throat swab on semi-selective media in order to multiply the pure etiological agent of infection and subsequently to perform an antibiogram, which takes several days from the patient's visit to the clinic. Therefore, the aim of our studies is to develop and bring to market a point-of-care device for the rapid identification of Streptococcus pyogenes and Streptococcus pneumoniae with simultaneous identification of antibiotic resistance genes. In the course of our research, we successfully selected genes for species-level identification of Streptococci and genes encoding antibiotic resistance proteins. We have developed a reaction to amplify these genes, which allows detection of the presence of S. pyogenes or S. pneumoniae, followed by testing their resistance to erythromycin, chloramphenicol and tetracycline. In addition, detection of β-lactamase-encoding genes, which could protect Streptococci against antibiotics from the ampicillin group that are widely used in the treatment of this type of infection, has also been developed. The test is carried out directly on the patient's swab, and the results are available 20 to 30 minutes after sample submission, so it can be performed during the medical visit.
Keywords: antibiotic resistance, Streptococci, respiratory infections, diagnostic test
Procedia PDF Downloads 129
1506 Spatial Data Science for Data Driven Urban Planning: The Youth Economic Discomfort Index for Rome
Authors: Iacopo Testi, Diego Pajarito, Nicoletta Roberto, Carmen Greco
Abstract:
Today, a considerable segment of the world's population lives in urban areas, and this proportion will vastly increase in the next decades. Therefore, understanding the key trends in urbanization likely to unfold over the coming years is crucial to the implementation of sustainable urban strategies. In parallel, the daily amount of digital data produced will expand at an exponential rate during the following years. The analysis of various types of data sets and their derived applications has incredible potential across different crucial sectors such as healthcare, housing, transportation, energy, and education. Nevertheless, in city development, architects and urban planners appear to rely mostly on traditional and analogue techniques of data collection. This paper investigates the potential of the data science field, which appears to be a formidable resource to assist city managers in identifying strategies to enhance the social, economic, and environmental sustainability of our urban areas. The collection of different new layers of information would definitely enhance planners' capabilities to comprehend urban phenomena such as gentrification, land use definition, mobility, or critical infrastructural issues in more depth. Specifically, the research correlates economic, commercial, demographic, and housing data with the purpose of defining the youth economic discomfort index. The statistical composite index provides insights regarding the economic disadvantage of citizens aged between 18 and 29 years, and the results clearly display that central urban zones are more disadvantaged than peripheral ones. The experimental setup selected the city of Rome as the testing ground of the whole investigation. The methodology aims at applying statistical and spatial analysis to construct a composite index supporting informed, data-driven decisions for urban planning.
Keywords: data science, spatial analysis, composite index, Rome, urban planning, youth economic discomfort index
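A minimal sketch of building a zone-level composite index by z-score standardisation and equal-weight averaging, one common construction for such indices; the indicator names and values are illustrative placeholders, not the actual data used for Rome.

```python
import pandas as pd

zones = pd.DataFrame(
    {
        "youth_unemployment": [0.28, 0.16, 0.35],
        "housing_cost_burden": [0.42, 0.30, 0.37],
        "neet_rate": [0.22, 0.12, 0.27],
    },
    index=["zone_A", "zone_B", "zone_C"],
)

# Standardise each indicator, then average them with equal weights
z_scores = (zones - zones.mean()) / zones.std()
zones["discomfort_index"] = z_scores.mean(axis=1)
print(zones.sort_values("discomfort_index", ascending=False))
```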
Procedia PDF Downloads 135
1505 A Review of Hypnosis Uses for Anxiety and Phobias Treatment
Authors: Fleura Shkëmbi, Sevim Mustafa, Naim Fanaj
Abstract:
Hypnosis, often known as cognitive therapy, is a form of mind-body psychotherapy. A professional and certified hypnotist or hypnotherapist guides the patient into a deep level of focus and relaxation during the session by utilizing verbal cues, repetition, and imagery. In recent years, hypnotherapy has gained popularity in the treatment of a variety of disorders, including anxiety and specific phobias. The term "phobia" is commonly used to describe fear of a certain trigger. When faced with potentially hazardous situations, the brain naturally experiences dread. While a little dread here and there may keep us safe, phobias can drastically reduce our quality of life. In short, persons who suffer from anxiety are considered to perceive particular environmental situations as dangerous, whereas those who do not suffer from anxiety do not. Hypnosis is essential in the treatment of anxiety disorders. Hypnosis can help patients minimize their anxiety symptoms. This broad concept has aided in the development of models and therapies for anxiety disorders such as generalized anxiety disorder, panic attacks, hypochondria, and obsessional disorders. Hypnosis techniques rely on focused attention and mental imagery, which is why they are associated with improved working memory and visuospatial abilities. In this sense, the purpose of this study is to determine how effectively specific therapeutic methods perform in treating persons with anxiety and phobias. In addition to cognitive-behavioral therapy and other therapies, the approaches emphasized the use of therapeutic hypnosis. This study looks at the use of hypnosis and related psychotherapy procedures in the treatment of anxiety disorders. Following a discussion of the evolution of hypnosis as a therapeutic tool, neurobiological research is used to demonstrate the influence of hypnosis on the change of perception in the brain. The use of hypnosis in the treatment of phobias, stressful situations, and posttraumatic stress disorder is examined, as well as similarities between the hypnotic state and dissociative reactions to trauma. Through an extensive literature review, this study introduces hypnotherapy procedures that result in more successful anxiety and phobia treatment.
Keywords: anxiety, hypnosis, hypnotherapy, phobia, technique, state
Procedia PDF Downloads 119
1504 Nanopriming Potential of Metal Nanoparticles against Internally Seed Borne Pathogen Ustilago tritici
Authors: Anjali Sidhu, Anju Bala, Amit Kumar
Abstract:
Metal nanoparticles have the potential to revolutionize agriculture owing to a rapidly growing interdisciplinary nanotechnological application domain. Numerous patents and products incorporating engineered nanoparticles (NPs) have entered agro-applications with the collective goal of promoting efficiency as well as sustainability, with lower inputs and less waste generated than conventional products and approaches. Loose smut of wheat is caused by Ustilago segetum tritici, an internally seed-borne pathogen. The pathogen remains dormant in the seed until the seed germinates, and its symptoms are expressed only at the reproductive stage of the plant. Various seed treatment agents are recommended for this disease, but due to the inappropriate seed treatment methods used by farmers, not every seed gets treated, and the infected seeds escape the fungicidal action. The antimicrobial potential and small size of nanoparticles make them the material of choice, as they can enter each seed and restrict the pathogen inside the seed owing to the larger number of nanoparticles per unit volume of the nanoformulations. Nanoparticles of diverse nature known for their in vitro antimicrobial activity, viz. ZnO, MgO, CuS and AgNPs, were synthesized, surface modified and characterized by traditional methods. They were applied to infected wheat seeds, which were then grown under pot conditions, and the mycelium was tracked in the shoot and leaf regions of the seedlings by microscopic staining techniques. Mixed responses in the inhibition of this internal mycelium were observed. The time and method of application were concluded to be critical and were optimised in the present work. The results implied that field trials are needed to establish the final fate of these pot trials at a commercial level. The success of such field trials could be interpreted as a revolution, replacing high-dose organic fungicides with high residue behaviour.
Keywords: metal nanoparticles, nanopriming, seed borne pathogen, Ustilago segetum tritici
Procedia PDF Downloads 144
1503 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2
Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle
Abstract:
With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that utilizes the WRF (Weather Research and Forecasting) model, including a Wind Farm Parametrization, to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and both wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to focusing on atmospheric interactions, this study delves into the wake effects among the turbines in the farm. A range of meteorological parameters were also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow for a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing a range of information regarding the environmental and socio-economic impacts of offshore wind energy projects.
Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis
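A minimal sketch of the single-wake velocity deficit behind one turbine using the Jensen (Park) model, offered only as a deliberately simpler stand-in for the WRF wind farm parameterization used in the study; the rotor diameter, thrust coefficient, wake decay constant and wind speed are illustrative.

```python
import numpy as np

def jensen_deficit(x, rotor_diameter=150.0, ct=0.8, k=0.05):
    """Fractional wind-speed deficit a distance x (m) downstream of the rotor."""
    a = 1.0 - np.sqrt(1.0 - ct)                       # initial deficit from the thrust coefficient
    return a / (1.0 + 2.0 * k * x / rotor_diameter) ** 2

u_free = 9.0                                          # free-stream wind speed, m/s
for x in (500.0, 1000.0, 2000.0):                     # typical inter-turbine spacings
    u_wake = u_free * (1.0 - jensen_deficit(x))
    print(f"{x:>6.0f} m downstream: {u_wake:.2f} m/s")
```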
Procedia PDF Downloads 72
1502 Arthroscopic Fixation of Posterior Cruciate Ligament Avulsion Fracture through Posterior Trans Septal Portal Using Button Fixation Device: Mini Tight Rope
Authors: Ratnakar Rao, Subair Khan, Hari Haran
Abstract:
Posterior cruciate ligament (PCL) avulsion fracture is a rare condition that is commonly mismanaged. Surgical reattachment has been shown to produce better results than conservative management. Only a few techniques for arthroscopic fixation of PCL avulsion fractures have been reported, and they are complex. We describe a new technique for fixation of the PCL avulsion fracture through a posterior trans-septal portal using a button fixation device (Mini Tight Rope). Eighteen patients with an isolated posterior cruciate ligament avulsion fracture were operated on under arthroscopy. Standard anteromedial and anterolateral portals were made, additional posteromedial and posterolateral portals were created, and a trans-septal portal was established. The avulsion fracture was identified, elevated, and prepared. Reduction was achieved using a PCL tibial guide (Arthrex), and fixation was achieved using the Mini Tight Rope, Arthrex (2 buttons with a suture). Reduction was confirmed using a probe and an image intensifier. Postoperative assessment was made clinically and radiologically. Fifteen patients had good to excellent results with no posterior sag or instability, and the range of motion was normal. No complications were recorded peroperatively. Two patients had comminution of the fragment while drilling; one was managed by a suturing technique, and in the second patient a PCL reconstruction was done. One patient had persistent instability with a poor outcome. Establishing the trans-septal portal helps in better visualization of the posterior compartment of the knee, allows assessment of the bony fragment and preparation of the bone bed, and protects the posterior neurovascular structures from injury. Fixation using the button with suture (Mini Tight Rope) is stable and easily reproducible for PCL avulsion fractures with a single large fragment.Keywords: PCL avulsion, arthroscopy, transeptal, minitight rope technique
Procedia PDF Downloads 258
1501 Psychological Stressors Caused by Urban Expansion in Algeria
Authors: Laid Fekih
Abstract:
Background: The purpose of this paper is to examine the psychological stressors caused by urbanization, based on a field study conducted on a sample of young people living in urban areas. Some reside in areas with green surroundings, while others live in areas lacking green space that have seen severe urban expansion. The study examined the impact of urbanization on the mental health of young people, identified the psychological problems most commonly caused by urbanization, and assessed the role of green spaces in alleviating stress. Method: The method used in this research is descriptive; data collected from a sample of 160 young men were analyzed. The tool used is the psychological distress test. Statistical techniques including percentages, analysis of variance, and t-tests were applied. Results: The findings of this research were: (i) The psychological stressors caused by urban expansion are mainly intensity of stress, incompetence, and emotional and psychosomatic problems. (ii) There was a statistically significant difference at the 0.02 level of significance between young people who live in places with green spaces and those who live in places without green space in terms of psychological stressors, in favor of young people who live in places free of greenery. (iii) The effect of the housing variable (rental or ownership) is statistically significant, in favor of young people living in rented accommodation. Conclusion: The green spaces provided by Tlemcen city are inadequate and insufficient to fulfill the population's requirements for contact with nature, leading to effects that may negatively affect mental health; addressing this is a prominent process that should not be neglected. Incorporating green spaces into the design of buildings, homes, and communities to create shared spaces that facilitate interaction and foster well-being becomes the main purpose. We think this approach can support the reconstruction of the built environment with green spaces by linking psychological stress perception studies with design technologies.Keywords: psychological stressors, urbanization, psychological problems, green spaces
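As a minimal sketch of the group comparison the abstract describes, the Python snippet below runs a two-sample t-test on distress scores for residents near green spaces versus residents of areas without greenery. The scores, group sizes, and means are synthetic placeholders, not the study's data, and the use of Welch's t-test is an assumption.

```python
import numpy as np
from scipy import stats

# Synthetic placeholder distress scores for the two housing environments
rng = np.random.default_rng(0)
green = rng.normal(loc=48, scale=10, size=80)      # hypothetical: near green spaces
no_green = rng.normal(loc=55, scale=10, size=80)   # hypothetical: no green spaces

# Welch's two-sample t-test (does not assume equal variances)
t_stat, p_value = stats.ttest_ind(no_green, green, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below the chosen significance level (the study reports 0.02)
# would indicate a statistically significant difference between the groups.
```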
Procedia PDF Downloads 82
1500 Tectonic Complexity: Out-of-Sequence Thrusting in the Higher Himalaya of Jhakri-Sarahan region, Himachal Pradesh, India
Authors: Rajkumar Ghosh
Abstract:
The study focuses on the tectonics of out-of-sequence thrusting (OOST) in the NW region of the Himalaya, particularly in Himachal Pradesh. The research aims to identify the features and nature of OOST in the field, along with the associated rock types and lithological boundaries, in the NW Himalaya, Himachal Pradesh, India. The research employs fieldwork and micro-structural observations, correlations, and analyses to identify and analyze the OOST features and associated rock types. The study reveals the presence of three OOSTs, namely the Jhakri Thrust (JT), Sarahan Thrust (ST), and Chaura Thrust (CT), which consist of several branches, some of which are still active. The thrust system exhibits varying internal geometry, including box folds, boudins, scar folds, crenulation cleavages, kink folds, and tension gashes. The CT, which is concealed beneath the Jutogh Thrust sheet, represents a steepened downward thrust, while the JT dips to the west and verges south-westward. The research provides crucial information on the tectonics of OOST in the NW region of the Himalaya, particularly in Himachal Pradesh, which is important for understanding the regional geological evolution and associated hazards. The data were collected through fieldwork and micro-structural observations, correlations, and analyses of rock samples, and were analyzed using tectonic and geochronological techniques to identify the nature and characteristics of OOST. The research addressed the question of identifying Higher Himalayan OOST in the field in NW Himalaya, Himachal Pradesh, India, and the associated rock types and lithological boundaries. The study concludes that documentation is minimal and that suitable rock exposure is lacking, making it difficult to generalize the features of OOST in the field in the NW Higher Himalaya, Himachal Pradesh. The study recommends more extensive mapping and fieldwork to improve understanding of OOST in the region.Keywords: out-of-sequence thrust (OOST), main central thrust (MCT), jhakri thrust (JT), sarahan thrust (ST), chaura thrust (CT), higher himalaya (HH)
Procedia PDF Downloads 91
1499 Location Uncertainty – A Probabilistic Solution for Automatic Train Control
Authors: Monish Sengupta, Benjamin Heydecker, Daniel Woodland
Abstract:
New train control systems rely mainly on Automatic Train Protection (ATP) and Automatic Train Operation (ATO) to control speed, and hence performance, dynamically. The ATP and the ATO form the vital elements within the CBTC (Communication Based Train Control) and ERTMS (European Rail Traffic Management System) system architectures. Reliable and accurate measurement of train location, speed, and acceleration is vital to the operation of train control systems. In the past, all CBTC and ERTMS systems have deployed balises or equivalent devices to correct the uncertainty in the train location. Typically, a CBTC train is allowed to miss only one balise on the track, after which the Automatic Train Protection (ATP) system applies the emergency brake to halt the service. This is because the location uncertainty, which grows within the train control system, cannot tolerate missing more than one balise. Balises contribute a significant amount to wayside maintenance, and studies have shown that balises on the track also constrain future changes to track layout and speed profile. This paper investigates the causes of the location uncertainty that is currently experienced and considers whether it is possible to identify an effective filter to ascertain, in conjunction with appropriate sensors, more accurate speed, distance, and location for a CBTC-driven train without the need for any external balises. An appropriate sensor fusion algorithm and intelligent sensor selection methodology will be deployed to ascertain the railway location and speed measurement at the highest precision. Similar techniques are already in use in aviation, satellite, submarine, and other navigation systems. Developing a model for speed control and the use of a Kalman filter is a key element in this research. This paper summarizes the research undertaken and its significant findings, highlighting the potential for introducing alternative approaches to train positioning that would enable the removal of all trackside location-correction balises, leading to a large reduction in maintenance and more flexibility in future track design.Keywords: ERTMS, CBTC, ATP, ATO
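To make the Kalman-filter idea concrete, the following is a minimal Python sketch of a one-dimensional filter that fuses a noisy position measurement with an acceleration input to estimate train position and speed along the track. It is an illustration under assumed noise parameters and sensor models, not the paper's filter or sensor-fusion design.

```python
import numpy as np

dt = 0.1                                   # time step, s (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for [position, speed]
B = np.array([[0.5 * dt**2], [dt]])        # control model for measured acceleration
H = np.array([[1.0, 0.0]])                 # the sensor measures position only
Q = np.diag([0.01, 0.1])                   # process noise covariance (assumed)
R = np.array([[4.0]])                      # measurement noise covariance (assumed)

x = np.zeros((2, 1))                       # initial state [position; speed]
P = np.eye(2)                              # initial state covariance

def kalman_step(x, P, accel, z):
    """One predict-update cycle given acceleration input and position measurement z."""
    x_pred = F @ x + B * accel             # predict with acceleration as control input
    P_pred = F @ P @ F.T + Q
    y = np.array([[z]]) - H @ x_pred       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Example: constant 0.5 m/s^2 acceleration with noisy position readings
rng = np.random.default_rng(1)
for k in range(50):
    true_pos = 0.5 * 0.5 * ((k + 1) * dt) ** 2
    z = true_pos + rng.normal(scale=2.0)
    x, P = kalman_step(x, P, 0.5, z)
print("estimated position, speed:", x.ravel())
```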
Procedia PDF Downloads 410
1498 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model
Authors: Gholba Niranjan Dilip, Anil Kumar
Abstract:
Accurate and reliable mapping of oil palm plantations and a census of individual palm trees is a huge challenge. This study addresses the challenge by developing an optimized solution that implements deep learning techniques on remote sensing data. The oil palm is a very important tropical crop; to improve its productivity and land management, it is imperative to have an accurate census over large areas. Since manual census is costly and prone to approximation, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat, and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected. A customized set of training data and ground-truth data was created for this study from Cartosat-2 images. The pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 Convolutional Neural Network (CNN) model from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model is able to generate bounding boxes for each oil palm and to count the palms with good accuracy on the panchromatic images. The detection yielded an F-score of 83.16% on seven different images. The detections are buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provided the area under the plantations and also gave maps of their location, thereby completing the automated census with a fairly high accuracy (≈100%). The trained CNN was found competent enough to detect oil palm crowns in images obtained from multiple satellite sensors and of varying temporal vintage. It helped to estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite images can successfully be used to undertake a census of oil palm plantations using CNNs.Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector
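The buffer-and-dissolve step described above can be sketched with standard geometry tools. The Python snippet below uses shapely to merge per-tree detection boxes into plantation polygons and report tree count and area; the coordinates, buffer distance, and units are hypothetical placeholders, not the study's detections or processing chain.

```python
from shapely.geometry import box
from shapely.ops import unary_union

# Hypothetical detection boxes (xmin, ymin, xmax, ymax) in projected map units
detections = [
    (10, 10, 18, 18),
    (20, 12, 28, 20),
    (31, 14, 39, 22),
    (120, 80, 128, 88),   # an isolated tree far from the main block
]

buffer_dist = 6.0  # assumed crown-to-crown merge distance

# Buffer each bounding box, then dissolve overlapping buffers into plantation polygons
buffered = [box(*b).buffer(buffer_dist) for b in detections]
plantations = unary_union(buffered)

tree_count = len(detections)
total_area = plantations.area
n_blocks = len(plantations.geoms) if plantations.geom_type == "MultiPolygon" else 1
print(f"trees: {tree_count}, plantation blocks: {n_blocks}, area: {total_area:.1f}")
```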
Procedia PDF Downloads 160
1497 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data
Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali
Abstract:
The research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve the spatial and temporal resolution of ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), and include soil and environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into train, test, and validation sets in three different proportions, while kernel functions with tuned hyperparameters are used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques for determining evapotranspiration, the data collection and preprocessing, the model construction, and the evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships among soil and environmental parameters provides insight into its potential applications for water resource management and hydrological ecosystems.Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors
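As a minimal sketch of the workflow the abstract describes, the Python snippet below trains an SVM regressor on sensor-style features with kernel hyperparameter tuning and reports R2, RMSE, and MAE. The feature matrix and target are synthetic placeholders, and the specific split ratio, parameter grid, and scikit-learn pipeline are assumptions rather than the paper's implementation (the paper cites KNIME and RStudio).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# Synthetic placeholder data: one year of daily features (Rs, T, P, RH, u2, ...)
rng = np.random.default_rng(42)
X = rng.random((365, 9))
y = 2.0 + 3.0 * X[:, 0] + 1.5 * X[:, 4] + rng.normal(scale=0.2, size=365)  # ET, mm/day

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Standardize features, then tune kernel and regularization hyperparameters
model = make_pipeline(StandardScaler(), SVR())
param_grid = {
    "svr__kernel": ["rbf", "poly", "linear"],
    "svr__C": [1, 10, 100],
    "svr__epsilon": [0.01, 0.1],
}
search = GridSearchCV(model, param_grid, cv=5, scoring="neg_root_mean_squared_error")
search.fit(X_train, y_train)

y_pred = search.predict(X_test)
rmse = float(np.sqrt(mean_squared_error(y_test, y_pred)))
print("best params:", search.best_params_)
print(f"R2={r2_score(y_test, y_pred):.3f}, RMSE={rmse:.3f}, "
      f"MAE={mean_absolute_error(y_test, y_pred):.3f}")
```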
Procedia PDF Downloads 69
1496 Rheological Properties and Thermal Performance of Suspensions of Microcapsules Containing Phase Change Materials
Authors: Vinh Duy Cao, Carlos Salas-Bringas, Anna M. Szczotok, Marianne Hiorth, Anna-Lena Kjøniksen
Abstract:
The increasing cost of energy supply for heating and cooling creates a demand for more energy-efficient buildings. Improved construction techniques and enhanced material technology can greatly reduce the energy consumption of buildings. Microencapsulated phase change material (MPCM) suspensions utilized as heat transfer fluids for energy storage and heat transfer applications are a promising potential solution. A full understanding of the flow and thermal characteristics of microcapsule suspensions is needed to optimize the design of energy storage systems in order to reduce capital cost, system size, and energy consumption. The MPCM suspensions exhibited pseudoplastic and thixotropic behaviour and significantly improved thermal performance. Three different models were used to characterize the thixotropic behaviour of the MPCM suspensions: the second-order structural kinetic model was found to give a better fit to the experimental data than the Weltman and Figoni-Shoemaker models. For all samples, the initial shear stress increased, and the breakdown rate accelerated significantly with increasing concentration. The thermal performance and rheological properties, especially the selection of rheological models, will be useful for developing applications of microcapsules as heat transfer fluids in thermal energy storage systems, such as calculating an optimum MPCM concentration, pumping power requirements, and specific power consumption. The effect of temperature on the shear-thinning properties of the samples suggests that some of the phase change material is located outside the capsules and contributes to agglomeration of the samples.Keywords: latent heat, microencapsulated phase change materials, pseudoplastic, suspension, thixotropic behaviour
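To illustrate the model-comparison step, the Python sketch below fits three candidate thixotropic decay models to stress-versus-time data at constant shear rate and compares residuals. The functional forms are the formulations commonly cited in the literature (Weltman logarithmic decay, Figoni-Shoemaker exponential decay, and a second-order structural kinetic form), stated here as assumptions rather than taken from the paper, and the data points are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic placeholder data: shear stress decay at constant shear rate
t = np.linspace(1, 600, 60)                         # time, s
rng = np.random.default_rng(3)
stress = 40 + 60 / (1 + 0.02 * t) + rng.normal(0, 1, t.size)

def weltman(t, A, B):
    # Weltman model (assumed form): stress = A + B * ln(t)
    return A + B * np.log(t)

def figoni_shoemaker(t, s_eq, s0, k):
    # Figoni-Shoemaker model (assumed form): first-order decay to equilibrium stress
    return s_eq + (s0 - s_eq) * np.exp(-k * t)

def structural_kinetic_2nd(t, s_eq, s0, k):
    # Second-order structural kinetic form (assumed): decay proportional to 1/(1 + k*t)
    return s_eq + (s0 - s_eq) / (1 + k * t)

for name, model, p0 in [
    ("Weltman", weltman, (100, -10)),
    ("Figoni-Shoemaker", figoni_shoemaker, (40, 100, 0.01)),
    ("2nd-order structural kinetic", structural_kinetic_2nd, (40, 100, 0.01)),
]:
    params, _ = curve_fit(model, t, stress, p0=p0, maxfev=10000)
    rss = np.sum((stress - model(t, *params)) ** 2)   # residual sum of squares
    print(f"{name}: params={np.round(params, 3)}, RSS={rss:.1f}")
```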
Procedia PDF Downloads 266