Search results for: real time pest tracking
17776 Trauma in the Unconsoled: A Crisis of the Self
Authors: Assil Ghariri
Abstract:
This article studies the process of rewriting the self through memory in Kazuo Ishiguro’s novel The Unconsoled (1995). It deals with the journey the protagonist, Mr. Ryder, takes through the unconscious in search of his real self, a journey in which trauma stands as an obstacle. The article uses Carl Jung’s theory of archetypes. Trauma is discussed here as one of the true obstacles of the unconscious that prevent people from realizing the truth about themselves.
Keywords: Carl Jung, Kazuo Ishiguro, memory, trauma
Procedia PDF Downloads 403
17775 Effect of Synthesis Parameters on Crystal Size and Perfection of Mordenite and Analcime
Authors: Zehui Du, Chaiwat Prapainainar, Paisan Kongkachuichay, Paweena Prapainainar
Abstract:
The aim of this work was to obtain small crystal size and high crystallinity in mordenites and analcimes by modifying the aging time, agitation, water content, crystallization temperature, and crystallization time. Two different hydrothermal methods were studied. Both used Na2SiO3 as the silica source, NaAlO2 as the aluminum source, and NaOH as the alkali source. The first method used HMI as the template, while the second did not. Mordenite crystals with spherical shape and a bimodal size distribution of about 1 and 5 µm were obtained by the first method under conditions of 24 hr aging, 170 °C, and 24 hr crystallization. Mordenites with high crystallinity formed when agitation was applied during the crystallization process. It was also found that aging times of 2 hr and 24 hr did not much affect the formation of mordenite crystals. Analcime crystals formed in spherical shape with faceted surfaces and sizes between 13-15 µm by the second method, using 30 minutes aging, 170 °C, and 24 hr crystallization without calcination. Increasing the water content slowed the crystallization process and resulted in smaller analcime crystals. Larger analcime crystals were observed when the samples were calcined at 300 °C and 580 °C; higher calcination temperature led to greater crystal growth and hence larger crystal size. Finally, mordenite and analcime were used as fillers in zeolite/Nafion composite membranes to address the fuel crossover problem in direct alcohol fuel cells.
Keywords: analcime, hydrothermal synthesis, mordenite, zeolite
Procedia PDF Downloads 264
17774 Characterization of Nano Coefficient of Friction through LFM of Superhydrophobic/Oleophobic Coatings Applied on 316L SS
Authors: Hamza Shams, Sajid Saleem, Bilal A. Siddiqui
Abstract:
This paper investigates the nano-level coefficient of friction of commercially available superhydrophobic/oleophobic coatings applied to 316L SS. 316L stainless steel, or marine stainless steel, was selected for its widespread use in structural, marine, and biomedical applications. The coatings were investigated in harsh sand-storm and seawater environments. The particle size of the sand was carefully selected to simulate sand-storm conditions, and the sand speed was modulated to simulate actual wind speed during a sand-storm. Sample preparation followed the methodology prescribed by the coating manufacturer. The coating’s adhesion and thickness were verified before and after the experiment using Scanning Electron Microscopy (SEM). The nano-level coefficient of friction was determined using Lateral Force Microscopy (LFM). The analysis formulates a friction coefficient value, which in turn indicates the amount of wear the coating can bear before the base substrate is exposed to the harsh environment. The analysis aims to validate the coefficient of friction value as marketed by the coating manufacturers and, more importantly, to test the coating in real-life applications to justify its use. The coating is expected to resist exposure to the harsh environment for a considerable time and to prevent the sample from corroding in the process.
Keywords: 316L SS, scanning electron microscopy, lateral force microscopy, marine stainless steel, oleophobic coating, superhydrophobic coating
Procedia PDF Downloads 487
17773 Effects of Residence Time on Selective Absorption of Hydrogen Sulphide
Authors: Dara Satyadileep, Abdallah S. Berrouk
Abstract:
Selective absorption of hydrogen sulphide (H2S) using methyldiethanolamine (MDEA) has become a point of interest as a means of minimizing the capital and operating costs of gas sweetening plants. This paper discusses the importance of optimum design of column internals to best achieve H2S selectivity using MDEA. To this end, a kinetics-based process simulation model has been developed for a commercial gas sweetening unit. Trends of sweet gas H2S and CO2 contents as a function of fractional active area (and hence residence time) are explained through analysis of the interdependent heat and mass transfer phenomena. Guidelines for column internals design to achieve the desired degree of H2S selectivity are provided. Also, the effectiveness of various operating conditions in achieving H2S selectivity for an industrial absorber with fixed internals is investigated.
Keywords: gas sweetening, H2S selectivity, methyldiethanol amine, process simulation, residence time
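To make the selectivity objective concrete, one common way to quantify H2S selectivity is the ratio of the H2S fraction absorbed to the CO2 fraction absorbed. This is a generic illustration with hypothetical stream values, not the paper's kinetics-based simulation model:

```python
def h2s_selectivity(h2s_absorbed, co2_absorbed, h2s_feed, co2_feed):
    """One common H2S-over-CO2 selectivity measure: the fraction of
    H2S removed from the feed divided by the fraction of CO2 removed.
    Values > 1 mean the solvent preferentially absorbs H2S."""
    return (h2s_absorbed / h2s_feed) / (co2_absorbed / co2_feed)

# Hypothetical absorber: 90% of the H2S but only 20% of the CO2 is absorbed.
selectivity = h2s_selectivity(h2s_absorbed=9.0, co2_absorbed=2.0,
                              h2s_feed=10.0, co2_feed=10.0)
```

A shorter residence time generally favors the fast-reacting H2S over the slower CO2 absorption, which is why this ratio is sensitive to the fractional active area discussed above.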
Procedia PDF Downloads 344
17772 A Data-Driven Optimal Control Model for the Dynamics of Monkeypox in a Variable Population with a Comprehensive Cost-Effectiveness Analysis
Authors: Martins Onyekwelu Onuorah, Jnr Dahiru Usman
Abstract:
Introduction: In the realm of public health, the threat posed by Monkeypox continues to elicit concern, prompting rigorous studies to understand its dynamics and devise effective containment strategies. Particularly significant is its recurrence in variable populations, such as the observed outbreak in Nigeria in 2022. In light of this, our study undertakes a meticulous analysis, employing a data-driven approach to explore, validate, and propose optimized intervention strategies tailored to the distinct dynamics of Monkeypox within varying demographic structures. Utilizing a deterministic mathematical model, we delved into the intricate dynamics of Monkeypox, with a particular focus on a variable population context. Our qualitative analysis provided insights into the disease-free equilibrium, revealing its stability when R0 is less than one and discounting the possibility of backward bifurcation, as substantiated by the presence of a single stable endemic equilibrium. The model was rigorously validated using real-time data from the Nigerian 2022 recorded cases for Epi weeks 1 – 52. Transitioning from qualitative to quantitative, we augmented our deterministic model with optimal control, introducing three time-dependent interventions to scrutinize their efficacy and influence on the epidemic's trajectory. Numerical simulations unveiled a pronounced impact of the interventions, offering a data-supported blueprint for informed decision-making in containing the disease. A comprehensive cost-effectiveness analysis employing the Infection Averted Ratio (IAR), Average Cost-Effectiveness Ratio (ACER), and Incremental Cost-Effectiveness Ratio (ICER) facilitated a balanced evaluation of the interventions’ economic and health impacts. In essence, our study epitomizes a holistic approach to understanding and mitigating Monkeypox, intertwining rigorous mathematical modeling, empirical validation, and economic evaluation. 
The insights derived not only bolster our comprehension of Monkeypox's intricate dynamics but also unveil optimized, cost-effective interventions. This integration of methodologies and findings underscores a pivotal stride towards aligning public health imperatives with economic sustainability, marking a significant contribution to global efforts in combating infectious diseases.
Keywords: monkeypox, equilibrium states, stability, bifurcation, optimal control, cost-effectiveness
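The three cost-effectiveness ratios named above have standard textbook forms; a minimal sketch with hypothetical costs and infection counts (not values from the study) is:

```python
def iar(infections_averted, recovered):
    """Infection Averted Ratio: infections averted per recovered individual."""
    return infections_averted / recovered

def acer(total_cost, infections_averted):
    """Average Cost-Effectiveness Ratio: cost per infection averted
    for a single strategy against doing nothing."""
    return total_cost / infections_averted

def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental Cost-Effectiveness Ratio of strategy B over A:
    extra cost incurred per extra infection averted."""
    return (cost_b - cost_a) / (effect_b - effect_a)

# Hypothetical comparison: strategy B costs 80 more but averts 20 more infections.
extra_cost_per_averted = icer(cost_a=100.0, effect_a=10.0,
                              cost_b=180.0, effect_b=30.0)
```

The ICER is the usual basis for ranking interventions: strategies are ordered by effectiveness and each is compared incrementally against the next-best non-dominated alternative.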
Procedia PDF Downloads 86
17771 Changes in Textural Properties of Zucchini Slices with Deep-Fat-Frying
Authors: E. Karacabey, Ş. G. Özçelik, M. S. Turan, C. Baltacıoğlu, E. Küçüköner
Abstract:
Changes in the textural properties of zucchini slices under different frying conditions were investigated. The process variables of interest were frying time, frying temperature, and slice thickness. Slice thickness was studied at three levels (2, 3, and 4 mm). Frying was performed at two temperatures (160 and 180 °C), each for six different process times (1, 2, 3, 5, 8, and 10 min). Sunflower oil was used as the frying oil. Before frying, the zucchini slices were thermally processed in boiling water for 90 seconds to inactivate at least 80% of the plant’s enzymes. After this thermal process, the slices were fried in an industrial fryer at the specified temperature-time pairs. Fried slices were subjected to textural profile analysis (TPA) to determine textural properties; in this context, hardness, elasticity, cohesion, chewiness, and firmness values were determined. Statistical analysis indicated significant variations in the studied textural properties with process conditions (p < 0.05). Hardness and firmness were also determined for fresh and thermally processed zucchini slices for comparison; differences in hardness and firmness among fresh, thermally processed, and fried slices were significant (p < 0.05). This project (113R015) has been supported by TUBITAK.
Keywords: sunflower oil, hardness, firmness, slice thickness, frying temperature, frying time
Procedia PDF Downloads 444
17770 Geographic Legacies for Modern Day Disease Research: Autism Spectrum Disorder as a Case-Control Study
Authors: Rebecca Richards Steed, James Van Derslice, Ken Smith, Richard Medina, Amanda Bakian
Abstract:
Elucidating gene-environment interactions for heritable disease outcomes is an emerging area of disease research, with genetic studies informing hypotheses for the environment and gene interactions underlying some of the most confounding diseases of our time, like autism spectrum disorder (ASD). Geography has thus far played a key role in identifying environmental factors contributing to disease, but its use can be broadened to include genetic and environmental factors that have a synergistic effect on disease. Through the use of family pedigrees and disease outcomes with life-course residential histories, space-time clustering of generations at critical developmental windows can provide further understanding of (1) the environmental factors that contribute to disease patterns in families, (2) the critical windows of development most impacted by environment, and (3) the exposures most likely to lead to an ASD diagnosis. This paper introduces a retrospective case-control study that utilizes pedigree data, health data, and residential life-course location points to find space-time clustering of ancestors with a grandchild/child who has a clinical diagnosis of ASD. Finding space-time clusters of ancestors at critical developmental windows serves as a proxy for shared environmental exposures. The authors refer to geographic life-course exposures as geographic legacies. Identifying space-time clusters of ancestors creates a bridge for researching exposures of past generations that may impact modern-day progeny health. Results from the space-time cluster analysis show multiple clusters for the maternal and paternal pedigrees. The paternal grandparent pedigree resulted in the most space-time clustering for the birth and childhood developmental windows. No statistically significant clustering was found for the adolescent years. These results will be further studied to identify the specific shared space-time environmental exposures.
In conclusion, this study found significant space-time clusters of parents and grandparents for both maternal and paternal lineages. These results will be used to identify which environmental exposures were shared by family members at critical developmental windows, and additional analysis will be applied.
Keywords: family pedigree, environmental exposure, geographic legacy, medical geography, transgenerational inheritance
Procedia PDF Downloads 116
17769 Integrated Intensity and Spatial Enhancement Technique for Color Images
Authors: Evan W. Krieger, Vijayan K. Asari, Saibabu Arigela
Abstract:
Video imagery captured for real-time security and surveillance applications is typically acquired in complex lighting conditions. These less-than-ideal conditions can result in imagery with underexposed or overexposed regions, and the video is often too low in resolution for certain applications. The purpose of security and surveillance video is to support accurate conclusions based on the images seen; if poor lighting and low resolution occur in the captured video, the ability to draw accurate conclusions from the received information is reduced. We propose a solution to this problem: image preprocessing to improve these images before use in a particular application. The proposed algorithm integrates an intensity enhancement algorithm with a super resolution technique. The intensity enhancement portion consists of a nonlinear inverse sign transformation and an adaptive contrast enhancement. The super resolution portion is a single-image super resolution technique: a Fourier phase feature based method that uses a machine learning approach with kernel regression. The proposed technique intelligently integrates these algorithms to produce a high quality output while being more efficient than their sequential use. This integration is accomplished by performing the proposed algorithm on the intensity image produced from the original color image. After enhancement and super resolution, a color restoration technique is employed to obtain an improved-visibility color image.
Keywords: dynamic range compression, multi-level Fourier features, nonlinear enhancement, super resolution
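As a rough stand-in for intensity-domain preprocessing (the authors' method uses a nonlinear inverse sign transformation and adaptive contrast enhancement, which differ from this toy), a power-law lift of dark pixels followed by a linear contrast stretch illustrates the general idea of enhancing the intensity channel before color restoration:

```python
def enhance_intensity(pixels, gamma=0.6):
    """Toy intensity enhancement for a list of normalized pixel
    intensities in [0, 1]: a power-law curve (gamma < 1) lifts
    underexposed pixels more than bright ones, then a linear
    stretch restores the full [0, 1] dynamic range."""
    lifted = [min(max(p, 0.0), 1.0) ** gamma for p in pixels]
    lo, hi = min(lifted), max(lifted)
    if hi == lo:                      # flat image: nothing to stretch
        return lifted
    return [(v - lo) / (hi - lo) for v in lifted]

out = enhance_intensity([0.0, 0.25, 1.0])
```

In a full pipeline such a transform would be applied to the luminance channel only, so the subsequent color restoration step can rebuild the chrominance around the enhanced intensities.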
Procedia PDF Downloads 554
17768 Development and Evaluation of Gastro Retentive Floating Tablets of Ayurvedic Vati Formulation
Authors: Imran Khan Pathan, Anil Bhandari, Peeyush K. Sharma, Rakesh K. Patel, Suresh Purohit
Abstract:
Floating tablets of Marichyadi Vati were developed with the aim of prolonging its gastric residence time and increasing the bioavailability of the drug. Rapid gastrointestinal transit can result in incomplete drug release from the delivery system above the absorption zone, leading to diminished efficacy of the administered dose. The tablets were prepared by wet granulation, using HPMC E50 LV as the matrix-forming agent, Carbopol as a floating enhancer, microcrystalline cellulose as binder, and sodium bicarbonate as the effervescent agent, together with other excipients. A simplex lattice design was used to select the variables for tablet formulation, and the formulation was optimized on the basis of floating time and in vitro drug release. The results showed a floating lag time of 61 seconds for the optimized formulation, with about 97.32% of the total drug released within 3 hours. The in vitro drug release profile was best described by zero-order kinetics, with the highest linearity (r² = 0.9943). It was concluded that a gastroretentive drug delivery system can be developed for Marichyadi Vati containing piperine to increase the residence time of the drug in the stomach and thereby increase bioavailability.
Keywords: piperine, Marichyadi Vati, gastroretentive drug delivery, floating tablet
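The zero-order linearity reported above (r² = 0.9943) comes from fitting a straight line, Q(t) = k0·t + c, through cumulative release data. The data below are hypothetical, chosen only to illustrate the least-squares calculation, not the study's measurements:

```python
# Hypothetical cumulative-release data: % drug released vs. time (min).
t = [0, 30, 60, 90, 120, 150, 180]
q = [0.0, 16.5, 32.8, 48.9, 65.2, 81.0, 97.3]

n = len(t)
t_mean = sum(t) / n
q_mean = sum(q) / n

# Ordinary least-squares fit of the zero-order model Q(t) = k0*t + c.
k0 = sum((ti - t_mean) * (qi - q_mean) for ti, qi in zip(t, q)) \
     / sum((ti - t_mean) ** 2 for ti in t)
c = q_mean - k0 * t_mean

# Coefficient of determination r^2 quantifies the linearity of the fit.
ss_res = sum((qi - (k0 * ti + c)) ** 2 for ti, qi in zip(t, q))
ss_tot = sum((qi - q_mean) ** 2 for qi in q)
r2 = 1 - ss_res / ss_tot
```

An r² close to 1 on such a fit is what justifies calling the release profile zero-order, i.e., a constant release rate k0 independent of the remaining drug load.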
Procedia PDF Downloads 457
17767 Attenuation Scale Calibration of an Optical Time Domain Reflectometer
Authors: Osama Terra, Hatem Hussein
Abstract:
Calibration of an Optical Time Domain Reflectometer (OTDR) is crucial for the accurate determination of the loss budget of long optical fiber links. In this paper, the calibration of the attenuation scale of an OTDR using two different techniques is discussed and implemented. The first technique is the external modulation (EM) method; a setup is proposed to calibrate an OTDR over a dynamic range of around 15 dB based on it. Afterwards, the OTDR is calibrated using two standard reference fibers (SRF). Both SRFs are calibrated using the cut-back technique: one at our home institute (the National Institute of Standards, NIS) and the other at the National Physical Laboratory (NPL) of the United Kingdom, to confirm our results. In addition, the parameters contributing to the calibration uncertainty are thoroughly investigated. Although the EM method has several advantages over the SRF method, the uncertainties in the SRF method are found to surpass those of the EM method.
Keywords: optical time domain reflectometer, fiber attenuation measurement, OTDR calibration, external source method
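For context on what the attenuation scale measures: an OTDR converts backscattered power into displayed one-way loss using a factor of 5 rather than the usual 10, because the backscattered light traverses the fiber section twice. A minimal sketch of that conversion, with hypothetical power readings (not from the paper's setup):

```python
import math

def otdr_loss_db(p_near, p_far):
    """One-way fiber loss, in dB, between two backscatter power
    levels read off an OTDR trace. The factor 5 (instead of 10)
    accounts for the round trip of the backscattered light."""
    return 5.0 * math.log10(p_near / p_far)

# Hypothetical trace points: backscatter power drops by a factor
# of 100 over the section, corresponding to 10 dB one-way loss.
loss = otdr_loss_db(100.0, 1.0)
```

Calibrating the attenuation scale amounts to verifying that this displayed dB-per-division reading matches a reference loss, here realized either by external modulation or by a cut-back-calibrated reference fiber.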
Procedia PDF Downloads 465
17766 Fast Short-Term Electrical Load Forecasting under High Meteorological Variability with a Multiple Equation Time Series Approach
Authors: Charline David, Alexandre Blondin Massé, Arnaud Zinflou
Abstract:
In 2016, Clements, Hurn, and Li proposed a multiple equation time series approach for short-term load forecasting, reporting an average mean absolute percentage error (MAPE) of 1.36% on an 11-year dataset for the Queensland region in Australia. We present an adaptation of their model to the electrical power load consumption of the whole Quebec province in Canada. More precisely, we take into account two additional meteorological variables, cloudiness and wind speed, on top of temperature, as well as multiple meteorological measurements taken at different locations on the territory, and we consider other minor improvements. Our final model shows an average MAPE score of 1.79% over an 8-year dataset.
Keywords: short-term load forecasting, special days, time series, multiple equations, parallelization, clustering
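The MAPE scores quoted above are the mean of absolute percentage deviations between actual and forecast load; a minimal sketch with made-up load values:

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent: the score the
    load-forecasting models above are compared on. Assumes no
    actual value is zero."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) \
           / len(actual)

# Made-up hourly loads (MW): forecasts off by 1% on each point.
score = mape([100.0, 200.0], [99.0, 202.0])
```

Because errors are normalized by the actual value, MAPE lets models be compared across regions with very different absolute load levels, such as Queensland and Quebec.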
Procedia PDF Downloads 103
17765 Correlation between Polysaccharides Molecular Weight Changes and Pectinases Gene Expression during Papaya Ripening
Authors: Samira B. R. Prado, Paulo R. Melfi, Beatriz T. Minguzzi, João P. Fabi
Abstract:
Fruit softening is the main change that occurs during papaya (Carica papaya L.) ripening. It is characterized by the depolymerization of cell wall polysaccharides, especially the pectic fractions, which causes cell wall disassembly. However, it is uncertain how modification of the two main pectin polysaccharide fractions (the water-soluble fraction, WSF, and the oxalate-soluble fraction, OSF) accounts for fruit softening. The aim of this work was to correlate molecular weight changes of the WSF and OSF with the gene expression of pectin-solubilizing enzymes (pectinases) during papaya ripening. Papaya fruits obtained from a producer were harvested and stored under specific conditions, then divided into five groups according to days after harvest. Cell walls from all groups of papaya pulp were isolated and fractionated (WSF and OSF). Expression profiles of pectinase genes were obtained according to the MIQE guidelines (Minimum Information for publication of Quantitative real-time PCR Experiments). The results showed an increased yield and a decreased molecular weight throughout ripening for both the WSF and OSF. The gene expression data support that papaya softening is achieved by polygalacturonase (PG) up-regulation, whose action might have been facilitated by the constant action of pectinesterases (PMEs). Moreover, the BGAL1 gene was up-regulated during ripening with simultaneous galactose release, suggesting that galactosidases (GALs) could also account for pulp softening. The data suggest that solubilization of galacturonans and depolymerization of cell wall components were caused mainly by the action of PGs and GALs.
Keywords: carica papaya, fruit ripening, galactosidases, plant cell wall, polygalacturonases
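Relative expression in MIQE-compliant qPCR studies is commonly reported via the 2^(-ΔΔCt) method; whether this exact method was used here is not stated, so the sketch below is generic and uses hypothetical Ct values:

```python
def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    """Relative expression by the 2^(-ddCt) method for qPCR data:
    the target gene's Ct is normalized to a reference gene in each
    condition, and the two normalized values are compared."""
    ddct = ((ct_target_treated - ct_ref_treated)
            - (ct_target_control - ct_ref_control))
    return 2.0 ** (-ddct)

# Hypothetical Ct values: after normalization, the pectinase gene
# amplifies 3 cycles earlier in ripe fruit than in unripe fruit.
change = fold_change(22.0, 20.0, 25.0, 20.0)
```

A three-cycle shift corresponds to a 2³ = 8-fold up-regulation, the kind of result that would support the PG and BGAL1 up-regulation described above.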
Procedia PDF Downloads 423
17764 Assessing Project Performance through Work Sampling and Earned Value Analysis
Authors: Shobha Ramalingam
Abstract:
The majority of the infrastructure projects are affected by time overrun, resulting in project delays and subsequently cost overruns. Time overrun may vary from a few months to as high as five or more years, placing the project viability at risk. One of the probable reasons noted in the literature for this outcome in projects is due to poor productivity. Researchers contend that productivity in construction has only marginally increased over the years. While studies in the literature have extensively focused on time and cost parameters in projects, there are limited studies that integrate time and cost with productivity to assess project performance. To this end, a study was conducted to understand the project delay factors concerning cost, time and productivity. A case-study approach was adopted to collect rich data from a nuclear power plant project site for two months through observation, interviews and document review. The data were analyzed using three different approaches for a comprehensive understanding. Foremost, a root-cause analysis was performed on the data using Ishikawa’s fish-bone diagram technique to identify the various factors impacting the delay concerning time. Based on it, a questionnaire was designed and circulated to concerned executives, including project engineers and contractors to determine the frequency of occurrence of the delay, which was then compiled and presented to the management for a possible solution to mitigate. Second, a productivity analysis was performed on select activities, including rebar bending and concreting through a time-motion study to analyze product performance. Third, data on cost of construction for three years allowed analyzing the cost performance using earned value management technique. All three techniques allowed to systematically and comprehensively identify the key factors that deter project performance and productivity loss in the construction of the nuclear power plant project. 
The findings showed that improper planning and coordination between multiple trades, concurrent operations, improper workforce and material management, and fatigue due to overtime were some of the key factors that led to delays and poor productivity. The findings are expected to act as a stepping stone for further research and have implications for practitioners.
Keywords: earned value analysis, time performance, project costs, project delays, construction productivity
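The earned value technique used in the third analysis reduces to a few standard indicators; a sketch with hypothetical monthly figures (not the nuclear power plant project's data):

```python
def earned_value_metrics(pv, ev, ac):
    """Standard earned-value indicators from planned value (PV),
    earned value (EV), and actual cost (AC), all in the same
    currency units for the same reporting period."""
    return {
        "cost_variance": ev - ac,       # negative -> over budget
        "schedule_variance": ev - pv,   # negative -> behind schedule
        "cpi": ev / ac,                 # cost performance index
        "spi": ev / pv,                 # schedule performance index
    }

# Hypothetical month: 100 planned, 90 worth of work earned, 120 spent.
m = earned_value_metrics(pv=100.0, ev=90.0, ac=120.0)
```

A CPI or SPI below 1 flags cost or schedule under-performance respectively, which is how three years of cost data can be condensed into a project-performance trend.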
Procedia PDF Downloads 97
17763 Importance of Road Infrastructure on the People Live in Afghanistan
Authors: Mursal Ibrahim Zada
Abstract:
Since 2001, the new Government of Afghanistan has made the improvement of transportation in rural areas one of the key issues for the development of the country. Since then, about 17,000 km of rural roads have been planned for construction across the country. This thesis assesses the impact of rural road improvement on the development of rural communities and housing facilities. Specifically, this study aims to show that improved roads lead to an improvement in the community, which in turn has a positive effect on the lives of rural people. To obtain this goal, a questionnaire survey was conducted in March 2015 among residents of four different districts of Kabul province, Afghanistan, where road projects had been constructed in recent years. The collected data were analyzed using regression analysis, considering factors such as land price, waiting time at the station, travel time to the city, and number of employed family members. Three models were developed to demonstrate the relationship between these factors before and after the improvement of rural transportation. The results showed significant positive changes in land prices and housing facilities, travel time to the city, waiting time at the station, number of employed family members, fare per trip to the city, and number of trips to the city per month after the paving of the road. The results indicate that the improvement of transportation has a significant impact on the community in different respects, especially on the price of land and housing facilities and on travel time to the city.
Keywords: accessibility, Afghanistan, housing facility, rural area, land price
Procedia PDF Downloads 263
17762 Synthesis, Characterization, and Application of Novel Trihexyltetradecyl Phosphonium Chloride for Extractive Desulfurization of Liquid Fuel
Authors: Swapnil A. Dharaskar, Kailas L. Wasewar, Mahesh N. Varma, Diwakar Z. Shende
Abstract:
The stringent environmental regulations in many countries for the production of ultra-low-sulfur petroleum fractions, intended to reduce sulfur emissions, have generated enormous interest in this area among the scientific community. The requirement of zero sulfur emissions increases the importance of more advanced desulfurization techniques. Desulfurization by extraction is a promising approach with several advantages over conventional hydrodesulphurization. The present work deals with new approaches for the desulfurization of ultra-clean gasoline, diesel, and other liquid fuels by extraction with ionic liquids. This paper presents experimental data on the extractive desulfurization of liquid fuel using trihexyltetradecyl phosphonium chloride. FTIR, 1H-NMR, and 13C-NMR analyses are discussed for the molecular confirmation of the synthesized ionic liquid. Further, conductivity, solubility, and viscosity analyses of the ionic liquid were carried out. The effects of reaction time, reaction temperature, sulfur compounds, ultrasonication, and recycling of the ionic liquid without regeneration on the removal of dibenzothiophene from liquid fuel were also investigated. In the extractive desulfurization process, the removal of dibenzothiophene from n-dodecane was 84.5% for a mass ratio of 1:1 in 30 min at 30 °C under mild reaction conditions. The phosphonium ionic liquid could be reused five times without a significant decrease in activity. The desulfurization of real fuels and multistage extraction were also examined. The data and results provided here explore the significant potential of phosphonium-based ionic liquids as novel extractants for the extractive desulfurization of liquid fuels.
Keywords: ionic liquid, PPIL, desulfurization, liquid fuel, extraction
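The 84.5% removal figure is a simple function of the initial and final dibenzothiophene concentrations; the sketch below uses hypothetical concentrations chosen only to reproduce that percentage, not the study's measured values:

```python
def removal_efficiency(c_initial, c_final):
    """Percent removal of a sulfur compound from the model fuel,
    from its concentrations before and after extraction."""
    return 100.0 * (c_initial - c_final) / c_initial

# Hypothetical dibenzothiophene levels (ppm) in n-dodecane before
# and after one extraction stage at a 1:1 fuel-to-IL mass ratio.
removal = removal_efficiency(500.0, 77.5)
```

Multistage extraction repeats this step with fresh ionic liquid, so the remaining fraction shrinks multiplicatively with each stage.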
Procedia PDF Downloads 609
17761 Evaluating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management
Authors: Mirindi Derrick, Mirindi Frederic, Oluwakemi Oshineye
Abstract:
The ongoing high rate of construction project failures worldwide is often blamed on the difficulties of managing stakeholders. This highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. The work focuses specifically on the impact of evolving digital tools, such as Project Management Software (PMS) (e.g., Basecamp and Wrike), Building Information Modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), Virtual and Augmented Reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and web-based platforms, in improving stakeholder engagement and project outcomes. Through existing literature and examples of failed projects, the study highlights how evolving digital tools serve as facilitators within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows that mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.
Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software
Procedia PDF Downloads 50
17760 Automated Natural Hazard Zonation System with Internet-SMS Warning: Distributed GIS for Sustainable Societies Creating Schema and Interface for Mapping and Communication
Authors: Devanjan Bhattacharya, Jitka Komarkova
Abstract:
The research describes the implementation of a novel, stand-alone system for dynamic hazard warning. The system uses existing infrastructure already in place, such as mobile networks and a laptop/PC, plus a small software installation. The geospatial datasets are maps of a region, which are likewise inexpensive; hence there is little to invest, and the system reaches everyone with a mobile phone. A novel architecture for hazard assessment and warning is introduced, in which major ICT technologies are interfaced to give a unique WebGIS-based, dynamic, real-time geohazard warning communication system, integrating WebGIS with telecommunication technology in an architectural design not attempted before. Existing technologies are interfaced in this design to address a neglected domain through dynamically updatable WebGIS-based warning communication. The work presents this new architecture and its novelty in addressing hazard warning techniques in a sustainable and user-friendly manner. The coupling of hazard zonation and hazard warning procedures into a single system is shown, and a generalized architecture for deciphering a range of geohazards is developed. Hence, the developmental work presented here can be summarized as: the development of an internet-SMS-based automated geohazard warning communication system; the integration of a warning communication system with a hazard evaluation system; the interfacing of different open-source technologies in the design and development of a warning system; the modularization of different technologies towards the development of a warning communication system; and automated data creation, transformation, and dissemination over different interfaces. The architecture of the developed warning system is functionally automated and generalized enough to be used for any hazard, and the setup requirement has been kept to a minimum.
Keywords: geospatial, web-based GIS, geohazard, warning system
Procedia PDF Downloads 408
17759 New Machine Learning Optimization Approach Based on Input Variables Disposition Applied for Time Series Prediction
Authors: Hervice Roméo Fogno Fotsoa, Germaine Djuidje Kenmoe, Claude Vidal Aloyem Kazé
Abstract:
One of the main applications of machine learning is the prediction of time series, and more accurate prediction requires a better-optimized machine learning model. Several optimization techniques have been developed, but without considering the disposition (ordering) of the system's input variables. This work therefore presents a new machine learning architecture optimization technique based on the optimal disposition of the input variables. Validation is performed on the prediction of wind time series, using data collected in Cameroon. With four input variables, the number of possible dispositions is twenty-four. Each disposition is used to perform the prediction, the main criteria being training and prediction performance. The results obtained from a static architecture and a dynamic architecture of neural networks show that these performances are a function of the input variable disposition, and that this dependence differs between architectures. This analysis reveals that the input variable disposition must be taken into account to develop a better-optimized neural network model. A new neural network training algorithm is therefore proposed, introducing the search for the optimal input variable disposition into the traditional back-propagation algorithm. The results of applying this new optimization approach to the two single neural network architectures are compared step by step with the previously obtained results. Moreover, the proposed approach is validated in a collaborative optimization method with a single-objective optimization technique, i.e., genetic-algorithm back-propagation neural networks. From these comparisons, it is concluded that each proposed model outperforms its traditional counterpart in terms of training and prediction performance, so the proposed optimization approach can be useful in improving the accuracy of machine-learning-based time series prediction.
Keywords: input variable disposition, machine learning, optimization, performance, time series prediction
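The disposition search described above can be sketched as an exhaustive enumeration wrapped around any trainer. In this minimal sketch, `train_and_score` is a hypothetical stand-in for training the static or dynamic network on one ordering and returning its training error:

```python
import itertools

def best_disposition(train_and_score, n_inputs=4):
    """Enumerate every ordering (disposition) of the input variables
    (4! = 24 for four inputs) and keep the ordering whose trained
    model reports the lowest score (e.g. training MSE)."""
    best_order, best_score = None, float("inf")
    for order in itertools.permutations(range(n_inputs)):
        score = train_and_score(order)  # train the network on this ordering
        if score < best_score:
            best_order, best_score = order, score
    return best_order, best_score
```

The same loop can wrap a genetic-algorithm trainer, which is how the collaborative optimization variant would reuse it.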
Procedia PDF Downloads 109
17758 Predictive Analytics in Oil and Gas Industry
Authors: Suchitra Chnadrashekhar
Abstract:
Once regarded as a support function within an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data that were unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has been a leverage for the oil and gas industry to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic, systems approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of the sector. The paper is also supported by real-time data and evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS in our analysis is that SAS provides an analytics-based framework to improve uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen and root causes determined in order to update processes for future prevention.
Keywords: hydrocarbon, information technology, SAS, predictive analytics
Procedia PDF Downloads 360
17757 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data
Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu
Abstract:
Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias correction steps based on biological considerations, such as GC content, applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce new sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM considers Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm, alternating between estimation of X and β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq
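The alternation between estimating X and β can be illustrated with a toy alternating least squares loop on a noiseless bilinear model. This is only a sketch of the alternation idea, not the actual XAEM likelihood or update equations:

```python
import numpy as np

def alternating_fit(Y, p, n_iter=200, seed=0):
    """Alternate between solving for B with X fixed and for X with
    B fixed in the bilinear model Y ~ X B (X: k x p, B: p x n).
    Multiple samples (the columns of Y) make joint estimation
    possible, up to an invertible transform shared by X and B."""
    rng = np.random.default_rng(seed)
    k, n = Y.shape
    X = rng.standard_normal((k, p))  # random start for the unknown design
    B = None
    for _ in range(n_iter):
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)       # beta-step
        Xt, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)  # X-step
        X = Xt.T
    return X, B
```

Because both factors are free, only the product Xβ is identified here; XAEM resolves the remaining ambiguity through its statistical model.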
Procedia PDF Downloads 142
17756 A Multi-Criteria Model for Scheduling of Stochastic Single Machine Problem with Outsourcing and Solving It through Application of Chance Constrained
Authors: Homa Ghave, Parmis Shahmaleki
Abstract:
This paper presents a new multi-criteria stochastic mathematical model for single machine scheduling with outsourcing allowed. Multiple jobs are processed in batches, and for each batch, all of a job or a part of it can be outsourced. The jobs have stochastic processing times and lead times, deterministic due dates, and arrive randomly. Because of the stochastic nature of the processing and lead times, chance constrained programming is used to model the problem. The problem is first formulated as a stochastic program and then converted into a deterministic mixed-integer linear program. The objectives considered in the model are to minimize the maximum tardiness and the outsourcing cost simultaneously. Several procedures have been developed to deal with the multi-criteria problem; in this paper, the concept of satisfaction functions is utilized to incorporate the manager's preferences. The proposed approach is tested on instances where the random variables are normally distributed.
Keywords: single machine scheduling, multi-criteria mathematical model, outsourcing strategy, uncertain lead times and processing times, chance constrained programming, satisfaction function
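The step from a chance constraint to a deterministic one, assuming a normally distributed completion time as in the tested instances (a generic textbook illustration, not the paper's full model), works as follows: P(T ≤ d) ≥ α with T ~ N(μ, σ²) holds exactly when μ + z_α·σ ≤ d, where z_α is the standard normal quantile:

```python
from statistics import NormalDist

def deterministic_equivalent(mu, sigma, alpha):
    """Left-hand side of the deterministic equivalent of the chance
    constraint P(T <= d) >= alpha for T ~ Normal(mu, sigma):
    the constraint holds iff mu + z_alpha * sigma <= d."""
    z_alpha = NormalDist().inv_cdf(alpha)  # standard normal quantile
    return mu + z_alpha * sigma

def holds(mu, sigma, d, alpha=0.95):
    """Check the chance constraint via its deterministic form."""
    return deterministic_equivalent(mu, sigma, alpha) <= d
```

This is how a stochastic constraint becomes a linear one that a mixed-integer solver can handle directly.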
Procedia PDF Downloads 264
17755 Smart Campus Digital Twin: Basic Framework - Current State, Trends and Challenges
Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar
Abstract:
This study presents an analysis of the Digital Twin concept applied to the academic environment, focusing on the development of a Digital Twin Smart Campus Framework. Using bibliometric analysis methodologies and a literature review, the research investigates the evolution and applications of the Digital Twin in educational contexts, comparing these findings with the advances of Industry 4.0. Gaps in the existing literature were identified, highlighting the need to adapt Digital Twin principles to meet the specific demands of a smart campus. By integrating Industry 4.0 concepts such as automation, the Internet of Things, and real-time data analytics, we propose an innovative framework for the successful implementation of the Digital Twin in academic settings. The results of this study provide valuable insights for university campus managers, allowing a better understanding of the potential applications of the Digital Twin for operations, security, and user experience optimization. In addition, our framework offers practical guidance for transitioning from a digital campus to a digital twin smart campus, promoting innovation and efficiency in the educational environment. This work contributes to the growing literature on Digital Twins and Industry 4.0 while offering a specific and tailored approach to transforming university campuses into smart and connected spaces, as demanded by Society 5.0 trends. It is hoped that this framework will serve as a basis for future research and practical implementations in the field of higher education and educational technology.
Keywords: smart campus, digital twin, industry 4.0, education trends, society 5.0
Procedia PDF Downloads 59
17754 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density
Authors: Lalit Kumar, Rashid Al Shidi
Abstract:
Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest affecting date palm crops. However, not all plantations are infested, and it is still uncertain why some plantations become infested while others do not. This research investigated whether tree density and the planting system (random versus systematic) are related to infestation and infestation levels. Remote sensing and geographic information systems were used to determine tree density (number of trees per unit area), while infestation levels were determined by manually counting insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village. Infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2 m spatial resolution, were used. The local maxima method, which locates the pixel of highest brightness inside a certain exploration window, was used to identify and delineate individual trees in the image. This information was then used to determine whether the plantation was random or systematic. Ordinary least squares (OLS) regression was used to test the global correlation between tree density and infestation level, and geographically weighted regression (GWR) was used to find the local spatial relationship. The accuracy of tree detection varied from 83-99% in agricultural lands with systematic planting patterns to 50-70% in natural forest areas. Results revealed that tree density in most villages was higher than the recommended planting density (120-125 trees/hectare). For infestation correlations, the GWR model showed a good positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a medium positive significant relationship in the autumn season, with R² = 0.30. In contrast, the OLS model showed a weaker positive significant relationship in the spring season, with R² = 0.02, p < 0.05, and an insignificant relationship in the autumn season, with R² = 0.01, p > 0.05. These results indicate a positive correlation between infestation and tree density, suggesting that infestation severity increased as date palm density increased; in the GWR model, tree density alone accounted for about 60% of the variation in infestation. This information can be used by the relevant authorities to better control infestations and to manage their pesticide spraying programs.
Keywords: dubas bug, date palm, tree density, infestation levels
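A single local fit of the GWR idea can be sketched as weighted least squares with a Gaussian distance kernel centred on a target location. This is a toy illustration of the standard GWR formulation, not the authors' exact model or software:

```python
import numpy as np

def gwr_local_fit(coords, x, y, target, bandwidth):
    """Fit y = a + b*x by weighted least squares, with each
    observation weighted by a Gaussian kernel of its distance to
    the target location; repeating this at every location is what
    maps the locally varying density-infestation relationship."""
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-(d / bandwidth) ** 2)   # Gaussian spatial kernel
    X = np.c_[np.ones_like(x), x]
    XtW = X.T * w                       # row-wise weighting = X.T @ diag(w)
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta                         # [intercept, slope] at the target
```

Global OLS is the special case where every weight equals one, which is why the two models can disagree when the relationship varies across villages.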
Procedia PDF Downloads 193
17753 Validation of the Linear Trend Estimation Technique for Prediction of Average Water and Sewerage Charge Rate Prices in the Czech Republic
Authors: Aneta Oblouková, Eva Vítková
Abstract:
The article deals with the issue of water and sewerage charge rate prices in the Czech Republic. The research is specifically focused on the analysis of the development of the average prices of the water and sewerage charge rate in the Czech Republic in the years 1994-2021 and on the validation of the chosen methodology for predicting the development of these average prices. The research is based on data collection; the data were obtained from the Czech Statistical Office. The aim of the paper is to validate the relevance of the mathematical linear trend estimation technique for calculating predicted average water and sewerage charge rate prices. The real values of the average prices in the years 1994-2018 were obtained from the Czech Statistical Office and converted into a mathematical equation. The same type of real data was obtained from the Czech Statistical Office for the years 2019-2021. Predictions of the average prices for the years 2019-2021 were then calculated using the chosen method, a linear trend estimation technique. The values obtained from the Czech Statistical Office and the values calculated using the chosen methodology were subsequently compared. The result of the research is a validation of the chosen mathematical technique as suitable for this purpose.
Keywords: Czech Republic, linear trend estimation, price prediction, water and sewerage charge rate
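The linear trend estimation technique amounts to fitting price = a + b·year by least squares on the historical window and extrapolating the line. A minimal sketch, with synthetic numbers rather than the official CZSO figures:

```python
import numpy as np

def linear_trend_forecast(years, prices, future_years):
    """Fit the trend line price = a + b*year by least squares on the
    historical window (e.g. 1994-2018) and extrapolate it to the
    prediction years (e.g. 2019-2021)."""
    b, a = np.polyfit(years, prices, 1)  # slope, intercept
    return [a + b * t for t in future_years]
```

Comparing these extrapolated values against the actually observed 2019-2021 prices is the validation step described above.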
Procedia PDF Downloads 120
17752 The Effect of Extremely Low Frequency Magnetic Field on Rats Brain
Authors: Omar Abdalla, Abdelfatah Ahmed, Ahmed Mustafa, Abdelazem Eldouma
Abstract:
The purpose of this study is to evaluate the effect of an extremely low frequency magnetic field on the Wistar rat brain. Twenty-five rats were used, divided into five groups of five rats each, as follows: Group 1: the control group, which was not exposed to the field; Group 2: rats exposed to a magnetic field with an intensity of 0.6 mT (2 hours/day); Group 3: rats exposed to a magnetic field of 1.2 mT (2 hours/day); Group 4: rats exposed to a magnetic field of 1.8 mT (2 hours/day); Group 5: rats exposed to a magnetic field of 2.4 mT (2 hours/day). All groups were exposed for seven days. A maze was designed, and the average time to reach the decoy under specific conditions was calculated. The average times before exposure were G2 = 330 s, G3 = 172 s, G4 = 500 s and G5 = 174 s, respectively. After exposure to the ELF-MF, the measured times were G2 = 465 s, G3 = 388 s, G4 = 501 s, and G5 = 442 s. The average time was observed to increase directly with field strength. Histological samples of the frontal lobe of the brain were taken for all groups, and lesions, atrophy, empty vacuoles and a disordered choroid plexus were found in the frontal lobe. Finally, the disorder of the choroid plexus observed in the histological results and the Alzheimer's-like symptoms increased as the magnetic field increased.
Keywords: nonionizing radiation, biophysics, magnetic field, shrinkage
Procedia PDF Downloads 545
17751 Effect of Preloading on Long-Term Settlement of Closed Landfills: A Numerical Analysis
Authors: Mehrnaz Alibeikloo, Hajar Share Isfahani, Hadi Khabbaz
Abstract:
In recent years, with the development of cities and increasing populations, reconstruction on closed landfill sites has become unavoidable in some regions. Long-term settlement is one of the major concerns associated with reconstruction on landfills after closure. The purpose of this research is to evaluate the effect of preloading, in various patterns of height and time, on the long-term settlement of closed landfills. In this regard, five scenarios of surcharge from 1 to 3 m high applied over 3, 4.5 and 6 months of preloading time were modeled using PLAXIS 2D software. Moreover, the numerical results were compared to those obtained from analytical methods, and good agreement was achieved. The findings indicate a linear relationship between settlement and surcharge height. Although long-term settlement decreased with longer and higher preloading, preloading time was found to be a more influential factor than preloading height.
Keywords: preloading, long-term settlement, landfill, PLAXIS 2D
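One common analytical check for long-term settlement, of the kind numerical results are often compared against, is the classical secondary-compression formula s = C_α · H · log10(t/t_ref). This is a generic soil mechanics sketch; the specific analytical method used in the paper is an assumption here:

```python
import math

def secondary_settlement(height, c_alpha, t_ref, t):
    """Secondary-compression settlement of a layer of the given
    height: s = C_alpha * H * log10(t / t_ref), where C_alpha is the
    secondary compression index and t_ref the reference time at
    which secondary compression begins."""
    return c_alpha * height * math.log10(t / t_ref)
```

The logarithmic time dependence is why even a modest preloading period can remove a disproportionate share of the future settlement.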
Procedia PDF Downloads 195
17750 Kemmer Oscillator in Cosmic String Background
Authors: N. Messai, A. Boumali
Abstract:
In this work, we aim to solve the two-dimensional Kemmer equation, including a Dirac oscillator interaction term, in the background space-time generated by a cosmic string, subjected to a uniform magnetic field. The eigenfunctions and eigenvalues of our problem have been found, and the influence of the cosmic string space-time on the energy spectrum has been analyzed.
Keywords: Kemmer oscillator, cosmic string, Dirac oscillator, eigenfunctions
Procedia PDF Downloads 585
17749 Effects of Boiling Temperature and Time on Colour, Texture and Sensory Properties of Volutharpa ampullacea perryi Meat
Authors: Xianbao Sun, Jinlong Zhao, Shudong He, Jing Li
Abstract:
Volutharpa ampullacea perryi is a high-protein marine shellfish. However, few data are available on the effects of boiling temperature and time on the quality of its meat. In this study, the colour, texture and sensory characteristics of Volutharpa ampullacea perryi meat during boiling (75-100 °C, 5-60 min) were investigated by colour analysis, texture profile analysis (TPA), scanning electron microscopy (SEM) and sensory evaluation. Cooking loss gradually increased with increasing temperature and time. The colour of the meat became lighter and more yellow from 85 °C to 95 °C at short times (5-20 min), but it became brown after a 30 min treatment. TPA results showed that the meat was firmer and less cohesive after higher-temperature (95-100 °C) treatment, even for a short period (5-15 min). Based on the SEM analysis, the myofibril structure was clearly destroyed at higher temperatures (85-100 °C). Sensory data revealed that meat cooked at 85-90 °C for 10-20 min scored higher in overall acceptance, as well as in colour, hardness and taste. Based on these results, it can be concluded that Volutharpa ampullacea perryi meat should be boiled under suitable conditions (such as 85 °C for 15 min or 90 °C for 10 min) to ensure better acceptability.
Keywords: Volutharpa ampullacea perryi meat, boiling cooking, colour, sensory, texture
Procedia PDF Downloads 281
17748 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high liver accumulation by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From these data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5-2.5 minute SPECT image minus 5-10 minute SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5-10 minute SPECT image minus liver-only image). Time subtraction of the liver was possible in both the phantom and the clinical study, and visualization of the inferior myocardium was improved. In past reports, apparent accumulation in the inferior myocardium due to overlap of the liver was un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
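Once the early (liver-dominated) and late frames are reconstructed, the subtraction step itself is simple image arithmetic. A toy sketch of the idea on plain arrays, with the scaling factor as an assumed illustrative parameter rather than a value from the study:

```python
import numpy as np

def time_subtraction(early_frame, late_frame, scale=1.0):
    """Treat the early frame as an approximate liver-only image,
    scale it, subtract it from the late frame (liver + myocardium),
    and clip negatives so only the residual myocardial counts remain."""
    liver_only = scale * np.asarray(early_frame, dtype=float)
    return np.clip(np.asarray(late_frame, dtype=float) - liver_only, 0.0, None)
```

In practice the frames are full 3-D reconstructions, but the per-voxel operation is exactly this subtract-and-clip.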
Procedia PDF Downloads 335
17747 Two Efficient Heuristic Algorithms for the Integrated Production Planning and Warehouse Layout Problem
Authors: Mohammad Pourmohammadi Fallah, Maziar Salahi
Abstract:
In the literature, a mixed-integer linear programming model for the integrated production planning and warehouse layout problem has been proposed. To solve the model, the authors proposed a Lagrangian relax-and-fix heuristic that takes a significant amount of time to stop, with gaps above 5% for large-scale instances. Here, we present two heuristic algorithms to solve the problem. In the first, we use a greedy approach: warehouse locations with lower reservation costs and lower transportation costs (from the production area to the location, and from the location to the output point) are allocated to items with higher demands, and then a smaller model is solved. In the second heuristic, we first sort items in descending order according to the fraction formed by the sum of the demands for that item over the time horizon plus the maximum demand for that item over the horizon, divided by the sum of all its demands over the horizon. We then categorize the sorted items into groups of 3, 4, or 5 and solve a small-scale optimization problem for each group, hoping to improve the solution of the first heuristic. Our preliminary numerical results show the effectiveness of the proposed heuristics.
Keywords: capacitated lot-sizing, warehouse layout, mixed-integer linear programming, heuristic algorithms
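The first heuristic's greedy pairing of cheap locations with high-demand items can be sketched in a few lines. This is a minimal illustration with hypothetical item and location names, ignoring capacities and the reduced model solved afterwards:

```python
def greedy_allocation(demand, location_cost):
    """Pair items with locations greedily: the item with the highest
    demand gets the location with the lowest combined reservation +
    transportation cost, and so on down both sorted lists."""
    items_by_demand = sorted(demand, key=demand.get, reverse=True)
    locations_by_cost = sorted(location_cost, key=location_cost.get)
    return dict(zip(items_by_demand, locations_by_cost))
```

Fixing these assignments up front is what shrinks the remaining mixed-integer model to a tractable size.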
Procedia PDF Downloads 196