Search results for: rank-based chain-mode ensemble
41 A Distributed Mobile Agent-Based Intrusion Detection System for MANET
Authors: Maad Kamal Al-Anni
Abstract:
This study concerns the application of an Artificial Neural Network, specifically a Multilayer Perceptron (MLP), to the classification and clustering of Mobile Ad hoc Network vulnerabilities. A mobile ad hoc network (MANET) is an autonomous system of ubiquitous, intelligent internetworking mobile nodes connected via wireless links, able to sense their environment. Security is the most important concern in a MANET because such an auto-configuring network is easily penetrated. One powerful technique for inspecting network packets is the Intrusion Detection System (IDS); in this article, we show the effectiveness of artificial neural networks, used as a machine learning method together with a stochastic feature-selection measure (information gain), in classifying malicious behaviors in a simulated network under different IDS techniques. A monitoring agent hosts the detection inference engine; audit data are gathered by a collection agent while node attacks are simulated, and the outputs are contrasted with the normal behavior of the framework. Whenever there is any deviation from ordinary behavior, the monitoring agent treats the event as an attack. We demonstrate a signature-based IDS approach in a MANET by implementing the back propagation algorithm over an ensemble-based Traffic Table (TT), so that signatures of malicious or undesirable activities can be reliably predicted and efficiently identified. By tuning the parameters of the back propagation algorithm, the experimental results empirically show its effectiveness, with a detection ratio of up to 98.6 percent.
Consequently, the empirical results in this article support these findings; performance metrics are also included, with Xgraph plots of throughput-related measures such as Packet Delivery Ratio (PDR), Throughput (TP), and Average Delay (AD).
Keywords: Intrusion Detection System (IDS), Mobile Adhoc Networks (MANET), Back Propagation Algorithm (BPA), Neural Networks (NN)
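As a sketch of the classification step described above, the following minimal NumPy implementation trains a one-hidden-layer perceptron with plain backpropagation on a synthetic two-feature traffic table; the features (packet rate, drop ratio), network size, and data are invented stand-ins, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic traffic table: [packet_rate, drop_ratio]; label 1 = malicious.
n = 200
normal = rng.normal([0.3, 0.1], 0.05, (n, 2))
attack = rng.normal([0.8, 0.6], 0.05, (n, 2))
X = np.vstack([normal, attack])
y = np.array([0] * n + [1] * n).reshape(-1, 1)

# One-hidden-layer MLP trained with plain backpropagation.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    h = sig(X @ W1 + b1)                      # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # backpropagated deltas
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);  b1 -= lr * d_h.mean(0)

pred = (sig(sig(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

On this well-separated toy set the network reaches near-perfect accuracy; the paper's 98.6% detection index refers to its own simulated MANET traffic, not to this sketch.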
Procedia PDF Downloads 195
40 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then subsequently applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance with predictions primarily being within the experimental uncertainty of the test data. 
Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
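As an illustration of the ensemble-regression step, the sketch below fits a random forest to a synthetic viscosity surface; the Arrhenius-style dependence on two hypothetical oxide fractions and temperature is invented for demonstration, not taken from the industrial data used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
sio2 = rng.uniform(0.4, 0.6, n)   # hypothetical oxide mole fractions
b2o3 = rng.uniform(0.1, 0.3, n)
T = rng.uniform(1100, 1400, n)    # melt temperature, K

# Toy Arrhenius-style viscosity: log(eta) = A + B(composition) / T + noise
log_eta = -3 + (8000 + 4000 * sio2 - 2000 * b2o3) / T + rng.normal(0, 0.05, n)

X = np.column_stack([sio2, b2o3, T])
X_tr, X_te, y_tr, y_te = train_test_split(X, log_eta, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, model.predict(X_te))
```

The same fit-and-score pattern applies to density (composition only) by dropping the temperature column.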
Procedia PDF Downloads 117
39 The Study of Using Mon Dance in Pathum Thani Province’s Tradition
Authors: Dusittorn Ngamying
Abstract:
This investigation of Mon dance focuses on its use in the traditions of Pathum Thani Province and has the following objectives: 1) to study the background of Mon dance in Pathum Thani Province; 2) to study Mon dance in Pathum Thani Province; 3) to study the use of Mon dance in Pathum Thani Province's traditions. This qualitative research was conducted in Pathum Thani Province (central Thailand). Data were collected from a documentary study and from field data by means of observation, interviews and group discussion. Workshops were also held with a total of 100 attendees, comprising 20 key informants, 40 casual informants and 40 general informants. Data were validated using a triangulation technique, and findings are presented using descriptive analysis. The results of the study showed that Mon dance in Pathum Thani Province originated during the wartime evacuation from Martaban (southern Burma) to settle in Sam Khok, Pathum Thani Province, from the Ayutthaya period through to the Rattanakosin period. The study found that Mon dance typically consists of 12 dance sequences set to 12 melodies, accompanied by a Piphat Mon (Mon traditional music ensemble). The costumes follow Mon tradition. The performers were 6-12 women, depending on the employer's demands, and the length of the performance varied with the duration of the musical accompaniment. The rituals and customs involved paying homage to teachers before the performance; the offerings comprised flowers, incense sticks, candles and money gifts, well arranged on a pedestal tray, together with liquor, tobacco and pure water offered to ask for good fortune. Regarding the use of Mon dance in Pathum Thani Province's traditions, it was found that it is at present commonly performed in funeral ceremonies, because the physical postures of the performance are graceful and exquisite in the approved conservative manner.
In addition, since ancient times Mon dance has been valued as something sacred, considered a glorification of dignity, especially at funeral ceremonies for priests or members of the royal hierarchy. Mon dance also continues to be used in traditions associated with Mon community activities in Pathum Thani Province, for instance in customary welcomes for honored guests and at the Songkran Festival.
Keywords: Mon dance, Pathum Thani Province, tradition, triangulation technique
Procedia PDF Downloads 592
38 Estimating Precipitable Water Vapour Using the Global Positioning System and Radio Occultation over Ethiopian Regions
Authors: Asmamaw Yehun, Tsegaye Gogie, Martin Vermeer, Addisu Hunegnaw
Abstract:
The Global Positioning System (GPS) is a space-based radio positioning system capable of providing continuous position, velocity, and time information to users anywhere on or near the surface of the Earth. The main objective of this work was to estimate the integrated precipitable water vapour (IPWV) using ground-based GPS and Low Earth Orbit (LEO) radio occultation (RO) in order to study its spatio-temporal variability. For LEO GPS RO, we used Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) datasets. We estimated the daily and monthly mean IPWV using six selected ground-based GPS stations over the five-year period 2012 to 2016; this period was chosen because continuous data were available at all Ethiopian GPS stations throughout it. We studied temporal, seasonal, diurnal, and vertical variations of precipitable water vapour using GPS observables processed with the precise geodetic GAMIT-GLOBK software package. Finally, we computed the cross-correlation of our GPS-derived IPWV values with those of the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis and of the second-generation National Oceanic and Atmospheric Administration (NOAA) Global Ensemble Forecast System Reforecast (GEFS/R) for validation and statistical comparison. Higher IPWV values, ranging from 30 to 37.5 millimetres (mm), occur in Gambela and the southern regions of Ethiopia, while parts of the Tigray, Amhara, and Oromia regions have low IPWV, ranging from 8.62 to 15.27 mm. The correlation coefficient between the GPS-derived IPWV and both ECMWF and GEFS/R exceeds 90%. We conclude that precipitable water vapour in the study area shows strong temporal, seasonal, diurnal, and vertical variations.
Keywords: GNSS, radio occultation, atmosphere, precipitable water vapour
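For readers unfamiliar with the GPS-meteorology conversion underlying such estimates, the sketch below turns a zenith total delay into IPWV by subtracting a Saastamoinen hydrostatic delay and applying the dimensionless factor Π ≈ 0.15; the constants follow the commonly cited Bevis-style formulation, and the numeric inputs are illustrative, not the study's values:

```python
import math

def zhd_saastamoinen(p_hpa, lat_rad, h_km):
    """Zenith hydrostatic delay (m) from surface pressure, Saastamoinen model."""
    return 0.0022768 * p_hpa / (1 - 0.00266 * math.cos(2 * lat_rad) - 0.00028 * h_km)

def ipwv_mm(ztd_m, p_hpa, lat_rad, h_km, tm_k):
    """Convert a GPS zenith total delay (m) to precipitable water vapour (mm).

    tm_k is the water-vapour-weighted mean atmospheric temperature.
    Refractivity constants (Bevis-style, illustrative): k2' in K/hPa,
    k3 in K^2/hPa; with these units the conversion factor needs 1e8.
    """
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_rad, h_km)   # wet delay (m)
    k2p, k3 = 22.1, 3.739e5
    rho_w, r_v = 1000.0, 461.5                             # kg/m^3, J/(kg K)
    pi_factor = 1e8 / (rho_w * r_v * (k3 / tm_k + k2p))    # ~0.15, dimensionless
    return pi_factor * zwd * 1000.0

# Illustrative highland-station inputs: ZTD 2.05 m, 800 hPa, lat ~9 deg, 2 km
pw = ipwv_mm(2.05, 800.0, 0.16, 2.0, 270.0)   # roughly 34 mm for these inputs
```

Sweeping daily ZTD series from GAMIT-GLOBK through such a function yields the daily IPWV means discussed above.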
Procedia PDF Downloads 86
37 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy
Authors: Kemal Efe Eseller, Göktuğ Yazici
Abstract:
Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, the elimination of intensive sample preparation, and micro-destructiveness for the material under test. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material beyond a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, spectral profiles of medicine samples were obtained via LIBS. The datasets include two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectral data were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were built with two different train-test splits: 70% training with 30% test, and 80% training with 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample amount is small. The machine learning results on the preprocessed and raw datasets were compared for both splits.
This is the first time that the full set of supervised machine learning classification algorithms, consisting of Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-Nearest Neighbor (k-NN), Ensemble Learning and Neural Network algorithms, has been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing
Procedia PDF Downloads 88
36 Thermodynamic Analysis of Surface Seawater under Ocean Warming: An Integrated Approach Combining Experimental Measurements, Theoretical Modeling, Machine Learning Techniques, and Molecular Dynamics Simulation for Climate Change Assessment
Authors: Nishaben Desai Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
Understanding ocean thermodynamics has become increasingly critical as Earth's oceans serve as the primary planetary heat regulator, absorbing approximately 93% of the excess heat energy from anthropogenic greenhouse gas emissions. This investigation presents a comprehensive analysis of Arabian Sea surface seawater thermodynamics, focusing specifically on heat capacity (Cp) and the thermal expansion coefficient (α), parameters fundamental to global heat distribution patterns. Through high-precision experimental measurements of ultrasonic velocity and density across varying temperature (293.15-318.15 K) and salinity (0.5-35 ppt) conditions, we characterize critical thermophysical parameters including specific heat capacity, thermal expansion, and isobaric and isothermal compressibility coefficients in natural seawater systems. The study employs advanced machine learning frameworks, namely Random Forest, Gradient Boosting, Stacked Ensemble Machine Learning (SEML), and AdaBoost, with SEML achieving exceptional accuracy (R² > 0.99) in heat capacity predictions. The findings reveal significant temperature-dependent molecular restructuring: enhanced thermal energy disrupts hydrogen-bonded networks and ion-water interactions, manifesting as decreased heat capacity with increasing temperature (negative ∂Cp/∂T). This mechanism creates a positive feedback loop in which reduced heat absorption capacity potentially accelerates oceanic warming cycles. These quantitative insights into seawater thermodynamics provide crucial parametric inputs for climate models and for evidence-based environmental policy formulation, particularly addressing the critical knowledge gap in the thermal expansion behavior of seawater under varying temperature-salinity conditions.
Keywords: climate change, Arabian Sea, thermodynamics, machine learning
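A stacked ensemble of the kind referred to as SEML can be sketched with scikit-learn's StackingRegressor; the toy heat-capacity surface below, with an invented negative ∂Cp/∂T, stands in for the experimental Arabian Sea data:

```python
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor, StackingRegressor)
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 400
T = rng.uniform(293.15, 318.15, n)   # temperature, K (study's range)
S = rng.uniform(0.5, 35.0, n)        # salinity, ppt (study's range)

# Toy heat-capacity surface in J/(kg K): Cp falls with both T and S,
# mimicking the negative dCp/dT discussed above (coefficients invented).
cp = 4217 - 3.7 * S - 1.5 * (T - 293.15) + rng.normal(0, 2, n)

X = np.column_stack([T, S])
X_tr, X_te, y_tr, y_te = train_test_split(X, cp, random_state=0)
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("gb", GradientBoostingRegressor(random_state=0)),
                ("ab", AdaBoostRegressor(random_state=0))],
    final_estimator=Ridge())
stack.fit(X_tr, y_tr)
r2 = r2_score(y_te, stack.predict(X_te))
```

On this smooth, low-noise surface the stack reaches an R² comparable to the >0.99 figure quoted for the real measurements, though the two numbers are of course not directly comparable.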
Procedia PDF Downloads 17
35 Energy Content and Spectral Energy Representation of Wave Propagation in a Granular Chain
Authors: Rohit Shrivastava, Stefan Luding
Abstract:
A mechanical wave is the propagation of vibration with transfer of energy and momentum. Studying the energy and spectral energy characteristics of a wave propagating through disordered granular media can assist in understanding the overall properties of wave propagation through inhomogeneous materials like soil. The study of these properties is aimed at modeling wave propagation for oil, mineral or gas exploration (seismic prospecting) and at non-destructive testing of the internal structure of solids. Studying the energy content (kinetic, potential and total energy) of a pulse propagating through an idealized one-dimensional discrete particle system, such as a mass-disordered granular chain, can assist in understanding energy attenuation due to disorder as a function of propagation distance, while spectral analysis of the energy signal can reveal dispersion as well as frequency-dependent attenuation due to scattering (scattering attenuation). The choice of a one-dimensional granular chain also restricts the study to the P-wave attributes of the wave, removing the influence of shear or rotational waves. Granular chains with different mass distributions have been studied by randomly selecting masses from normal, binary and uniform distributions; the standard deviation of the distribution is taken as the disorder parameter, with higher standard deviation meaning higher disorder and lower standard deviation meaning lower disorder. For obtaining macroscopic/continuum properties, ensemble averaging has been used. Interpreting information from the total energy signal turned out to be much easier than from the displacement, velocity or acceleration signals of the wave, indicating a better analysis method for wave propagation through granular materials.
Increasing disorder leads to faster attenuation of the signal and decreases the energy transmitted at higher frequencies, but at the same time the energy of spatially localized high frequencies increases. An ordered granular chain exhibits ballistic propagation of energy, whereas a disordered granular chain exhibits diffusive-like propagation, which eventually becomes localized at long times.
Keywords: discrete elements, energy attenuation, mass disorder, granular chain, spectral energy, wave propagation
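The ensemble-averaged total-energy signal described above can be sketched for a linearized mass-disordered chain; the harmonic springs, chain length, time step and disorder level here are simplifying assumptions (the real study uses nonlinear granular contacts):

```python
import numpy as np

def chain_energy(masses, steps=400, dt=0.01, k=1.0):
    """Total energy vs. time for a pulse in a 1-D linearized chain,
    integrated with semi-implicit (symplectic) Euler."""
    n = len(masses)
    x = np.zeros(n)
    v = np.zeros(n)
    v[0] = 1.0                       # velocity pulse injected at one end
    energy = np.empty(steps)
    for t in range(steps):
        dx = x[1:] - x[:-1]          # neighbour relative displacements
        f = np.zeros(n)
        f[:-1] += k * dx             # force on left particle of each pair
        f[1:] -= k * dx              # equal and opposite on the right one
        v += dt * f / masses
        x += dt * v
        dx = x[1:] - x[:-1]
        energy[t] = 0.5 * (masses * v**2).sum() + 0.5 * k * (dx**2).sum()
    return energy

rng = np.random.default_rng(4)
# Ensemble average over disordered realizations; the standard deviation of
# the normal mass distribution plays the role of the disorder parameter.
realizations = [chain_energy(rng.normal(1.0, 0.2, 50).clip(0.5))
                for _ in range(20)]
mean_energy = np.mean(realizations, axis=0)
```

Sweeping the standard deviation passed to the mass distribution qualitatively reproduces the ordered-versus-disordered comparison, and an FFT of `mean_energy` gives the spectral energy representation.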
Procedia PDF Downloads 292
34 Habitat Suitability, Genetic Diversity and Population Structure of Two Sympatric Fruit Bat Species Reveal the Need for Urgent Conservation Action
Authors: Mohamed Thani Ibouroi, Ali Cheha, Claudine Montgelard, Veronique Arnal, Dawiyat Massoudi, Guillelme Astruc, Said Ali Ousseni Dhurham, Aurelien Besnard
Abstract:
The Livingstone's flying fox (Pteropus livingstonii) and the Comorian fruit bat (P. seychellensis comorensis) are two endemic fruit bat species that are among the most threatened animals of the Comoros archipelago. Although, like all flying fox species, they are important ecosystem service providers as pollinators and seed dispersers, little is known about their ecology, population genetics and structure, making it difficult to develop evidence-based conservation strategies. In this study, we assess the spatial distribution and ecological niche of both species using Species Distribution Modeling (SDM) based on the recent Ensemble of Small Models (ESMs) approach with presence-only data. Population structure and genetic diversity of the two species were assessed using both mitochondrial and microsatellite markers on non-invasive genetic samples. Our ESMs highlight a clear niche partitioning of the two sympatric species. The Livingstone's flying fox has a very limited distribution, restricted to steep slopes of natural forest at high elevation. In contrast, the Comorian fruit bat has a relatively large geographic range spread over low elevations in farmland and villages. Our genetic analyses show low genetic diversity for both species. They also show that the Livingstone's flying fox populations of the two islands are genetically isolated, while no evidence of genetic differentiation between islands was detected for the Comorian fruit bat. Our results support the idea that natural habitat loss, especially the loss and fragmentation of natural forest, is the most important factor affecting the distribution of the Livingstone's flying fox, limiting its foraging area and reducing its potential roosting sites. In contrast, the Comorian fruit bat seems to be favored by human activities, probably because its diet is less specialized.
From this study, we conclude that the Livingstone's flying fox and its habitat should be a high conservation priority at the scale of the Comoros archipelago.
Keywords: Comoros islands, ecological niche, habitat loss, population genetics, fruit bats, conservation biology
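The Ensemble of Small Models idea, fitting every small (here bivariate) model and averaging their predictions weighted by a skill score, can be sketched as follows; the covariates and presence data are simulated stand-ins, and logistic regression replaces the SDM algorithms actually used:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

# Toy presence/background data with 4 standardized environmental covariates
# (imagine elevation, slope, forest cover, rainfall); only two carry signal.
n = 300
X = rng.normal(0, 1, (n, 4))
logit = 2 * X[:, 0] + 1.5 * X[:, 2] - 1.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

# ESM: fit every bivariate model; weight each prediction by skill above random
preds, weights = [], []
for i, j in combinations(range(4), 2):
    m = LogisticRegression().fit(X[:, [i, j]], y)
    p = m.predict_proba(X[:, [i, j]])[:, 1]
    preds.append(p)
    weights.append(max(roc_auc_score(y, p) - 0.5, 0.0))

ensemble = np.average(preds, axis=0, weights=weights)
ensemble_auc = roc_auc_score(y, ensemble)
```

Because each small model uses only two predictors, the approach stays usable with the few presence records typical of rare species such as the Livingstone's flying fox.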
Procedia PDF Downloads 268
33 Regeneration of Geological Models Using Support Vector Machine Assisted by Principal Component Analysis
Authors: H. Jung, N. Kim, B. Kang, J. Choe
Abstract:
History matching is a crucial procedure for predicting reservoir performance and making future decisions. However, it is difficult due to the uncertainty of the initial reservoir models. It is therefore important to have reliable initial models for successful history matching of highly heterogeneous reservoirs such as channel reservoirs. In this paper, we propose a novel scheme for regenerating geological models using a support vector machine (SVM) and principal component analysis (PCA). First, we perform PCA to extract the main geological characteristics of the models. In this procedure, the permeability values of each model are transformed into new parameters along the principal components with eigenvalues of large magnitude. Secondly, the parameters are projected onto a two-dimensional plane by multi-dimensional scaling (MDS) based on Euclidean distances. Finally, we train an SVM classifier using the 20% of models whose well oil production rates (WOPR) are most similar or most dissimilar to the true values (10% each). The remaining 80% of models are then classified by the trained SVM, and we select the models on the low-WOPR-error side. One hundred channel reservoir models are initially generated by single normal equation simulation (SNESIM). By repeating the classification process, we can select models that share the geological trend of the true reservoir model. The average field of the selected models is used as a probability map for regeneration. The newly generated models preserve correct channel features and exclude wrong geological properties while maintaining suitable uncertainty ranges. History matching with the initial models cannot provide trustworthy results, as it fails to find the correct geological features of the true model. History matching with the regenerated ensemble, however, offers reliable characterization by identifying the proper channel trend, and it gives dependable predictions of future performance with reduced uncertainties.
We propose a novel classification scheme that integrates PCA, MDS, and SVM for regenerating reservoir models. The scheme can easily sort out reliable models that share the channel trend of the reference model in the lower-dimensional space.
Keywords: history matching, principal component analysis, reservoir modelling, support vector machine
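The PCA-MDS-SVM selection chain can be sketched on toy permeability fields; here the "most similar/dissimilar" training subset is taken from known labels rather than from WOPR errors, and the channel pattern is an invented band, so this is a structural sketch rather than the authors' workflow:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from sklearn.svm import SVC

rng = np.random.default_rng(6)

# Toy stand-in for 100 permeability fields (flattened 20x20 grids):
# half share the "true" channel trend, half do not.
n, cells = 100, 400
base = np.zeros(cells)
base[150:250] = 5.0                       # hypothetical channel band
fields = rng.normal(0, 1, (n, cells))
labels_true = np.repeat([1, 0], n // 2)   # 1 = similar to the truth
fields[labels_true == 1] += base

# PCA extracts the main geological features; MDS projects to a 2-D plane.
pcs = PCA(n_components=10).fit_transform(fields)
xy = MDS(n_components=2, random_state=0).fit_transform(pcs)

# Train the SVM on 20% of the models (10 per class, standing in for the
# most similar / most dissimilar WOPR cases), then classify the rest.
idx = np.r_[0:10, 50:60]
clf = SVC(kernel="rbf").fit(xy[idx], labels_true[idx])
selected = clf.predict(xy)
acc = (selected == labels_true).mean()
```

Averaging the fields flagged `1` would give the probability map used for regeneration in the paper's final step.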
Procedia PDF Downloads 160
32 Experimental Investigation of Seawater Thermophysical Properties: Understanding Climate Change Impacts on Marine Ecosystems Through Internal Pressure and Cohesion Energy Analysis
Authors: Nishaben Dholakiya, Anirban Roy, Ranjan Dey
Abstract:
The unprecedented rise in global temperatures has triggered complex changes in marine ecosystems, necessitating a deeper understanding of seawater's thermophysical properties by experimentally measuring ultrasonic velocity and density at varying temperatures and salinity. This study investigates the critical relationship between temperature variations and molecular-level interactions in Arabian Sea surface waters, specifically focusing on internal pressure (π) and cohesion energy density (CED) as key indicators of ecosystem disruption. Our experimental findings reveal that elevated temperatures significantly reduce internal pressure, weakening the intermolecular forces that maintain seawater's structural integrity. This reduction in π correlates directly with decreased habitat stability for marine organisms, particularly affecting pressure-sensitive species and their physiological processes. Similarly, the observed decline in cohesion energy density at higher temperatures indicates a fundamental shift in water molecule organization, impacting the dissolution and distribution of vital nutrients and gases. These molecular-level changes cascade through the ecosystem, affecting everything from planktonic organisms to complex food webs. By employing advanced machine learning techniques, including Stacked Ensemble Machine Learning (SEML) and AdaBoost (AB), we developed highly accurate predictive models (>99% accuracy) for these thermophysical parameters. The results provide crucial insights into the mechanistic relationship between climate warming and marine ecosystem degradation, offering valuable data for environmental policymaking and conservation strategies. 
The novelty of this research lies in the fact that no such thermodynamic investigation has been reported before in the literature; it establishes a quantitative framework for understanding how molecular-level changes in seawater properties directly influence marine ecosystem stability, emphasizing the urgent need for climate change mitigation efforts.
Keywords: thermophysical properties, Arabian Sea, internal pressure, cohesion energy density, machine learning
Procedia PDF Downloads 12
31 Early Prediction of Cognitive Impairment in Adults Aged 20 Years and Older Using Machine Learning and Biomarkers of Heavy Metal Exposure
Authors: Ali Nabavi, Farimah Safari, Mohammad Kashkooli, Sara Sadat Nabavizadeh, Hossein Molavi Vardanjani
Abstract:
Cognitive impairment presents a significant and increasing health concern as populations age. Environmental risk factors such as heavy metal exposure are suspected contributors, but their specific roles remain incompletely understood. Machine learning offers a promising approach for integrating multi-factorial data and improving the prediction of cognitive outcomes. This study aimed to develop and validate machine learning models to predict the early risk of cognitive impairment by incorporating demographic, clinical, and biomarker data, including measures of heavy metal exposure. A retrospective analysis was conducted using 2011-2014 National Health and Nutrition Examination Survey (NHANES) data. The dataset included participants aged 20 years and older who underwent cognitive testing. Variables encompassed demographic information, medical history, lifestyle factors, and biomarkers such as blood and urine levels of lead, cadmium, manganese, and other metals. Machine learning algorithms were trained on 90% of the data and evaluated on the remaining 10%, with performance assessed through metrics such as accuracy, area under the curve (AUC), and sensitivity. The analysis included 2,933 participants. The stacking ensemble model demonstrated the highest predictive performance, achieving an AUC of 0.778 and a sensitivity of 0.879 on the test dataset. Key predictors included age, gender, hypertension, education level, urinary cadmium, and blood manganese levels. The findings indicate that machine learning can effectively predict the risk of cognitive impairment using a comprehensive set of clinical and environmental exposure data. Incorporating biomarkers of heavy metal exposure improved prediction accuracy and highlighted the role of environmental factors in cognitive decline. Further prospective studies are recommended to validate the models and to assess their utility over time.
Keywords: cognitive impairment, heavy metal exposure, predictive models, aging
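A stacking ensemble with a 90/10 split, as described, can be sketched with scikit-learn; the predictors mirror some of those named above (age, education, urinary cadmium, blood manganese) but the data, coefficients, and base learners are simulated stand-ins, not NHANES:

```python
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
age = rng.uniform(20, 80, n)
edu = rng.integers(0, 5, n)                # education level, ordinal
u_cd = rng.lognormal(-1, 0.5, n)           # urinary cadmium (toy values)
b_mn = rng.normal(10, 2, n)                # blood manganese (toy values)

# Invented risk model: impairment more likely with age and cadmium,
# less likely with education.
logit = 0.06 * (age - 50) + 0.8 * u_cd - 0.4 * edu + 0.1 * (b_mn - 10) - 1.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, edu, u_cd, b_mn])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.1, random_state=0, stratify=y)   # 90/10 split

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    final_estimator=LogisticRegression())
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
```

Sensitivity at a chosen probability threshold can be added with `recall_score` on the same held-out 10%.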
Procedia PDF Downloads 2
30 Sequence Analysis and Molecular Cloning of PROTEOLYSIS 6 in Tomato
Authors: Nurulhikma Md Isa, Intan Elya Suka, Nur Farhana Roslan, Chew Bee Lynn
Abstract:
The evolutionarily conserved N-end rule pathway marks proteins for degradation by the Ubiquitin Proteasome System (UPS) based on the nature of their N-terminal residue. Proteins with a destabilizing N-terminal residue undergo a series of condition-dependent N-terminal modifications, resulting in their ubiquitination and degradation. Intensive research has previously been carried out in Arabidopsis. The group VII Ethylene Response Factor (ERF) transcription factors are the first N-end rule pathway substrates found in Arabidopsis, with a role in regulating oxygen sensing. ERFs also function as central hubs for the perception of gaseous signals in plants and control different aspects of plant development, including germination, stomatal aperture, hypocotyl elongation and stress responses. However, nothing is known about the role of this pathway in fruit development and ripening. The model plant Arabidopsis cannot represent a fleshy fruit system, so tomato is the best model plant for this study. PROTEOLYSIS6 (PRT6) is an E3 ubiquitin ligase of the N-end rule pathway. Two homologous PRT6 sequences were identified in the tomato genome database using the PRT6 protein sequence from the model plant Arabidopsis thaliana. A homology search against the Ensembl Plants database (tomato) showed Solyc09g010830.2 to be the best hit, with the highest score of 1143, an e-value of 0.0 and 61.3% identity, compared to the second hit, Solyc10g084760.1. A further homology search was performed against the NCBI BLAST database to validate the data. The best hit was XP_010325853.1, an uncharacterized protein, LOC101255129 (Solanum lycopersicum), with the highest score of 1601, an e-value of 0.0 and 48% identity. Both Solyc09g010830.2 and the uncharacterized protein LOC101255129 are located on chromosome 9.
Further validation was carried out by running BLASTP between these two sequences (Solyc09g010830.2 and the uncharacterized protein LOC101255129) to investigate whether they are the same protein representing PRT6 in tomato. The results showed that the two proteins have 100% identity, indicating that they are the same gene representing PRT6 in tomato. In addition, we used two different RNAi constructs, driven by the 35S and Polygalacturonase (PG) promoters, to study the function of PRT6 during tomato development and ripening.
Keywords: ERFs, PRT6, tomato, ubiquitin
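Choosing the best homolog from such searches amounts to ranking hits by bit score and e-value. The stdlib sketch below parses BLAST tabular (outfmt 6)-style lines; apart from the reported score 1143, e-value 0.0 and 61.3% identity of the top hit, the numeric fields are invented placeholders:

```python
# Hypothetical BLAST tabular (outfmt 6) lines: query, subject, %identity,
# alignment length, mismatches, gaps, q.start, q.end, s.start, s.end,
# e-value, bit score.  Only the top hit's score/e-value/identity come from
# the study; the remaining columns are illustrative.
hits = [
    "AtPRT6\tSolyc09g010830.2\t61.3\t1020\t380\t12\t1\t1000\t5\t1010\t0.0\t1143",
    "AtPRT6\tSolyc10g084760.1\t45.2\t980\t510\t20\t3\t960\t8\t970\t1e-160\t602",
]

def best_hit(lines):
    """Rank hits as BLAST does: highest bit score, ties broken by e-value."""
    parsed = []
    for line in lines:
        f = line.split("\t")
        parsed.append({"subject": f[1], "identity": float(f[2]),
                       "evalue": float(f[10]), "bitscore": float(f[11])})
    return max(parsed, key=lambda h: (h["bitscore"], -h["evalue"]))

top = best_hit(hits)   # selects Solyc09g010830.2, as in the text
```

The same ranking applied to the NCBI search would select XP_010325853.1 by its score of 1601.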
Procedia PDF Downloads 241
29 The High Precision of Magnetic Detection with Microwave Modulation in Solid Spin Assembly of NV Centres in Diamond
Authors: Zongmin Ma, Shaowen Zhang, Yueping Fu, Jun Tang, Yunbo Shi, Jun Liu
Abstract:
Solid-state quantum sensors are attracting wide interest because of their high sensitivity at room temperature. In particular, the spin properties of nitrogen-vacancy (NV) colour centres in diamond make them outstanding sensors of magnetic fields, electric fields and temperature under ambient conditions. Much of the work on NV magnetic sensing has aimed at the smallest volume and the highest sensitivity of NV-ensemble-based magnetometry, using micro-cavities, light-trapping diamond waveguides (LTDW), and nano-cantilevers combined with MEMS (Micro-Electro-Mechanical Systems) techniques. Recently, frequency-modulated microwaves with continuous optical excitation have been proposed to achieve a high sensitivity of 6 μT/√Hz using individual NV centres at the nanoscale. In this research, we built an experiment to measure static magnetic fields by the frequency-modulated-microwave method under continuous illumination with green pump light at 532 nm, using a bulk diamond sample with a high density of NV centres (1 ppm). The output of the confocal microscope was collected by an objective (NA = 0.7) and detected by a high-sensitivity photodetector. We designed a microstrip antenna for uniform and efficient excitation, which couples well to the spin ensemble at 2.87 GHz, the zero-field splitting of the NV centres. The photodetector output was sent to a lock-in amplifier (LIA), referenced to the modulation signal generated from the microwave source by an IQ mixer; with the detected signal from the photodetector and the reference signal entering the lock-in amplifier, open-loop detection with the NV atomic magnetometer is realized. We can plot ODMR spectra under continuous-wave (CW) microwaves. Owing to the high sensitivity of the lock-in amplifier, the minimum detectable voltage can be measured, and the minimum detectable frequency shift can be obtained from that minimum and the slope of the voltage signal.
The magnetic field sensitivity can be derived from η = δB·√T, corresponding to a 10 nT minimum detectable shift in the magnetic field. Furthermore, frequency analysis of the noise in the system indicates that at 10 Hz the sensitivity is less than 10 nT/√Hz.
Keywords: nitrogen-vacancy (NV) centers, frequency-modulated microwaves, magnetic field sensitivity, noise density
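The conversion from a measured ODMR resonance shift to a magnetic field, which underlies the quoted 10 nT figure, can be sketched using the NV gyromagnetic ratio of about 28 GHz/T; the example inputs are illustrative, not the experiment's readings:

```python
# NV gyromagnetic ratio: ~2.8 MHz/G = 28 GHz/T for the ms = +/-1 levels.
GAMMA_NV = 28.0e9   # Hz per tesla

def field_from_shift(delta_f_hz):
    """Zeeman shift of one ms = +/-1 ODMR resonance -> magnetic field (T)."""
    return delta_f_hz / GAMMA_NV

def sensitivity(min_detectable_b_t, measurement_time_s):
    """eta = delta_B * sqrt(T), in T/sqrt(Hz)."""
    return min_detectable_b_t * measurement_time_s ** 0.5

# Example: a 280 kHz resonance shift corresponds to ~10 microtesla.
b = field_from_shift(280e3)
```

In the lock-in scheme described above, `delta_f_hz` is obtained by dividing the minimum detectable LIA voltage by the slope of the voltage-versus-frequency signal.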
Procedia PDF Downloads 440
28 Production Optimization under Geological Uncertainty Using Distance-Based Clustering
Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe
Abstract:
It is important to determine reservoir properties for better production management. Due to limited information, there is geological uncertainty in very heterogeneous or channelized reservoirs. One solution is to generate multiple equiprobable realizations using geostatistical methods. However, some models have wrong properties, which need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme, based on distance-based clustering, for reliable application of production optimization algorithms. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity; the Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to represent the models in a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, whose production rates are most similar to the true values. Through this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating Hausdorff distances and projecting the models by MDS, we can see that the models group according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. For production optimization we use particle swarm optimization (PSO), a useful global search algorithm. PSO is good at finding the global optimum of an objective function, but it takes much time because it uses many particles and iterations.
In addition, if we use multiple reservoir models, the simulation time for PSO will soar. By using the proposed method, we can select good and reliable models that already match production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel way to select good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulations.Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization
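The clustering workflow described in this abstract (pairwise Hausdorff distances, a two-dimensional MDS projection, K-means grouping, one representative per cluster) can be sketched in a few lines of NumPy. This is a minimal illustration with toy point-set "models" and helper names of our own, not the authors' implementation:

```python
import numpy as np

def hausdorff(A, B):
    # symmetric Hausdorff distance between two point sets (n x d arrays)
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def mds_2d(D):
    # classical MDS: embed a pairwise-distance matrix into 2-D coordinates
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J          # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:2]         # two largest eigenvalues
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

def kmeans(X, k, iters=50):
    # plain Lloyd's algorithm; first k points as a simple deterministic init
    C = X[:k].copy()
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) for j in range(k)])
    return lab, C
```

Given a list of reservoir models as point sets, one fills `D[i, j] = hausdorff(models[i], models[j])`, projects with `mds_2d`, clusters with `kmeans`, and picks the member closest to each centroid as the cluster representative to simulate.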
Procedia PDF Downloads 14427 Nanoporous Metals Reinforced with Fullerenes
Authors: Deniz Ezgi Gülmez, Mesut Kirca
Abstract:
Nanoporous (np) metals have attracted considerable attention owing to their cellular morphology at the atomistic scale, which yields an ultra-high specific surface area and awards great potential for diverse applications such as catalysis, electrocatalysis, sensing, mechanics and optics. As one of the carbon-based nanostructures, fullerenes are another type of outstanding nanomaterial that has been extensively investigated due to remarkable chemical, mechanical and optical properties. In this study, the idea of improving the mechanical behavior of nanoporous metals by inclusion of fullerenes, which offers a new metal-carbon nanocomposite material, is examined and discussed. With this motivation, the tensile mechanical behavior of nanoporous metals reinforced with carbon fullerenes is investigated by classical molecular dynamics (MD) simulations. Atomistic models of the nanoporous metals with ultrathin ligaments are obtained through a stochastic process based on the intersection of spherical volumes, which has been used previously in the literature. According to this technique, the atoms within the ensemble of intersecting spherical volumes are removed from the pristine solid block of the selected metal, which results in porous structures with spherical cells. Following this, fullerene units are added into the cellular voids to obtain the final atomistic configurations for the numerical tensile tests. Several numerical specimens are prepared with different numbers of fullerenes per cell and with varied fullerene sizes. The LAMMPS code is used to perform classical MD simulations of uniaxial tension experiments on np models filled with fullerenes. The interactions between the metal atoms are modeled using the embedded atom method (EAM), while the adaptive intermolecular reactive empirical bond order (AIREBO) potential is employed for the interaction of carbon atoms.
Furthermore, atomic interactions between the metal and carbon atoms are represented by a Lennard-Jones potential with appropriate parameters. In conclusion, the ultimate goal of the study is to present the effects of fullerenes embedded into the cellular structure of np metals on the tensile response of the porous metals. The results are believed to be informative and instructive for experimentalists seeking to synthesize hybrid nanoporous materials with improved properties and multifunctional characteristics.Keywords: fullerene, intersecting spheres, molecular dynamics, nanoporous metals
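The intersecting-spheres construction mentioned above is straightforward to prototype. The sketch below is our own toy version, not the authors' code: it carves spherical voids out of a simple-cubic block of atoms and reports the resulting porosity. A production model would start from the proper fcc lattice of the chosen metal and feed the surviving coordinates to LAMMPS.

```python
import numpy as np

def make_nanoporous(n_cell=12, a=1.0, n_spheres=8, radius=2.5, seed=1):
    """Carve spherical cells out of a cubic block of atoms (intersecting-spheres method)."""
    g = np.arange(n_cell) * a
    atoms = np.array(np.meshgrid(g, g, g, indexing="ij")).reshape(3, -1).T
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0.0, n_cell * a, size=(n_spheres, 3))
    keep = np.ones(len(atoms), dtype=bool)
    for c in centers:
        # remove every atom falling inside this spherical volume
        keep &= np.linalg.norm(atoms - c, axis=1) > radius
    porosity = 1.0 - keep.mean()
    return atoms[keep], porosity
```

Fullerene units would then be placed at the sphere centers to obtain the reinforced configurations described in the abstract.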
Procedia PDF Downloads 23926 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data in order to obtain an accurate estimate of the topography. The main features of this method are, on one hand, the ability to handle complex geometries with no need to rearrange the original model into an explicit form and, on the other hand, strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry.
Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and locations of observations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil
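For readers unfamiliar with the second stage, a stochastic Ensemble Kalman Filter analysis step can be written in a few lines of linear algebra. This is a generic textbook sketch with a linear observation operator, not the authors' adaptive-control variant; in the paper's setting the state vector would hold bed-elevation parameters and the observations free-surface measurements.

```python
import numpy as np

def enkf_analysis(ens, obs, H, obs_std, seed=0):
    """One stochastic EnKF update. ens: (m, n) state members; H: (p, n) observation operator."""
    rng = np.random.default_rng(seed)
    m = len(ens)
    X = ens - ens.mean(axis=0)                 # state anomalies
    Y = ens @ H.T                              # predicted observations
    Yp = Y - Y.mean(axis=0)
    Pyy = Yp.T @ Yp / (m - 1) + obs_std ** 2 * np.eye(len(obs))
    Pxy = X.T @ Yp / (m - 1)
    K = Pxy @ np.linalg.inv(Pyy)               # Kalman gain
    perturbed = obs + obs_std * rng.standard_normal((m, len(obs)))
    return ens + (perturbed - Y) @ K.T         # analysis ensemble
```

In the full method this update would be applied repeatedly, with the forward shallow-water solver replacing the linear operator when mapping each member to predicted free-surface data.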
Procedia PDF Downloads 13025 Protective Role of Curcumin against Ionising Radiation of Gamma Ray
Authors: Turban Kar, Maitree Bhattacharyya
Abstract:
Curcumin, a dietary antioxidant, has been identified as a wonder molecule possessing therapeutic properties that protect cellular macromolecules from oxidative damage. In our experimental study, we have explored the effectiveness of curcumin in protecting the structural integrity of Human Serum Albumin (HSA) exposed to gamma irradiation. HSA, an important transport protein of the circulatory system, binds a variety of metabolites, drugs, dyes and fatty acids due to the presence of hydrophobic pockets inside its structure. HSA is also actively involved in the transportation of drugs and metabolites to their targets because of its long half-life and its regulation of osmotic blood pressure. Gamma radiation, with increasing dose, results in structural alteration of the protein and superoxide radical generation. Curcumin, on the other hand, mitigates the damage, as evidenced by the following experiments. Our study explores the possibility of protection by curcumin during the molecular and conformational changes of HSA exposed to gamma irradiation. We used a combination of spectroscopic methods to probe the conformational ensemble of the irradiated HSA and finally evaluated the extent of restoration by curcumin. SDS-PAGE indicated the formation of cross-linked aggregates as a consequence of increasing exposure to gamma radiation. CD and FTIR spectroscopy revealed a significant decrease in the alpha-helix content of HSA, from 57% to 15%, with increasing radiation doses. Steady-state and time-resolved fluorescence studies complemented the spectroscopic measurements, with the lifetime decay significantly reduced from 6.35 ns to 0.37 ns. Hydrophobicity and bityrosine studies showed the effectiveness of curcumin for protection against radiation-induced free radical generation.
Moreover, bityrosine and hydrophobicity profiling of gamma-irradiated HSA in the presence and absence of curcumin shed light on the formation of ROS and the protective role of curcumin. The molecular mechanism of curcumin's protection of HSA from gamma irradiation is as yet unknown, though a possible explanation has been proposed in this work using a Thioflavin T assay. It was elucidated that when HSA is irradiated at a low dose of gamma radiation in the presence of curcumin, it is capable of retaining its native characteristic properties to a greater extent, indicating stabilization of the molecular structure. Thus, curcumin may be utilized as a therapeutic strategy to protect cellular proteins.Keywords: bityrosine content, conformational change, curcumin, gamma radiation, human serum albumin
Procedia PDF Downloads 15624 A Multi-Scale Study of Potential-Dependent Ammonia Synthesis on IrO₂ (110): DFT, 3D-RISM, and Microkinetic Modeling
Authors: Shih-Huang Pan, Tsuyoshi Miyazaki, Minoru Otani, Santhanamoorthi Nachimuthu, Jyh-Chiang Jiang
Abstract:
Ammonia (NH₃) is crucial in renewable energy and agriculture, yet its traditional production via the Haber-Bosch process faces challenges due to the inherent inertness of nitrogen (N₂) and the need for high temperatures and pressures. The electrocatalytic nitrogen reduction reaction (ENRR) presents a more sustainable option, functioning at ambient conditions. However, its advancement is limited by selectivity and efficiency challenges due to the competing hydrogen evolution reaction (HER). The critical roles of N-species protonation and the HER highlight the necessity of selecting optimal catalysts and solvents to enhance ENRR performance. Notably, transition metal oxides, with their adjustable electronic states and excellent chemical and thermal stability, have shown promising ENRR characteristics. In this study, we use density functional theory (DFT) methods to investigate the ENRR mechanisms on IrO₂ (110), a material known for its tunable electronic properties and exceptional chemical and thermal stability. Employing the constant electrode potential (CEP) model, where the electrode-electrolyte interface is treated as a polarizable continuum with implicit solvation, and adjusting electron counts to equalize work functions in the grand canonical ensemble, we further incorporate the advanced 3D Reference Interaction Site Model (3D-RISM) to accurately determine the ENRR limiting potential across various solvents and pH conditions. Our findings reveal that the limiting potential for the ENRR on IrO₂ (110) is significantly more favorable than for the HER, highlighting the efficiency of the IrO₂ catalyst for converting N₂ to NH₃. This is supported by the optimal *NH₃ desorption energy on IrO₂, which enhances the overall reaction efficiency. Microkinetic simulations further predict a promising NH₃ production rate, even at the solution's boiling point, reinforcing the catalytic viability of IrO₂ (110).
This comprehensive approach provides an atomic-level understanding of the electrode-electrolyte interface in ENRR, demonstrating the practical application of IrO₂ in electrochemical catalysis. The findings provide a foundation for developing more efficient and selective catalytic strategies, potentially revolutionizing industrial NH₃ production.Keywords: density functional theory, electrocatalyst, nitrogen reduction reaction, electrochemistry
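The limiting potential compared above has a simple arithmetic core: in a computational-hydrogen-electrode treatment, each one-electron reduction step's free energy shifts linearly with the applied potential, and the limiting potential is the value at which the least favorable step becomes thermoneutral. A minimal sketch with made-up step energies, not the paper's DFT values:

```python
def limiting_potential(dg_steps_ev):
    """Computational-hydrogen-electrode estimate for a reduction pathway.

    Each one-electron reduction step has dG(U) = dG(0) + e*U (dG in eV, U in V),
    so the potential that makes the hardest step thermoneutral is
    U_L = -max(dG(0)) / e.
    """
    return -max(dg_steps_ev)
```

Comparing the value returned for the ENRR pathway with the one for the HER pathway reproduces the kind of selectivity comparison reported in the abstract.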
Procedia PDF Downloads 2423 Using Locus Equations for Berber Consonant Labiovelarization
Authors: Ali Benali Djouher Leila
Abstract:
Labiovelarization of velar and labial consonants is a very widespread phenomenon. It is attested in all the major northern Berber dialects; only Tuareg is entirely unaware of it. But even within the large Berber-speaking regions of the north, it is very unstable: it may be completely absent from certain dialects (such as that of the Bougie region in Kabylie), and its extension and frequency can vary appreciably between the dialects that know it. Some dialects of Greater Kabylia or the Chleuh domain, for example, "labiovelarize" more than others from the same region. Thus, in Greater Kabylia, the adjective "large" will be pronounced amqqwran by the At Yiraten and amqqran by the At Yanni, a few kilometers away. One of the problems these segments raise is deciding whether they constitute one phoneme or two. All the criteria used by linguists in this kind of case lead to the conclusion that they are single phonemes (a phoneme and not a succession of two phonemes, /k + w/, for example). The phonetic and phonological criteria are moreover clearly confirmed by the morphological data since, in the system of verbal alternations, these complex segments are treated as single phonemes: agem, "to draw, to fetch water," and akwer, "to steal," have exactly the same morphology as asem, "to be jealous," arem, "to taste," ames, "to be dirty," or afeg, "to fly" ... verbs with two radical consonants (type aCC). At the level of notation, both scientific and practical, it is therefore necessary to represent the labiovelarized consonants by a single letter, possibly accompanied by a diacritic. In fact, actual practices are diverse. The scientific representation of this type does not seem adequate for everyday use because it is easy to produce only on a microcomputer. The Berber Documentation File used a small ° (as in n°) above the writing line: k°, g°..., which has the advantage of being easy to achieve since it is part of general typographical conventions in Latin script and is present on a typewriter keyboard.
Mouloud Mammeri, then the Berber Study Group of Vincennes (the review Tisuraf), and a majority of Kabyle practitioners over the last twenty years have used the sequence "consonant + semi-vowel /w/" (Cw) on the same line of writing; for all the reasons explained previously, this practice is not a good solution and should be abandoned, especially as it particularizes Kabyle within the Berber ensemble. In this study, we were interested in two velar consonants, /g/ and /k/, and their labiovelarized counterparts /gw/ and /kw/ (we adopted the addition of "w" for the representation, for ease of writing in graphical mode). The aim is to characterize these four consonants in order to see whether they have different places of articulation and whether they are distinct (whether these velars are distinct from their labiovelarized counterparts). This characterization is done using locus equations.Keywords: Berber consonants, labiovelarization, locus equations, acoustical characterization, Kabylian dialect, Algerian language
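A locus equation is simply a linear regression of the F2 frequency at consonant onset against the F2 at the vowel midpoint, computed over many consonant-vowel tokens; its slope and intercept index the place of articulation, which is what allows /g k/ to be separated from their labiovelarized counterparts. A minimal sketch of the fit, using our own synthetic formant values in Hz:

```python
import numpy as np

def locus_equation(f2_onset, f2_vowel):
    """Fit F2_onset = slope * F2_vowel + intercept over CV tokens."""
    slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)
    # the "locus" is the F2 value at which onset and vowel frequencies coincide
    locus = intercept / (1.0 - slope) if slope != 1.0 else float("nan")
    return slope, intercept, locus
```

In practice one fits a separate regression per consonant; distinct (slope, intercept) pairs for /g/ versus /gw/, or /k/ versus /kw/, are evidence that the plain and labiovelarized velars are acoustically distinct.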
Procedia PDF Downloads 7622 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, using natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite enabled prediction of specific vulnerabilities such as OS-command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
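The path-context representation named above can be illustrated with Python's own `ast` module (the study targets Java and C++ codebases, so this is only a toy analogue): each pair of identifier leaves is joined by the chain of AST node types running up through their lowest common ancestor and back down, giving the (token, path, token) triples that a Code2Vec-style model embeds.

```python
import ast

def path_contexts(src):
    """Toy path-context extraction in the spirit of Code2Vec (simplified)."""
    tree = ast.parse(src)
    leaves = []  # (identifier, chain of ancestor nodes from the root)
    def walk(node, chain):
        chain = chain + [node]
        if isinstance(node, ast.Name):
            leaves.append((node.id, chain))
        for child in ast.iter_child_nodes(node):
            walk(child, chain)
    walk(tree, [])
    out = []
    for i in range(len(leaves)):
        for j in range(i + 1, len(leaves)):
            a, ca = leaves[i]
            b, cb = leaves[j]
            k = 0                      # length of the shared ancestor prefix
            while k < min(len(ca), len(cb)) and ca[k] is cb[k]:
                k += 1
            up = [type(n).__name__ for n in reversed(ca[k:])]
            down = [type(n).__name__ for n in cb[k:]]
            path = up + [type(ca[k - 1]).__name__] + down
            out.append((a, "|".join(path), b))
    return out
```

For `def f(x, y): return x + y` this yields the triple `("x", "Name|BinOp|Name", "y")`, i.e. the two operands linked through their common `BinOp` ancestor.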
Procedia PDF Downloads 10921 An Adaptive Oversampling Technique for Imbalanced Datasets
Authors: Shaukat Ali Shahee, Usha Ananthakumar
Abstract:
A data set exhibits the class imbalance problem when one class has very few examples compared to the other class; this is also referred to as between-class imbalance. Traditional classifiers fail to classify minority class examples correctly due to their bias towards the majority class. Apart from between-class imbalance, within-class imbalance, where classes are composed of different numbers of sub-clusters that in turn contain different numbers of examples, also deteriorates the performance of the classifier. Previously, many methods have been proposed for handling the imbalanced dataset problem. These methods can be classified into four categories: data preprocessing, algorithm-based methods, cost-based methods and classifier ensembles. Data preprocessing techniques have shown great potential as they attempt to improve the data distribution rather than the classifier. A data preprocessing technique handles class imbalance either by increasing the minority class examples or by decreasing the majority class examples. Decreasing the majority class examples leads to loss of information, and when the minority class is absolutely rare, removing majority class examples is generally not recommended. Existing methods for handling class imbalance do not address both between-class imbalance and within-class imbalance simultaneously. In this paper, we propose a method that handles between-class imbalance and within-class imbalance simultaneously for binary classification problems. Removing both imbalances simultaneously eliminates the bias of the classifier towards bigger sub-clusters by minimizing the error domination of bigger sub-clusters in the total error. The proposed method uses model-based clustering to find the presence of sub-clusters or sub-concepts in the dataset. The number of examples oversampled among the sub-clusters is determined based on the complexity of the sub-clusters.
The method also takes into consideration the scatter of the data in the feature space and adaptively copes with unseen test data using the Lowner-John ellipsoid to increase the accuracy of the classifier. In this study, a neural network is used, as it is one classifier in which the total error is minimized, and removing the between-class and within-class imbalance simultaneously helps the classifier give equal weight to all the sub-clusters irrespective of the classes. The proposed method is validated on 9 publicly available data sets and compared with three existing oversampling techniques that rely on the spatial location of minority class examples in the Euclidean feature space. The experimental results show the proposed method to be statistically significantly superior to the other methods in terms of various accuracy measures. Thus the proposed method can serve as a good alternative for handling various problem domains like credit scoring, customer churn prediction, financial distress, etc., that typically involve imbalanced data sets.Keywords: classification, imbalanced dataset, Lowner-John ellipsoid, model based clustering, oversampling
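The sub-cluster-aware oversampling idea can be sketched independently of the model-based clustering and ellipsoid machinery: once each minority example carries a sub-cluster label, each sub-cluster is resampled up to a common share so that small sub-concepts are no longer dominated. This simplified sketch uses equal shares rather than the paper's complexity-based allocation, and the helper name is our own:

```python
import numpy as np

def cluster_aware_oversample(X_min, cluster_labels, target_total, seed=0):
    """Resample minority-class examples so every sub-cluster gets an equal share."""
    rng = np.random.default_rng(seed)
    clusters = np.unique(cluster_labels)
    share = target_total // len(clusters)
    parts = []
    for c in clusters:
        idx = np.flatnonzero(cluster_labels == c)
        # sample with replacement: small sub-clusters are oversampled the most
        parts.append(X_min[rng.choice(idx, size=share, replace=True)])
    return np.vstack(parts)
```

In the full method, the per-cluster share would instead grow with the estimated complexity of the sub-cluster, and synthetic (rather than duplicated) examples could be generated inside each sub-cluster.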
Procedia PDF Downloads 41820 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions with respect to the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered while developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. Substantial numbers of cases are tested for different engine configurations over a large span of speed and load points.
Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, the focus of this work is towards establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio etc.Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
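The physical backbone of such a semi-empirical model, an Arrhenius dependence on burned-zone temperature combined with a power-law dependence on oxygen availability, can be condensed into a single correlating expression whose constants are then calibrated against measured NOx. A sketch in the spirit of the thermal (Zeldovich) mechanism; the pre-exponential constant here is a placeholder, not a calibrated value:

```python
import math

def nox_rate(t_burn_k, o2_frac, a=1.0e10, theta=69090.0):
    """Thermal-NO style rate: a * sqrt([O2]) * exp(-theta / T_burn).

    theta ~ 69090 K is the activation temperature associated with the
    rate-limiting Zeldovich step; `a` is a placeholder calibration constant.
    """
    return a * math.sqrt(o2_frac) * math.exp(-theta / t_burn_k)
```

In a full model this rate would be integrated over crank angle from IVC to EVO using the burned-zone temperature trace, with `a` and additional correction terms (for EGR, injection timing, etc.) fitted to the steady-state measurements.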
Procedia PDF Downloads 11419 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images
Authors: Ravija Gunawardana, Banuka Athuraliya
Abstract:
Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, which is a state-of-the-art method for object detection and segmentation in images, to process chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction. The study used three different classifiers, namely Random Forest, K-Nearest Neighbor and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. 
The results showed promising accuracy rates for predicting diseases using symptoms, with the ensemble learning techniques significantly improving the accuracy of disease prediction. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine
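The symptom-side ensemble described above can be reduced to its core: hard predictions from the three classifiers (Random Forest, K-Nearest Neighbor, Support Vector Machine) combined by majority vote. A library-free sketch of just the voting step, using our own helper name rather than the study's code:

```python
import numpy as np

def majority_vote(*pred_arrays):
    """Combine hard class predictions (integer labels) from several classifiers."""
    P = np.stack([np.asarray(p) for p in pred_arrays])  # (n_classifiers, n_samples)
    # per sample, pick the label that receives the most votes
    return np.array([np.bincount(P[:, i]).argmax() for i in range(P.shape[1])])
```

For example, votes of [0, 1, 1], [0, 1, 0] and [1, 1, 0] from the three classifiers resolve to the combined prediction [0, 1, 0].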
Procedia PDF Downloads 15718 Rest Behavior and Restoration: Searching for Patterns through a Textual Analysis
Authors: Sandra Christina Gressler
Abstract:
Resting is essentially physical and mental relaxation. So, can behaviors that go beyond merely physical relaxation be understood, to some extent, as restoration behaviors? Studies on restorative environments emphasize the physical, mental and social benefits that some environments can provide and suggest that activities in natural environments reduce the stress of daily life, promoting recovery from daily wear. These studies, though specific in their results, do not unify the different possibilities of restoration. Considering the importance of restorative environments in promoting well-being, this research aims to verify the applicability of the theory of restorative environments in a Brazilian context, inquiring about resting environments and behavior. The research sought to achieve its goals by: a) identifying daily ways in which participants interact/connect with nature; b) identifying resting environments/behaviors; c) verifying whether rest strategies match the restorative environments suggested by restoration studies; and d) verifying different rest strategies related to time. Workers from different companies, in which certain functions require focused attention, and high school students from different schools participated in this study. An interview was used to collect data and information. The data obtained were compared with studies of attention restoration theory and stress recovery. The collected data were analyzed through basic descriptive inductive statistics and the software ALCESTE® (Analyse Lexicale par Contexte d'un Ensemble de Segments de Texte). The open questions investigate perception of nature on a daily basis (analysis using ALCESTE); rest periods, i.e., daily, weekends and holidays (analysis using ALCESTE with tri-croisé); and resting environments and activities (analysis using simple descriptive statistics).
According to the results, environments with natural characteristics that are compatible with personal desires (physical aspects and distance), and residential environments when they fulfill the characteristics of refuge, safety and self-expression (characteristics of primary territory), meet the requirements of restoration. Analyses suggest that the perception of nature has a wide range that goes beyond nearby objects that can be touched, as well as the observation and contemplation of details. The restoration processes described in studies of attention restoration theory occur gradually (hierarchically), starting with being away, followed by compatibility, fascination, and extent. They are also associated with the time that is available for rest. Relations between rest behaviors and the bio-demographic characteristics of the participants are noted. This reinforces, in restoration studies, the need to investigate not only the physical characteristics of the environment but also behavior, social relationships, subjective reactions, distance and available time. The complexity of the theme indicates the necessity of multimethod studies. As a practical contribution, the findings provide subsidies for developing strategies to promote the welfare of the population.Keywords: attention restoration theory, environmental psychology, rest behavior, restorative environments
Procedia PDF Downloads 19717 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration
Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu
Abstract:
Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards, such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast as possible, reliable, and cost-effective. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are classified after separation by the number of carbon atoms they contain: LSRN consists of hydrocarbons containing five to six carbons, HSRN of six to ten, and kerosene of sixteen to twenty-two. The physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using near-infrared spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration technique and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery
Procedia PDF Downloads 132
16 Molecular Dynamics Simulation of Realistic Biochar Models with Controlled Microporosity
Authors: Audrey Ngambia, Ondrej Masek, Valentina Erastova
Abstract:
Biochar is an amorphous carbon-rich material generated from the pyrolysis of biomass, with multifarious properties and functionality. Biochar has proven applications in the treatment of flue gas and of organic and inorganic pollutants in soil and water/wastewater, as a result of its multiple surface functional groups and porous structures. These properties have also shown potential in energy storage and carbon capture. The availability of diverse sources of biomass to produce biochar has increased interest in it as a sustainable and environmentally friendly material. The properties and porous structures of biochar vary depending on the type of biomass and the highest heat treatment temperature (HHT). Biochars produced at HHT between 400°C and 800°C generally have lower H/C and O/C ratios, and porosities, pore sizes and surface areas that increase with temperature. While all this is known experimentally, little is known about the role porous structure and functional groups play in processes occurring at the atomistic scale, which are extremely important for the optimization of biochar for applications, especially the adsorption of gases. Atomistic simulation methods have shown the potential to generate such amorphous materials; however, most available models are composed of only carbon atoms or graphitic sheets, which are very dense or have simple slit pores, ignoring the important role of heteroatoms such as O, N and S and of pore morphology. Hence, developing realistic models that integrate these parameters is important for understanding their role in governing adsorption mechanisms, which will aid in guiding the design and optimization of biochar materials for target applications. In this work, molecular dynamics simulations in the isobaric ensemble are used to generate realistic biochar models, taking into account experimentally determined H/C, O/C and N/C ratios, aromaticity, micropore size range, micropore volumes and true densities of biochars.
A pore generation approach was developed using virtual atoms, each a Lennard-Jones sphere of varying van der Waals radius and softness. Its interaction with the biochar matrix via a soft-core potential allows the creation of pores with rough surfaces, while varying the van der Waals radius parameters gives control over the pore-size distribution. We focused on microporosity, creating average pore sizes of 0.5 – 2 nm in diameter and pore volumes in the range of 0.05 – 1 cm³/g, which corresponds to experimental gas-adsorption micropore sizes of amorphous porous biochars. Realistic biochar models with surface functionalities, micropore size distributions and pore morphologies were developed, and they could aid in the study of adsorption processes in confined micropores.
Keywords: biochar, heteroatoms, micropore size, molecular dynamics simulations, surface functional groups, virtual atoms
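The key property of the virtual-atom approach is that a soft-core potential remains finite at zero separation, so a sphere can be "grown" inside the dense matrix and push it open into a pore without numerical blow-up. The sketch below illustrates this with a Beutler-style soft-core Lennard-Jones form; the functional form and all parameter values are illustrative assumptions, not taken from the paper.

```python
# A soft-core Lennard-Jones potential stays finite at r = 0, unlike the
# plain LJ form, which diverges. This is what lets a virtual atom be
# inserted anywhere in the matrix. Parameters are illustrative only.
import numpy as np

def softcore_lj(r, epsilon=1.0, sigma=0.5, lam=0.5, alpha=0.5):
    """Soft-core LJ energy (Beutler-style form); finite even at r = 0."""
    s6 = alpha * (1.0 - lam) + (r / sigma) ** 6
    return 4.0 * epsilon * lam * (1.0 / s6**2 - 1.0 / s6)

r = np.linspace(0.0, 2.0, 200)
energy = softcore_lj(r)
print(f"energy at r = 0: {energy[0]:.2f} (finite, unlike plain LJ)")
# Increasing sigma widens the repulsive core, i.e. a larger pore radius;
# ramping lam from 0 to 1 gradually "switches on" the virtual atom.
```

In a real MD run the same idea would be implemented through the engine's tabulated or free-energy soft-core interactions rather than a standalone function.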
Procedia PDF Downloads 71
15 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method
Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov
Abstract:
The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solution is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective of these techniques. It has high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and to obtain information on their shapes, sizes and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection, with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, total internal reflection allows one to study biological fluids at the level of single molecules, which also increases the sensitivity and informativeness of the results, because data obtained from an individual molecule are not averaged over an ensemble, which is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed here for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a pin photodiode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform were calculated, both in the regime of Brownian motion and under the action of the field, to obtain the parameters of the investigated object. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and on the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method for the examination of liquid solutions, including biological fluids.
Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection
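The signal-processing chain described above (photodetector signal, then FFT and autocorrelation) can be sketched as follows. This is an illustrative simulation, not the authors' setup: the sampling rate, beat frequency and noise level are invented, and the drift of a particle under the field is reduced to a single cosine beat term.

```python
# Illustrative sketch: a particle drifting under the field produces a
# Doppler beat; its frequency appears as a peak in the power spectrum,
# and the autocorrelation follows from the Wiener-Khinchin theorem.
# All numerical values are assumptions for demonstration.
import numpy as np

fs = 10_000.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)
f_doppler = 250.0                   # beat frequency from electrophoretic drift
rng = np.random.default_rng(1)
signal = np.cos(2 * np.pi * f_doppler * t) + 0.3 * rng.normal(size=t.size)

# Power spectrum via FFT: the Doppler peak encodes the drift velocity
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
peak = freqs[np.argmax(power[1:]) + 1]      # skip the DC bin

# Autocorrelation as the inverse FFT of the power spectrum
acf = np.fft.irfft(power)
acf /= acf[0]                               # normalize to acf(0) = 1

print(f"recovered Doppler peak: {peak:.0f} Hz")
```

In the Brownian-motion regime (field off) the same autocorrelation would instead show a pure diffusive decay, which is how the two regimes are separated in the analysis.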
Procedia PDF Downloads 216
14 Ensemble of Misplacement, Juxtaposing Feminine Identity in Time and Space: An Analysis of Works of Modern Iranian Female Photographers
Authors: Delaram Hosseinioun
Abstract:
In their collections, Shirin Neshat, Mitra Tabrizian, Gohar Dashti and Newsha Tavakolian adopt a hybrid form of narrative to confront the restrictions imposed on women in hegemonic public and private spaces. Focusing on motives such as social marginalisation, the crisis of belonging, and the lack of agency for women, the artists depict the regression of women's rights in their respective generations. Based on the ideas of Mikhail Bakhtin, namely his concept of polyphony or the plurality of contradictory voices, the views of Judith Butler on giving an account of oneself, and Henri Lefebvre's theories on social space, this study illustrates the artists' concept of identity in crisis through time and space. The research explores how the artists took their art as a novel dimension to depict and confront the hardships imposed on Iranian women. Henri Lefebvre makes a distinction between the complex social structures through which individuals situate, perceive and represent themselves. By adding Bakhtin's polyphonic view to Lefebvre's concepts of perceived and lived spaces, the study explores the sense of social fragmentation in the works of Dashti and Tavakolian. One argument is that, as the representatives of the contemporary generation of female artists who have spent their lives in Iran and faced a higher degree of restrictions, their hyperbolic and theatrical styles stand as a symbolic act of confrontation against the restrictive socio-cultural norms imposed on women. Further, the research explores the possibility of reclaiming one's voice and sense of agency through art, corresponding with the Bakhtinian sense of polyphony and Butler's concept of giving an account of oneself. The works of Neshat and Tabrizian, as representatives of the previous generation who faced exile and diaspora, encompass a higher degree of misplacement, violence and decay of women's presence. In their works, the women's body encompasses Lefebvre's dismantled temporal and spatial setting.
Notably, the ongoing social conviction and gender-based dogma imposed on women frame some of the recurring motifs shared among the selected collections of the four artists. By applying an interdisciplinary lens and integrating interviews conducted with the artists, the study illustrates how the artists seek a transcultural account for themselves and for the women of their generations. Further, the selected collections manifest the urgency of an authentic and liberal voice and setting for women, resonating with the concurrent Women, Life, Freedom movement in Iran.
Keywords: Persian modern female photographers, transcultural studies, Shirin Neshat, Mitra Tabrizian, Gohar Dashti, Newsha Tavakolian, Butler, Bakhtin, Lefebvre
Procedia PDF Downloads 78
13 Lake of Neuchatel: Effect of Increasing Storm Events on Littoral Transport and Coastal Structures
Authors: Charlotte Dreger, Erik Bollaert
Abstract:
This paper presents two environmentally friendly coastal structures realized on the Lake of Neuchâtel. Both structures reflect current environmental concerns on the lake and have been strongly affected by extreme meteorological conditions between their design period and their actual operational period. The Lake of Neuchâtel is one of the largest Swiss lakes, measuring around 38 km in length and 8.2 km in width, with a maximum water depth of 152 m. Its particular topographical alignment, situated between the Swiss Plateau and the Jura mountains, combines strong winds with large fetch values, resulting in significant wave heights during storm events at both the north-east and south-west lake extremities. In addition, due to flooding concerns, lake levels were historically lowered by several meters during the Jura correction works of the 19th and 20th centuries. Hence, during storm events, continuous erosion of the vulnerable molasse shorelines and sand banks generates frequent and abundant littoral transport from the center of the lake to its extremities. This phenomenon not only disturbs the ecosystem but also generates numerous problems at natural or man-made infrastructure located along the shorelines, such as reed plants, harbor entrances, and canals. A first example is provided at the southwestern extremity, near the city of Yverdon, where an ensemble of 11 small islands, the Iles des Vernes, was artificially created to enhance biological conditions and food availability for bird species during their migration, replacing at the same time two larger islands that were affected by a lack of morphodynamics and general vegetalization of their surfaces. The article presents the concept and dimensioning of these islands based on 2D numerical modelling, as well as their realization and follow-up campaigns.
In particular, the influence of several major storm events that occurred immediately after the works is pointed out. Second, a sediment retention dike at the northeastern extremity, at the entrance of the Canal de la Broye into the lake, is discussed. This canal is heavily used for navigation and suffers from frequent and significant sedimentation at its outlet. The new coastal structure has been designed to minimize sediment deposits around the canal's outlet into the lake by retaining the littoral transport during storm events. The article describes the basic assumptions used to design the dike, as well as the construction works and follow-up campaigns. In particular, the considerable influence of changing meteorological conditions on the littoral transport of the Lake of Neuchâtel since the project design ten years ago is pointed out. Not only are the intensity and frequency of storm events increasing, but the prevailing wind directions are also shifting, affecting the efficiency of the coastal structure in retaining sediments.
Keywords: meteorological evolution, sediment transport, Lake of Neuchâtel, numerical modelling, environmental measures
Procedia PDF Downloads 86
12 Impact of Climate Change on Irrigation and Hydropower Potential: A Case of Upper Blue Nile Basin in Western Ethiopia
Authors: Elias Jemal Abdella
Abstract:
The Blue Nile River is an important resource shared by Ethiopia and Sudan and also, because it is the major contributor of water to the main Nile River, Egypt. Despite the potential benefits of regional cooperation and integrated joint basin management, all three countries continue to pursue unilateral development plans. Besides, there is great uncertainty about the likely impacts of climate change on water availability for existing as well as proposed irrigation and hydropower projects in the Blue Nile Basin. The main objective of this study is to quantitatively assess the impact of climate change on the hydrological regime of the upper Blue Nile basin, western Ethiopia. Three models were combined. First, dynamic Coordinated Regional Climate Downscaling Experiment (CORDEX) regional climate models (RCMs) were used to determine climate projections for the Upper Blue Nile basin under the Representative Concentration Pathways (RCPs) 4.5 and 8.5 greenhouse gas emission scenarios for the period 2021-2050. The outputs generated from a multimodel ensemble of four CORDEX RCMs (i.e., rainfall and temperature) were used as input to a Soil and Water Assessment Tool (SWAT) hydrological model, which was set up, calibrated and validated with observed climate and hydrological data. The outputs from the SWAT model (i.e., projected river flows) were used as input to a Water Evaluation and Planning (WEAP) water resources model, which was used to determine the water resources implications of the changes in climate. The WEAP model was set up to simulate three development scenarios: the Current Development scenario represents the existing water resource development situation; the Medium-term Development scenario covers planned water resource developments expected to be commissioned before 2025; and the Long-term Full Development scenario includes all planned water resource developments likely to be commissioned before 2050.
The projected mean annual temperature for the period 2021 – 2050 over most of the basin is 1 to 1.4°C warmer than the baseline (1982 – 2005) average, implying an increase in evapotranspiration losses. Subbasins already distressed by drought may continue to face even greater challenges in the future. Projected mean annual precipitation varies from subbasin to subbasin: in the eastern, north-eastern and south-western highlands of the basin, mean annual precipitation is likely to increase by up to 7%, whereas in the western lowland part of the basin it is projected to decrease by 3%. The water use simulation indicates that current irrigation demand in the basin is 1.29 Bm³y⁻¹ for 122,765 ha of irrigated area. By 2025, with new schemes being developed, irrigation demand is estimated to increase to 2.5 Bm³y⁻¹ for 277,779 ha. By 2050, irrigation demand in the basin is estimated to increase to 3.4 Bm³y⁻¹ for 372,779 ha. The hydropower generation simulation indicates that 98% of the hydroelectric potential could be produced if all planned dams are constructed.
Keywords: Blue Nile River, climate change, hydropower, SWAT, WEAP
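The quoted demand and area figures can be cross-checked by computing the implied per-hectare water duty for each scenario. This is our own arithmetic on the numbers reported above, not a calculation from the paper; the scenario labels are shorthand.

```python
# Consistency check of the reported irrigation figures: demand divided
# by irrigated area gives a per-hectare water duty for each scenario.
# Figures are taken from the abstract; labels are shorthand.
scenarios = {
    "Current Development":   (1.29e9, 122_765),   # demand m3/yr, area ha
    "Medium-term (by 2025)": (2.5e9,  277_779),
    "Long-term (by 2050)":   (3.4e9,  372_779),
}
duties = {}
for name, (demand_m3, area_ha) in scenarios.items():
    duties[name] = demand_m3 / area_ha            # m3 per hectare per year
    print(f"{name}: {duties[name]:,.0f} m3/ha/yr")
```

The duties come out in the same rough band (around 9,000 – 10,500 m³/ha/yr), so the three scenarios imply a broadly constant per-hectare demand, with growth driven almost entirely by area expansion.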
Procedia PDF Downloads 355