Search results for: equivalent circuit models
5179 Removal of Nickel and Vanadium from Crude Oil by Using Solvent Extraction and Electrochemical Process
Authors: Aliya Kurbanova, Nurlan Akhmetov, Abilmansur Yeshmuratov, Yerzhigit Sugurbekov, Ramiz Zulkharnay, Gulzat Demeuova, Murat Baisariyev, Gulnar Sugurbekova
Abstract:
In recent decades, crude oils have tended to become more challenging to process due to increasing amounts of sour and heavy crudes. Some crude oils contain high vanadium and nickel content; for example, Pavlodar LLP crude oil contains more than 23.09 g/t nickel and 58.59 g/t vanadium. In this study, we used two types of metal removal methods: solvent extraction and electrochemical treatment. The present research provides a comparative analysis of deasphalting with organic solvents (cyclohexane, carbon tetrachloride, chloroform) and the electrochemical method. Applying cyclic voltammetric analysis (CVA) and inductively coupled plasma mass spectrometry (ICP-MS), these metal extraction methods were compared. The maximum efficiency of deasphalting with cyclohexane as the solvent in a Soxhlet extractor was 66.4% for nickel and 51.2% for vanadium removal from crude oil. Ni extraction reached a maximum of approximately 55% using the electrochemical method in an electrolysis cell developed for this research, which consists of three sections: an oil and protonating agent (EtOH) solution held between two conducting membranes, which separate it from two compartments of 10% sulfuric acid, with two graphite electrodes closing all three parts in an electrical circuit. Metal ions pass through the membranes and remain in the acid solutions. The best result was obtained in 60 minutes with an ethanol-to-oil ratio of 25% to 75%, a current in the range of 0.3 A to 0.4 A, and a voltage varying from 12.8 V to 17.3 V.
Keywords: demetallization, deasphalting, electrochemical removal, heavy metals, petroleum engineering, solvent extraction
Procedia PDF Downloads 326
5178 National Digital Soil Mapping Initiatives in Europe: A Review and Some Examples
Authors: Dominique Arrouays, Songchao Chen, Anne C. Richer-De-Forges
Abstract:
Soils are at the crossing of many issues such as food and water security, sustainable energy, climate change mitigation and adaptation, biodiversity protection, and human health and well-being. They deliver many ecosystem services that are essential to life on Earth. Therefore, there is a growing demand for soil information on a national and global scale. Unfortunately, many countries do not have detailed soil maps, and, when they exist, these maps are generally based on more or less complex and often non-harmonized soil classifications. An estimate of their uncertainty is also often missing. Thus, they are not easy to understand and are often not properly used by end-users. Therefore, there is an urgent need to provide end-users with spatially exhaustive grids of essential soil properties, together with an estimate of their uncertainty. One way to achieve this is digital soil mapping (DSM). The concept of DSM relies on the hypothesis that soils and their properties are not randomly distributed, but that they depend on the main soil-forming factors: climate, organisms, relief, parent material, time (age), and position in space. All these forming factors can be approximated using several exhaustive spatial products such as climatic grids, remote sensing products or vegetation maps, digital elevation models, geological or lithological maps, spatial coordinates of soil information, etc. Thus, DSM generally relies on models calibrated with existing observed soil data (point observations or maps) and so-called “ancillary covariates” that come from other available spatial products. The model is then generalized over grids where soil parameters are unknown in order to predict them, and the prediction performances are validated using various methods. With the growing demand for soil information at a national and global scale and the increase in available spatial covariates, national and continental DSM initiatives are continuously increasing. This short review illustrates the main national and continental advances in Europe, the diversity of the approaches and the databases that are used, the validation techniques, and the main scientific and other issues. Examples from several countries illustrate the variety of products that were delivered during the last ten years. The scientific production on this topic is continuously increasing, and new models and approaches are developed at an incredible speed. Most digital soil mapping (DSM) products rely mainly on machine learning (ML) prediction models and/or the use of pedotransfer functions (PTF) in which calibration data come from soil analyses performed in labs or from existing conventional maps. However, some scientific issues remain to be solved, as well as political and legal ones related, for instance, to data sharing and to different laws in different countries. Other issues relate to communication with end-users and education, especially on the use of uncertainty. Overall, the progress is very important, and the willingness of institutes and countries to join their efforts is increasing. Harmonization issues remain, mainly due to differences in classifications or in laboratory standards between countries. However, numerous initiatives are ongoing at the EU level and also at the global level.
All this progress is scientifically stimulating and promising for providing tools to improve and monitor soil quality at the national, EU, and global levels.
Keywords: digital soil mapping, global soil mapping, national and European initiatives, global soil mapping products, mini-review
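As a minimal illustration of the DSM workflow described above (a model calibrated on point observations with spatial covariates, validated on held-out data, then generalized over a grid), the following Python sketch uses a random forest; the covariate names and the synthetic data are assumptions for illustration, not material from the review.

```python
# Minimal DSM-style sketch: calibrate on point observations with covariates,
# validate on held-out points, then predict over a covariate grid.
# Covariate names and the synthetic data are illustrative, not a real soil survey.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(42)
n = 400                                                   # soil point observations
obs = pd.DataFrame({"elevation": rng.uniform(0, 1500, n),
                    "slope": rng.uniform(0, 30, n),
                    "ndvi": rng.uniform(0, 1, n),
                    "mean_annual_temp": rng.uniform(5, 15, n)})
# synthetic target: soil organic carbon depending on the covariates plus noise
obs["soc"] = 30 - 0.01 * obs["elevation"] + 10 * obs["ndvi"] + rng.normal(0, 2, n)

covariates = ["elevation", "slope", "ndvi", "mean_annual_temp"]
X_train, X_test, y_train, y_test = train_test_split(obs[covariates], obs["soc"],
                                                    test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)

pred = model.predict(X_test)
print("R2  :", round(r2_score(y_test, pred), 3))
print("RMSE:", round(mean_squared_error(y_test, pred) ** 0.5, 3))
# model.predict(grid[covariates]) would then generalize the model over the full grid
```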
Procedia PDF Downloads 184
5177 Astragaioside IV Inhibits Type 2 Allergic Contact Dermatitis in Mice and the Mechanism Through TLRs-NF-kB Pathway
Authors: Xiao Wei, Dandan Sheng, Xiaoyan Jiang, Lili Gui, Huizhu Wang, Xi Yu, Hailiang Liu, Min Hong
Abstract:
Objective: A mouse Type 2 allergic contact dermatitis model was used in this study to explore the effect of AS-IV on Type 2 allergic inflammation. Methods: The mice were topically sensitized on the shaved abdominal skin with 1.5% FITC solution on days 1 and 2 and challenged on the right ear with 0.5% FITC solution on day 6. Mice were treated with either AS-IV or normal saline from day 1 to day 5 (induction phase). Auricle swelling was measured 24 h after the challenge. Ear pathohistological examination was carried out by HE staining. IL-4, IL-13, and IL-9 levels in ear tissue were detected by ELISA. Mice treated with AS-IV at the initial stage of the induction phase had ear tissue taken on day 3; the TSLP level in ear tissue was detected by ELISA, and TSLP mRNA, NF-kB mRNA, and TLR (TLR2, TLR3, TLR8, TLR9) mRNA were detected by PCR. Results: AS-IV given during the induction phase evidently inhibited auricle inflammation in the models; pathohistological results indicated that it alleviated local edema and angiectasis in the mouse models and reduced lymphocytic infiltration. AS-IV given during the induction phase markedly decreased IL-4, IL-13, and IL-9 levels in ear tissue. Moreover, at the initial stage of the induction phase, AS-IV significantly reduced TSLP, TSLP mRNA, NF-kB mRNA, TLR2 mRNA, and TLR8 mRNA levels in ear tissue. Conclusion: Administration of AS-IV during the induction phase could significantly inhibit Type 2 allergic contact dermatitis in mice, and the mechanism may be related to the regulation of TSLP through the TLRs-NF-kB pathway.
Keywords: Astragaioside IV, allergic contact dermatitis, TSLP, interleukin-4, interleukin-13, interleukin-9
Procedia PDF Downloads 431
5176 The Effect of Sulfur and Calcium on the Formation of Dioxin in a Bubbling Fluidized Bed Incinerator
Authors: Chien-Song Chyang, Wei-Chih Wang
Abstract:
For the incineration process, the inhibition of dioxin formation is an important issue. Many investigations indicate that adding sulfur compounds to the combustion process can effectively inhibit dioxin formation. In this process, the sulfur-to-chlorine ratio plays an important role in the reduction efficiency of dioxin formation. Ca-based sorbents are also commonly used for acid gas removal; moreover, this is an indirect route to dioxin inhibition. Although sulfur and calcium can both reduce dioxin formation, some confusion still exists about these additives. This study aims to understand and clarify the relationship between dioxin and the simultaneous addition of sulfur and calcium. Experimental data obtained in a pilot-scale fluidized bed combustion system under various operating conditions are analyzed comprehensively. The focus of this study is on the dioxin in the fly ash. The experimental data showed that the PCDD/F concentration in the fly ash collected from the baghouse increased slightly with the simultaneous addition of sulfur and calcium. This work describes the CO concentration with the addition of sulfur and calcium at freeboard temperatures from 800°C to 900°C, which is raised by fuel complexity. A positive correlation exists between the dioxin concentration and the CO concentration and the carbon content of the fly ash. At the same sulfur/chlorine ratio, the toxic equivalent quantity (TEQ) can be reduced by increasing the actual concentrations of sulfur and calcium. The homologue profiles showed that P₅CDD and P₅CDF were the two major contributors to dioxin toxicity. 2,3,7,8-TCDD and 2,3,7,8-TCDF were reduced by the addition of pyrite and hydrated lime. The experimental results showed that the trend of PCDD/F concentration in the fly ash differed for different sulfur/chlorine ratios with the addition of sulfur at 800°C.
Keywords: reduction of dioxin emissions, sulfur-to-chlorine ratio, de-chlorination, Ca-based sorbent
Procedia PDF Downloads 147
5175 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing
Authors: Ahmed Elaksher, Islam Omar
Abstract:
Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images that are used for topographic mapping. Most of these satellites carry push-broom sensors. These sensors are optical scanners equipped with linear arrays of CCDs and have been deployed on most EOSs. In addition, the LROC is equipped with two push-broom NACs that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the MRO and the HRSC carried by MEX are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior orientation parameters and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and softcopies with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to a reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must be on the same line. The rotation angles for each CCD array at the epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t). The parameter (t) is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns differently in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the PDS. The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment model, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to those of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process is much more cost- and effort-consuming.
Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition
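For reference, the time-dependent collinearity condition used for a push-broom scan line is commonly written as follows; this is a sketch of the standard textbook formulation with generic symbols, not necessarily the authors' exact notation or polynomial orders.

```latex
\begin{aligned}
x - x_0 &= -f\,\frac{m_{11}(t)\,(X - X_S(t)) + m_{12}(t)\,(Y - Y_S(t)) + m_{13}(t)\,(Z - Z_S(t))}
                    {m_{31}(t)\,(X - X_S(t)) + m_{32}(t)\,(Y - Y_S(t)) + m_{33}(t)\,(Z - Z_S(t))},\\[4pt]
0 - y_0 &= -f\,\frac{m_{21}(t)\,(X - X_S(t)) + m_{22}(t)\,(Y - Y_S(t)) + m_{23}(t)\,(Z - Z_S(t))}
                    {m_{31}(t)\,(X - X_S(t)) + m_{32}(t)\,(Y - Y_S(t)) + m_{33}(t)\,(Z - Z_S(t))},\\[4pt]
X_S(t) &= a_0 + a_1 t + a_2 t^2, \qquad \omega(t) = c_0 + c_1 t + c_2 t^2
\quad (\text{similarly for } Y_S,\ Z_S,\ \varphi,\ \kappa),
\end{aligned}
```

where f is the focal length, m_ij(t) are elements of the rotation matrix at epoch t, (X_S, Y_S, Z_S) is the exposure station, and the along-track image coordinate is fixed at zero for each scan line.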
Procedia PDF Downloads 63
5174 ALEF: An Enhanced Approach to Arabic-English Bilingual Translation
Authors: Abdul Muqsit Abbasi, Ibrahim Chhipa, Asad Anwer, Saad Farooq, Hassan Berry, Sonu Kumar, Sundar Ali, Muhammad Owais Mahmood, Areeb Ur Rehman, Bahram Baloch
Abstract:
Accurate translation between structurally diverse languages, such as Arabic and English, presents a critical challenge in natural language processing due to significant linguistic and cultural differences. This paper investigates the effectiveness of Facebook’s mBART model, fine-tuned specifically for sequence-to-sequence (seq2seq) translation tasks between Arabic and English, and enhanced through advanced refinement techniques. Our approach leverages the Alef Dataset, a meticulously curated parallel corpus spanning various domains to capture the linguistic richness, nuances, and contextual accuracy essential for high-quality translation. We further refine the model’s output using advanced language models such as GPT-3.5 and GPT-4, which improve fluency and coherence and correct grammatical errors in translated texts. The fine-tuned model demonstrates substantial improvements, achieving a BLEU score of 38.97, a METEOR score of 58.11, and a TER score of 56.33, surpassing widely used systems such as Google Translate. These results underscore the potential of mBART, combined with refinement strategies, to bridge the translation gap between Arabic and English, providing a reliable, context-aware machine translation solution that is robust across diverse linguistic contexts.
Keywords: natural language processing, machine translation, fine-tuning, Arabic-English translation, transformer models, seq2seq translation, translation evaluation metrics, cross-linguistic communication
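A minimal sketch of seq2seq fine-tuning with a pretrained mBART checkpoint is shown below, using the Hugging Face Transformers API (a reasonably recent version is assumed). The checkpoint name, the toy sentence pair, and the hyperparameters are illustrative assumptions, not the paper's configuration or the Alef corpus.

```python
# Sketch of seq2seq fine-tuning of an mBART checkpoint for Arabic -> English.
# Checkpoint name, sentence pair, and hyperparameters are illustrative only.
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

checkpoint = "facebook/mbart-large-50-many-to-many-mmt"
tokenizer = MBart50TokenizerFast.from_pretrained(checkpoint, src_lang="ar_AR", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(checkpoint)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One toy parallel example standing in for a batch from the training corpus
batch = tokenizer(["مرحبا بالعالم"], text_target=["Hello, world"],
                  return_tensors="pt", padding=True)

model.train()
loss = model(**batch).loss        # cross-entropy over the target tokens
loss.backward()
optimizer.step()

# After fine-tuning, generate translations for evaluation (BLEU/METEOR/TER on the output)
generated = model.generate(**tokenizer(["كيف حالك؟"], return_tensors="pt"),
                           forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```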
Procedia PDF Downloads 15
5173 Physicochemical Characterization of Medium Alkyd Resins Prepared with a Mixture of Linum usitatissimum L. and Plukenetia volubilis L. Oils
Authors: Antonella Hadzich, Santiago Flores
Abstract:
Alkyds have become essential raw materials in the coating and paint industry, due to their low cost, good application properties and lower environmental impact in comparison with petroleum-based polymers. The properties of these oil-modified materials depend on the type of polyunsaturated vegetable oil used for its manufacturing, since a higher degree of unsaturation provides a better crosslinking of the cured paint. Linum usitatissimum L. (flax) oil is widely used to develop alkyd resins due to its high degree of unsaturation. Although it is intended to find non-traditional sources and increase their commercial value, to authors’ best knowledge a natural source that can replace flaxseed oil has not yet been found. However, Plukenetia volubilis L. oil, of Peruvian origin, contains a similar fatty acid polyunsaturated content to the one reported for Linum usitatissimum L. oil. In this perspective, medium alkyd resins were prepared with a mixture of 50% of Linum usitatissimum L. oil and 50% of Plukenetia volubilis L. oil. Pure Linum usitatissimum L. oil was also used for comparison purposes. Three different resins were obtained by varying the amount of glycerol and pentaerythritol. The synthesized alkyd resins were characterized by FT-IR, and physicochemical properties like acid value, colour, viscosity, density and drying time were evaluated by standard methods. The pencil hardness and chemical resistance behaviour of the cured resins were also studied. Overall, it can be concluded that medium alkyd resins containing Plukenetia volubilis L. oil have an equivalent behaviour compared to those prepared purely with Linum usitatissimum L. oil. Both Plukenetia volubilis L. oil and pentaerythritol have a remarkable influence on certain physicochemical properties of medium alkyd resins.Keywords: alkyd resins, flaxseed oil, pentaerythritol, Plukenetia volubilis L. oil, protective coating
Procedia PDF Downloads 122
5172 A Sub-Conjunctiva Injection of Rosiglitazone for Anti-Fibrosis Treatment after Glaucoma Filtration Surgery
Authors: Yang Zhao, Feng Zhang, Xuanchu Duan
Abstract:
Trans-differentiation of human Tenon fibroblasts (HTFs) into myofibroblasts and fibrosis of episcleral tissue are the most common reasons for failure of glaucoma filtration surgery, with limited treatment options such as antimetabolites, which often have side effects such as leakage of the filtering bleb, infection, hypotony, and endophthalmitis. Rosiglitazone, a specific thiazolidinedione, is a synthetic high-affinity ligand for PPAR-γ that has been used in the treatment of type 2 diabetes and has been found to have pleiotropic functions against inflammatory response, cell proliferation, and tissue fibrosis, benefiting a variety of diseases in animal myocardium models, steatohepatitis models, etc. Here, in vitro, we cultured primary HTFs, stimulated them with TGF-β to induce a myofibrogenic phenotype, and then treated the cells with rosiglitazone to assess the fibrogenic response. In vivo, we used a rabbit glaucoma model to establish the formation of post-trabeculectomy scarring. We then administered a subconjunctival injection of rosiglitazone beside the filtering bleb; later, protein, mRNA, and immunofluorescence of fibrogenic markers were checked, and the condition of the filtering bleb was assessed. In vitro, we found that rosiglitazone could suppress the proliferation and migration of fibroblasts through macroautophagy via the TGF-β/Smad signaling pathway. In vivo, on postoperative day 28, the mean number of fibroblasts in the rosiglitazone injection group was significantly the lowest, with the least collagen content and connective tissue growth factor. Rosiglitazone effectively controlled human and rabbit fibroblasts in vivo and in vitro. Its subconjunctival application may represent an effective new avenue for the prevention of scarring after glaucoma surgery.
Keywords: fibrosis, glaucoma, macroautophagy, rosiglitazone
Procedia PDF Downloads 274
5171 [Keynote Talk]: Mathematical and Numerical Modelling of the Cardiovascular System: Macroscale, Mesoscale and Microscale Applications
Authors: Aymen Laadhari
Abstract:
The cardiovascular system is centered on the heart and is characterized by a very complex structure with different physical scales in space (e.g. micrometers for erythrocytes and centimeters for organs) and time (e.g. milliseconds for human brain activity and several years for development of some pathologies). The development and numerical implementation of mathematical models of the cardiovascular system is a tremendously challenging topic at the theoretical and computational levels, inducing consequently a growing interest over the past decade. The accurate computational investigations in both healthy and pathological cases of processes related to the functioning of the human cardiovascular system can be of great potential in tackling several problems of clinical relevance and in improving the diagnosis of specific diseases. In this talk, we focus on the specific task of simulating three particular phenomena related to the cardiovascular system on the macroscopic, mesoscopic and microscopic scales, respectively. Namely, we develop numerical methodologies tailored for the simulation of (i) the haemodynamics (i.e., fluid mechanics of blood) in the aorta and sinus of Valsalva interacting with highly deformable thin leaflets, (ii) the hyperelastic anisotropic behaviour of cardiomyocytes and the influence of calcium concentrations on the contraction of single cells, and (iii) the dynamics of red blood cells in microvasculature. For each problem, we present an appropriate fully Eulerian finite element methodology. We report several numerical examples to address in detail the relevance of the mathematical models in terms of physiological meaning and to illustrate the accuracy and efficiency of the numerical methods.Keywords: finite element method, cardiovascular system, Eulerian framework, haemodynamics, heart valve, cardiomyocyte, red blood cell
Procedia PDF Downloads 252
5170 Ecosystem Model for Environmental Applications
Authors: Cristina Schreiner, Romeo Ciobanu, Marius Pislaru
Abstract:
This paper aims to build a system based on fuzzy models that can be implemented in the assessment of ecological systems, in order to determine appropriate methods of action for reducing adverse effects on the environment and, implicitly, on the population. The proposed model provides a new perspective for environmental assessment, and it can be used as a practical instrument for decision-making.
Keywords: ecosystem model, environmental security, fuzzy logic, sustainability of habitable regions
Procedia PDF Downloads 420
5169 Mature Field Rejuvenation Using Hydraulic Fracturing: A Case Study of Tight Mature Oilfield with Reveal Simulator
Authors: Amir Gharavi, Mohamed Hassan, Amjad Shah
Abstract:
The main characteristics of unconventional reservoirs include low to ultra-low permeability and low-to-moderate porosity. As a result, hydrocarbon production from these reservoirs requires different extraction technologies than conventional resources. An unconventional reservoir must be stimulated to produce hydrocarbons at an acceptable flow rate in order to recover commercial quantities of hydrocarbons. Permeability for unconventional reservoirs is mostly below 0.1 mD, and reservoirs with permeability above 0.1 mD are generally considered to be conventional. The hydrocarbons held in these formations will not naturally move towards producing wells at economic rates without aid from hydraulic fracturing, which is the only technique for accessing production from these tight reservoirs. A horizontal well with multi-stage fracking is the key technique to maximize stimulated reservoir volume and achieve commercial production. The main objective of this research paper is to investigate development options for a tight mature oilfield. This includes multistage hydraulic fracturing and spacing, by building reservoir models in the Reveal simulator to model potential development options based on sidetracking the existing vertical well. To simulate potential options, reservoir models were built in Reveal. An existing Petrel geological model was used to build the static parts of these models. An FBHP limit of 40 bars was assumed to take into account pump operating limits and to maintain the reservoir pressure above the bubble point. Lateral lengths of 300 m, 600 m and 900 m were modelled, in conjunction with 4, 6 and 8 stages of fracs. Simulation results indicate that higher initial recoveries and peak oil rates are obtained with longer well lengths and also with more fracs and spacing. For a 25-year forecast, the ultimate recovery ranged from 0.4% to 2.56% for the 300 m and 1000 m laterals, respectively. The 900 m lateral with 8 fracs at 100 m spacing gave the highest peak rate of 120 m³/day, with the 600 m and 300 m cases giving initial peak rates of 110 m³/day. Similarly, the recovery factor for the 900 m lateral with 8 fracs and 100 m spacing was the highest at 2.65% after 25 years. The corresponding values for the 300 m and 600 m laterals were 2.37% and 2.42%. Therefore, the study suggests that longer laterals with 8 fracs and 100 m spacing provide the optimal recovery, and this design is recommended as the basis for further study.
Keywords: unconventional, resource, hydraulic, fracturing
Procedia PDF Downloads 298
5168 Evidence Theory Based Emergency Multi-Attribute Group Decision-Making: Application in Facility Location Problem
Authors: Bidzina Matsaberidze
Abstract:
It is known that, in emergency situations, multi-attribute group decision-making (MAGDM) models are characterized by insufficient objective data and a lack of time to respond to the task. Evidence theory is an effective tool for describing such incomplete information in decision-making models when the expert and his knowledge are involved in the estimation of the MAGDM parameters. We consider an emergency decision-making model where expert assessments on humanitarian aid from distribution centers (HADC) are represented as q-rung orthopair fuzzy numbers, and the data structure is described within the theory of evidence bodies. Based on focal probability construction and experts’ evaluations, an objective function, the distribution centers’ selection ranking index, is constructed. Our approach to solving the constructed bicriteria partitioning problem consists of two phases. In the first phase, based on the covering matrix, we generate a matrix whose columns allow us to find all possible partitionings of the HADCs with the service centers. Some constraints are also taken into consideration while generating the matrix. In the second phase, based on this matrix and using our exact algorithm, we find the partitionings (allocations of the HADCs to the centers) that correspond to the Pareto-optimal solutions. For an illustration of the obtained results, a numerical example is given for the facility location-selection problem.
Keywords: emergency MAGDM, q-rung orthopair fuzzy sets, evidence theory, HADC, facility location problem, multi-objective combinatorial optimization problem, Pareto-optimal solutions
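The second phase described above keeps only the non-dominated (Pareto-optimal) allocations. A generic sketch of such a Pareto filter over candidate partitionings of a bicriteria minimization problem is shown below; the candidate labels and objective values are placeholders, not data from the paper.

```python
# Keep the Pareto-optimal (non-dominated) solutions of a bicriteria minimization
# problem. Each candidate is (label, (f1, f2)); the values below are placeholders.
candidates = [("P1", (12.0, 7.5)), ("P2", (10.5, 9.0)),
              ("P3", (13.0, 8.0)), ("P4", (10.5, 7.0))]

def dominates(a, b):
    """a dominates b if a is no worse in both objectives and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [c for c in candidates
          if not any(dominates(other[1], c[1]) for other in candidates if other is not c)]
print(pareto)   # -> [('P4', (10.5, 7.0))] for these placeholder values
```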
Procedia PDF Downloads 92
5167 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data
Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa
Abstract:
A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modelling various data types. The proposed distribution has a large number of well-known lifetime special sub-models such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed “bathtub”-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competitive distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set are also carried out.
Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation
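As a small illustration of the maximum-likelihood step for the standard log-logistic sub-model (not the authors' generalized three-parameter form), SciPy's `fisk` distribution is the log-logistic; the data below are simulated, not the paper's survival data set.

```python
# Maximum-likelihood fit of the (standard) log-logistic distribution to
# survival-type data; scipy.stats.fisk is the log-logistic. Data are simulated.
import numpy as np
from scipy import stats

sample = stats.fisk.rvs(c=2.5, scale=10.0, size=500, random_state=1)  # simulated lifetimes

shape_hat, loc_hat, scale_hat = stats.fisk.fit(sample, floc=0)        # fix location at 0
print(f"shape ≈ {shape_hat:.2f}, scale ≈ {scale_hat:.2f}")

# Hazard function h(t) = f(t) / S(t); useful for inspecting its monotone/unimodal shape
t = np.linspace(0.1, 40, 200)
hazard = stats.fisk.pdf(t, shape_hat, scale=scale_hat) / stats.fisk.sf(t, shape_hat, scale=scale_hat)
```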
Procedia PDF Downloads 202
5166 Setting up Model Hospitals in Health Care Waste Management in Madagascar
Authors: Sandrine Andriantsimietry, Hantanirina Ravaosendrasoa
Abstract:
In 2018, Madagascar set up its first best available technology, an autoclave, to treat health care waste in public hospitals according to best environmental practices in health care waste management. Incineration of health care waste, frequently through open burning, is the most common practice for treatment and elimination of health care waste across the country. The autoclave is a best available technology for non-incineration treatment of health care waste that permits recycling of treated waste and prevents harm to the environment through the reduction of unintended persistent organic pollutants from the health sector. A Global Environment Fund project supported the introduction of non-incineration treatment of health care waste to help countries in Africa move towards the Stockholm Convention objectives in the health sector. Two teaching hospitals in Antananarivo and one district hospital in Manjakandriana were equipped with 1300 L, 250 L and 80 L autoclaves, respectively. The capacity of these model hospitals was strengthened by the donation of equipment and materials and the training of health workers in best environmental practices in health care waste management. Proper segregation of waste in the wards to collect the infectious waste treated in the autoclave was the main step guaranteeing cost-efficient non-incineration of health care waste. Therefore, the switch from incineration to non-incineration treatment was started progressively in each ward under the close supervision of a hygienist. The emissions of unintended persistent organic pollutants avoided during these four months of autoclave use amount to 9.4 g toxic equivalent per year. Public hospitals in low-income countries can be models of best environmental practices in health care waste management, but internal efforts must be made to sustain them.
Keywords: autoclave, health care waste management, model hospitals, non-incineration
Procedia PDF Downloads 163
5165 The Impact of a Sustainable Solar Heating System on the Growth of Strawberry Plants in an Agricultural Greenhouse
Authors: Ilham Ihoume, Rachid Tadili, Nora Arbaoui
Abstract:
The use of solar energy is a crucial tactic in the agricultural industry's plan to decrease greenhouse gas emissions. This clean source of energy can greatly lower the sector's carbon footprint and make a significant impact in the fight against climate change. In this regard, this study examines the effects of a solar-based heating system, in a north-south oriented agricultural greenhouse on the development of strawberry plants during winter. This system relies on the circulation of water as a heat transfer fluid in a closed circuit installed on the greenhouse roof to store heat during the day and release it inside at night. A comparative experimental study was conducted in two greenhouses, one experimental with the solar heating system and the other for control without any heating system. Both greenhouses are located on the terrace of the Solar Energy and Environment Laboratory of the Mohammed V University in Rabat, Morocco. The developed heating system consists of a copper coil inserted in double glazing and placed on the roof of the greenhouse, a water pump circulator, a battery, and a photovoltaic solar panel to power the electrical components. This inexpensive and environmentally friendly system allows the greenhouse to be heated during the winter and improves its microclimate system. This improvement resulted in an increase in the air temperature inside the experimental greenhouse by 6 °C and 8 °C, and a reduction in its relative humidity by 23% and 35% compared to the control greenhouse and the ambient air, respectively, throughout the winter. For the agronomic performance, it was observed that the production was 17 days earlier than in the control greenhouse.Keywords: sustainability, thermal energy storage, solar energy, agriculture greenhouse
Procedia PDF Downloads 88
5164 A Novel Rapid Well Control Technique Modelled in Computational Fluid Dynamics Software
Authors: Michael Williams
Abstract:
The ability to control a flowing well is of the utmost importance. During the kill phase, heavy-weight kill mud is circulated around the well. While this increases bottom hole pressure, damage to the near-wellbore formation is also increased. The addition of high-density spherical objects has the potential to minimise this near-wellbore damage, increase bottom hole pressure, and reduce the operational time needed to kill the well. This time saving comes from the rapid deployment of high-density spherical objects instead of building high-density drilling fluid. The research aims to model the well kill process using Computational Fluid Dynamics software. A model has been created as a proof of concept to analyse the flow of micron-sized spherical objects in the drilling fluid. Initial results show that this new methodology of spherical objects in drilling fluid agrees with the traditional streamlines seen in non-particle flow. Additional models have been created to demonstrate that areas of higher flow rate around the bit can lead to an increased probability of washout of formations but do not affect the flow of micron-sized spherical objects. Interestingly, areas that experience dimensional changes, such as tool joints and various BHA components, do not appear at this initial stage to experience increased velocity or to create areas of turbulent flow, which bodes well for borehole stability. In conclusion, the initial models of this novel well control methodology have not demonstrated any adverse flow patterns, which suggests that the method may be viable under field conditions.
Keywords: well control, fluid mechanics, safety, environment
Procedia PDF Downloads 171
5163 Aerobic Capacity Outcomes after an Aerobic Exercise Program with an Upper Body Ergometer in Diabetic Amputees
Authors: Cecilia Estela Jiménez Pérez Campos
Abstract:
Introduction: Amputation results from a series of complications in diabetic persons; at that point in the evolution of the illness, they have a severely diminished aerobic capacity. In addition, cardiac rehabilitation programs are largely based on activities performed in a standing position. Cardiac rehabilitation programs therefore need to be improved for these patients, based on scientific evidence. Objective: To evaluate the aerobic capacity of diabetic amputees after an aerobic exercise program with an upper limb ergometer. Methodology: The design is longitudinal, prospective, comparative, and non-randomized. We included all diabetic pelvic limb amputees who attend cardiac rehabilitation. We formed two groups: an experimental group and a control group. The patients performed exercise testing with a protocol designed by the author. The experimental group completed 24 exercise sessions (3 sessions/week), at an intensity determined by the training heart rate. At the end of the 8-week period, the subjects performed a second exercise test. Results: Both groups were homogeneous samples in age (experimental, n=15: 57.6±12.5 years; control, n=8: 52.5±8.0 years), sex, occupation, education, and economic features (chi-square, p=0.28). The initial aerobic capacity was similar in both groups. The aerobic capacity achieved after the program was statistically greater in the experimental group than in the control group. The final mean VO2peak (mL O2/kg/min) was 17.1±3.8 in the experimental group and 10.5±3.8 in the control group, p=0.001 (Student's t-test). Conclusions: Aerobic capacity improved after an arm ergometer exercise program, and quality of life improved as well in diabetic amputees. Therefore, this program is fundamental in the rehabilitation management of diabetic amputees.
Keywords: aerobic fitness, metabolic equivalent (MET), oxygen output, upper limb ergometer
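For context on the MET keyword, aerobic capacity in METs is conventionally obtained by dividing VO2 by 3.5 mL O2/kg/min. Applied to the reported group means, this gives roughly the following values (a back-of-the-envelope conversion, not figures reported by the authors):

```latex
\mathrm{METs} = \frac{\dot{V}\mathrm{O}_{2\,\mathrm{peak}}}{3.5\ \mathrm{mL\,O_2\,kg^{-1}\,min^{-1}}}
\quad\Rightarrow\quad
\frac{17.1}{3.5} \approx 4.9\ \mathrm{METs}\ (\text{experimental}),\qquad
\frac{10.5}{3.5} = 3.0\ \mathrm{METs}\ (\text{control}).
```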
Procedia PDF Downloads 235
5162 Modeling Route Selection Using Real-Time Information and GPS Data
Authors: William Albeiro Alvarez, Gloria Patricia Jaramillo, Ivan Reinaldo Sarmiento
Abstract:
Understanding the behavior of individuals and the different human factors that influence their choices when faced with a complex system such as transportation is one of the most complicated aspects of measurement among the components that constitute route choice modeling, because various behaviors and driving modes directly or indirectly affect the choice. During the last two decades, with the development of information and communications technologies, new data collection techniques have emerged, such as GPS, geolocation with mobile phones, apps for choosing the route between origin and destination, and individual transport service applications, among others, which have generated interest in improving discrete choice models by incorporating these developments as well as the psychological factors that affect decision-making. This paper implements a discrete choice model that proposes and estimates a hybrid model integrating route choice models and latent variables, based on observation of the routes of a sample of public taxi drivers from the city of Medellín, Colombia, in relation to their behavior, personality, socioeconomic characteristics, and driving mode. The set of choice options includes the routes generated by individual transport service applications versus the driver's own choice. The hybrid model consists of measurement equations that relate latent variables to measurement indicators and utilities to choice indicators, along with structural equations that link the observable characteristics of drivers to latent variables and explanatory variables to utilities.
Keywords: behavior choice model, human factors, hybrid model, real time data
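A compact way to write the hybrid (integrated choice and latent variable) structure described above is sketched below; this is the generic textbook form with generic symbols, not the authors' exact specification or variable sets.

```latex
\begin{aligned}
\text{Structural:}\quad & \eta_n = \Gamma\, x_n + \omega_n, \qquad \omega_n \sim N(0,\Sigma_\omega)\\
\text{Measurement:}\quad & I_{kn} = \alpha_k + \lambda_k\, \eta_n + \varepsilon_{kn}
\quad (\text{attitudinal/behavioural indicators})\\
\text{Utility:}\quad & U_{jn} = \beta^{\top} z_{jn} + \gamma^{\top}\eta_n + \epsilon_{jn}, \qquad
P_n(j) = \Pr\!\big(U_{jn} \ge U_{in}\ \forall i \in C_n\big)
\end{aligned}
```

Here x_n are observed driver characteristics, eta_n the latent variables, I_kn the measurement indicators, z_jn the attributes of route alternative j, and C_n the driver's choice set.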
Procedia PDF Downloads 152
5161 Integrating Knowledge Distillation of Multiple Strategies
Authors: Min Jindong, Wang Mingxia
Abstract:
With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. With the increasing complexity of real visual target detection tasks and the improvement of recognition accuracy, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to a lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation approach that incorporates multi-faceted features, called M-KD. In this paper, when training and optimizing the deep neural network model for target detection, the knowledge of the soft target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network. At the same time, we also introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the large gap between the teacher network and the student network. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model. The student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in speed and accuracy performance.
Keywords: object detection, knowledge distillation, convolutional network, model compression
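The soft-target component of a distillation objective like the one described above is typically a temperature-scaled KL divergence blended with the ordinary task loss; the PyTorch sketch below shows that generic form only (the layer-relationship and attention-map terms of M-KD, and all hyperparameter values, are omitted or assumed).

```python
# Generic knowledge-distillation loss: temperature-scaled soft targets (KL divergence)
# blended with the usual hard-label loss. Feature/attention transfer terms are omitted.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)      # T^2 restores the gradient scale
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# toy check with random logits and labels
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
kd_loss(s, t, y).backward()
```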
Procedia PDF Downloads 278
5160 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which finds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using Radial Basis Function (RBF) and Support Vector Machine (SVM) as base classifiers. The feasibility and the benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach is based on three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared to the performance of other standard homogeneous and heterogeneous ensemble methods. The standard homogeneous ensemble methods include error-correcting output codes and Dagging, and the heterogeneous ensemble methods include majority voting and stacking. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Also, heterogeneous models exhibit better results than homogeneous models on standard intrusion detection datasets.
Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy
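A minimal scikit-learn sketch of the two ensemble ideas, a homogeneous bagged SVM and a heterogeneous ensemble combining different base learners by voting, is shown below. The dataset is synthetic and the base learners and parameters are placeholders (an MLP stands in for an RBF-style network); a recent scikit-learn API (>= 1.2) is assumed. This is not the paper's experimental setup.

```python
# Homogeneous ensemble: bagging of SVM base classifiers.
# Heterogeneous ensemble: soft voting over different base learners.
# Data are synthetic placeholders, not the intrusion-detection benchmarks.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier   # stands in for an RBF-style network
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

bagged_svm = BaggingClassifier(estimator=SVC(kernel="rbf"), n_estimators=10, random_state=0)
hetero = VotingClassifier([("svm", SVC(kernel="rbf", probability=True)),
                           ("mlp", MLPClassifier(max_iter=500, random_state=0))],
                          voting="soft")

for name, clf in [("bagged SVM", bagged_svm), ("voting SVM+MLP", hetero)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```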
Procedia PDF Downloads 248
5159 Kinetic Study of Physical Quality Changes on Jumbo Squid (Dosidicus gigas) Slices during Application High-Pressure Impregnation
Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Fernanda Marin, Constanza Olivares
Abstract:
This study presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration to jumbo squid (Dosidicus gigas) slices. Diffusion coefficients for both water and solids were improved by the process pressure and were influenced by the pressure level. The working conditions were pressures of 100, 250 and 400 MPa and atmospheric pressure (0.1 MPa), for time intervals from 30 to 300 seconds and a 15% NaCl concentration. The mathematical expressions used for the mass transfer simulations of both water and salt were those corresponding to the Newton, Henderson-Pabis, Page and Weibull models, of which the Weibull and Henderson-Pabis models presented the best fit to the water and salt experimental data, respectively. The values of the water diffusivity coefficients varied from 1.62 to 8.10x10⁻⁹ m²/s, whereas those for salt varied from 14.18 to 36.07x10⁻⁹ m²/s for the selected conditions. Finally, as to the quality parameters studied under the range of experimental conditions, the treatment at 250 MPa yielded the minimum hardness in the samples, whereas springiness, cohesiveness and chewiness for the 100, 250 and 400 MPa treatments presented statistical differences with respect to unpressurized samples. The colour parameter L* (lightness) increased, but the b* (yellowish) and a* (reddish) parameters decreased with increasing pressure level. In this way, samples presented a brighter aspect and a mildly cooked appearance. The results presented in this study support the enormous potential of hydrostatic pressure application as an important technique for compound impregnation under high pressure.
Keywords: colour, diffusivity, high pressure, jumbo squid, modelling, texture
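The four mass-transfer models named above are commonly written as follows (standard forms with a dimensionless moisture or solid ratio MR; the fitted parameter values for squid slices are those reported by the authors and are not reproduced here):

```latex
\begin{aligned}
\text{Newton:} \quad & MR = \exp(-k\,t)\\
\text{Henderson-Pabis:} \quad & MR = a\,\exp(-k\,t)\\
\text{Page:} \quad & MR = \exp(-k\,t^{\,n})\\
\text{Weibull:} \quad & MR = \exp\!\big[-(t/\beta)^{\alpha}\big]
\end{aligned}
```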
Procedia PDF Downloads 344
5158 Beam Coding with Orthogonal Complementary Golay Codes for Signal to Noise Ratio Improvement in Ultrasound Mammography
Authors: Y. Kumru, K. Enhos, H. Köymen
Abstract:
In this paper, we report experimental results on using complementary Golay coded signals at 7.5 MHz to detect breast microcalcifications of 50 µm size. Simulations using complementary Golay coded signals show perfect consistency with the experimental results, confirming the improved signal-to-noise ratio for complementary Golay coded signals. To improve success in detecting the microcalcifications, orthogonal complementary Golay sequences, whose cross-correlation is kept low for minimum interference, are used as coded signals and compared to a tone burst pulse of equal energy in terms of resolution under weak signal conditions. The measurements are conducted using an experimental ultrasound research scanner, the Digital Phased Array System (DiPhAS), which has 256 channels, and a phased array transducer with 7.5 MHz center frequency; the results obtained through experiments are validated by the Field-II simulation software. In addition, to investigate the superiority of coded signals in terms of resolution, a multipurpose tissue-equivalent phantom containing a series of monofilament nylon targets, 240 µm in diameter, and cyst-like objects with attenuation of 0.5 dB/[MHz x cm] is used in the experiments. We obtained ultrasound images of the monofilament nylon targets for the evaluation of resolution. Simulation and experimental results show that it is possible to differentiate closely positioned small targets with increased success by using coded excitation in very weak signal conditions.
Keywords: coded excitation, complementary golay codes, DiPhAS, medical ultrasound
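The key property exploited by complementary Golay coding is that the aperiodic autocorrelations of the pair sum to a delta function, so range sidelobes cancel after the two receptions are combined. A short numpy sketch of this property is given below; the sequence length and the recursive construction are illustrative, not the transmit codes used in the paper.

```python
# Build a Golay complementary pair by the standard recursive (concatenation)
# construction and verify that their aperiodic autocorrelations sum to a delta.
import numpy as np

def golay_pair(n_iter):
    a, b = np.array([1]), np.array([1])
    for _ in range(n_iter):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)                                   # length-32 complementary pair
acorr = lambda x: np.correlate(x, x, mode="full")
total = acorr(a) + acorr(b)

print(total)   # zero at every lag except 2 * len(a) = 64 at zero lag
assert total[len(a) - 1] == 2 * len(a) and np.count_nonzero(total) == 1
```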
Procedia PDF Downloads 263
5157 Evaluating Probable Bending of Frames for Near-Field and Far-Field Records
Authors: Majid Saaly, Shahriar Tavousi Tafreshi, Mehdi Nazari Afshar
Abstract:
Most reinforced concrete structures designed only for heavy loads have large transverse reinforcement spacing values and therefore suffer severe failure after intense ground motions. The main goal of this paper is to compare the shear and axial failure of the concrete bending frames found in Tehran using incremental dynamic analysis under near- and far-field records. For this purpose, IDA analyses of 5-, 10-, and 15-story concrete structures were carried out under seven far-fault records and five near-fault records. The results show that in two-dimensional models of short-rise, mid-rise and high-rise reinforced concrete frames located on Type-3 soil, increasing the spacing of the transverse reinforcement can increase the maximum inter-story drift ratio values by up to 37%. According to the results for the 5-, 10-, and 15-story reinforced concrete models located on Type-3 soil, records with characteristics such as fling-step and directivity create larger maximum inter-story drift values than far-fault earthquakes. The results indicated that, for seismic excitation under earthquakes featuring directivity or fling-step, the probability-of-failure values and the rates at which the failure probability increases are much smaller than the corresponding values for far-fault earthquakes. However, for near-fault records, the probability of exceedance occurs at lower seismic intensities compared to far-fault records.
Keywords: IDA, failure curve, directivity, maximum floor drift, fling step, evaluating probable bending of frames, near-field and far-field earthquake records
Procedia PDF Downloads 108
5156 InSAR Time-Series Phase Unwrapping for Urban Areas
Authors: Hui Luo, Zhenhong Li, Zhen Dong
Abstract:
The analysis of multi-temporal InSAR (MTInSAR) techniques such as persistent scatterer (PS) and small baseline subset (SBAS) usually relies on temporal/spatial phase unwrapping (PU). Unfortunately, PU often fails for two reasons: 1) spatial phase jumps between adjacent pixels larger than π, as in layover areas and highly discontinuous terrain; and 2) temporal phase discontinuities such as time-varying atmospheric delay. To overcome these limitations, a least-squares based PU method is introduced in this paper, which incorporates baseline-combination interferograms and an adjacent phase gradient network. Firstly, permanent scatterers (PS) are selected for study. Starting with the linear baseline-combination method, we obtain equivalent 'small baseline interferograms' to limit the spatial phase differences. Then, phase differencing is performed between connected PSs (connected by a specific networking rule) to suppress spatially correlated phase errors such as atmospheric artifacts. After that, the interval phase differences along arcs are computed by the least-squares method, followed by an outlier detector to remove the arcs with phase ambiguities. Then, the unwrapped phase can be obtained by spatial integration. The proposed method is tested on real TerraSAR-X data, and the results are compared with those obtained by StaMPS (a software package with 3D PU capabilities). The comparison shows that the proposed method can successfully unwrap the interferograms in urban areas even when high discontinuities exist, while StaMPS fails. Finally, precise DEM errors can be obtained from the unwrapped interferograms.
Keywords: phase unwrapping, time series, InSAR, urban areas
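A toy illustration of the least-squares step, solving for phases at network points from wrapped arc differences, might look like the sketch below; the network, phase values, and absence of outlier arcs are synthetic assumptions, not TerraSAR-X data or the paper's full algorithm.

```python
# Toy least-squares phase estimation over a PS network: wrap the observed arc
# differences, then solve an incidence-matrix system for the point phases.
# The network and phase values are synthetic, not real TerraSAR-X data.
import numpy as np

rng = np.random.default_rng(0)
phi = np.cumsum(rng.uniform(-1.0, 1.0, size=6))              # true phases at 6 PS points
arcs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 3), (2, 5)]

wrap = lambda p: (p + np.pi) % (2 * np.pi) - np.pi
d = np.array([wrap(phi[j] - phi[i]) for i, j in arcs])        # wrapped arc observations

A = np.zeros((len(arcs), len(phi)))
for k, (i, j) in enumerate(arcs):
    A[k, i], A[k, j] = -1.0, 1.0

est = np.zeros(len(phi))
est[1:] = np.linalg.lstsq(A[:, 1:], d, rcond=None)[0]         # point 0 fixed as reference
print(np.allclose(est, phi - phi[0]))                         # True when no arc is ambiguous
```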
Procedia PDF Downloads 151
5155 Seismic Behavior of Existing Reinforced Concrete Buildings in California under Mainshock-Aftershock Scenarios
Authors: Ahmed Mantawy, James C. Anderson
Abstract:
Numerous cases of earthquakes (main-shocks) that were followed by aftershocks have been recorded in California. In 1992 a pair of strong earthquakes occurred within three hours of each other in Southern California. The first shock occurred near the community of Landers and was assigned a magnitude of 7.3 then the second shock occurred near the city of Big Bear about 20 miles west of the initial shock and was assigned a magnitude of 6.2. In the same year, a series of three earthquakes occurred over two days in the Cape-Mendocino area of Northern California. The main-shock was assigned a magnitude of 7.0 while the second and the third shocks were both assigned a value of 6.6. This paper investigates the effect of a main-shock accompanied with aftershocks of significant intensity on reinforced concrete (RC) frame buildings to indicate nonlinear behavior using PERFORM-3D software. A 6-story building in San Bruno and a 20-story building in North Hollywood were selected for the study as both of them have RC moment resisting frame systems. The buildings are also instrumented at multiple floor levels as a part of the California Strong Motion Instrumentation Program (CSMIP). Both buildings have recorded responses during past events such as Loma-Prieta and Northridge earthquakes which were used in verifying the response parameters of the numerical models in PERFORM-3D. The verification of the numerical models shows good agreement between the calculated and the recorded response values. Then, different scenarios of a main-shock followed by a series of aftershocks from real cases in California were applied to the building models in order to investigate the structural behavior of the moment-resisting frame system. The behavior was evaluated in terms of the lateral floor displacements, the ductility demands, and the inelastic behavior at critical locations. The analysis results showed that permanent displacements may have happened due to the plastic deformation during the main-shock that can lead to higher displacements during after-shocks. Also, the inelastic response at plastic hinges during the main-shock can change the hysteretic behavior during the aftershocks. Higher ductility demands can also occur when buildings are subjected to trains of ground motions compared to the case of individual ground motions. A general conclusion is that the occurrence of aftershocks following an earthquake can lead to increased damage within the elements of an RC frame buildings. Current code provisions for seismic design do not consider the probability of significant aftershocks when designing a new building in zones of high seismic activity.Keywords: reinforced concrete, existing buildings, aftershocks, damage accumulation
Procedia PDF Downloads 280
5154 Hybrid Velocity Control Approach for Tethered Aerial Vehicle
Authors: Lovesh Goyal, Pushkar Dave, Prajyot Jadhav, GonnaYaswanth, Sakshi Giri, Sahil Dharme, Rushika Joshi, Rishabh Verma, Shital Chiddarwar
Abstract:
With the rising need for human-robot interaction, researchers have proposed and tested multiple models with varying degrees of success. A few of these models, implemented on aerial platforms, are commonly known as tethered aerial systems. These aerial vehicles may be powered continuously through a tether cable, which addresses the predicament of the short battery life of quadcopters. Such systems find applications in reducing human effort in industrial, medical, agricultural, and service uses. However, a significant challenge in employing such systems is that they necessitate attaining smooth and secure robot-human interaction while ensuring that the forces from the tether remain within the standard comfortable range for humans. To tackle this problem, a hybrid control method that can switch between two control techniques, a constant control input and the steady-state solution, is implemented. The constant control approach is used when a person is far from the target location and the error is assumed to remain eventually constant. The controller switches to the steady-state approach when the person reaches within a specific range of the goal position. Both strategies take human velocity feedback into account. This hybrid technique enhances the outcomes by assisting the person in reaching the desired location while decreasing unwanted disturbance to the human throughout the process, thereby keeping the interaction between the robot and the subject smooth.
Keywords: unmanned aerial vehicle, tethered system, physical human-robot interaction, hybrid control
Procedia PDF Downloads 98
5153 Contrast-to-Noise Ratio Comparison of Different Calcification Types in Dual Energy Breast Imaging
Authors: Vaia N. Koukou, Niki D. Martini, George P. Fountos, Christos M. Michail, Athanasios Bakas, Ioannis S. Kandarakis, George C. Nikiforidis
Abstract:
Various substitute materials for calcifications are used in phantom measurements and simulation studies in mammography. These include calcium carbonate, calcium oxalate, hydroxyapatite and aluminum. The aim of this study is to compare the contrast-to-noise ratio (CNR) values of the different calcification types using the dual energy method. The constructed calcification phantom consisted of three different calcification types, hydroxyapatite, calcite and calcium oxalate, in thicknesses of 100, 200 and 300. The breast-tissue-equivalent materials were polyethylene and polymethyl methacrylate slabs simulating adipose tissue and glandular tissue, respectively. The total thickness was 4.2 cm with a fixed glandularity of 50%. The low-energy (LE) and high-energy (HE) images were obtained from a tungsten anode using 40 kV filtered with 0.1 mm cadmium and 70 kV filtered with 1 mm copper, respectively. A high resolution complementary metal-oxide-semiconductor (CMOS) active pixel sensor (APS) X-ray detector was used. The total mean glandular dose (MGD) and entrance surface dose (ESD) from the LE and HE images were constrained to typical levels (MGD=1.62 mGy and ESD=1.92 mGy). On average, the CNR of hydroxyapatite calcifications was 1.4 times that of calcite calcifications and 2.5 times that of calcium oxalate calcifications. The higher CNR values of hydroxyapatite are attributed to its attenuation properties compared to the other calcification materials, leading to higher contrast in the dual energy image. This work was supported by Grant Ε.040 from the Research Committee of the University of Patras (Programme K. Karatheodori).
Keywords: calcification materials, CNR, dual energy, X-rays
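For reference, a dual-energy image is typically formed by weighted log-subtraction of the two acquisitions, and the CNR is then evaluated on it. A standard sketch of these definitions is given below; the exact weighting and region definitions used by the authors may differ.

```latex
I_{DE} = \ln I_{HE} - w\,\ln I_{LE}, \qquad
\mathrm{CNR} = \frac{\left|\bar{S}_{\mathrm{calc}} - \bar{S}_{\mathrm{bkg}}\right|}{\sigma_{\mathrm{bkg}}}
```

where w is chosen to cancel the tissue (background) contrast, the mean signals are measured in the calcification and background regions of I_DE, and sigma_bkg is the background standard deviation.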
Procedia PDF Downloads 357
5152 Peril's Environment of Energetic Infrastructure Complex System, Modelling by the Crisis Situation Algorithms
Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.
Abstract:
Crisis situation investigation and modelling are introduced and carried out within the complex system of energy critical infrastructure operating in perilous environments. Every crisis situation and peril originates in the occurrence of an emergency or crisis event, and both require the assessment of critical/crisis interfaces. An emergency event may be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities to cope with it, or it may be unexpected, with no pre-prepared scenario. In either case, operational coping by means of crisis management is needed. The operation, forms, characteristics, behaviour, and utilization of crisis management vary in quality, depending on the real perils facing the critical infrastructure organization and on its prevention and training processes. The aim is always better security and continuity of the organization, and achieving it requires finding and investigating critical/crisis zones and functions in models of the critical infrastructure organization operating in the pertinent peril environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is available for this purpose. It is necessary to derive an identification algorithm for critical/crisis interfaces: the locations of these interfaces flag crisis situations in the organizational models of the critical infrastructure. The crisis situation model is then demonstrated on a real Czech energy critical infrastructure organization in a real peril environment. These measures are necessary for infrastructure protection; they are derived for peril mitigation, crisis situation coping, and the environmentally friendly survival, continuity, and sustainable development of the organization.Keywords: algorithms, energetic infrastructure complex system, modelling, peril´s environment
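The abstract names an identification algorithm for critical/crisis interfaces but gives no detail. The following sketch only illustrates one generic way such interfaces could be flagged in a process-graph model; the graph structure, risk scoring, threshold, and all names are assumptions for illustration and do not reproduce the DYVELOP method.

```python
from dataclasses import dataclass

# Generic sketch of flagging critical/crisis interfaces in a process model.
# The example processes, scores, and threshold are illustrative assumptions.

@dataclass
class Interface:
    source: str               # process delivering an output
    target: str               # process consuming that output
    peril_likelihood: float   # 0..1, chance the peril disrupts this interface
    impact: float             # 0..1, consequence for organizational continuity

def flag_crisis_interfaces(interfaces, threshold=0.25):
    """Return the interfaces whose likelihood x impact exceeds the threshold."""
    return [i for i in interfaces if i.peril_likelihood * i.impact > threshold]

model = [
    Interface("fuel supply", "power generation", 0.6, 0.9),
    Interface("power generation", "grid dispatch", 0.2, 0.8),
    Interface("grid dispatch", "administration", 0.1, 0.2),
]

for iface in flag_crisis_interfaces(model):
    print(f"crisis interface: {iface.source} -> {iface.target}")
```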
Procedia PDF Downloads 4025151 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, namely support vector machine regression (SVMR), partial least squares regression, extra tree regression (ETR), random forest regression, extreme gradient boosting, and a principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectral data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of otherwise indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding glucose concentration references are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. 
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
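A minimal sketch of the repeated random-split regression evaluation described in this abstract, using two of the named regressors from scikit-learn. The synthetic spectra, split ratio, and hyperparameters are illustrative assumptions and do not reproduce the study's data, tuning, or full set of six models.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Sketch of the ten-fold repeated random-split evaluation described above.
# Synthetic stand-in data; the study's spectra and references are not used.

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 200, 100
X = rng.normal(size=(n_samples, n_wavelengths))                      # stand-in NIR spectra
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=n_samples)    # stand-in glucose values

models = {"SVMR": SVR(kernel="rbf", C=10.0),
          "ETR": ExtraTreesRegressor(n_estimators=200)}

for name, model in models.items():
    r_values, r2_values = [], []
    for repeat in range(10):          # repeat the random train/test split ten times
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=repeat)
        pred = model.fit(X_tr, y_tr).predict(X_te)
        r_values.append(np.corrcoef(y_te, pred)[0, 1])   # correlation coefficient R
        r2_values.append(r2_score(y_te, pred))           # determination coefficient R^2
    print(f"{name}: mean R = {np.mean(r_values):.3f}, mean R^2 = {np.mean(r2_values):.3f}")
```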
Procedia PDF Downloads 945150 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column
Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan
Abstract:
Under the influence of waves, oil in the sea is subject to vertical scattering in the water column. How oil disperses in the water column is among the least understood of the processes affecting oil in the marine environment, which highlights the need for research in this field. This study therefore investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results with which to analyse the distribution of petroleum pollutants in deep water, both for understanding the physics of the phenomenon and for calibrating numerical models, motivated the development of laboratory models in this research. In line with the aim of the present study, which is to investigate the distribution of oil in homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface once the desired conditions had been reached, and its distribution into deep water under turbulence was examined. All experimental procedures in this study were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was considered one of the key aspects of pollutant diffusion in the oscillating grid environment. The oscillation velocities were measured at depths of 10, 15, 20, and 25 cm below the water surface and used in the analysis of oil diffusion in terms of turbulence parameters. The results showed that, for the present system operated in two modes, static and grid motion at a frequency of 0.8 Hz, oil diffusion at the four depths, from top to bottom, was about 26.18, 31.57, 37.5, and 50% greater at 0.8 Hz than in the static mode. Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill
Procedia PDF Downloads 75