Search results for: precision feed
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2051

701 Machine Learning Classification of Fused Sentinel-1 and Sentinel-2 Image Data Towards Mapping Fruit Plantations in Highly Heterogeneous Landscapes

Authors: Yingisani Chabalala, Elhadi Adam, Khalid Adem Ali

Abstract:

Mapping smallholder fruit plantations with optical data is challenging because the landscape is morphologically heterogeneous and crop types have overlapping spectral signatures. Furthermore, cloud cover limits the use of optical sensing, especially in subtropical climates where it is persistent. This research assessed the effectiveness of Sentinel-1 (S1) and Sentinel-2 (S2) data for mapping fruit trees and co-existing land-use types, using support vector machine (SVM) and random forest (RF) classifiers independently. These classifiers were also applied to fused data from the two sensors. Feature ranks were extracted using the RF mean decrease in accuracy (MDA) and forward variable selection (FVS) to identify the optimal spectral windows for classifying fruit trees. Based on RF MDA and FVS, the SVM classifier applied to the fused satellite data achieved relatively high classification accuracy, with overall accuracy (OA) = 91.6% and kappa coefficient = 0.91. Applying SVM to S1, S2, the S2 selected variables, and the S1S2 fusion independently produced OA = 27.64% (kappa = 0.13), OA = 87% (kappa = 0.87), OA = 69.33% (kappa = 0.69), and OA = 87.01% (kappa = 0.87), respectively. Results also indicated that the optimal spectral bands for fruit tree mapping are green (B3) and SWIR_2 (B10) for S2, and the vertical-horizontal (VH) polarization band for S1. Including textural metrics from the VV channel improved discrimination of crops and co-existing land-use cover types. The fusion approach proved robust and well suited for accurate smallholder fruit plantation mapping.
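The two-stage scheme the abstract describes (RF feature ranking followed by SVM classification of the fused stack) can be sketched on synthetic data; the feature stack, split sizes, number of retained features, and the use of scikit-learn's impurity-based importance as a stand-in for MDA (permutation importance would be closer) are all illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: rank fused S1/S2 features with a random forest, keep the top
# "spectral windows", then classify with an RBF SVM. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for a fused per-pixel stack (e.g. S2 bands + S1 VH/VV backscatter).
X, y = make_classification(n_samples=600, n_features=12, n_informative=5,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
top = np.argsort(rf.feature_importances_)[::-1][:5]   # top-ranked features

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm.fit(Xtr[:, top], ytr)
oa = svm.score(Xte[:, top], yte)                      # overall accuracy
print(f"selected features: {sorted(top.tolist())}, OA = {oa:.2f}")
```

The same selected-feature indices could then be fed to the RF classifier for the side-by-side comparison the abstract reports.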

Keywords: smallholder agriculture, fruit trees, data fusion, precision agriculture

Procedia PDF Downloads 41
700 Defining Death and Dying in Relation to Information Technology and Advances in Biomedicine

Authors: Evangelos Koumparoudis

Abstract:

The definition of death is a deep philosophical question, and no single meaning can be ascribed to it. This essay focuses on the ontological, epistemological, and ethical aspects of death and dying in view of technological progress in information technology and biomedicine. It starts with the ad hoc 1968 Harvard committee that proposed irreversible coma as the criterion for the definition of death, and then turns to the debate over the whole-brain death formula, which emphasizes the integrated function of the organism, and the higher-brain formula, which takes consciousness and personality as the essential human characteristics. It continues with the contribution of information technology to personalized and precision medicine and to anti-aging measures aimed at life prolongation. It also touches on the possibility of creating human-machine hybrids: the “cyborgization” of human beings raises ontological and ethical issues concerning the conception of the organism and of personhood on a post/transhumanist basis, and, furthermore, the question of whether sentient AI capable of autonomous decision-making, which might even surpass human intelligence (singularity, superintelligence), deserves moral or legal personhood. Finally, there is the question of whether death and dying should be redefined at a transcendent level, a question reinforced by already-existing technologies of “virtual afterlife” and the possibility of uploading human minds. In the last section, I refer to the current (and future) applications of nanomedicine in diagnostics, therapeutics, implants, and tissue engineering, as well as the aspiration to “immortality” through cryonics. The definition of death is reformulated, since the elimination of aging and disease may be realized and the criterion of irreversibility may be challenged.

Keywords: death, posthumanism, infomedicine, nanomedicine, cryonics

Procedia PDF Downloads 58
699 Relationship of Trace Minerals Nutritional Status of Camel (Camelus dromedarius) to Their Contents in Egyptian Feedstuff

Authors: Maha Mohamed Hady Ali, M. A. El-Sayed

Abstract:

The camel (Camelus dromedarius) is a very important animal in many arid and semi-arid zones of tropical and subtropical regions, serving a dual purpose by providing meat and milk for humans and working as a draft animal. Camels, like other animals, must receive all essential nutrients despite the hostile environment. A study was conducted to evaluate the nutritional status of some micro-minerals in camels under Egyptian environmental conditions. Forty-five blood samples were collected from apparently healthy male camels, with an average age between 2 and 6 years, at the slaughterhouse in Cairo province, Egypt. Before slaughter, the animals were fed mainly on berseem (Trifolium alexandrinum) or concentrate with straw. The collected serum and feedstuff samples were analyzed for copper, iron, selenium, and zinc using an atomic absorption spectrophotometer. The data showed variation in the levels of copper, iron, selenium, and zinc in the serum of the dromedary camels as well as in the feedstuffs. Furthermore, the results indicated that the micro-mineral status of feeds is not always reflected as such in camel blood, suggesting some role of bioavailability. The main reasons for this lack of correspondence appear to be the wide diversity of the surrounding environment (forages and plants) as well as the bioavailability of the minerals. Since micro-mineral requirements have not been established for camels, more research must be focused on this topic.

Keywords: camel, copper, Egypt, feedstuff, iron, selenium, zinc

Procedia PDF Downloads 513
698 The Study and Use of the Bifunctional Pt/Re Catalyst for Obtaining High-Octane Gasoline

Authors: Menouar Hanafi

Abstract:

The original function of the platforming process is to upgrade heavy straight-run naphtha (HSRN), coming from the atmospheric distillation unit with a low octane number (ON = 44), into a fuel mixture with a raised octane number by catalytically promoting specific groups of chemical reactions. The installation is divided into two sections: the hydrobon section and the platforming section. The raffinate coming from the bottom of column 12C2 to feed the platforming section is divided into two parts whose flows are controlled and mixed with hydrogen-rich gas. At the bottom of the column, we obtain stabilized reformate, part of which is drawn by the pump to ensure the heating of the column, while the rest is sent to storage after being cooled by the air cooler and the condenser. In catalytic reforming, a hydrogenating-dehydrogenating function, provided by deposited platinum, is deliberately associated with an acid function provided by the alumina support (Al2O3). The mechanism of action of this bifunctional catalyst depends on the severity of the operation, the quality of the feed, and the type of catalyst. The catalyst used in catalytic reforming is a highly elaborate bifunctional catalyst whose performance is constantly improved thanks to experimental research supported by an ever-growing understanding of the phenomena. The American company Universal Oil Products (UOP) has marketed several series of bimetallic catalysts, such as R16, R20, R30, and R62, consisting of platinum/rhenium on an acid support of alumina doped with a halogen compound (chlorine).

Keywords: platforming, amelioration, octane number, catalyst

Procedia PDF Downloads 378
697 Local Interpretable Model-agnostic Explanations (LIME) Approach to Email Spam Detection

Authors: Rohini Hariharan, Yazhini R., Blessy Maria Mathew

Abstract:

The task of detecting email spam is a very important one in the era of digital technology that needs effective ways of curbing unwanted messages. This paper presents an approach aimed at making email spam categorization algorithms transparent, reliable and more trustworthy by incorporating Local Interpretable Model-agnostic Explanations (LIME). Our technique assists in providing interpretable explanations for specific classifications of emails to help users understand the decision-making process by the model. In this study, we developed a complete pipeline that incorporates LIME into the spam classification framework and allows creating simplified, interpretable models tailored to individual emails. LIME identifies influential terms, pointing out key elements that drive classification results, thus reducing opacity inherent in conventional machine learning models. Additionally, we suggest a visualization scheme for displaying keywords that will improve understanding of categorization decisions by users. We test our method on a diverse email dataset and compare its performance with various baseline models, such as Gaussian Naive Bayes, Multinomial Naive Bayes, Bernoulli Naive Bayes, Support Vector Classifier, K-Nearest Neighbors, Decision Tree, and Logistic Regression. Our testing results show that our model surpasses all other models, achieving an accuracy of 96.59% and a precision of 99.12%.
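The core LIME idea the abstract relies on can be sketched in a few lines: perturb one email by masking words, query the black-box classifier on each perturbation, and fit a locally weighted linear surrogate whose coefficients rank the influential terms. The toy corpus, proximity kernel, and model choices below are illustrative assumptions, not the authors' implementation.

```python
# Minimal LIME-style explanation of one "email" against a black-box
# spam classifier. Toy data; Ridge plays the interpretable surrogate.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression, Ridge

emails = ["win free money now", "meeting agenda attached",
          "free prize claim now", "project status update",
          "claim your free money", "lunch at noon"]
labels = [1, 0, 1, 0, 1, 0]                           # 1 = spam
vec = CountVectorizer().fit(emails)
blackbox = LogisticRegression().fit(vec.transform(emails), labels)

words = "claim your free money".split()               # email to explain
rng = np.random.default_rng(0)
masks = rng.integers(0, 2, size=(500, len(words)))    # 1 = keep the word
masks[0] = 1                                          # include the original
texts = [" ".join(w for w, k in zip(words, m) if k) for m in masks]
probs = blackbox.predict_proba(vec.transform(texts))[:, 1]
weights = np.exp(-(len(words) - masks.sum(axis=1)))   # proximity kernel

surrogate = Ridge(alpha=1.0).fit(masks, probs, sample_weight=weights)
ranked = sorted(zip(words, surrogate.coef_), key=lambda t: -abs(t[1]))
print(ranked)   # terms driving the spam score for this one email
```

The ranked coefficient list is exactly what a keyword-highlighting visualization, like the one the paper proposes, would display to the user.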

Keywords: text classification, LIME (local interpretable model-agnostic explanations), stemming, tokenization, logistic regression

Procedia PDF Downloads 39
696 Economic and Environmental Impact of the Missouri Grazing Schools

Authors: C. A. Roberts, S. L. Mascaro, J. R. Gerrish, J. L. Horner

Abstract:

Management-intensive Grazing (MiG) is a practice that rotates livestock through paddocks in a way that best matches the nutrient requirements of the animal to the yield and quality of the pasture. In the USA, MiG has been taught to livestock producers throughout the state of Missouri in 2- and 3-day workshops called “Missouri Grazing Schools.” The economic impact of these schools was quantified using IMPLAN software. The model included hectares of adoption, animal performance, carrying capacity, and input costs. To date, MiG, as taught in the Missouri Grazing Schools, has been implemented on more than 70,000 hectares in Missouri. The economic impact of these schools is presently $125 million USD per year added to the state economy. This magnitude of impact is the result not only of widespread adoption but also because of increased livestock carrying capacity; in Missouri, a capacity increase of 25 to 30% has been well documented. Additional impacts have been MiG improving forage quality and reducing the cost of feed and fertilizer. The environmental impact of MiG in the state of Missouri is currently being estimated. Environmental impact takes into account the reduction in the application of commercial fertilizers; in MiG systems, nitrogen is supplied by N fixation from legumes, and much of the P and K is recycled naturally by well-distributed manure. The environmental impact also estimates carbon sequestration and methane production; MiG can increase carbon sequestration and reduce methane production in comparison to default grazing practices and feedlot operations in the USA.

Keywords: agricultural education, forage quality, management-intensive grazing, nutrient cycling, stock density, sustainable agriculture

Procedia PDF Downloads 190
695 Effect of Genuine Missing Data Imputation on Prediction of Urinary Incontinence

Authors: Suzan Arslanturk, Mohammad-Reza Siadat, Theophilus Ogunyemi, Ananias Diokno

Abstract:

Missing data is a common challenge in statistical analyses of most clinical survey datasets. A variety of methods have been developed to enable analysis of survey data with missing values, of which imputation is the most commonly used. However, in order to minimize the bias introduced by imputation, one must choose the right imputation technique and apply it to the correct type of missing data. In this paper, we have identified different types of missing values: missing data due to skip pattern (SPMD), undetermined missing data (UMD), and genuine missing data (GMD), and applied rough set imputation to only the GMD portion of the missing data. We have used rough set imputation to evaluate the effect of such imputation on prediction by generating several simulation datasets based on an existing epidemiological dataset (MESA). To measure how well each dataset lends itself to the prediction model (logistic regression), we have used p-values from the Wald test. To evaluate the accuracy of the prediction, we have considered the width of the 95% confidence interval for the probability of incontinence. Both imputed and non-imputed simulation datasets were fit to the prediction model, and both turned out to be significant (p-value < 0.05). However, the Wald score shows a better fit for the imputed compared to the non-imputed datasets (28.7 vs. 23.4). The average confidence interval width decreased by 10.4% when the imputed dataset was used, meaning higher precision. The results show that using the rough set method for missing data imputation on GMD improves the predictive capability of the logistic regression. Further studies are required to generalize this conclusion to other clinical survey datasets.

Keywords: rough set, imputation, clinical survey data simulation, genuine missing data, predictive index

Procedia PDF Downloads 160
694 Effect of Fortification of Expressed Human Breast Milk with Olive Oil and Skimmed Milk in Improving Weight Gain in Very Low Birth Weight Neonates and Shortening Their Length of Hospital Stay

Authors: Sumrina Kousar

Abstract:

Objective: The aim of this study was to observe the effect of fortifying expressed human breast milk with olive oil and skimmed milk on weight gain in very low birth weight neonates and on shortening their length of hospital stay. Study Design and Place: A randomized controlled trial was carried out at the Combined Military Hospital Lahore from March 2018 to March 2019. Methods: Neonates admitted with very low birth weight and gestational age of < 34 weeks were included in the study. Sixty babies were enrolled using non-probability consecutive sampling; a random number table was used to allocate them into a fortification group and a control group. The control group received expressed milk alone, while in the fortification group olive oil 1 ml twice daily and skimmed milk 1 gram in every third feed were added to expressed milk. Data were analyzed in SPSS 20. Proportions were compared by applying the chi-square test. An independent-sample t-test was applied to compare means. A p-value of ≤ 0.05 was considered significant. Results: The study comprised 60 neonates, with 30 in each group. Weight gain was 24.83 ± 5.63 in the fortification group and 11.72 ± 3.95 in the control group (p < 0.001). Mean hospital stay was 20.57 ± 16.51 days in the fortification group and 27.67 ± 8.89 days in the control group (p = 0.043). Conclusion: Fortification of breast milk with olive oil and skimmed milk was effective in increasing weight gain and reducing the length of hospital stay in very low birth weight neonates.
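The group comparison reported above can be illustrated on simulated data drawn from the published means and standard deviations; the normality assumption and random seed are ours, not the trial data, so the exact statistics will differ from the paper's.

```python
# Simulated reconstruction of the trial's primary comparison: weight gain
# for 30 fortified vs 30 control neonates drawn from the reported
# means/SDs (24.83 +/- 5.63 vs 11.72 +/- 3.95), compared with the same
# independent two-sample t-test the study used.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fortified = rng.normal(24.83, 5.63, size=30)
control = rng.normal(11.72, 3.95, size=30)

t, p = stats.ttest_ind(fortified, control)
print(f"t = {t:.2f}, p = {p:.3g}")
```

With a difference of roughly 13 units against pooled standard deviations near 5, the test is expected to be strongly significant, consistent with the reported p < 0.001.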

Keywords: fortification, olive oil, skimmed milk, weight gain

Procedia PDF Downloads 157
693 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variations within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach distinctly measures variability with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. This method provides a systematic, statistically grounded validation technique that improves the truthfulness of results. It offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this innovative method, pharmaceutical manufacturers can substantially advance their validation processes, subsequently improving the overall quality and safety of their products. This paper delves deeper into the development, application, and advantages of this tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. This paper also discusses the potential implications and future applications of this method in enhancing pharmaceutical manufacturing practices and outcomes.

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 70
692 Corrosion Response of Friction Stir Processed Mg-Zn-Zr-RE Alloy

Authors: Vasanth C. Shunmugasamy, Bilal Mansoor

Abstract:

Magnesium alloys are increasingly being considered for structural systems across different industrial sectors, including precision components of biomedical devices, owing to their high specific strength, stiffness, and biodegradability. However, Mg alloys exhibit a high corrosion rate that restricts their application as biomaterials; for safe use as a biomaterial, it is essential to control their corrosion rate. Mg alloy corrosion is influenced by several factors, such as grain size, precipitates, and texture. In Mg alloys, microgalvanic coupling can exist between the α-Mg matrix and secondary precipitates, which results in an increased corrosion rate. The present research addresses this challenge by engineering the microstructure of a biodegradable Mg–Zn–RE–Zr alloy through friction stir processing (FSP), a severe plastic deformation process. The FSP-processed Mg alloy showed improved corrosion resistance and mechanical properties, with refined grains, a strong basal texture, and broken, uniformly distributed secondary precipitates in the stir zone. When exposed to an in vitro corrosion medium, the base material showed microgalvanic coupling between precipitates and matrix, resulting in an unstable passive layer, whereas the FSP-processed alloy showed uniform corrosion owing to the formation of a stable surface film, attributed to the refined grains, preferred texture, and distribution of precipitates. These results show promising potential for this Mg alloy to be developed as a biomaterial.

Keywords: biomaterials, severe plastic deformation, magnesium alloys, corrosion

Procedia PDF Downloads 24
691 Count of Trees in East Africa with Deep Learning

Authors: Nubwimana Rachel, Mugabowindekwe Maurice

Abstract:

Trees play a crucial role in maintaining biodiversity and providing various ecological services. Traditional methods of counting trees are time-consuming, and there is a need for more efficient techniques; deep learning makes it feasible to identify the multi-scale elements hidden in aerial imagery. This research focuses on the application of deep learning techniques for automated tree detection and counting in both forest and non-forest areas using satellite imagery. The objective is to identify the most effective model for automated tree counting. We used deep learning models such as YOLOv7, SSD, and UNET, along with Generative Adversarial Networks to generate synthetic training samples and other augmentation techniques, including Random Resized Crop, AutoAugment, and Linear Contrast Enhancement. These models were trained and fine-tuned on satellite imagery to identify and count trees. Performance was assessed over multiple trials; after training and fine-tuning, UNET demonstrated the best performance, with a validation loss of 0.1211, validation accuracy of 0.9509, and validation precision of 0.9799. This research showcases the success of deep learning in accurate tree counting through remote sensing, particularly with the UNET model, and offers an efficient and precise alternative to conventional tree-counting methods.
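The counting step that follows a UNET-style segmentation can be sketched simply: threshold the predicted probability map and count connected crown components. The synthetic mask and the use of `scipy.ndimage.label` are illustrative assumptions standing in for whatever instance-separation the authors used.

```python
# Tree counting from a (here synthetic) segmentation probability map:
# binarize at 0.5, then count connected components as individual crowns.
import numpy as np
from scipy import ndimage

prob_map = np.zeros((20, 20))
prob_map[2:5, 2:5] = 0.9        # three fake, well-separated "crowns"
prob_map[10:13, 6:9] = 0.8
prob_map[15:18, 14:17] = 0.95

labeled, n_trees = ndimage.label(prob_map > 0.5)
print("tree count:", n_trees)
```

For touching crowns a watershed split on the distance transform would be needed before labeling; the default 4-connectivity used here is sufficient when crowns are disjoint.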

Keywords: remote sensing, deep learning, tree counting, image segmentation, object detection, visualization

Procedia PDF Downloads 54
690 Advancements in Laser Welding Process: A Comprehensive Model for Predictive Geometrical, Metallurgical, and Mechanical Characteristics

Authors: Seyedeh Fatemeh Nabavi, Hamid Dalir, Anooshiravan Farshidianfar

Abstract:

Laser welding is pivotal in modern manufacturing, offering unmatched precision, speed, and efficiency. Its versatility in minimizing heat-affected zones, seamlessly joining dissimilar materials, and working with various metals makes it indispensable for crafting intricate automotive components. Integration into automated systems ensures consistent delivery of high-quality welds, thereby enhancing overall production efficiency. Noteworthy are the safety benefits of laser welding, including reduced fumes and consumable materials, which align with industry standards and environmental sustainability goals. As the automotive sector increasingly demands advanced materials and stringent safety and quality standards, laser welding emerges as a cornerstone technology. A comprehensive model encompassing thermal dynamic and characteristics models accurately predicts geometrical, metallurgical, and mechanical aspects of the laser beam welding process. Notably, Model 2 showcases exceptional accuracy, achieving remarkably low error rates in predicting primary and secondary dendrite arm spacing (PDAS and SDAS). These findings underscore the model's reliability and effectiveness, providing invaluable insights and predictive capabilities crucial for optimizing welding processes and ensuring superior productivity, efficiency, and quality in the automotive industry.

Keywords: laser welding process, geometrical characteristics, mechanical characteristics, metallurgical characteristics, comprehensive model, thermal dynamic

Procedia PDF Downloads 40
689 Application of the Finite Window Method to a Time-Dependent Convection-Diffusion Equation

Authors: Raoul Ouambo Tobou, Alexis Kuitche, Marcel Edoun

Abstract:

The FWM (Finite Window Method) is a new meshfree numerical technique for solving problems defined either in terms of PDEs (partial differential equations) or by a set of conservation/equilibrium laws. The principle behind the FWM is that each element of the domain of interest interacts with its neighbors and always tries to adapt so as to remain in equilibrium with respect to them. This leads to a very simple and robust problem-solving scheme, well suited for transfer problems. In this work, we have applied the FWM to an unsteady scalar convection-diffusion equation. Despite its simplicity, the convection-diffusion problem is well known to be challenging to solve numerically, especially when convection is highly dominant. This has led researchers to adopt the scalar convection-diffusion equation as a benchmark for analyzing and deriving the conditions or artifacts needed to numerically solve problems where convection and diffusion occur simultaneously. We show here that the standard FWM can solve convection-diffusion equations in a robust manner, as no adjustments (upwinding or addition of artificial diffusion) were required to obtain good results, even for high Peclet numbers and coarse space and time steps. A comparison was performed between the FWM scheme and both a first-order implicit finite volume scheme (upwind scheme) and a third-order implicit finite volume scheme (QUICK scheme). The comparison showed that, for equal space and time grid spacing, the FWM yields much better precision than the finite volume schemes used, all having similar computational cost and condition number.
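The first-order implicit upwind baseline the FWM is compared against can be sketched for the 1D equation u_t + c u_x = D u_xx; the grid size, pulse profile, and parameter values below are illustrative, not the authors' test case.

```python
# First-order implicit upwind finite-volume scheme for 1D
# convection-diffusion, u_t + c u_x = D u_xx, with a convection-dominated
# setup (Peclet number c*L/D = 1000). A Gaussian pulse at x = 0.3 is
# advected for t = 0.2, so its peak should arrive near x = 0.5.
import numpy as np

n, L, c, D, dt = 50, 1.0, 1.0, 1e-3, 0.01
dx = L / n
x = (np.arange(n) + 0.5) * dx
u = np.exp(-((x - 0.3) / 0.05) ** 2)           # initial pulse

# Interior discretization: upwind convection (c > 0) + central diffusion.
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = -c / dx - D / dx**2
    A[i, i] = c / dx + 2 * D / dx**2
    A[i, i + 1] = -D / dx**2
for _ in range(20):                            # backward-Euler steps
    u = np.linalg.solve(np.eye(n) + dt * A, u)
print("peak near x =", x[np.argmax(u)])
```

The pulse arrives at roughly the right place but visibly smeared: the upwind scheme adds numerical diffusion of order c·dx/2, which is exactly the artifact the paper argues the FWM avoids.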

Keywords: finite window method, convection-diffusion, numerical technique, convergence

Procedia PDF Downloads 325
688 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America, and South America) have been examined. These countries are the ones most affected by the attributes relating to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes have been identified; (b) the significant and self-similar attributes have been applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the resulting classification accuracies have been compared with respect to the attributes that affect the countries' financial development. This study has revealed, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).
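The quantity at the heart of the abstract, a local Holder exponent, can be estimated crudely by regressing the log of a series' local oscillation against the log of the window scale; this simple estimator, the scales, and the Brownian test signal are illustrative assumptions, not the authors' multifractal machinery.

```python
# Pointwise Holder-exponent estimate: the slope of log(oscillation)
# vs log(scale). For a Brownian-motion path the regularity is H = 0.5,
# so the averaged estimate should land near 0.5.
import numpy as np

def local_holder(f, i, scales=(1, 2, 4, 8, 16)):
    # Oscillation (max - min) of f in windows of growing radius around i.
    osc = [np.ptp(f[i - s:i + s + 1]) for s in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(osc), 1)
    return slope

rng = np.random.default_rng(0)
bm = np.cumsum(rng.normal(size=4096))   # discrete Brownian motion
h = np.mean([local_holder(bm, i) for i in range(100, 4000, 100)])
print(f"estimated Holder exponent: {h:.2f}")
```

In the paper's setting, the per-attribute exponents (rather than their average) would then serve as the regularity features fed to the FFBP and CFBP networks.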

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 239
687 Study of the Feasibility of Submerged Arc Welding (SAW) on Mild Steel Plate IS 2062 Grade B at Zero Degree Celsius

Authors: Ajay Biswas, Swapan Bhaumik, Saurav Datta, Abhijit Bhowmik

Abstract:

A series of experiments has been carried out to study the feasibility of submerged arc welding (SAW) on mild steel plate of designation IS 2062 grade B whose specimen temperature is reduced to zero degrees Celsius, whereas the ambient temperature is about 25-27 degrees Celsius. To observe this, bead-on-plate submerged arc welds are formed on specimen plates of heavy-duty mild steel of designation IS 2062 grade B, fitted on a special fixture ensuring a specimen-plate temperature of zero degrees Celsius. Sixteen cold samples are welded by varying the most influential parameters, viz. voltage, wire feed rate, travel speed, and electrode stick-out, at four different levels. Another sixteen specimens at normal room temperature are welded by applying the same combinations of parameters. Those sixteen combinations are selected based on Taguchi's L16 orthogonal array design of experiments, with the intention of reducing the number of experimental runs. Different attributes of bead geometry of all samples in both situations are measured and compared. It is established that submerged arc welding is feasible at zero degrees Celsius on mild steel plate of designation IS 2062 grade B, and optimization of the process parameters can also be carried out, as clear parameter responses are obtained.
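The L16 array behind the study's 16-run plan (four factors at four levels) can be constructed from GF(4) arithmetic; the construction below is one standard way to build it, shown as a sketch rather than the exact array the authors used.

```python
# Building an L16 (4^5) orthogonal array over GF(4): columns of the form
# a, b, a + c*b (c = 1, 2, 3) give 5 four-level columns in 16 runs, of
# which any 4 can index voltage, wire feed rate, travel speed, stick-out.
from itertools import product

# GF(4) on {0,1,2,3}: addition is XOR; multiplication from x^2 = x + 1.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

rows = [[a, b, a ^ b, a ^ MUL[2][b], a ^ MUL[3][b]]
        for a, b in product(range(4), repeat=2)]
cols = list(zip(*rows))

# Orthogonality: every pair of columns shows all 16 level pairs exactly once.
for i in range(5):
    for j in range(i + 1, 5):
        assert sorted(zip(cols[i], cols[j])) == sorted(product(range(4), repeat=2))
print("L16(4^5) orthogonal array, first runs:", rows[:3])
```

The pairwise-balance property checked in the loop is what lets main effects of all four welding parameters be estimated from only 16 of the 256 possible runs.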

Keywords: submerged arc welding, zero degree celsius, Taguchi’s design of experiment, geometry of weldment

Procedia PDF Downloads 440
685 Dynamic Corrosion Prevention through Magneto-Responsive Nanostructure with Controllable Hydrophobicity

Authors: Anne McCarthy, Anna Kim, Yin Song, Kyoo Jo, Donald Cropek, Sungmin Hong

Abstract:

Corrosion prevention remains an indispensable concern across a spectrum of industries, demanding inventive and adaptable methodologies to tackle the ever-evolving obstacles presented by corrosive surroundings. This abstract introduces a pioneering approach to corrosion prevention that combines the distinct attributes of magneto-responsive polymers with finely adjustable hydrophobicity inspired by the structure of cicada wings, effectively deterring bacterial proliferation and biofilm formation. The proposed strategy entails the creation of an innovative array of magneto-responsive nanostructures endowed with the capacity to dynamically modulate their hydrophobic characteristics. This dynamic control over hydrophobicity facilitates active repulsion of water and corrosive agents on demand, while the cyclic motion generated by magnetic activation prevents biofilm formation and promotes biofilm rejection. The synergistic interplay between magneto-active nanostructures and hydrophobicity manipulation thus establishes a versatile defensive mechanism against diverse corrosive agents. This study introduces a novel method for corrosion prevention that harnesses the advantages of magneto-active nanostructures and the precision of hydrophobicity adjustment, resulting in water repellency and effective biofilm removal and offering a promising solution to corrosion-related challenges. We believe the combined effect will significantly contribute to extending asset lifespan, improving safety, and reducing maintenance costs in the face of corrosion threats.

Keywords: magneto-active material, nanoimprinting, corrosion prevention, hydrophobicity

Procedia PDF Downloads 55
684 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's Disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN) which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm in handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes of AD, NC, EMCI, and LMCI. Using 10-fold cross validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62% and recall of 80.51%, supporting the more natural and promising multiclass classification.

Keywords: eXtreme gradient boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 178
683 Plackett-Burman Design for Microencapsulation of Blueberry Bioactive Compounds

Authors: Feyza Tatar, Alime Cengiz, Dilara Sandikçi, Muhammed Dervisoglu, Talip Kahyaoglu

Abstract:

Blueberries are known for their bioactive properties such as high anthocyanin contents, antioxidant activities and potential health benefits. However, anthocyanins are sensitive to environmental conditions during processing. The objective of this study was to evaluate the effects of spray drying conditions on blueberry microcapsules by a Plackett-Burman experimental design. Inlet air temperature (120 and 180°C), feed pump rate (20% and 40%), DE of maltodextrin (6 and 15 DE), coating concentration (10% and 30%) and source of blueberry (Duke and Darrow) were the independent variables, tested at high (+1) and low (-1) levels. Encapsulation efficiency (based on total phenol) of the blueberry microcapsules was the dependent variable. In addition, anthocyanin content, antioxidant activity, water solubility, water activity and bulk density were measured for the blueberry powders. The antioxidant activity of the blueberry powders ranged from 72 to 265 mmol Trolox/g and anthocyanin content ranged from 528 to 5500 mg GAE/100g. Encapsulation efficiency was significantly affected (p<0.05) by inlet air temperature and coating concentration; it increased with increasing inlet air temperature and decreasing coating concentration. The highest encapsulation efficiency could be produced by spray drying at 180°C inlet air temperature, 40% pump rate, 6 DE of maltodextrin, 13% maltodextrin concentration and the Duke blueberry source.
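The five-factor screening above rests on an 8-run Plackett-Burman matrix. As a hedged illustration, the standard N = 8 design can be generated in a few lines by cyclically shifting the well-known generating row and appending an all-low run; the factor-to-column assignment here is arbitrary, not the authors' worksheet.

```python
# Hedged sketch: the 8-run Plackett-Burman design screens up to 7
# two-level factors (5 used in the study: inlet temperature, pump rate,
# DE, coating concentration, blueberry source; 2 columns stay unused).
# Rows 1-7 are cyclic shifts of the standard generator; row 8 is all -1.

GEN = [+1, +1, +1, -1, +1, -1, -1]  # standard generating row for N = 8

def plackett_burman_8():
    rows = []
    for shift in range(7):
        rows.append([GEN[(j - shift) % 7] for j in range(7)])
    rows.append([-1] * 7)  # final all-low run
    return rows

design = plackett_burman_8()
for row in design:
    print(row)
```

Each column is balanced (four +1, four -1) and every pair of columns is orthogonal, which is what allows the main effects of the five factors to be estimated independently.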

Keywords: blueberry, microencapsulation, Plackett-Burman design, spray drying

Procedia PDF Downloads 280
682 Fermented Unripe Plantain (Musa paradisiacal) Peel Meal as a Replacement for Maize in the Diet of Nile Tilapia (Oreochromis niloticus) Fingerlings

Authors: N. A. Bamidele, S. O. Obasa, I. O. Taiwo, I. Abdulraheem, O. C. Odebiyi, A. A. Adeoye, O. E. Babalola, O. V. Uzamere

Abstract:

A feeding trial was conducted to investigate the effect of fermented unripe plantain peel meal (FUP) on growth performance, nutrient digestibility and economic indices of production of Nile tilapia, Oreochromis niloticus fingerlings. Fingerlings (150) of Nile tilapia (1.70±0.1g) were stocked at 10 per plastic tank. Five iso-nitrogenous diets containing 40% crude protein, in which maize meal was replaced by fermented unripe plantain peel meal at 0% (FUP0), 25% (FUP25), 50% (FUP50), 75% (FUP75) and 100% (FUP100), were formulated and prepared. The fingerlings were fed at 5% body weight per day for 56 days. There was no significant difference (p > 0.05) in any of the growth parameters among the treatments. The feed conversion ratio of 1.35 in fish fed diet FUP25 was not significantly different (p > 0.05) from 1.42 in fish fed diet FUP0. Apparent protein digestibility of 86.94% in fish fed diet FUP100 was significantly higher (p < 0.05) than 70.37% in fish fed diet FUP0, while apparent carbohydrate digestibility of 88.34% in fish fed diet FUP0 was significantly different (p < 0.05) from 70.29% for FUP100. The red blood cell count (4.30 ml/mm3) of fish fed diet FUP100 was not significantly different from 4.13 ml/mm3 in fish fed diet FUP50. The highest percentage profit of 88.85% in fish fed diet FUP100 was significantly higher than 66.68% in fish fed diet FUP0, while the profit index of 1.89 in fish fed diet FUP100 was significantly different from 1.67 in fish fed diet FUP0. Therefore, fermented unripe plantain peel meal can completely replace maize in the diet of O. niloticus fingerlings.
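The feed conversion ratio and economic indices quoted above follow standard aquaculture formulas; a small sketch with hypothetical figures (not the trial's raw data) illustrates the arithmetic.

```python
# Illustrative sketch of the performance indices used in the trial:
# FCR = dry feed fed / wet weight gain, profit index = value of fish
# cropped / cost of feed fed, and percentage profit relative to total
# cost. All numbers below are hypothetical, not the trial's raw data.

def feed_conversion_ratio(feed_fed_g, weight_gain_g):
    return feed_fed_g / weight_gain_g

def profit_index(value_of_fish, cost_of_feed):
    return value_of_fish / cost_of_feed

def percentage_profit(revenue, total_cost):
    return (revenue - total_cost) / total_cost * 100.0

fcr = feed_conversion_ratio(13.5, 10.0)
print(round(fcr, 2))  # 1.35, comparable to the FCR reported for FUP25
```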

Keywords: fermentation, fish diets, plantain peel, tilapia

Procedia PDF Downloads 527
681 Gas Network Noncooperative Game

Authors: Teresa Azevedo Perdicoúlis, Paulo Lopes dos Santos

Abstract:

The conceptualisation of the problem of network optimisation as a noncooperative game sets up a holistic interactive approach that brings together different network features (e.g., compressor stations, sources, and pipelines, in the gas context) whose optimisation objectives differ, so that a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, where independent entities take action, offers the ideal modularity and subsequent problem decomposition with a view to designing a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players which communicate through the network connectivity constraints, i.e., the pipeline model. That is, in a scheme similar to a tâtonnement, the players appoint their best settings and then interact to check for network feasibility. The resulting degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach to optimisation has two stages: (i) the first stage computes along the period of optimisation in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the period of optimisation to rectify that solution. The validity of the proposed scheme is demonstrated on an abstract prototype and three example networks.
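The tâtonnement-like interaction described above can be reduced, purely for illustration, to best-response iteration on a toy two-player quadratic game; the hypothetical cost functions below stand in for the real compressor and source objectives, and the pipeline feasibility check is omitted.

```python
# Toy illustration of the best-settings loop as best-response
# (tatonnement-like) iteration on a two-player quadratic game. The cost
# functions are hypothetical stand-ins for compressor and source
# objectives; the pipeline feasibility phase is omitted for brevity.

def best_response_1(x2):
    # argmin over x1 of (x1 - 2)^2 + 0.5*x1*x2  =>  x1 = 2 - x2/4
    return 2.0 - x2 / 4.0

def best_response_2(x1):
    # argmin over x2 of (x2 - 1)^2 + 0.5*x1*x2  =>  x2 = 1 - x1/4
    return 1.0 - x1 / 4.0

x1, x2 = 0.0, 0.0
for _ in range(100):  # iterate until the settings stop changing
    x1, x2 = best_response_1(x2), best_response_2(x1)
print(round(x1, 4), round(x2, 4))  # converges to the Nash equilibrium
```

Because each player's best response depends only weakly on the other's setting, the iteration contracts to the unique equilibrium; in the paper's scheme the coupling is instead mediated by the connectivity constraints of the pipeline model.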

Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition

Procedia PDF Downloads 144
680 AgriInnoConnect Pro System Using IoT and Firebase Console

Authors: Amit Barde, Dipali Khatave, Vaishali Savale, Atharva Chavan, Sapna Wagaj, Aditya Jilla

Abstract:

AgriInnoConnect Pro is an advanced agricultural automation system designed to enhance irrigation efficiency and overall farm management through IoT technology. Using MIT App Inventor, Telegram, Arduino IDE, and Firebase Console, it provides a user-friendly interface for farmers. Key hardware includes soil moisture sensors, DHT11 temperature and humidity sensors, a 12 V motor, a solenoid valve, a step-down transformer, smart fencing, and AC switches. The system operates in automatic and manual modes. In automatic mode, the ESP32 microcontroller monitors soil moisture and autonomously controls irrigation to optimize water usage. In manual mode, users can control the irrigation motor via a mobile app. Telegram bots enable remote operation of the solenoid valve and electric fencing, enhancing farm security. Additionally, the system upgrades conventional devices to smart ones using AC switches, broadening automation capabilities. AgriInnoConnect Pro aims to improve farm productivity and resource management, addressing the critical need for sustainable water conservation and providing a comprehensive solution for modern farm management. The integration of smart technologies in AgriInnoConnect Pro ensures precision farming practices, promoting efficient resource allocation and sustainable agricultural development.
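The automatic mode described above amounts to threshold-based pump control. A hypothetical sketch of the decision logic, with a hysteresis band and illustrative thresholds (not the system's calibrated values), might look like:

```python
# Hypothetical sketch of the automatic-mode decision logic: the ESP32
# reads soil moisture and switches the irrigation motor through a
# hysteresis band so the pump does not chatter around a single
# threshold. The threshold values are illustrative, not calibrated.

DRY_THRESHOLD = 30  # % moisture: start irrigating below this
WET_THRESHOLD = 60  # % moisture: stop irrigating above this

def pump_command(moisture_pct, pump_on):
    """Return the new pump state for the current reading and state."""
    if moisture_pct < DRY_THRESHOLD:
        return True
    if moisture_pct > WET_THRESHOLD:
        return False
    return pump_on  # inside the band: keep the current state

state = False
for reading in [45, 25, 40, 65, 50]:
    state = pump_command(reading, state)
    print(reading, state)
```

The hysteresis band is a common design choice for relay-driven loads: without it, sensor noise near a single threshold would toggle the motor rapidly.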

Keywords: agricultural automation, IoT, soil moisture sensor, ESP32, MIT app inventor, telegram bot, smart farming, remote control, firebase console

Procedia PDF Downloads 22
679 Study on the Process of Detumbling Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both need a detumbling process, since almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. It is therefore necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, as it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on the estimation of the instantaneous angular velocity of the target has been presented. However, the calculation of the control torque produced by the laser, which is critical in a detumbling operation, has not been accurate enough: the methods used are only suitable for planar or regularly shaped targets and do not consider the influence of irregular shape and spot size. In this paper, based on a triangulated reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation of the laser, and we verify its accuracy by theoretical formula calculation and an impulse measurement experiment. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally applicable and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case.
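The impulse-summation idea over a triangulated surface can be sketched as follows; the facet data, coupling coefficient, and fluence are illustrative assumptions, not the paper's measured values.

```python
# Hedged sketch of the torque estimate on a triangulated target: each
# irradiated facet receives an ablation impulse along its inward normal,
# J = C_m * Phi * A (coupling coefficient x fluence x facet area), and
# the torque about the centroid is the sum of r x J over facets. The
# geometry and coefficients below are illustrative assumptions.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def facet_torque(centroid, facets, coupling, fluence):
    """facets: list of (facet_centre, outward_unit_normal, area)."""
    total = [0.0, 0.0, 0.0]
    for centre, normal, area in facets:
        impulse_mag = coupling * fluence * area
        impulse = tuple(-impulse_mag * n for n in normal)  # recoil inward
        r = tuple(c - g for c, g in zip(centre, centroid))
        total = [t + u for t, u in zip(total, cross(r, impulse))]
    return tuple(total)

# single facet offset along +x, outward normal +z: recoil gives +y torque
facets = [((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 2.0)]
tq = facet_torque((0.0, 0.0, 0.0), facets, coupling=0.5, fluence=3.0)
print(tq)
```

The spot-versus-covered irradiation distinction in the paper would enter through which facets are included and with what fluence; here every listed facet is treated as fully irradiated.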

Keywords: detumbling, laser ablation drive, space target, space debris remove

Procedia PDF Downloads 75
678 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is one of the important fields in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point to every other point within the same cluster, and to all data points in the closest cluster, is determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes well suited for assessing cluster quality in personal authentication methods. In this study, the silhouette score was used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except feature PE).
Conclusion: The combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
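The silhouette computation described above can be written in a few lines of plain Python for 1-D features, following the standard s(i) = (b(i) - a(i)) / max(a(i), b(i)) definition; the toy values below merely stand in for the entropy features.

```python
# Plain-Python silhouette score for 1-D features: a(i) is the mean
# distance to points in the same cluster, b(i) the mean distance to the
# closest other cluster, and s(i) = (b - a) / max(a, b), averaged over
# all points. The toy values stand in for the entropy features.

def silhouette_score(clusters):
    scores = []
    for ci, cluster in enumerate(clusters):
        for x in cluster:
            same = [abs(x - y) for y in cluster if y is not x]
            a = sum(same) / len(same) if same else 0.0
            b = min(sum(abs(x - y) for y in other) / len(other)
                    for cj, other in enumerate(clusters) if cj != ci)
            s = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
            scores.append(s)
    return sum(scores) / len(scores)

tight = [[1.0, 1.1, 0.9], [5.0, 5.1, 4.9]]   # well-separated clusters
loose = [[1.0, 3.0, 5.0], [2.0, 4.0, 6.0]]   # interleaved clusters
print(round(silhouette_score(tight), 3), round(silhouette_score(loose), 3))
```

A score near 1 indicates compact, well-separated clusters; interleaved clusters score near or below 0, which is how the study compares k-means results across EEG feature sets.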

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 272
677 Real-Time Generative Architecture for Mesh and Texture

Authors: Xi Liu, Fan Yuan

Abstract:

In the evolving landscape of physics-based machine learning (PBML), particularly within fluid dynamics and its applications in electromechanical engineering, robot vision, and robot learning, achieving precision and alignment with researchers' specific needs presents a formidable challenge. In response, this work proposes a methodology that integrates neural transformation with a modified smoothed particle hydrodynamics model for generating transformed 3D fluid simulations. This approach is useful for nanoscale science, where the unique and complex behaviors of viscoelastic media demand accurate neurally-transformed simulations for materials understanding and manipulation. In electromechanical engineering, the method enhances the design and functionality of fluid-operated systems, particularly microfluidic devices, contributing to advancements in nanomaterial design, drug delivery systems, and more. The proposed approach also aligns with the principles of PBML, offering advantages such as multi-fluid stylization and consistent particle attribute transfer. This capability is valuable in various fields where the interaction of multiple fluid components is significant. Moreover, the application of neurally-transformed hydrodynamical models extends to manufacturing processes, such as the production of microelectromechanical systems, enhancing efficiency and cost-effectiveness. The system's ability to perform neural transfer on 3D fluid scenes using a deep learning algorithm alongside physical models further adds a layer of flexibility, allowing researchers to tailor simulations to specific needs across scientific and engineering disciplines.
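The smoothed particle hydrodynamics component named above rests on kernel-weighted sums over neighbouring particles. A minimal density-estimation sketch with the common poly6 kernel is shown below; the masses, positions, and smoothing length are illustrative assumptions, not the paper's modified model.

```python
# Minimal SPH building block: density at a particle as a kernel-weighted
# sum over neighbours, using the common poly6 kernel. Masses, positions,
# and the smoothing length h below are illustrative assumptions.
import math

def poly6(r, h):
    """Poly6 smoothing kernel in 3-D; zero outside the support radius h."""
    if r >= h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r * r) ** 3

def density(i, positions, masses, h):
    xi = positions[i]
    return sum(m * poly6(math.dist(xi, xj), h)
               for xj, m in zip(positions, masses))

positions = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0)]
masses = [0.02, 0.02, 0.02]
print(density(0, positions, masses, h=0.1))
```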

Keywords: physics-based machine learning, robot vision, robot learning, hydrodynamics

Procedia PDF Downloads 60
676 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). However, it is difficult to extract the Magnetocardiography (MCG) signal buried in noise, a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signals is proposed to improve the denoising precision. The improvement of this method is divided into three parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.
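For intuition, first-order TV denoising in 1-D can be sketched with a smoothed gradient step; this toy stand-in is not the paper's majorization-minimization solver or its high-order adaptive variant.

```python
# For intuition only: a toy 1-D TV denoiser minimising
# 0.5*||x - y||^2 + lam * sum sqrt((x[i+1] - x[i])^2 + eps)
# by gradient descent on a smoothed TV term. The signal and parameters
# are illustrative; this is not the paper's MM or high-order solver.
import math

def tv_denoise(y, lam=0.2, step=0.05, iters=500, eps=1e-2):
    x = list(y)
    for _ in range(iters):
        grad = [xi - yi for xi, yi in zip(x, y)]  # data-fidelity term
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            g = lam * d / math.sqrt(d * d + eps)  # smoothed sign(d)
            grad[i] -= g
            grad[i + 1] += g
        x = [xi - step * gi for xi, gi in zip(x, grad)]
    return x

noisy = [0.1, -0.2, 0.15, 5.1, 4.9, 5.05, 0.0, 0.1]
clean = tv_denoise(noisy)
print([round(v, 2) for v in clean])
```

The small oscillations are flattened while the large step is essentially preserved, which is exactly the edge-preserving property that makes TV attractive for MCG peaks, and also the source of the step effect the high-order variant targets.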

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 144
675 FDX1, a Cuproptosis-Related Gene, Identified as a Potential Target for Human Ovarian Aging

Authors: Li-Te Lin, Chia-Jung Li, Kuan-Hao Tsui

Abstract:

Cuproptosis, a newly identified cell death mechanism, has attracted attention for its association with various diseases. However, the genetic interplay between cuproptosis and ovarian aging remains largely unexplored. This study aims to address this gap by analyzing datasets related to ovarian aging and cuproptosis. Spatial transcriptome analyses were conducted in the ovaries of both young and aged female mice to elucidate the role of FDX1. Comprehensive bioinformatics analyses, facilitated by R software, identified FDX1 as a potential cuproptosis-related gene with implications for ovarian aging. Clinical infertility biopsies were examined to validate these findings, showing consistent results in elderly infertile patients. Furthermore, pharmacogenomic analyses of ovarian cell lines explored the intricate association between FDX1 expression levels and sensitivity to specific small molecule drugs. Spatial transcriptome analyses revealed a significant reduction in FDX1 expression in aging ovaries, supported by consistent findings in biopsies from elderly infertile patients. Pharmacogenomic investigations indicated that modulating FDX1 could influence drug responses in ovarian-related therapies. This study pioneers the identification of FDX1 as a cuproptosis-related gene linked to ovarian aging. These findings not only contribute to understanding the mechanisms of ovarian aging but also position FDX1 as a potential diagnostic biomarker and therapeutic target. Further research may establish FDX1's pivotal role in advancing precision medicine and therapies for ovarian-related conditions.

Keywords: cuproptosis, FDX1, ovarian aging, biomarker

Procedia PDF Downloads 28
674 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain

Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma

Abstract:

In this paper, we propose an implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper shows the embedding and extraction processes of the watermark with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and DWT is applied to each block to transform it into the low-frequency sub-band domain. Essentially, ELM provides a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and the output layer of SLFNs, which is applied here for watermark embedding and extraction in a cover image. ELM has widespread application, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very small complexity. The efficacy of the optimization-based ELM algorithm is measured using quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
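The core ELM idea (a fixed random hidden layer with only the linear output weights trained) can be sketched as follows; to keep the example dependency-free, the usual Moore-Penrose pseudoinverse step is replaced by plain least-squares gradient descent, and the XOR task is illustrative rather than the watermarking pipeline itself.

```python
# Sketch of the ELM principle: a random, fixed hidden layer and trained
# linear output weights. The pseudoinverse solution is replaced here by
# least-squares gradient descent to stay dependency-free; the XOR task
# is illustrative, not the watermark embedding itself.
import math
import random

random.seed(0)
L = 8                                    # hidden neurons
W = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(L)]
b = [random.uniform(-1, 1) for _ in range(L)]

def hidden(x):
    """Fixed random feature mapping (never retrained)."""
    return [math.tanh(sum(w * xi for w, xi in zip(W[k], x)) + b[k])
            for k in range(L)]

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
T = [0, 1, 1, 0]                         # XOR targets
H = [hidden(x) for x in X]

beta = [0.0] * L                         # output weights, trained below
for _ in range(2000):
    for h, t in zip(H, T):
        err = sum(bk * hk for bk, hk in zip(beta, h)) - t
        beta = [bk - 0.1 * err * hk for bk, hk in zip(beta, h)]

preds = [sum(bk * hk for bk, hk in zip(beta, h)) for h in H]
print([round(p, 2) for p in preds])
```

Because only the linear output layer is solved for, training is far cheaper than iteratively tuning all weights, which is the complexity advantage over SVM-based schemes noted above.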

Keywords: BER, DWT, extreme learning machine (ELM), PSNR

Procedia PDF Downloads 302
673 Wind Power Assessment for Turkey and Evaluation by APLUS Code

Authors: Ibrahim H. Kilic, A. B. Tugrul

Abstract:

Energy is a fundamental component in economic development, and energy consumption is an index of prosperity and the standard of living. The consumption of energy per capita has increased significantly over the last decades as the standard of living has improved. Turkey’s geographical location has several advantages for the extensive use of wind power, and among renewable sources, Turkey has very high wind energy potential. Information such as the installed capacity of wind power plants in operation, under construction, and at the licensing stage in the country is reported in detail, and some suggestions are presented in order to increase the wind power installation capacity of Turkey. Turkey’s economic and social development has led to a massive increase in demand for electricity over the last decades. Since Turkey has no major oil or gas reserves, it is highly dependent on energy imports and is exposed to energy insecurity in the future, but it does have huge potential for renewable energy utilization. There has been huge growth in the construction of wind power plants and small hydropower plants in recent years, and to meet the growing energy demand, the Turkish Government has adopted incentives for investments in renewable energy production. Wind energy investments were evaluated for the impact of feed-in tariffs (FIT) under three scenarios (optimistic, realistic, and pessimistic) with the APLUS software, which was developed for the rational evaluation of energy markets. The results of the three scenarios are evaluated in view of the electricity market in Turkey.
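A FIT scenario comparison of the kind described reduces, at its simplest, to capacity-factor and tariff arithmetic; the figures below are hypothetical placeholders, not APLUS inputs or Turkish market data.

```python
# Back-of-envelope sketch of a FIT scenario comparison: annual energy
# equals capacity x capacity factor x hours, priced at the tariff. All
# figures are hypothetical placeholders, not APLUS inputs or Turkish
# market data.

HOURS_PER_YEAR = 8760

def annual_revenue(capacity_mw, capacity_factor, fit_usd_per_mwh):
    energy_mwh = capacity_mw * capacity_factor * HOURS_PER_YEAR
    return energy_mwh * fit_usd_per_mwh

scenarios = {
    "pessimistic": {"capacity_factor": 0.22, "fit": 60.0},
    "realistic":   {"capacity_factor": 0.28, "fit": 73.0},
    "optimistic":  {"capacity_factor": 0.34, "fit": 85.0},
}

for name, s in scenarios.items():
    rev = annual_revenue(50.0, s["capacity_factor"], s["fit"])
    print(name, round(rev / 1e6, 2), "million USD per year")
```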

Keywords: APLUS, energy policy, renewable energy, wind power, Turkey

Procedia PDF Downloads 295
672 A Comprehensive Evaluation of Supervised Machine Learning for the Phase Identification Problem

Authors: Brandon Foggo, Nanpeng Yu

Abstract:

Power distribution circuits undergo frequent network topology changes that are often left undocumented. As a result, the documentation of a circuit’s connectivity becomes inaccurate with time. The lack of reliable circuit connectivity information is one of the biggest obstacles to modeling, monitoring, and controlling modern distribution systems. To enhance the reliability and efficiency of electric power distribution systems, the circuit’s connectivity information must be updated periodically. This paper focuses on one critical component of a distribution circuit’s topology: the secondary-transformer-to-phase association. This topology component describes the set of phase lines that feed power to a given secondary transformer (and therefore a given group of power consumers). Recovering this documentation is called Phase Identification and is typically performed with physical measurements. These measurement campaigns can take on the order of several months, but with supervised learning the time required can be reduced significantly. This paper compares several such methods applied to Phase Identification for a large range of real distribution circuits, describes a method of training data selection, describes preprocessing steps unique to the Phase Identification problem, and ultimately describes a method which obtains high accuracy (> 96% in most cases, > 92% in the worst case) using only 5% of the measurements typically used for Phase Identification.
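One common supervised formulation of Phase Identification classifies each transformer by correlating its voltage time series against labelled per-phase references (a nearest-centroid style classifier); this is a hedged sketch with synthetic series, not necessarily the paper's best-performing method.

```python
# Hedged sketch of a correlation-based Phase Identification classifier:
# an unlabelled transformer is assigned to the phase whose labelled
# reference voltage series it correlates with most strongly. The series
# below are synthetic, not field measurements.

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def identify_phase(series, references):
    """references: dict mapping phase name -> labelled voltage series."""
    return max(references, key=lambda ph: corr(series, references[ph]))

refs = {
    "A": [1.00, 1.02, 0.98, 1.01, 0.99],
    "B": [0.98, 1.00, 1.03, 0.97, 1.02],
    "C": [1.01, 0.97, 1.00, 1.02, 0.98],
}
unknown = [0.99, 1.01, 0.97, 1.00, 0.98]   # tracks phase A's shape
print(identify_phase(unknown, refs))        # "A"
```

In practice the features would be longer smart-meter voltage profiles, and the preprocessing and training-data selection steps described above would precede this matching stage.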

Keywords: distribution network, machine learning, network topology, phase identification, smart grid

Procedia PDF Downloads 291