Search results for: axial flux induction machine
3641 Sorting Maize Haploids from Hybrids Using Single-Kernel Near-Infrared Spectroscopy
Authors: Paul R Armstrong
Abstract:
Doubled haploids (DHs) have become an important breeding tool for creating maize inbred lines, although several bottlenecks in the DH production process limit wider development, application, and adoption of the technique. DH kernels are typically sorted manually and represent about 10% of the seeds in a much larger pool in which the remaining 90% are hybrid siblings. This places time constraints on DH production, and manual sorting is often not accurate. Automated sorting based on the chemical composition of the kernel can be effective, but devices such as NMR have not achieved the sorting speed needed to be a cost-effective replacement for manual sorting. This study evaluated a single-kernel near-infrared reflectance spectroscopy (skNIR) platform for accurately identifying DH kernels based on oil content. The skNIR platform is a higher-throughput device, approximately 3 seeds/s, that uses spectra to predict the oil content of each kernel from maize crosses intentionally developed to create larger-than-normal oil differences, 1.5%-2%, between DH and hybrid kernels. Spectra from the skNIR were used to construct a partial least squares regression (PLS) model for oil, or for a categorical reference value of 1 (DH kernel) or 2 (hybrid kernel), and then used to sort several crosses to evaluate performance. Two approaches were used for sorting. The first used a general PLS model, developed from all crosses, to predict oil content for sorting each induction cross. The second was the development of a specific model from a single induction cross, in which approximately fifty DH and one hundred hybrid kernels were used. This second approach used a categorical reference value of 1 or 2, instead of oil content, for the PLS model, and kernels selected for the calibration set were manually referenced using traditional commercial methods based on coloration of the tip cap and germ areas. 
The generalized PLS oil model statistics were R2 = 0.94 and RMSE = 0.93% for kernels spanning an oil content of 2.7% to 19.3%. Sorting by this model extracted 55% to 85% of the haploid kernels from the four induction crosses. The second method, generating a model for each cross, yielded model statistics ranging from R2 = 0.96 to 0.98 and RMSE from 0.08 to 0.10. Sorting in this case resulted in 100% correct classification but required a model specific to each cross. In summary, the first, generalized oil model method could be used to sort a significant number of kernels from a kernel pool but did not approach the accuracy of a sorting model developed from a single cross. The penalty of the second method is that a PLS model must be developed for each individual cross. In conclusion, both methods could find useful application in the sorting of DH from hybrid kernels.
Keywords: NIR, haploids, maize, sorting
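At sorting time, the generalized-model approach reduces to thresholding each kernel's predicted oil content, since DH kernels run 1.5%-2% lower in oil than their hybrid siblings. A minimal sketch of that final sorting step; the oil values and the 6.0% cutoff are illustrative assumptions, not figures from the study:

```python
# Hypothetical sketch of threshold-based haploid sorting from skNIR-predicted
# oil content. Values and the 6.0% cutoff are illustrative only.

def sort_kernels(predicted_oil, threshold):
    """Split kernels into haploid (below threshold) and hybrid (at/above)."""
    haploids = [oil for oil in predicted_oil if oil < threshold]
    hybrids = [oil for oil in predicted_oil if oil >= threshold]
    return haploids, hybrids

# Illustrative predicted oil contents (%) from a PLS model.
oil = [4.8, 5.1, 6.9, 7.2, 5.0, 7.5, 6.8, 4.9]
dh, hyb = sort_kernels(oil, threshold=6.0)
```

In practice the cutoff would be chosen per induction cross from the calibration kernels, which is why the cross-specific models in the abstract sort more accurately.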
Procedia PDF Downloads 302
3640 Preparation and Removal Properties of Hollow Fiber Membranes for Drinking Water
Authors: Seung Moon Woo, Youn Suk Chung, Sang Yong Nam
Abstract:
At the present time, we need advanced water treatment technology to separate the viruses and bacteria in effluent that cause epidemics and waterborne diseases. Water purification systems are mainly divided into two categories: reverse osmosis (RO) and ultrafiltration (UF). Membranes used in these systems require high durability because they operate in harsh conditions. Of these, the membrane used in a UF system has many advantages, such as higher efficiency and lower energy consumption for water treatment compared with an RO system. Among the many kinds of membrane, the hollow fiber type is easy to fabricate, and its properties can be optimized by controlling various spinning conditions such as the temperature of the coagulation bath, the concentration of polymer, the addition of additives, the air gap, and the internal coagulation. In this study, a polysulfone hollow fiber membrane for the separation of viruses and bacteria was successfully prepared by the phase inversion method. When preparing the hollow fiber membrane, we controlled various factors, such as the polymer concentration, air gap, and internal coagulation, to investigate their effect on membrane properties. The morphology of the surface and cross section of the membrane was examined by field emission scanning electron microscopy (FE-SEM). The water flux of the membrane was measured using test modules. The mean pore diameter of the membrane was calculated from the rejection of polystyrene (PS) latex beads for the separation of viruses and bacteria. The flux and mean flow pore diameter of the prepared membrane were 1.5 LPM and 0.03 μm at 1.0 kgf/cm2. The bacteria and virus removal performance of the prepared UF membranes was over 6 logs.
Keywords: hollow fiber membrane, drinking water, ultrafiltration, bacteria
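The "over 6 logs" removal figure is a log reduction value (LRV), the base-10 logarithm of the ratio of feed to permeate concentration. A minimal sketch of the calculation; the feed and permeate counts are hypothetical:

```python
import math

# Log reduction value for a membrane challenge test.
# The feed and permeate counts below are hypothetical examples.

def log_reduction(feed_count, permeate_count):
    """LRV = log10(feed concentration / permeate concentration)."""
    return math.log10(feed_count / permeate_count)

# e.g. 10^7 bacteria/mL in the feed, 10 bacteria/mL in the permeate
lrv = log_reduction(1e7, 1e1)  # -> 6.0
```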
Procedia PDF Downloads 248
3639 WebAppShield: An Approach Exploiting Machine Learning to Detect SQLi Attacks in an Application Layer in Run-time
Authors: Ahmed Abdulla Ashlam, Atta Badii, Frederic Stahl
Abstract:
In recent years, SQL injection attacks have been identified as being prevalent against web applications. They affect network security and user data, which leads to a considerable loss of money and data every year. This paper presents the use of classification algorithms in machine learning to classify login data, filtering inputs into "SQLi" or "Non-SQLi", thus increasing the reliability and accuracy of results in terms of deciding whether an operation is an attack or a valid operation. A method, Web-App auto-generated twin data structure replication shielding against SQLi attacks (WebAppShield), has been developed that verifies all users and prevents attackers (SQLi attacks) from entering and/or accessing the database, admitting only the operations that the machine learning module predicts as "Non-SQLi". A special login form has been developed with a special instance of data validation; this verification process secures the web application from its early stages. The system has been tested and validated; up to 99% of SQLi attacks were prevented.
Keywords: SQL injection, attacks, web application, accuracy, database
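The core labelling step above maps each login input to "SQLi" or "Non-SQLi". As a toy stand-in for the trained classifier, the sketch below scores inputs by the presence of common SQLi token patterns; the patterns, threshold, and examples are illustrative assumptions, not the paper's learned model:

```python
import re

# Toy rule-based stand-in for the ML classifier: count matches of common
# SQLi token patterns in a login input. Patterns/threshold are illustrative.

SQLI_PATTERNS = [r"'\s*or\s+", r"--", r";\s*drop\s+table", r"union\s+select"]

def classify(login_input, threshold=1):
    """Label an input "SQLi" if enough suspicious patterns match."""
    hits = sum(1 for p in SQLI_PATTERNS
               if re.search(p, login_input, re.IGNORECASE))
    return "SQLi" if hits >= threshold else "Non-SQLi"

print(classify("alice"))        # Non-SQLi
print(classify("' OR 1=1 --"))  # SQLi
```

A learned model replaces the hand-written patterns with features fitted from labelled traffic, which is what lets the real system generalize beyond known attack strings.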
Procedia PDF Downloads 153
3638 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine
Authors: Hira Lal Gope, Hidekazu Fukai
Abstract:
The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is neither a defective bean nor a normal bean. It forms as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has its own value and flavor. To improve the taste of the coffee, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans; otherwise, the tastes of the beans will be mixed, and the result will be poor. During roasting, all the beans should be uniform in shape, size, and weight; otherwise, the larger beans take more time to roast through. The peaberry has a different size and shape even though it has the same weight as a normal bean, and it roasts more slowly than normal beans. Therefore, neither size nor weight alone provides a good criterion for selecting the peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, picking out peaberries is very difficult even for trained specialists because the shape and color of the peaberry are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate the normal and peaberry beans as part of the sorting system. As the first step, we applied deep Convolutional Neural Networks (CNN) and the Support Vector Machine (SVM) as machine learning techniques to discriminate the peaberry and normal beans. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of the peaberry. The artificial neural network, trained on a high-performance CPU and GPU in this work, will simply be installed on an inexpensive, computationally limited Raspberry Pi system. 
We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine
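The discrimination task above rests on the peaberry being round where the normal bean is flat-sided. As a drastically simplified stand-in for the CNN/SVM image classifiers, the sketch below separates the two classes by a single shape feature, the aspect ratio of the bean silhouette, with a nearest-centroid rule; the feature values and centroids are hypothetical:

```python
# Simplified stand-in for the image classifiers: classify a bean by which
# class centroid its silhouette aspect ratio is closest to.
# Centroid values are hypothetical, not measured from the study's dataset.

def nearest_centroid(aspect_ratio, centroids):
    """Return the class whose centroid is closest to the feature value."""
    return min(centroids, key=lambda c: abs(aspect_ratio - centroids[c]))

centroids = {"peaberry": 1.05, "normal": 1.45}  # round vs elongated outline
label = nearest_centroid(1.10, centroids)       # -> "peaberry"
```

A real pipeline learns such decision boundaries from thousands of labelled images rather than a single hand-picked feature, which is why the CNN outperformed the SVM in the study.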
Procedia PDF Downloads 145
3637 Design and Development of an Autonomous Beach Cleaning Vehicle
Authors: Mahdi Allaoua Seklab, Süleyman BaşTürk
Abstract:
In the quest to enhance coastal environmental health, this study introduces a fully autonomous beach cleaning machine, a breakthrough in leveraging green energy and advanced artificial intelligence for ecological preservation. Designed to operate independently, the machine is propelled by a solar-powered system, underscoring a commitment to sustainability and the use of renewable energy in autonomous robotics. The vehicle's autonomous navigation is achieved through a sophisticated integration of LIDAR and a camera system, utilizing an SSD MobileNet V2 object detection model for accurate and real-time trash identification. The SSD framework, renowned for its efficiency in detecting objects in various scenarios, is coupled with the lightweight and highly precise MobileNet V2 architecture, making it particularly suited for the computational constraints of on-board processing in mobile robotics. Training of the SSD MobileNet V2 model was conducted on Google Colab, harnessing cloud-based GPU resources to facilitate a rapid and cost-effective learning process. The model was refined with an extensive dataset of annotated beach debris, optimizing the parameters using the Adam optimizer and a cross-entropy loss function to achieve high-precision trash detection. This capability allows the machine to intelligently categorize and target waste, leading to more effective cleaning operations. This paper details the design and functionality of the beach cleaning machine, emphasizing its autonomous operational capabilities and the novel application of AI in environmental robotics. The results showcase the potential of such technology to fill existing gaps in beach maintenance, offering a scalable and eco-friendly solution to the growing problem of coastal pollution. 
The deployment of this machine represents a significant advancement in the field, setting a new standard for the integration of autonomous systems in the service of environmental stewardship.
Keywords: autonomous beach cleaning machine, renewable energy systems, coastal management, environmental robotics
Procedia PDF Downloads 30
3636 A Machine Learning-Based Approach to Capture Extreme Rainfall Events
Authors: Willy Mbenza, Sho Kenjiro
Abstract:
Increasing efforts are directed towards a better understanding and foreknowledge of extreme precipitation likelihood, given the adverse effects associated with its occurrence. This knowledge plays a crucial role in long-term planning and the formulation of effective emergency responses. However, predicting extreme events reliably presents a challenge to conventional empirical/statistical methods due to the involvement of numerous variables spanning different time and space scales. In recent times, machine learning has emerged as a promising tool for predicting the dynamics of extreme precipitation. ML techniques enable the consideration of both local and regional physical variables that have a strong influence on the likelihood of extreme precipitation. These variables encompass factors such as air temperature, soil moisture, specific humidity, and aerosol concentration, among others. In this study, we develop an ML model that incorporates both local and regional variables while establishing a robust relationship between the physical variables and precipitation during the downscaling process. Furthermore, the model provides valuable information on the frequency and duration of a given intensity of precipitation.
Keywords: machine learning (ML), predictions, rainfall events, regional variables
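The final sentence above promises frequency and duration statistics for a given precipitation intensity. A minimal sketch of that bookkeeping on a daily rainfall series; the series and the 50 mm/day threshold are illustrative assumptions:

```python
# Frequency and duration of exceedances of a given precipitation intensity.
# The daily series and the 50 mm/day threshold are illustrative only.

def exceedance_stats(series, threshold):
    """Count exceedance events and the length (days) of each wet spell."""
    durations, run = [], 0
    for value in series:
        if value >= threshold:
            run += 1
        elif run:
            durations.append(run)
            run = 0
    if run:
        durations.append(run)
    return len(durations), durations

rain = [2, 61, 70, 3, 0, 55, 4, 80, 90, 95, 1]  # mm/day
events, lengths = exceedance_stats(rain, threshold=50)  # -> 3, [2, 1, 3]
```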
Procedia PDF Downloads 90
3635 Enhancing Sell-In and Sell-Out Forecasting Using Ensemble Machine Learning Method
Authors: Vishal Das, Tianyi Mao, Zhicheng Geng, Carmen Flores, Diego Pelloso, Fang Wang
Abstract:
Accurate sell-in and sell-out forecasting is a ubiquitous problem in the retail industry. It is an important element of any demand planning activity. As a global food and beverage company, Nestlé has hundreds of products in each geographical location that it operates in. Each product has its sell-in and sell-out time series data, which are forecasted on a weekly and monthly scale for demand and financial planning. To address this challenge, Nestlé Chile, in collaboration with the Amazon Machine Learning Solutions Lab, has developed an in-house solution using machine learning models for forecasting. Similar products are combined such that there is one model for each product category. In this way, the models learn from a larger set of data, and there are fewer models to maintain. The solution is scalable to all product categories and is developed to be flexible enough to include any new product or eliminate any existing product in a product category based on requirements. We show how we can use the machine learning development environment on Amazon Web Services (AWS) to explore a set of forecasting models and create business intelligence dashboards that can be used with the existing demand planning tools in Nestlé. We explored recent deep learning networks (DNNs), which show promising results for a variety of time series forecasting problems. Specifically, we used a DeepAR autoregressive model that can group similar time series together and provide robust predictions. To further enhance the accuracy of the predictions and include domain-specific knowledge, we designed an ensemble approach using DeepAR and an XGBoost regression model. As part of the ensemble approach, we interlinked the sell-out and sell-in information to ensure that a future sell-out influences the current sell-in predictions. Our approach outperforms the benchmark statistical models by more than 50%. 
The machine learning (ML) pipeline implemented in the cloud is currently being extended for other product categories and is getting adopted by other geomarkets.
Keywords: sell-in and sell-out forecasting, demand planning, DeepAR, retail, ensemble machine learning, time-series
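The ensemble step described above combines the DeepAR and XGBoost forecasts for the same horizon. A minimal sketch using a fixed weighted average; the weight and the forecast values are hypothetical, and the production system would choose its combination from validation data rather than a hard-coded 0.6:

```python
# Toy ensemble of two forecasts via a fixed weighted average.
# Weight and forecast values are hypothetical, for illustration only.

def ensemble(deepar_preds, xgb_preds, w=0.6):
    """Weighted average of two forecasts for the same horizon."""
    return [w * a + (1 - w) * b for a, b in zip(deepar_preds, xgb_preds)]

deepar = [100.0, 110.0, 120.0]
xgb = [90.0, 115.0, 130.0]
preds = ensemble(deepar, xgb)  # approximately [96.0, 112.0, 124.0]
```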
Procedia PDF Downloads 276
3634 In vitro P-Glycoprotein Modulation: Combinatorial Approach Using Natural Products
Authors: Jagdish S. Patel, Piyush Chudasama
Abstract:
Context: Over-expression of P-glycoprotein (P-gp) plays a critical role in the absorption of many drug candidates, resulting in lower bioavailability of the drug. P-glycoprotein is also over-expressed in many pathological conditions, such as diabetes, affecting drug therapy. Modulation of P-gp expression using inhibitors can help in designing novel formulations that enhance the bioavailability of the drug in question. Objectives: The main focus of the study was to develop advanced glycation end products (AGEs)-induced P-gp over-expression in Caco-2 cells. Curcumin, piperine, and epigallocatechin gallate were used to evaluate their P-gp inhibitory action using a combinatorial approach. Materials and methods: Methylglyoxal (MG)-induced P-gp over-expression was checked in Caco-2 cells using real-time PCR. The P-gp inhibitory effects of the phytochemicals were measured after induction with MG, alone and in combinations of any two compounds. The cytotoxicity of each phytochemical was evaluated using the MTT assay. Results: Induction with MG (100 mM) significantly induced the over-expression of P-glycoprotein in Caco-2 cells after 24 hr. Curcumin, piperine, and epigallocatechin gallate alone significantly reduced the level of P-gp within 6 hr of the treatment period, as monitored by real-time PCR. The combinations of any two phytochemicals also down-regulated the expression of P-gp in cells. The combination of curcumin and epigallocatechin gallate showed significant down-regulation compared with the other two combinations. Conclusions: The combinatorial approach to down-regulating the expression of P-gp in pathological conditions such as diabetes has shown promise for therapeutic purposes.
Keywords: p-glycoprotein, curcumin, piperine, epigallocatechin gallate, p-gp inhibition
Procedia PDF Downloads 334
3633 Physics-Informed Machine Learning for Displacement Estimation in Solid Mechanics Problem
Authors: Feng Yang
Abstract:
Machine learning (ML), especially deep learning (DL), has been extensively applied to many applications in recent years and has gained great success in solving different problems, including scientific problems. However, conventional ML/DL methodologies are purely data-driven and have limitations, such as the need for an ample amount of labelled training data, a lack of consistency with physical principles, and a lack of generalizability to new problems/domains. Recently, there has been a growing consensus that ML models need to take further advantage of prior knowledge to deal with these limitations. Physics-informed machine learning, which aims at the integration of physics/domain knowledge into ML, has been recognized as an emerging area of research, especially in the recent 2 to 3 years. In this work, physics-informed ML, specifically a physics-informed neural network (NN), is employed and implemented to estimate the displacements in the x, y, z directions in a solid mechanics problem that is governed by equilibrium equations with boundary conditions. By incorporating the physics (i.e., the equilibrium equations) into the learning process of the NN, it is shown that the NN can be trained very efficiently with a small set of labelled training data. Experiments with different settings of the NN model and different amounts of labelled training data were conducted, and the results show that very high accuracy can be achieved in fulfilling the equilibrium equations as well as in predicting the displacements; e.g., for an overall displacement of 0.1, a root mean square error (RMSE) of 2.09 × 10⁻⁴ was achieved.
Keywords: deep learning, neural network, physics-informed machine learning, solid mechanics
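The key idea above is a composite loss: a data term on the few labelled displacements plus a physics residual penalizing violation of the equilibrium equation. The pure-Python sketch below illustrates this on a 1D analogue, fitting a quadratic displacement field u(x) = a + b*x + c*x^2 to the equation u''(x) + f = 0 with two labelled points; the problem, body force f, and optimizer settings are illustrative assumptions, not the paper's 3D setup or network:

```python
# Toy physics-informed fit: minimize (data MSE) + (residual of u'' + f = 0)
# over the coefficients of u(x) = a + b*x + c*x**2 by gradient descent.
# The 1D problem, f = 2, and the two data points are illustrative only.

def loss(params, f, data):
    a, b, c = params
    physics = (2 * c + f) ** 2  # u'' = 2c, so residual of u'' + f = 0
    mse = sum((a + b * x + c * x * x - u) ** 2 for x, u in data) / len(data)
    return mse + physics

def train(f, data, lr=0.05, steps=4000, h=1e-6):
    params = [0.0, 0.0, 0.0]
    for _ in range(steps):
        grads = []
        for i in range(3):  # central-difference numeric gradient
            p_hi = params[:]; p_hi[i] += h
            p_lo = params[:]; p_lo[i] -= h
            grads.append((loss(p_hi, f, data) - loss(p_lo, f, data)) / (2 * h))
        params = [p - lr * g for p, g in zip(params, grads)]
    return params

# For u'' = -2 with u(0) = 0 and u(1) = 1, the exact field is u = 2x - x^2.
a, b, c = train(2.0, [(0.0, 0.0), (1.0, 1.0)])
u_mid = a + 0.5 * b + 0.25 * c  # predicted u(0.5), approximately 0.75
```

Only two labelled points are needed because the physics residual constrains the remaining degree of freedom, which mirrors the paper's observation that the NN trains efficiently with little labelled data.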
Procedia PDF Downloads 150
3632 Machine Learning Driven Analysis of Kepler Objects of Interest to Identify Exoplanets
Authors: Akshat Kumar, Vidushi
Abstract:
This paper identifies 27 KOIs, 26 of which are currently classified as candidates and one as a false positive, that have a high probability of being confirmed. For this purpose, 11 machine learning algorithms were implemented on the cumulative Kepler dataset sourced from the NASA Exoplanet Archive. It was observed that the best-performing models were HistGradientBoosting and XGBoost, with a test accuracy of 93.5%, and the lowest-performing model was Gaussian NB, with a test accuracy of 54%. To test model performance, the F1 score, cross-validation score, and ROC curve were calculated. Based on the learned models, the significant characteristics of confirmed exoplanets were identified, with emphasis on the object's transit and stellar properties; these characteristics were namely koi_count, koi_prad, koi_period, koi_dor, koi_ror, and koi_smass, which were later used to filter the potential KOIs. The paper also calculates the Earth Similarity Index, based on the planetary radius and equilibrium temperature, for each KOI identified, to aid in their classification.
Keywords: Kepler objects of interest, exoplanets, space exploration, machine learning, earth similarity index, transit photometry
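A two-parameter Earth Similarity Index of the kind described above multiplies one bounded similarity term per property. The sketch below uses the commonly cited weight exponents (0.57 for radius, 5.58 for surface temperature); treat the weights, the 288 K Earth reference, and the sample KOI values as assumptions rather than the paper's exact formulation:

```python
# Two-parameter Earth Similarity Index from radius (Earth radii) and
# equilibrium temperature (K). Weight exponents are the commonly cited
# values; the sample inputs are hypothetical.

EARTH = {"radius": 1.0, "temp": 288.0}
WEIGHTS = {"radius": 0.57, "temp": 5.58}

def esi(radius, temp):
    terms = {"radius": radius, "temp": temp}
    prod = 1.0
    for key, x in terms.items():
        ref = EARTH[key]
        # each factor is in [0, 1] and equals 1 when x matches Earth
        prod *= (1 - abs(x - ref) / (x + ref)) ** (WEIGHTS[key] / len(terms))
    return prod

earth_score = esi(1.0, 288.0)  # Earth against itself -> 1.0
koi_score = esi(2.0, 400.0)    # a hypothetical hot super-Earth, < 1.0
```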
Procedia PDF Downloads 76
3631 An Analysis of Machine Translation: Instagram Translation vs Human Translation on the Perspective Translation Quality
Authors: Aulia Fitri
Abstract:
This study aims to identify the areas of linguistics in which mistakes commonly occur in Instagram translation compared with human translation. Instagram is a social media platform that is widely used by people around the world. Everyone with an Instagram account can consume the captions and pictures that are shared by their friends, celebrities, and public figures across countries. Instagram provides machine translation under its caption space to assist users in understanding captions in languages other than their own. The researcher takes samples from an Indonesian public figure whose account is followed by many followers. The public figure tries to help her followers from other countries understand her posts by putting up the English version after the Indonesian version. However, research on Instagram translation has not yet been done, even though the platform is widely used by worldwide society. There are 20 samples that are analysed from the perspective of translation quality and with linguistic tools. As an MT system, Instagram tends to give a literal translation without regard to the intended topic. On the other hand, the human translation tends to exaggerate, which leads to a different meaning in English. This is an interesting study of how human nature and a robotic system influence the translation result.
Keywords: human translation, machine translation (MT), translation quality, linguistic tool
Procedia PDF Downloads 324
3630 Development and Validation of Cylindrical Linear Oscillating Generator
Authors: Sungin Jeong
Abstract:
This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle applications. The focus of the study is the suggestion of the optimal model and the design rules for a cylindrical linear oscillating generator with permanent magnets in the back-iron translator. The cylindrical topology is modeled initially using an equivalent magnetic circuit that considers leakage elements. This topology with permanent magnets in the back-iron translator is described by the number of phases and the displacement of the stroke. For a more accurate analysis of an oscillating machine, the thrust of a single-phase system and of a three-phase system is compared while moving just one pole pitch forward and backward. Through this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes the magnetic saturation effects into account by finite element analysis. Besides, the losses are examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the specific electrical steel. Considerations of thermal performance and mechanical robustness are essential, because they affect the overall efficiency and the insulation of the machine due to the losses and the high temperatures generated in each region of the generator. Besides, an electric machine with linear oscillating movement requires a support system that can resist dynamic forces and mechanical masses. As a result, the fatigue analysis of the shaft is carried out using the kinetic equations. Also, the thermal characteristics are analyzed by the operating frequency in each region. The results of this study will give a very important design rule for the design of linear oscillating machines. 
It enables more accurate machine design and more accurate prediction of machine performance.
Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator
Procedia PDF Downloads 195
3629 Influence of Low and Extreme Heat Fluxes on Thermal Degradation of Carbon Fibre-Reinforced Polymers
Authors: Johannes Bibinger, Sebastian Eibl, Hans-Joachim Gudladt
Abstract:
This study considers the influence of different irradiation scenarios on the thermal degradation of carbon fiber-reinforced polymers (CFRP). Real threats are simulated, such as fires with long-lasting low heat fluxes and nuclear heat flashes with short-lasting high heat fluxes. For this purpose, coated and uncoated quasi-isotropic samples of the commercially available CFRP HexPly® 8552/IM7 are thermally irradiated from one side by a cone calorimeter and a xenon short-arc lamp with heat fluxes between 5 and 175 W/cm² at varying time intervals. The specimen temperature is recorded on the front and back sides as well as at different laminate depths. The CFRP is non-destructively tested with ultrasonic testing, infrared spectroscopy (ATR-FTIR), scanning electron microscopy (SEM), and micro-focused X-ray computed tomography (μCT). Destructive tests are performed to evaluate the mechanical properties in terms of interlaminar shear strength (ILSS), compressive strength, and tensile strength. The irradiation scenarios vary significantly in heat flux and exposure time. Thus, different heating rates, radiation effects, and temperature distributions occur. This leads to unequal decomposition processes, which affect the sensitivity of each strength type and the damage behaviour of the specimens. However, with the use of surface coatings, the thermal degradation of composite materials can be delayed.
Keywords: CFRP, one-sided thermal damage, high heat flux, heating rate, non-destructive and destructive testing
Procedia PDF Downloads 113
3628 Risk Factors of Becoming NEET Youth in Iran: A Machine Learning Approach
Authors: Hamed Rahmani, Wim Groot
Abstract:
The term "youth not in employment, education or training (NEET)" refers to a combination of youth unemployment and school dropout. This study investigates the variables that increase the risk of becoming NEET in Iran. A selection-bias-adjusted probit model was employed, together with machine learning, to identify these risk factors. We used cross-sectional data obtained from the Statistical Centre of Iran and the Ministry of Cooperatives, Labour and Social Welfare, taken from the labour force survey conducted in the spring of 2021. We look at years of education, work experience, housework, the number of children under the age of six in the home, family education, birthplace, and the amount of land owned by households. Results show that hours spent performing domestic chores increase the likelihood of youth becoming NEET, while years of education and years of potential work experience decrease the chance of being NEET. The findings also show that female youth born in cities were less likely than those born in rural regions to become NEET.
Keywords: NEET youth, probit, CART, machine learning, unemployment
Procedia PDF Downloads 109
3627 Development of Computational Approach for Calculation of Hydrogen Solubility in Hydrocarbons for Treatment of Petroleum
Authors: Abdulrahman Sumayli, Saad M. AlShahrani
Abstract:
For the hydrogenation process, knowing the solubility of hydrogen (H2) in hydrocarbons is critical to improving the efficiency of the process. We investigated the computation of H2 solubility in four heavy crude oil feedstocks using machine learning techniques. Temperature, pressure, and feedstock type were considered as the inputs to the models, while the hydrogen solubility was the sole response. Specifically, we employed three different models: Support Vector Regression (SVR), Gaussian Process Regression (GPR), and Bayesian Ridge Regression (BRR). To achieve the best performance, the hyper-parameters of these models were optimized using the Whale Optimization Algorithm (WOA). We evaluated the models using a dataset of solubility measurements in various feedstocks, and we compared their performance based on several metrics. Our results show that the SVR model tuned with WOA achieves the best performance overall, with an RMSE of 1.38 × 10⁻² and an R-squared of 0.991. These findings suggest that machine learning techniques can provide accurate predictions of hydrogen solubility in different feedstocks, which could be useful in the development of hydrogen-related technologies. Besides, the solubility of hydrogen in the four heavy oil fractions is estimated over temperature and pressure ranges of 150 °C–350 °C and 1.2 MPa–10.8 MPa, respectively.
Keywords: temperature, pressure variations, machine learning, oil treatment
Procedia PDF Downloads 69
3626 A Combined Meta-Heuristic with Hyper-Heuristic Approach to Single Machine Production Scheduling Problem
Authors: C. E. Nugraheni, L. Abednego
Abstract:
This paper is concerned with the minimization of mean tardiness and flow time in a real single-machine production scheduling problem. Two variants of genetic algorithms, used as meta-heuristics combined with a hyper-heuristic approach, are proposed to solve this problem. These methods are used to solve instances generated with real-world data from a company. Encouraging results are reported.
Keywords: hyper-heuristics, evolutionary algorithms, production scheduling, meta-heuristic
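The two objectives above are simple to evaluate for any candidate sequence on a single machine, which is what the genetic algorithm's fitness function must do repeatedly. A minimal sketch; the jobs (processing time, due date) are illustrative, and jobs are assumed released at time zero:

```python
# Mean flow time and mean tardiness of a single-machine job sequence.
# Jobs are (processing_time, due_date) pairs; the data are illustrative.

def schedule_metrics(jobs):
    """Evaluate jobs run back-to-back in the given order from time 0."""
    t, flows, tards = 0, [], []
    for proc, due in jobs:
        t += proc                      # completion time of this job
        flows.append(t)                # flow time (release at time 0)
        tards.append(max(0, t - due))  # tardiness, zero if on time
    n = len(jobs)
    return sum(flows) / n, sum(tards) / n

jobs = [(2, 2), (2, 5), (2, 4), (2, 10)]
mean_flow, mean_tardiness = schedule_metrics(jobs)  # -> 5.0, 0.5
```

A meta-heuristic then searches over permutations of the job list, scoring each with this function (or a weighted combination of the two means).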
Procedia PDF Downloads 381
3625 Biopsy Proven Polyoma (BK) Virus in Saudi Kidney Recipients – Prevalence, Clinicopathological Features and Clinico-Pathological Correlations
Authors: Sarah Hamdan Al-Jahdali, Khaled Alsaad, Abdullah Al-Sayyari
Abstract:
Objectives: To study the prevalence, clinicopathological features, risk factors, and outcome of biopsy-proven polyoma (BK) virus infection among Saudi kidney transplant recipients and compare them to a BK virus-negative group. Methods: We retrospectively reviewed the charts of all the patients with biopsy-proven polyoma (BK) virus infection in King Abdulaziz Medical City in Riyadh between 2005 and 2011. The details of the clinical presentation, the indication for kidney biopsy, the laboratory findings at presentation, the natural history of the disease, the pathological findings, and the prognosis, as well as the response to therapy, were all recorded. Results: Kidney biopsy was performed in 37 cases of unexplained graft dysfunction. BK virus was found in 10 (27%). Of those 10, 3 (30%) ended with graft failure. All BK virus patients had received ATG induction therapy (100%), versus 59.3% of the non-BK virus patients (p=0.06). Furthermore, the risk of BK virus was much lower in those who received acyclovir as antiviral prophylaxis compared to those who did not receive it (p=0.01). Also, patients with BK virus weighed much less at the time of transplantation (mean 46.7±20.6 kg) than those without BK virus (mean 64.3±12.1 kg). Graft survival was better among deceased donor kidneys compared to living ones (p=0.016) and with older age (p=0.005). Conclusion: Our findings suggest ATG induction therapy, the lack of antiviral prophylaxis, and lower weight at transplant as significant risk factors for the development of BK virus infection.
Keywords: BKVAN, BKV, kidney transplant, Saudi Arabia
Procedia PDF Downloads 285
3624 Bioincision of Gmelina Arborea Roxb. Heartwood with Inonotus Dryophilus (Berk.) Murr. for Improved Chemical Uptake and Penetration
Authors: A. O. Adenaiya, S. F. Curling, O. Y. Ogunsanwo, G. A. Ormondroyd
Abstract:
Treatment of wood with chemicals in order to prolong its service life may prove difficult in some refractory wood species. This impermeability in wood is usually due to biochemical changes which occur during heartwood formation. Bioincision, a short-term, controlled microbial decomposition of wood, is one of the promising approaches capable of improving the amenability of refractory wood to chemical treatments. Gmelina arborea, a mainstay timber species in Nigeria, has impermeable heartwood due to the excessive tyloses which occlude its vessels. Therefore, the chemical uptake and penetration in Gmelina arborea heartwood bioincised with the fungus Inonotus dryophilus were investigated. Five mature Gmelina arborea trees were harvested at the departmental plantation in Ajibode, Ibadan, Nigeria, and a bolt of 300 cm was obtained from the basal portion of each tree. The heartwood portion of the bolts was extracted and converted into dimensions of 20 mm x 20 mm x 60 mm and subsequently conditioned (20 °C at 65% relative humidity). Twenty wood samples each were bioincised with the white-rot fungus Inonotus dryophilus (ID, 999) for 3, 5, 7, and 9 weeks using a standard procedure, while a set of sterile control samples was prepared. Ten of each of the bioincised and control samples were pressure-treated with 5% Tanalith preservative, while the other ten of each were pressure-treated with a liquid dye for easy traceability of the chemical in the wood, both using a full-cell treatment process. The bioincised and control samples were evaluated for their weight loss before chemical treatment (WL, %), preservative absorption (PA, kg/m³), preservative retention (PR, kg/m³), axial absorption (AA, kg/m³), lateral absorption (LA, kg/m³), axial penetration depth (APD, mm), radial penetration depth (RPD, mm), and tangential penetration depth (TPD, mm). The data obtained were analyzed using ANOVA at α = 0.05. 
Results show that weight loss was least in the samples bioincised for three weeks (0.09%) and highest after 7 weeks of bioincision (0.48%). The samples bioincised for 3 weeks had the least PA (106.72 kg/m³) and PR (5.87 kg/m³), while the highest PA (134.9 kg/m³) and PR (7.42 kg/m³) were observed after 7 weeks of bioincision. The AA ranged from 27.28 kg/m³ (3 weeks) to 67.05 kg/m³ (5 weeks), while the LA was least after 5 weeks of incubation (28.1 kg/m³) and highest after 9 weeks (71.74 kg/m³). Significantly lower APD was observed in control samples (6.97 mm) than in samples bioincised for 9 weeks (19.22 mm). The RPD increased from 0.08 mm (control samples) to 3.48 mm (5 weeks), while TPD ranged from 0.38 mm (control samples) to 0.63 mm (9 weeks), implying that liquid flow in the wood was predominantly through the axial pathway. Bioincising G. arborea heartwood with I. dryophilus fungus for 9 weeks can enhance chemical uptake and deepen chemical penetration in the wood through degradation of the occluding vessel tyloses, accompanied by only minimal degradation of the polymeric wood constituents.
Keywords: bioincision, chemical uptake, penetration depth, refractory wood, tyloses
Procedia PDF Downloads 106
3623 Heat Transfer Performance of a Small Cold Plate with Uni-Directional Porous Copper for Cooling Power Electronics
Authors: K. Yuki, R. Tsuji, K. Takai, S. Aramaki, R. Kibushi, N. Unno, K. Suzuki
Abstract:
A small cold plate with uni-directional porous copper is proposed for cooling power electronics such as an on-vehicle inverter with a heat generation of approximately 500 W/cm². The uni-directional porous copper, with its pores oriented perpendicular to the heat transfer surface, is soldered to a grooved heat transfer surface. This structure enables the cooling liquid to evaporate in the pores of the porous copper and the vapor to then discharge through the grooves. In order to minimize the cold plate, a double flow channel concept is introduced for its design. The cold plate consists of a base plate, a spacer, and a vapor discharging plate, 12 mm in total thickness. The base plate has multiple nozzles of 1.0 mm in diameter for the liquid supply and 4 slits of 2.0 mm in width for vapor discharging, and is attached onto the top surface of the porous copper plate of 20 mm in diameter and 5.0 mm in thickness. The pore size is 0.36 mm and the porosity is 36%. The cooling liquid flows into the porous copper as an impinging jet flow from the multiple nozzles, and the vapor generated in the pores is then discharged through the grooves and the vapor slits outside the cold plate. The heated test section consists of the cold plate described above and a heat transfer copper block with 6 cartridge heaters. The cross section of the heat transfer block is reduced in order to increase the heat flux. The top surface of the block is the grooved heat transfer surface of 10 mm in diameter, at which the porous copper is soldered. The grooves are fabricated like latticework, and their width and depth are 1.0 mm and 0.5 mm, respectively. By embedding three thermocouples in the cylindrical part of the heat transfer block, the temperature of the heat transfer surface and the heat flux are extrapolated in a steady state. In this experiment, the flow rate is 0.5 L/min and the flow velocity at each nozzle is 0.27 m/s. The liquid inlet temperature is 60 °C.
The experimental results prove that, in the single-phase heat transfer regime, the heat transfer performance of the cold plate with the uni-directional porous copper is 2.1 times higher than that without the porous copper, though the pressure loss with the porous copper also becomes higher. As to the two-phase heat transfer regime, the critical heat flux increases by approximately 35% by introducing the uni-directional porous copper, compared with the CHF of the multiple impinging jet flow. In addition, we confirmed that these heat transfer data were much higher than those of an ordinary single impinging jet flow. These data prove the high potential of the cold plate with the uni-directional porous copper from the viewpoint of not only heat transfer performance but also energy saving.
Keywords: cooling, cold plate, uni-porous media, heat transfer
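The steady-state extrapolation of surface temperature and heat flux from the three embedded thermocouples can be sketched as a one-dimensional linear fit along the block axis combined with Fourier's law. The thermocouple positions and readings below are illustrative, not the paper's data.

```python
# Sketch: extrapolating surface temperature and heat flux from three
# thermocouples embedded along the axis of a copper heat-transfer block.
# Positions and temperatures are hypothetical, for illustration only.

K_COPPER = 390.0  # thermal conductivity of copper, W/(m K)

def fit_gradient(positions_m, temps_c):
    """Least-squares linear fit T(x) = a*x + b for 1-D steady conduction."""
    n = len(positions_m)
    mx = sum(positions_m) / n
    mt = sum(temps_c) / n
    num = sum((x - mx) * (t - mt) for x, t in zip(positions_m, temps_c))
    den = sum((x - mx) ** 2 for x in positions_m)
    a = num / den          # temperature gradient dT/dx, K/m
    b = mt - a * mx        # extrapolated surface temperature, degC (x = 0)
    return a, b

# Thermocouples at 3, 6, 9 mm below the heated surface (x = 0 at surface)
x = [0.003, 0.006, 0.009]
T = [151.0, 167.0, 183.0]  # degC, hypothetical steady-state readings

grad, T0 = fit_gradient(x, T)
q = K_COPPER * grad        # Fourier's law: q = k * dT/dx
print(f"surface temperature ~ {T0:.1f} degC")
print(f"heat flux ~ {q / 1e4:.0f} W/cm2")
```

With equally spaced thermocouples the fit reduces to the average gradient between neighbours; the least-squares form also tolerates uneven spacing.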
Procedia PDF Downloads 295
3622 PaSA: A Dataset for Patent Sentiment Analysis to Highlight Patent Paragraphs
Authors: Renukswamy Chikkamath, Vishvapalsinhji Ramsinh Parmar, Christoph Hewel, Markus Endres
Abstract:
Given a patent document, identifying distinct semantic annotations is an interesting research aspect. Text annotation helps patent practitioners, such as examiners and patent attorneys, to quickly identify the key arguments of any invention, successively providing a timely marking of a patent text. In manual patent analysis, marking paragraphs to recognise their semantic information is common practice for better readability. This semantic annotation process is laborious and time-consuming. To alleviate this problem, we propose a dataset to train machine learning algorithms to automate the highlighting process. The contributions of this work are: i) a multi-class dataset of 150k samples built by traversing USPTO patents over a decade, ii) statistics and distributions of the data articulated through exploratory data analysis, iii) baseline machine learning models developed on the dataset to address the patent paragraph highlighting task, and iv) a path to extend this work using deep learning and domain-specific pre-trained language models towards a highlighting tool. This work assists patent practitioners in highlighting semantic information automatically and aids in creating sustainable and efficient patent analysis using machine learning.
Keywords: machine learning, patents, patent sentiment analysis, patent information retrieval
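A minimal baseline for multi-class paragraph classification of the kind described can be sketched with a multinomial Naive Bayes classifier over a bag of words. The toy corpus and class labels below are invented for illustration; the PaSA dataset itself uses USPTO paragraphs and its own label set.

```python
# Minimal multinomial Naive Bayes baseline for multi-class patent-paragraph
# classification. Corpus and labels are synthetic, for illustration only.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train_nb(docs):
    """docs: list of (text, label). Returns counts used by predict()."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_counts[label] += 1
        for w in tokenize(text):
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict(model, text):
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, c in class_counts.items():
        lp = math.log(c / total_docs)                 # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokenize(text):
            # Laplace (add-one) smoothing for unseen words
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

corpus = [
    ("the invention relates to the field of machine tools", "technical-field"),
    ("prior art devices suffer from excessive wear", "problem"),
    ("the proposed device advantageously reduces wear", "advantage"),
    ("the field of the invention concerns cutting machines", "technical-field"),
]
model = train_nb(corpus)
print(predict(model, "the invention relates to the field of cutting tools"))
```

A TF-IDF weighting and a stronger classifier would be the natural next step, but the counting scheme above is already a workable baseline.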
Procedia PDF Downloads 93
3621 Simulation-Based Validation of Safe Human-Robot-Collaboration
Authors: Titanilla Komenda
Abstract:
Human-machine-collaboration is a direct interaction between humans and machines to fulfil specific tasks. Such collaborative machines are used without fencing and interact with humans in predefined workspaces. Even though human-machine-collaboration enables flexible adaptation to variable degrees of freedom, industrial applications are rarely found. The reason for this is not a lack of technical progress but rather limitations in the planning processes that ensure safety for operators. Until now, humans and machines were mainly considered separately in the planning process, focusing on ergonomics and system performance, respectively. Within human-machine-collaboration, these aspects must not be seen in isolation but need to be analysed in interaction. Furthermore, a simulation model is needed that can validate the system performance and ensure the safety of the operator at any given time. Following on from this, a holistic simulation model is presented, enabling a simulative representation of collaborative tasks, including both humans and machines. The presented model includes not only a geometry and a motion model of interacting humans and machines but also a numerical behaviour model of humans as well as a Boolean probabilistic sensor model. With this, error scenarios can be simulated by validating system behaviour in unplanned situations. As these models can be defined on the basis of Failure Mode and Effects Analysis as well as probabilities of errors, the implementation in a collaborative model is discussed and evaluated regarding limitations and simulation times. The functionality of the model is shown on industrial applications by comparing simulation results with video data. The analysis shows the impact of considering human factors in the planning process, in contrast to meeting system performance alone.
In this sense, an optimisation function is presented that meets the trade-off between human and machine factors and aids in a successful and safe realisation of collaborative scenarios.
Keywords: human-machine-system, human-robot-collaboration, safety, simulation
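The Boolean probabilistic sensor model described above, where each safety sensor either detects or fails with a known per-cycle probability, lends itself to a simple Monte Carlo sketch. The failure probabilities and redundancy level below are invented for illustration; a real model would take them from an FMEA.

```python
# Monte Carlo sketch of a Boolean probabilistic sensor model: each redundant
# safety sensor independently fails with probability p_fail per cycle; the
# human is detected if at least one sensor works. Numbers are hypothetical.
import random

def simulate(p_fail_per_sensor, n_sensors, trials, seed=42):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    detected = 0
    for _ in range(trials):
        if any(rng.random() >= p_fail_per_sensor for _ in range(n_sensors)):
            detected += 1
    return detected / trials

p_detect = simulate(p_fail_per_sensor=0.01, n_sensors=2, trials=100_000)
print(f"estimated detection probability: {p_detect:.5f}")  # ~ 1 - 0.01**2
```

The analytic value for independent sensors is 1 - p_fail^n, so the simulation mainly pays off once sensor failures are correlated or state-dependent, as in the error scenarios the abstract describes.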
Procedia PDF Downloads 361
3620 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network
Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson
Abstract:
The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost. Little research has been undertaken on how to optimally specify what data to capture, transmit, process, and store at the various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the attributes and classification required to take manufacturing digital data from various sources and determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework minimises overhead in terms of network bandwidth/cost and processing time of machine tool data via efficient decision-making on which datasets should be processed at the 'edge' and what should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0
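A fast-and-frugal heuristic of the kind mentioned checks one cue at a time in a fixed order and exits at the first decisive cue. The cue names and thresholds below are hypothetical; the paper derives its own attribute set for machine-tool data.

```python
# Minimal fast-and-frugal decision tree for edge-vs-cloud routing of an IIoT
# dataset. Cues and thresholds are invented placeholders, not the paper's.

def route(sample):
    """Return 'edge' or 'cloud' for a dict of dataset attributes."""
    # Cue 1: hard latency requirement -> must stay at the edge
    if sample["max_latency_ms"] < 50:
        return "edge"
    # Cue 2: very large payloads are cheaper to reduce near the source
    if sample["size_mb"] > 100:
        return "edge"
    # Cue 3: compute-heavy analytics (e.g. model training) go to the cloud
    if sample["compute_heavy"]:
        return "cloud"
    # Default exit when no cue fires
    return "edge"

print(route({"max_latency_ms": 10, "size_mb": 1, "compute_heavy": False}))   # edge
print(route({"max_latency_ms": 500, "size_mb": 2, "compute_heavy": True}))   # cloud
```

The appeal of this structure is that each decision inspects at most a handful of attributes, so the routing itself adds negligible overhead on a constrained edge device.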
Procedia PDF Downloads 182
3619 Development of PM2.5 Forecasting System in Seoul, South Korea Using Chemical Transport Modeling and ConvLSTM-DNN
Authors: Ji-Seok Koo, Hee‑Yong Kwon, Hui-Young Yun, Kyung-Hui Wang, Youn-Seo Koo
Abstract:
This paper presents a forecasting system for PM2.5 levels in Seoul, South Korea, leveraging a combination of chemical transport modeling and ConvLSTM-DNN machine learning technology. Exposure to PM2.5 has known detrimental impacts on public health, making its prediction crucial for establishing preventive measures. Existing forecasting models, like the Community Multiscale Air Quality (CMAQ) and Weather Research and Forecasting (WRF) models, are hindered by their reliance on uncertain input data, such as anthropogenic emissions and meteorological patterns, as well as certain intrinsic model limitations. The system we have developed specifically addresses these issues by integrating machine learning and using carefully selected input features that account for local and distant sources of PM2.5. In South Korea, the PM2.5 concentration is greatly influenced by both local emissions and long-range transport from China, and our model effectively captures these spatial and temporal dynamics. Our PM2.5 prediction system combines the strengths of the hybrid machine learning algorithms ConvLSTM and DNN to improve upon the limitations of the traditional CMAQ model. Data used in the system include forecasted information from the CMAQ and WRF models, along with actual PM2.5 concentration and weather variable data from monitoring stations in China and South Korea. The system was implemented specifically for Seoul's PM2.5 forecasting.
Keywords: PM2.5 forecast, machine learning, ConvLSTM, DNN
Procedia PDF Downloads 56
3618 Comparative Evaluation of Accuracy of Selected Machine Learning Classification Techniques for Diagnosis of Cancer: A Data Mining Approach
Authors: Rajvir Kaur, Jeewani Anupama Ginige
Abstract:
With recent trends in Big Data and advancements in Information and Communication Technologies, the healthcare industry is transitioning from being clinician oriented to technology oriented. Many people around the world die of cancer because the disease was not diagnosed at an early stage. Nowadays, computational methods in the form of Machine Learning (ML) are used to develop automated decision support systems that can diagnose cancer with high confidence in a timely manner. This paper carries out a comparative evaluation of a selected set of ML classifiers on two existing datasets: breast cancer and cervical cancer. The ML classifiers compared in this study are Decision Tree (DT), Support Vector Machine (SVM), k-Nearest Neighbor (k-NN), Logistic Regression, Ensemble (Bagged Tree), and Artificial Neural Networks (ANN). The evaluation is carried out based on the standard evaluation metrics Precision (P), Recall (R), F1-score, and Accuracy. The experimental results show that ANN achieved the highest accuracy (99.4%) when tested on the breast cancer dataset. On the other hand, when the classifiers are tested on the cervical cancer dataset, the Ensemble (Bagged Tree) technique gave better accuracy (93.1%) in comparison to the other classifiers.
Keywords: artificial neural networks, breast cancer, classifiers, cervical cancer, f-score, machine learning, precision, recall
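The four evaluation metrics used in the comparison follow directly from the confusion-matrix counts. The counts below are illustrative, not the paper's results.

```python
# Precision, Recall, F1-score, and Accuracy from confusion-matrix counts
# (tp = true positives, fp = false positives, fn = false negatives,
# tn = true negatives). Counts are hypothetical, for illustration only.

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)                 # of predicted positives, how many are right
    recall = tp / (tp + fn)                    # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of P and R
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = metrics(tp=90, fp=5, fn=5, tn=100)
print(f"P={p:.3f} R={r:.3f} F1={f1:.3f} Acc={acc:.3f}")
```

Accuracy alone can mislead on imbalanced medical datasets, which is why the paper also reports P, R, and F1 for each classifier.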
Procedia PDF Downloads 278
3617 The Improvement of Turbulent Heat Flux Parameterizations in Tropical GCM Simulations Using a Low Wind Speed Excess Resistance Parameter
Authors: M. O. Adeniyi, R. T. Akinnubi
Abstract:
The parameterization of turbulent heat fluxes is needed for modeling land-atmosphere interactions in Global Climate Models (GCMs). However, current GCMs still have difficulty producing reliable turbulent heat fluxes for humid tropical regions, which may be due to inadequate parameterization of the roughness lengths for momentum (z0m) and heat (z0h) transfer. These roughness lengths are usually expressed in terms of an excess resistance factor (κB⁻¹), which accounts for the different resistances to momentum and heat transfer. In this paper, a more appropriate excess resistance factor suitable for low wind speed conditions was developed and incorporated into the aerodynamic resistance approach (ARA) in the GCMs. Also, the performance of various standard GCM κB⁻¹ schemes developed for high wind speed conditions was assessed. Based on in-situ surface heat fluxes and profile measurements of wind speed and temperature from the Nigeria Micrometeorological Experimental site (NIMEX), a new κB⁻¹ was derived through application of Monin-Obukhov similarity theory and the Brutsaert theoretical model for heat transfer. Turbulent flux parameterization with this new formula provides better estimates of heat fluxes than those obtained using existing GCM κB⁻¹ schemes. With the derived κB⁻¹, the MBE and RMSE of the parameterized QH ranged from -1.15 to -5.10 W m⁻² and 10.01 to 23.47 W m⁻², while those of QE ranged from -8.02 to 6.11 W m⁻² and 14.01 to 18.11 W m⁻², respectively. The derived κB⁻¹ gave better estimates of QH than of QE during daytime. The derived scheme is κB⁻¹ = 6.66 Re*^0.02 - 5.47, where Re* is the roughness Reynolds number. The derived κB⁻¹ scheme, which corrects a well documented large overestimation of turbulent heat fluxes, is therefore recommended for most regional models within the tropics where low wind speeds prevail.
Keywords: humid, tropic, excess resistance factor, overestimation, turbulent heat fluxes
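The derived scheme κB⁻¹ = 6.66 Re*^0.02 - 5.47 is straightforward to evaluate once Re* is known. The sketch below assumes the conventional roughness Reynolds number Re* = u* z0m / ν (friction velocity, momentum roughness length, kinematic viscosity of air); the input values are illustrative.

```python
# Evaluating the abstract's derived low-wind-speed excess resistance scheme,
# kB^-1 = 6.66 * Re*^0.02 - 5.47. The Re* definition below (u* z0m / nu) is
# the conventional roughness Reynolds number; input values are hypothetical.

def kb_inverse(re_star):
    """Derived excess resistance factor for low wind speed conditions."""
    return 6.66 * re_star ** 0.02 - 5.47

def reynolds_star(u_star, z0m, nu=1.5e-5):
    """Roughness Reynolds number; nu = kinematic viscosity of air, m^2/s."""
    return u_star * z0m / nu

re = reynolds_star(u_star=0.2, z0m=0.01)  # illustrative u* (m/s) and z0m (m)
print(f"Re* = {re:.1f}, kB^-1 = {kb_inverse(re):.2f}")
```

The very small exponent (0.02) makes the scheme nearly flat in Re*, consistent with its intended use under weak, low-wind forcing.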
Procedia PDF Downloads 207
3616 The Influence of the Salt Body of J. Ech Cheid on the Maturity History of the Cenomanian-Turonian Source Rock
Authors: Mohamed Malek Khenissi, Mohamed Montassar Ben Slama, Anis Belhaj Mohamed, Moncef Saidi
Abstract:
Northern Tunisia is well known for its different and complex structural and geological zones, the result of a geodynamic history that extends from the early Mesozoic era to the present. One of these zones is the salt province, where the halokinesis process is manifested by a number of NE-SW salt structures such as Jebel Ech Cheid, which represents masses of material characterized by high plasticity and low density. These salt extrusions developed due to an extensional regime that lasted from the late Triassic to the late Cretaceous. The evolution of salt bodies within sedimentary basins has not only modified the architecture of the basins but also has certain geochemical effects, mainly on the source rocks that surround the salt. It has been demonstrated that the presence of salt structures within a sedimentary basin can influence its temperature distribution and thermal history. Moreover, it creates heat flux anomalies that may affect the maturity of organic matter and the timing of hydrocarbon generation. Field samples of the Bahloul source rock (Cenomanian-Turonian) were collected from different sites all around the Ech Cheid salt structure and evaluated using Rock-Eval pyrolysis and GC/MS techniques in order to assess the degree of maturity evolution and the heat flux anomalies in the different zones analyzed. The Total Organic Carbon (TOC) values range between 1 and 9%, and Tmax ranges between 424 and 445 °C. The distributions of the saturated and aromatic source rock biomarkers change in a regular fashion with increasing maturity, as shown in the chromatography results: Ts/(Ts+Tm) ratios, 22S/(22S+22R) values for C31 homohopanes, and ββ/(ββ+αα)20R and 20S/(20S+20R) ratios for C29 steranes give consistent maturity indications for the field samples.
These analyses were carried out to interpret the maturity evolution and the heat flux around the Ech Cheid salt structure through geological history. They also aim to demonstrate that the salt structure can have a direct effect on the geothermal gradient of the basin and on the maturity of the Bahloul Formation source rock. The organic matter has reached different stages of thermal maturity but delineates a general increasing maturity trend. Our study confirms that the J. Ech Cheid salt body has, on the one hand, a strong influence on the local distribution of the anoxic depocentre, at least within Cenomanian-Turonian time. On the other hand, the thermal anomaly near the salt mass has affected the maturity of the Bahloul Formation.
Keywords: Bahloul formation, depocentre, GC/MS, rock-eval
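The maturity ratios cited, such as Ts/(Ts+Tm) and 22S/(22S+22R) for the C31 homohopanes, are computed from chromatogram peak areas and approach an equilibrium end-point as thermal maturity increases. The peak areas below are invented for illustration.

```python
# Computing isomer-pair maturity ratios of the form a/(a+b) from
# chromatogram peak areas. All peak areas are hypothetical examples.

def isomer_ratio(a, b):
    """Ratio a/(a+b); rises toward an equilibrium value with maturity."""
    return a / (a + b)

ts, tm = 420.0, 380.0     # hypothetical Ts and Tm peak areas
s22, r22 = 610.0, 390.0   # hypothetical C31 22S and 22R peak areas

print(f"Ts/(Ts+Tm)    = {isomer_ratio(ts, tm):.2f}")
print(f"22S/(22S+22R) = {isomer_ratio(s22, r22):.2f}")
```

Comparing such ratios between sample sites, as the study does around the salt body, is what reveals the spatial maturity trend.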
Procedia PDF Downloads 241
3615 Aerodynamic Heating Analysis of Hypersonic Flow over Blunt-Nosed Bodies Using Computational Fluid Dynamics
Authors: Aakash Chhunchha, Assma Begum
Abstract:
The qualitative aspects of hypersonic flow over a range of blunt bodies have been extensively analyzed in the past. It is well known that the curvature of a body's geometry in the sonic region predominantly dictates the bow shock shape and its standoff distance from the body, while the surface pressure distribution depends on both the sonic region and the local body shape. The present study extends this work, analyzing the hypersonic flow characteristics over several blunt-nosed bodies using modern Computational Fluid Dynamics (CFD) tools to determine the shock shape and its effect on the heat flux around the body. Four blunt-nosed models with cylindrical afterbodies were analyzed for flow at a Mach number of 10, corresponding to standard atmospheric conditions at an altitude of 50 km. The nose radii of curvature of the models range from a hemispherical nose to a flat nose. The numerical models and the supplementary convergence techniques implemented for the CFD analysis are thoroughly described. The flow contours are presented, highlighting the key characteristics of shock wave shape, shock standoff distance, and the sonic point shift on the shock. The variation of heat flux due to the different shock detachments of the various models is comprehensively discussed. It is observed that the blunter the nose, the farther the shock stands from the body and, consequently, the lower the surface heating at the nose. The results obtained from the CFD analyses are compared with approximate theoretical engineering correlations. Overall, a satisfactory agreement is observed between the two.
Keywords: aero-thermodynamics, blunt-nosed bodies, computational fluid dynamics (CFD), hypersonic flow
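One widely quoted engineering correlation of the kind the abstract compares against is Billig's fit for the bow-shock standoff distance on a sphere, Δ/R = 0.143 exp(3.24/M²). It is reproduced here from memory as an illustration and should be checked against the original correlation before use.

```python
# Sketch of an engineering correlation for bow-shock standoff distance on a
# sphere (attributed to Billig): Delta/R = 0.143 * exp(3.24 / M^2). Quoted
# from memory for illustration; verify against the original source.
import math

def standoff_over_radius(mach):
    """Shock standoff distance normalised by nose radius (sphere)."""
    return 0.143 * math.exp(3.24 / mach ** 2)

for m in (5.0, 10.0, 20.0):
    print(f"M = {m:4.1f}: Delta/R = {standoff_over_radius(m):.3f}")
```

The correlation captures the trend the abstract reports qualitatively: the standoff distance scales with the nose radius and decreases slowly as the Mach number rises.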
Procedia PDF Downloads 144
3614 Predicting the Compressive Strength of Geopolymer Concrete Using Machine Learning Algorithms: Impact of Chemical Composition and Curing Conditions
Authors: Aya Belal, Ahmed Maher Eltair, Maggie Ahmed Mashaly
Abstract:
Geopolymer concrete is gaining recognition as a sustainable alternative to conventional Portland cement concrete due to its environmentally friendly nature, a key goal of Smart City initiatives. It has demonstrated its potential as a reliable material for the design of structural elements. However, the production of geopolymer concrete is hindered by batch-to-batch variations, which presents a significant challenge to its widespread adoption. To date, machine learning has had a profound impact on various fields by enabling models to learn from large datasets and predict outputs accurately. This paper proposes an integration between the current drift towards Artificial Intelligence and the composition of geopolymer mixtures to predict their mechanical properties. The study employs Python to develop a machine learning model, specifically Decision Trees. The model uses the percentage oxides and the chemical composition of the alkali solution, along with the curing conditions, as the input parameters, irrespective of the waste products used in the mixture, and yields the compressive strength of the mix as the output parameter. The results showed 90% agreement between the predicted and actual values, with the ratio of the sodium silicate to the sodium hydroxide solution being the dominant parameter in the mixture.
Keywords: decision trees, geopolymer concrete, machine learning, smart cities, sustainability
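The kind of split a decision tree makes on a dominant feature, here the silicate-to-hydroxide ratio the abstract identifies, can be sketched with a depth-1 tree (a "stump") in plain Python. The data points are invented; the paper's model is a full decision tree over all the stated inputs.

```python
# Depth-1 decision tree ("stump") regressor: find the single split on one
# feature minimising total squared error. Data pairs are hypothetical
# (silicate/hydroxide ratio, compressive strength in MPa).

def best_stump(xs, ys):
    """Return (threshold, left mean, right mean) minimising total SSE."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for k in range(1, len(xs)):
        thr = (xs[order[k - 1]] + xs[order[k]]) / 2  # midpoint candidate
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, thr, ml, mr)
    return best[1:]

ratio = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
strength = [22.0, 25.0, 38.0, 41.0, 40.0, 39.0]

thr, lo, hi = best_stump(ratio, strength)
print(f"split at ratio {thr:.2f}: predict {lo:.1f} MPa below, {hi:.1f} MPa above")
```

A full tree repeats this split search recursively on each side, over every input feature, which is how the dominant parameter surfaces at the root.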
Procedia PDF Downloads 89
3613 Machine Learning Based Gender Identification of Authors of Entry Programs
Authors: Go Woon Kwak, Siyoung Jun, Soyun Maeng, Haeyoung Lee
Abstract:
Entry is an education platform used in South Korea, created to help students learn to code while playing. Using the online version of Entry, teachers can easily assign programming homework, and students can make programs simply by linking programming blocks. However, the programs may be made by others, so the authors of the programs should be identified. In this paper, as the first step toward author identification of Entry programs, we present an artificial neural network based classification approach to identify the gender of the author of a program written in Entry. A neural network has been trained on labeled training data that we have collected. Our result in progress, although preliminary, shows that the proposed approach could feasibly be applied to the online version of Entry for gender identification of authors. As future work, we will use a machine learning technique for age identification of authors of Entry programs, which would be the second step toward author identification.
Keywords: artificial intelligence, author identification, deep neural network, gender identification, machine learning
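A minimal single-neuron (logistic) classifier gives the flavour of the trained neural network described, reduced to its simplest form. The two features (imagined as, say, normalised block-usage counts per category) and labels below are entirely synthetic.

```python
# Minimal logistic (single-neuron) binary classifier trained with gradient
# descent, a stripped-down stand-in for the paper's neural network. The
# features and labels are synthetic, for illustration only.
import math

def train(samples, labels, lr=0.5, epochs=500):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
            g = p - y                        # gradient of log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Synthetic 2-feature samples, linearly separable for demonstration
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.3]]
y = [0, 0, 1, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])
```

A real network for this task would stack several such layers and take a richer feature vector extracted from the Entry program's blocks.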
Procedia PDF Downloads 324
3612 The Effect of Naringenin on the Apoptosis in T47D Cell Line of Breast Cancer
Authors: AliAkbar Hafezi, Jahanbakhsh Asadi, Majid Shahbazi, Alijan Tabarraei, Nader Mansour Samaei, Hamed Sheibak, Roghaye Gharaei
Abstract:
Background: Breast cancer is the most common cancer in women. In most cancer cells, apoptosis is blocked. Given the importance of apoptosis in cancer cell death and the role of different genes in its induction or inhibition, the search for compounds that can initiate the process of apoptosis in tumor cells is discussed as a new strategy in anticancer drug discovery. The aim of this study was to investigate the effect of Naringenin (NGEN) on apoptosis in the T47D cell line of breast cancer. Materials and Methods: In this in vitro experimental study, the T47D breast cancer cell line was selected as a sample. The cells were treated for 24, 48, and 72 hours with doses of 20, 200, and 1000 µM of Naringenin. Then, the transcription levels of the genes involved in apoptosis, including Bcl-2, Bax, Caspase 3, Caspase 8, Caspase 9, P53, PARP-1, and FAS, were assessed using Real-Time PCR. The collected data were analyzed using IBM SPSS Statistics 24.0. Results: The results showed that Naringenin at doses of 20, 200, and 1000 µM, at all three time points of 24, 48, and 72 hours, increased the expression of Caspase 3, P53, PARP-1, and FAS, reduced the expression of Bcl-2, and increased the Bax/Bcl-2 ratio; nevertheless, at none of the studied doses and times did it have a significant effect on the expression of Bax, Caspase 8, and Caspase 9. Conclusion: This study indicates that Naringenin can reduce the growth of some cancer cells and cause their death through increased apoptosis and decreased anti-apoptotic Bcl-2 gene expression, resulting in the induction of apoptosis via both the intrinsic and extrinsic pathways.
Keywords: apoptosis, breast cancer, naringenin, T47D cell line
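Relative expression changes measured by Real-Time PCR are conventionally quantified with the 2^-ΔΔCt method: the target gene's Ct is normalised against a reference gene, then against the untreated control. The Ct values and reference gene below are invented for illustration; the abstract does not state which normalisation was used.

```python
# The 2^-ddCt method for relative gene expression from Real-Time PCR Ct
# values. All Ct values and the GAPDH reference are hypothetical examples.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control                 # normalise to control
    return 2.0 ** (-dd_ct)                              # assumes ~100% PCR efficiency

# Hypothetical Ct values for a target gene vs a GAPDH reference
fc = fold_change(ct_target_treated=24.0, ct_ref_treated=18.0,
                 ct_target_control=26.0, ct_ref_control=18.0)
print(f"fold change: {fc:.1f}x")  # ddCt = -2 -> 4-fold up-regulation
```

A fold change above 1 indicates up-regulation under treatment (as reported for Caspase 3, P53, PARP-1, and FAS), below 1 down-regulation (as for Bcl-2).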
Procedia PDF Downloads 53