Search results for: methodological combination
560 Design and Implementation of Generative Models for Odor Classification Using Electronic Nose
Authors: Kumar Shashvat, Amol P. Bhondekar
Abstract:
Among the five senses, smell is the most evocative and the least understood. Odor testing has long seemed opaque, and odor data unfamiliar, to most practitioners, yet the recognition and classification of odor is an important problem. The ability to smell a product and predict whether it is still usable or has become unfit for consumption is worth imitating in a model. The prevailing industrial standard for this kind of classification is color-based; however, odor can be a better classifier than color and, if incorporated into a machine, would be highly useful. Various discriminative approaches have been used to classify the odor of peas, trees, and cashews. Discriminative approaches offer good predictive performance and have been widely used in many applications, but they are unable to make effective use of unlabeled information. In such scenarios, generative approaches have better applicability, as they can handle problems in which the variability in the range of possible input vectors is enormous. Generative models are used in machine learning either to model data directly or as an intermediate step in forming a conditional probability density function. In this work, Linear Discriminant Analysis and the Naive Bayes classifier were used to classify the odor of cashews. Linear Discriminant Analysis is a method used in data classification, pattern recognition, and machine learning to find a linear combination of features that characterizes or separates two or more classes of objects or events. The Naive Bayes algorithm is a classification approach based on Bayes' rule and a set of conditional independence assumptions. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem.
The main advantage of generative models is that they make stronger assumptions about the data, specifically about the distribution of the predictors given the response variables. The electronic instrument used for artificial odor sensing and classification is the electronic nose, a device designed to imitate the human sense of smell by providing an analysis of individual chemicals or chemical mixtures. The experimental results were evaluated using the performance measures accuracy, precision, and recall, and show that the overall performance of Linear Discriminant Analysis was better than that of the Naive Bayes classifier on the cashew dataset.
Keywords: odor classification, generative models, naive bayes, linear discriminant analysis
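The LDA-versus-Naive-Bayes comparison described above can be reproduced on any labeled sensor dataset. The sketch below uses scikit-learn on synthetic data standing in for electronic-nose readings; the cashew dataset itself is not public, so all data here is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for e-nose sensor responses: 8 sensors, 3 odor classes
X, y = make_classification(n_samples=300, n_features=8, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit both classifiers and report the three measures used in the abstract
for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("NB", GaussianNB())]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          accuracy_score(y_te, y_pred),
          precision_score(y_te, y_pred, average="macro"),
          recall_score(y_te, y_pred, average="macro"))
```

Macro-averaging treats the odor classes equally, which matches the per-class accuracy/precision/recall comparison the abstract describes.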
Procedia PDF Downloads 390
559 Using GIS and AHP Model to Explore the Parking Problem in Khomeinishahr
Authors: Davood Vatankhah, Reza Mokhtari Malekabadi, Mohsen Saghaei
Abstract:
The function of an urban transportation system depends on the existence of the required infrastructure, the appropriate placement of its components, and the cooperation of those components with each other. Establishing parking spaces in city neighborhoods to prevent the long-term, inappropriate parking of cars in the alleys is one of the most effective ways to reduce crowding and density in a neighborhood. Every place with a certain function attracts a number of daily trips from across the city. A large percentage of the people visiting these places travel by car and therefore need a space to park; the size of this need depends on the land use and travel demand of the place. This study investigates the spatial distribution of public parking spaces, determines the factors that affect their siting, and combines those factors in a GIS environment for Khomeinishahr, near the city of Isfahan. Ultimately, the study intends to create an appropriate pattern for locating parking spaces, determine the parking demand of the traffic zones, choose proper places for providing the required public parking spaces, and propose new spots to improve the quantity and quality of the city's public parking provision. Regarding method, the study is applied in purpose and analytic-descriptive in nature. The population of the study comprises the people of the center of Khomeinishahr, located northwest of Isfahan, with a geographic area of about 5000 hectares and a population of 241,318. To determine the sample size, Cochran's formula was applied; based on the 26,483 people of the studied area, 231 questionnaires were used.
Data analysis was carried out in SPSS. After estimating the required parking capacity, the criteria affecting the location of public parking were first weighted using the Analytic Hierarchy Process in ArcGIS, and appropriate sites for parking were then determined with the fuzzy Ordered Weighted Averaging (OWA) method. The results indicate that parking in Khomeinishahr has not been sited appropriately and that the per capita supply of parking is inadequate for the population and demand: in addition to the existing lots, 1,434 parking lots are needed in the study area each day. There is therefore no logical proportion between parking demand and the number of parking lots in Khomeinishahr.
Keywords: GIS, locating, parking, Khomeinishahr
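Cochran's sample-size calculation mentioned above combines an infinite-population estimate with a finite-population correction. A minimal sketch, assuming the conventional z = 1.96 and p = 0.5 (the study's exact margin of error is not stated):

```python
import math

def cochran_sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Cochran's formula with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(cochran_sample_size(26483))  # → 379 at a 5% margin of error
```

At the default 5% margin this yields 379 respondents for a population of 26,483; the study's figure of 231 questionnaires therefore implies a wider margin of error or a different response model than assumed here.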
Procedia PDF Downloads 311
558 Protective Role of Curcumin against Ionising Radiation of Gamma Ray
Authors: Turban Kar, Maitree Bhattacharyya
Abstract:
Curcumin, a dietary antioxidant, has been identified as a remarkable molecule with therapeutic properties that protect cellular macromolecules from oxidative damage. In this experimental study, we explored the effectiveness of curcumin in protecting the structure of Human Serum Albumin (HSA) exposed to gamma irradiation. HSA, an important transport protein of the circulatory system, binds a variety of metabolites, drugs, dyes, and fatty acids in the hydrophobic pockets of its structure. Owing to its long half-life and its role in regulating osmotic blood pressure, HSA is also actively involved in transporting drugs and metabolites to their targets. Gamma rays, at increasing doses, alter the structure of the protein and generate superoxide radicals; curcumin mitigates this damage, as evidenced in the following experiments. Our study explores the possibility of protection by curcumin against the molecular and conformational changes of HSA exposed to gamma irradiation. We used a combination of spectroscopic methods to probe the conformational ensemble of the irradiated HSA and finally evaluated the extent of restoration by curcumin. SDS-PAGE indicated the formation of cross-linked aggregates with increasing gamma radiation exposure. CD and FTIR spectroscopy showed a significant decrease in the alpha-helix content of HSA, from 57% to 15%, with increasing radiation dose. Steady-state and time-resolved fluorescence studies complemented these measurements: the fluorescence lifetime was significantly reduced, from 6.35 ns to 0.37 ns. Hydrophobicity and bityrosine studies showed the effectiveness of curcumin in protecting against radiation-induced free radical generation.
Moreover, bityrosine and hydrophobicity profiling of gamma-irradiated HSA in the presence and absence of curcumin shed light on the formation of reactive oxygen species and the protective role of curcumin. The molecular mechanism by which curcumin protects HSA from gamma irradiation is as yet unknown, though a possible explanation is proposed in this work using a Thioflavin T assay. When HSA is irradiated at a low dose of gamma radiation in the presence of curcumin, it retains its native characteristic properties to a greater extent, indicating stabilization of the molecular structure. Thus, curcumin may be used as a therapeutic strategy to protect cellular proteins.
Keywords: bityrosine content, conformational change, curcumin, gamma radiation, human serum albumin
Procedia PDF Downloads 156
557 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails
Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali
Abstract:
When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially for functional and safety-relevant components, and because of the number of features and the accuracy requirements they can only be measured offline. The statistical evaluation of process capability and control measurements minimizes, but does not eliminate, the risk of producing components outside the tolerances. The inspection intervals are based on the acceptable risk, come at the expense of productivity, and remain reactive and, in some cases, considerably delayed. Thanks to the considerable progress made in condition monitoring and measurement technology, permanently installed sensor systems, in combination with machine learning and artificial intelligence in particular, offer the potential to derive forecasts for component geometry independently and thus eliminate the risk of defective products actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data, and measuring all geometric characteristics is neither sensible nor technically possible. This paper therefore uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis; otherwise, forecasting components in real time and inline would not be possible. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). Running such measuring programs alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2,000 components, which have to be sorted or scrapped if the measurement results are negative.
Over a period of two months, all measurement data (> 200 measurements per variant) was collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of an average of 125 characteristics across 4 different products. A further 10 features correlate via indirect relationships, so the number of features required for a prediction could be reduced to fewer than 20. A correlation factor > 0.8 was assumed for all correlations.
Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis
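Feature reduction by pairwise correlation with a |r| > 0.8 cutoff, as applied to the seat-rail characteristics above, can be sketched with pandas. The data here is synthetic and the column names are invented; some columns are deliberately near-copies of others to mimic directly correlated measurements:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-in for seat-rail measurements: feat5-feat7 nearly duplicate feat0-feat2
base = rng.normal(size=(200, 5))
df = pd.DataFrame(
    np.hstack([base, base[:, :3] + 0.01 * rng.normal(size=(200, 3))]),
    columns=[f"feat{i}" for i in range(8)])

# Upper triangle of the absolute correlation matrix (each pair counted once)
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

# Drop every feature that correlates > 0.8 with an earlier one
to_drop = [col for col in upper.columns if (upper[col] > 0.8).any()]
reduced = df.drop(columns=to_drop)
print(to_drop, reduced.shape)
```

Each dropped column can later be predicted from its retained partner, which is exactly the replacement logic that lets the measuring program shrink.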
Procedia PDF Downloads 50
556 Co-Creational Model for Blended Learning in a Flipped Classroom Environment Focusing on the Combination of Coding and Drone-Building
Authors: A. Schuchter, M. Promegger
Abstract:
The outbreak of the COVID-19 pandemic has shown us that online education is much more than just a cool feature for teachers; it is an essential part of modern teaching. In online math teaching, it is common to use tools to share screens and to compute and calculate mathematical examples while the students watch the process. At the same time, flipped classroom models are on the rise, with their focus on how students can gather knowledge by watching videos and on the teacher's use of technological tools for information transfer. This paper proposes a co-educational teaching approach for coding and engineering subjects that uses drone-building to spark interest in technology and create a platform for knowledge transfer. The project combines aspects of mathematics (matrices, vectors, shaders, trigonometry), physics (force, pressure, and rotation), and coding (computational thinking, block-based programming, JavaScript, and Python) and makes use of collaborative shared 3D modeling with clara.io, through which students create mathematical know-how. The instructor follows a problem-based learning approach and encourages students to find solutions in their own time and in their own way, which helps them develop new skills intuitively and boosts logically structured thinking. The collaborative aspect of working in groups helps the students develop communication skills as well as structural and computational thinking. Students are not just listeners, as in traditional classroom settings, but play an active part in creating content together by compiling a Handbook of Knowledge (called an 'open book') with examples and solutions. Before students start calculating, they have to write down all their ideas and working steps in full sentences so that other students can easily follow their train of thought.
Therefore, students learn to formulate goals, solve problems, and create a ready-to-use product with the help of reverse engineering, cross-referencing, and creative thinking. Working on drones gives the students the opportunity to create a real-life application with a practical purpose while going through all stages of product development.
Keywords: flipped classroom, co-creational education, coding, making, drones, co-education, ARCS model, problem-based learning
Procedia PDF Downloads 121
555 Potential of Ozonation and Phytoremediation to Reduce Hydrocarbon Levels Remaining after the Pilot Scale Microbial Based Bioremediation (Land-Farming) of a Heavily Polluted Soil
Authors: Hakima Althalb
Abstract:
Petroleum contamination of sandy soils is a severe environmental problem in Libya, but relatively little work has been carried out to optimize the bioremediation of such heavily contaminated soil, particularly at pilot scale. The purpose of this research was to determine the potential for microbial-based bioremediation of hydrocarbon-contaminated soil obtained from an oil refinery in Libya and to assess the potential of both ozonation and phytoremediation (both applied after the initial bioremediation) to reduce residual hydrocarbon levels. Plots containing 500 kg of soil (in triplicate; contaminated soil diluted with clean soil, 50% by volume), designated Land Treatment Units (LTUs), were set up with five different nutrient levels and mixtures (urea + NPK (nitrogen, phosphorus, potassium) mixtures) to obtain a C:N:P ratio of 100:10:1 and monitored for 90 days. Hydrocarbon levels, microbial numbers, and toxicity (EC50, using luminescent microbial-based tests) were assessed. Hydrocarbon levels in non-diluted and diluted soil ranged from 20,733-22,366 mg/kg and from 16,000-17,000 mg/kg, respectively. Although all the land treatment units showed a significant reduction in hydrocarbons over time, the highest reduction obtained was around 60%; for example, 63% hydrocarbon removal was observed using a mixture of urea and NPK with a C:N:P ratio of 100:10:1. Soil toxicity (assessed using luminescence-based toxicity assays) fell in line with the observed reduction in total petroleum hydrocarbons. However, as relatively high residual TPH (total petroleum hydrocarbon) levels (6,033-14,166 mg/kg) were still present after the initial bioremediation, two post-treatments, phytoremediation and ozonation, were attempted to remove the remaining hydrocarbons. Five locally grown (agriculturally important) plant species were tested.
The germination of all plants examined was strongly inhibited (80-100%), and seedlings failed to grow well in the contaminated soil, indicating that the previously bioremediated soils were still toxic to the plants. Subsequent ozonation followed by a further round of bioremediation was more successful than phytoremediation, but even the most promising treatment in this study (ozonation for 6 hours at 25 ppm followed by bioremediation) removed only approximately 31% of the residual hydrocarbons. Overall, this work showed that the bioremediation of such highly contaminated soils is difficult and that a combination of treatments is required to achieve successful remediation. Even after initial dilution and bioremediation, the soils remained toxic to plant growth and were therefore not suitable for phytoremediation.
Keywords: bioremediation, petroleum hydrocarbons, ozone, phytoremediation
Procedia PDF Downloads 184
554 Comparison of Developed Statokinesigram and Marker Data Signals by Model Approach
Authors: Boris Barbolyas, Kristina Buckova, Tomas Volensky, Cyril Belavy, Ladislav Dedik
Abstract:
Background: Human balance control is often studied on the basis of the statokinesigram. In this study, human postural reactions are analyzed by combining the stabilometry output signal with the processing, analysis, and interpretation of retroreflective marker data. The study also shows another original application of the Method of Developed Statokinesigram Trajectory (MDST). Methods: Participants maintained quiet bipedal standing on a stabilometry platform for 10 s; bilateral vibration stimuli were then applied to the Achilles tendons for a 20 s interval. The vibration stimuli caused the human postural system to settle into a new pseudo-steady state. Vibration frequencies were 20, 60, and 80 Hz. The participants' body segments (head, shoulders, hips, knees, ankles, and little fingers) were marked with 12 retroreflective markers, whose positions were scanned by a six-camera BTS SMART DX system. Registration of the postural reaction lasted 60 s at a sampling frequency of 100 Hz. The measured data were processed with the Method of Developed Statokinesigram Trajectory. Regression analysis of the developed statokinesigram trajectory (DST) data and the retroreflective marker developed trajectory (DMT) data was used to find out which marker trajectories correlate most with the stabilometry platform output signals. Scaling coefficients (λ) between DST and DMT were also evaluated by linear regression analysis. Results: Scaling coefficients were identified for the marker trajectories of all body segments. The head marker trajectories reached the maximal value of the scaling coefficient and the ankle marker trajectories the minimal value. The hip, knee, and ankle markers were approximately symmetrical in terms of the scaling coefficient, whereas notable differences were detected in the head and shoulder marker trajectories, which were not symmetrical. The model of postural system behavior was identified by MDST.
Conclusion: The value of the scaling factor identifies which body segment is predisposed to postural instability. Hypothetically, if the statokinesigram represents the overall response of the human postural system to vibration stimuli, then the marker data represent the particular postural responses, and it can be assumed that the cumulative sum of the particular marker postural responses equals the statokinesigram.
Keywords: center of pressure (CoP), method of developed statokinesigram trajectory (MDST), model of postural system behavior, retroreflective marker data
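The scaling coefficient λ between a developed statokinesigram trajectory (DST) and a developed marker trajectory (DMT) amounts to the slope of a no-intercept least-squares fit. A sketch on synthetic trajectories, where the signal model (a monotone developed trajectory plus noise) is an assumption for illustration, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 6000)  # 60 s registration at 100 Hz sampling

# Developed trajectories are cumulative path lengths, hence monotone increasing
dst = np.cumsum(np.abs(rng.normal(size=t.size)))            # CoP trajectory
dmt_head = 1.8 * dst + rng.normal(scale=5.0, size=t.size)   # head marker, larger sway

# Least-squares scaling coefficient lambda in the model dmt ≈ lambda * dst
lam = np.dot(dst, dmt_head) / np.dot(dst, dst)
r = np.corrcoef(dst, dmt_head)[0, 1]
print(lam, r)
```

With the simulated head marker swaying 1.8 times as far as the platform signal, the fit recovers λ ≈ 1.8, mirroring the finding that head markers carry the largest scaling coefficients.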
Procedia PDF Downloads 351
553 Synthesis of Temperature Sensitive Nano/Microgels by Soap-Free Emulsion Polymerization and Their Application in Hydrate Sediments Drilling Operations
Authors: Xuan Li, Weian Huang, Jinsheng Sun, Fuhao Zhao, Zhiyuan Wang, Jintang Wang
Abstract:
Natural gas hydrates (NGHs) have gained increasing attention as a promising alternative energy source. Hydrate-bearing formations in marine areas are highly unconsolidated and fragile, being composed of weakly cemented sand-clay and silty sediments. During the drilling process, the invasion of drilling fluid can easily lead to excessive water content in the formation. This changes the soil liquid-plastic limit index, which significantly affects formation quality and leads to wellbore instability because of the metastable character of hydrate-bearing sediments. Controlling filtrate loss into the formation during drilling is therefore essential for protecting the stability of the wellbore. In this study, the temperature-sensitive nanogel P(NIPAM-co-AMPS-co-tBA) was prepared by soap-free emulsion polymerization, and its temperature-sensitive behavior was employed to achieve self-adaptive plugging in hydrate sediments. First, the effects of the amounts of AMPS, tBA, and the cross-linker MBA on the microgel synthesis process and temperature-sensitive behavior were investigated. The results showed that, as a reactive emulsifier, AMPS can not only participate in the polymerization reaction but also act as an emulsifier to stabilize micelles and enhance the stability of the nanoparticles. The volume phase transition temperature (VPTT) of the nanogels gradually decreased as the content of the hydrophobic monomer tBA increased, while an increase in the content of the cross-linking agent MBA led to a rise in coagulum content and instability of the emulsion. The plugging performance of the nanogel was evaluated in a core sample with a pore size distribution of 100-1000 nm. The temperature-sensitive nanogel can effectively improve the microfiltration performance of drilling fluid.
Since a combination of a series of nanogels can provide a wide particle size distribution, from around 200 nm to 800 nm, at any temperature, the self-adaptive plugging capacity of the nanogels for hydrate sediments was demonstrated. The thermosensitive nanogel is a potential intelligent plugging material for drilling operations in natural gas hydrate-bearing sediments.
Keywords: temperature-sensitive nanogel, NIPAM, self-adaptive plugging performance, drilling operations, hydrate-bearing sediments
Procedia PDF Downloads 176
552 Application of Deep Learning and Ensemble Methods for Biomarker Discovery in Diabetic Nephropathy through Fibrosis and Propionate Metabolism Pathways
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Diabetic nephropathy (DN) is a major complication of diabetes, with fibrosis and propionate metabolism playing critical roles in its progression. Identifying biomarkers linked to these pathways may provide novel insights into DN diagnosis and treatment. This study aims to identify biomarkers associated with fibrosis and propionate metabolism in DN, analyze the biological pathways and regulatory mechanisms of these biomarkers, and develop a machine learning model to predict DN-related biomarkers and validate their functional roles. Publicly available transcriptome datasets related to DN (GSE96804 and GSE104948) were obtained from the GEO database (https://www.ncbi.nlm.nih.gov/gds), and 924 propionate metabolism-related genes (PMRGs) and 656 fibrosis-related genes (FRGs) were identified. The analysis began with the extraction of DN-differentially expressed genes (DN-DEGs) and propionate metabolism-related DEGs (PM-DEGs), followed by intersecting these with the fibrosis-related genes to identify key intersected genes. Instead of relying on traditional models, we employed a combination of deep neural networks (DNNs) and ensemble methods such as Gradient Boosting Machines (GBM) and XGBoost to enhance feature selection and biomarker discovery. Recursive feature elimination (RFE) was coupled with these advanced algorithms to refine the selection of the most critical biomarkers. Functional validation was conducted using convolutional neural networks (CNNs) for gene set enrichment and immune infiltration analysis, revealing seven significant biomarkers: SLC37A4, ACOX2, GPD1, ACE2, SLC9A3, AGT, and PLG. These biomarkers are involved in critical biological processes such as fatty acid metabolism and glomerular development, providing a mechanistic link to DN progression.
Furthermore, a TF-miRNA-mRNA regulatory network was constructed using natural language processing models to identify 8 transcription factors and 60 miRNAs that regulate these biomarkers, while a drug-gene interaction network revealed potential therapeutic targets such as UROKINASE-PLG and ATENOLOL-AGT. This integrative approach, leveraging deep learning and ensemble models, not only enhances the accuracy of biomarker discovery but also offers new perspectives on DN diagnosis and treatment, specifically targeting the fibrosis and propionate metabolism pathways.
Keywords: diabetic nephropathy, deep neural networks, gradient boosting machines (GBM), XGBoost
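The RFE-plus-gradient-boosting feature selection described above can be sketched with scikit-learn. The expression matrix below is synthetic, and keeping exactly seven features merely mirrors the seven biomarkers reported; none of the study's actual data or gene identifiers are used:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE

# Synthetic stand-in for a gene-expression matrix (samples x candidate genes)
X, y = make_classification(n_samples=120, n_features=40, n_informative=7,
                           random_state=0)

# Recursive feature elimination wrapped around a gradient boosting model:
# repeatedly fit, rank features by importance, and drop the weakest one
selector = RFE(GradientBoostingClassifier(random_state=0),
               n_features_to_select=7)
selector.fit(X, y)
selected = [i for i, keep in enumerate(selector.support_) if keep]
print(selected)
```

In practice the retained column indices would be mapped back to gene symbols, giving a ranked candidate-biomarker shortlist for downstream validation.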
Procedia PDF Downloads 12
551 Photocatalytic Disintegration of Naphthalene and Naphthalene Similar Compounds in Indoors Air
Authors: Tobias Schnabel
Abstract:
Naphthalene and naphthalene-like compounds are a common problem in the indoor air of German buildings from the 1960s and 1970s. Tar-containing roofing felt was often laid under the concrete floor to keep moisture from rising through it; this felt contains high concentrations of PAHs (polycyclic aromatic hydrocarbons) and naphthalene. Naphthalene evaporates easily and contaminates the indoor air. The naphthalene concentration rises especially after renovations and energy-efficiency retrofits of the buildings, because no forced air exchange can then occur. Because of this problem, it is often necessary to replace the floors after renovation. The MFPA Weimar (Material Research and Testing Facility), in a cooperative project with LEJ GmbH and Reichmann Gebäudetechnik GmbH, developed a technical solution for the disintegration of naphthalene and naphthalene-like compounds in indoor air by photocatalytic reforming. Photocatalytic systems produce active oxygen species (hydroxyl radicals) by irradiating semiconductors at a wavelength corresponding to their band gap. The light energy separates charges in the semiconductor, producing free electrons in the conduction band and holes (defect electrons). The holes can react with hydroxide ions to form hydroxyl radicals, which are strong oxidizing agents and can oxidize organic matter to carbon dioxide and water. During the research, new titanium dioxide catalyst surface coatings were developed. This coating technology allows very porous titanium dioxide layers to be produced on temperature-stable carrier materials. The porosity allows the naphthalene to be easily adsorbed by the surface coating, which accelerates the heterogeneous photocatalytic reaction. The photocatalytic reaction is driven by high-power, high-efficiency UV-A (ultraviolet) LEDs with a wavelength of 365 nm.
Various tests in emission chambers and on the reformer itself show that a reduction of naphthalene at relevant concentrations between 2 and 250 µg/m³ is possible; the disintegration rate was at least 80%. To reduce the naphthalene concentration from 30 µg/m³ to a level below 5 µg/m³ in a typical 50 m² classroom, an energy input of 6 kWh is needed. The benefit of photocatalytic indoor air treatment is that every organic compound in the air can be disintegrated and reduced. The use of new photocatalytic materials in combination with highly efficient UV LEDs makes a safe and energy-efficient reduction of organic compounds in indoor air possible. At the moment, the air cleaning systems are moving from the prototype stage into use in real buildings.
Keywords: naphthalene, titanium dioxide, indoor air, photocatalysis
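The classroom figure above can be sanity-checked with simple mass-balance arithmetic. The 3 m ceiling height and the single-pass removal of the concentration difference are assumptions not stated in the abstract:

```python
# Back-of-envelope mass balance for the classroom example above.
area_m2 = 50.0
height_m = 3.0                 # assumed ceiling height (not given in the source)
volume_m3 = area_m2 * height_m

c_start, c_target = 30.0, 5.0  # naphthalene concentrations in µg/m³
mass_removed_ug = (c_start - c_target) * volume_m3   # mass removed from the room air

energy_kwh = 6.0               # reported energy demand
print(mass_removed_ug / 1000, "mg removed,",
      energy_kwh / (mass_removed_ug / 1000), "kWh per mg")
```

Under these assumptions only a few milligrams of naphthalene are removed per cleaning cycle, which illustrates why continuous re-emission from the roofing felt, rather than the absolute mass in the air, dominates the energy demand.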
Procedia PDF Downloads 144
550 Redirecting Photosynthetic Electron Flux in the Engineered Cyanobacterium Synechocystis sp. PCC 6803 by the Deletion of Flavodiiron Protein Flv3
Authors: K. Thiel, P. Patrikainen, C. Nagy, D. Fitzpatrick, E.-M. Aro, P. Kallio
Abstract:
Photosynthetic cyanobacteria have been recognized as potential future biotechnological hosts for the direct conversion of CO₂ into chemicals of interest, using sunlight as the energy source. However, in order to develop commercially viable systems, the flux of electrons from the photosynthetic light reactions towards specified target chemicals must be significantly improved. The objective of this study was to investigate whether the autotrophic production efficiency of specified end-metabolites can be improved in engineered cyanobacterial cells by rescuing excited electrons that are normally lost to molecular oxygen via the cyanobacterial flavodiiron protein Flv1/3. Natively, Flv1/3 dissipates excess electrons in the photosynthetic electron transfer chain by directing them to molecular oxygen in a Mehler-like reaction that protects photosystem I. To evaluate the effect of flavodiiron inactivation on autotrophic production efficiency in the cyanobacterial host Synechocystis sp. PCC 6803 (Synechocystis), sucrose was selected as the quantitative reporter and as a representative of a potential end-product of interest. The concept is based on the native property of Synechocystis to produce sucrose as an intracellular osmoprotectant when exposed to high external ion concentrations, combined with the introduction of a heterologous sucrose permease (CscB from Escherichia coli) that transports the sucrose out of the cell. In addition, cell growth, photosynthetic gas fluxes (using membrane inlet mass spectrometry), and endogenous storage compounds were analysed to illustrate the effects of flv deletion on pathway flux distributions. The results indicate that a significant proportion of the electrons can be lost to molecular oxygen via Flv1/3 even when the cells are grown under high CO₂, and that the inactivation of flavodiiron activity can enhance the photosynthetic electron flux towards optionally available sinks.
The flux distribution depends on the light conditions and the genetic context of the Δflv mutants and favors the production of either sucrose or one of the two storage compounds, glycogen or polyhydroxybutyrate. In conclusion, eliminating the native Flv1/3 reaction while introducing an engineered product pathway as an alternative sink for excited electrons could enhance the photosynthetic electron flux towards the target end-product without compromising the fitness of the host.
Keywords: cyanobacterial engineering, flavodiiron proteins, redirecting electron flux, sucrose
Procedia PDF Downloads 125
549 Field Environment Sensing and Modeling for Pears towards Precision Agriculture
Authors: Tatsuya Yamazaki, Kazuya Miyakawa, Tomohiko Sugiyama, Toshitaka Iwatani
Abstract:
The introduction of sensor technologies into agriculture is a necessary step towards realizing Precision Agriculture. Although sensing methodologies themselves have become widespread owing to miniaturization and falling sensor costs, the sensing data can be difficult to analyze and understand. Targeting the pear cultivar 'Le Lectier', which is particular to Niigata in Japan, cultivation environment data have been collected in pear fields by eight sorts of sensors: field temperature, field humidity, rain gauge, soil water potential, soil temperature, soil moisture, inner-bag temperature, and inner-bag humidity sensors. The inner-bag temperature and humidity sensors measure the environment inside the fruit bags used for the pre-harvest bagging of pears; in this experiment, three kinds of fruit bags were used. After more than 100 days of continuous measurement, a large volume of sensing data was collected. Firstly, correlation analysis among the data measured by the respective sensors reveals that one sensor can replace another, so that more efficient and cost-saving sensing systems can be proposed to pear farmers. Secondly, the inner-bag environmental sensing clarifies the differences in characteristics and performance of the three kinds of fruit bags, which statistical analysis shows to differ significantly from each other. Lastly, a relational model between the sensing data and the pear outlook quality is established by use of a Structural Equation Model (SEM). Here, the pear outlook quality is related to the presence of stains, blobs, scratches, and so on caused by physiological damage or disease. Conceptually, SEM is a combination of exploratory factor analysis and multiple regression; it is used to construct a model connecting independent and dependent variables.
The proposed SEM model relates the measured sensing data to the pear outlook quality determined on the basis of farmer judgement. In particular, it is found that the inner-bag humidity variable has a comparatively strong effect on pear outlook quality. Therefore, inner-bag humidity sensing might help farmers control the pear outlook quality. These results are supported by a large quantity of inner-bag humidity data measured over the years 2014, 2015, and 2016. The experimental and analytical results of this research contribute to spreading Precision Agriculture technologies among the farmers growing 'Le Lectier'.
Keywords: precision agriculture, pre-harvest bagging, sensor fusion, structural equation model
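As the abstract notes, SEM conceptually combines exploratory factor analysis with multiple regression. The two-stage sketch below (Python, with entirely hypothetical sensor data, not the study's measurements) illustrates that combination: a single latent "bag microclimate" factor is extracted from three standardized sensor series, then the outlook-quality score is regressed on it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily readings (not the paper's data): inner-bag humidity,
# inner-bag temperature, and field humidity, all driven by one latent factor.
n = 200
latent = rng.normal(size=n)                        # unobserved "bag microclimate"
sensors = np.column_stack([
    0.9 * latent + 0.3 * rng.normal(size=n),       # inner-bag humidity
    0.7 * latent + 0.5 * rng.normal(size=n),       # inner-bag temperature
    0.4 * latent + 0.8 * rng.normal(size=n),       # field humidity
])
quality = 0.8 * latent + 0.4 * rng.normal(size=n)  # outlook-quality score

# Stage 1 (factor-analysis step): extract one common factor as the first
# principal component of the standardized sensor matrix.
X = (sensors - sensors.mean(0)) / sensors.std(0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
factor = X @ vt[0]

# Stage 2 (regression step): regress quality on the extracted factor.
A = np.column_stack([np.ones(n), factor])
coef, *_ = np.linalg.lstsq(A, quality, rcond=None)
slope = coef[1]   # sign is arbitrary (PC orientation); magnitude is what matters
```

A full SEM would estimate the measurement and structural parts jointly with dedicated software; this two-stage version only conveys the concept stated in the abstract.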
Procedia PDF Downloads 314
548 Enhancement of Shelf Life of Malta Fruit with Active Packaging
Authors: Rishi Richa, N. C. Shahi, J. P. Pandey, S. S. Kautkar
Abstract:
Citrus fruits rank third in area and production after banana and mango in India, and sweet oranges are the second most cultivated citrus fruit in the country. Andhra Pradesh, Maharashtra, Karnataka, Punjab, Haryana, Rajasthan, and Uttarakhand are the main sweet-orange-growing states. Citrus fruits occupy a leading position in the fruit trade of Uttarakhand, covering about 14.38% of the total area under fruits and contributing nearly 17.75% of total fruit production. Malta is grown in most of the hill districts of Uttarakhand. The Malta Common variety enjoys high acceptability due to its attractive colour, distinctive flavour, and taste. The excellent-quality fruits are generally available for only one or two months. Because of its short shelf life, however, Malta cannot be stored for long under ambient conditions or transported to distant places. Continuous loss of water adversely affects the quality of Malta during storage and transportation, and the methods of picking, packaging, and cold storage strongly influence moisture loss. Climatic conditions such as ambient temperature, relative humidity, wind (aeration), and microbial attack greatly influence the rate of moisture loss and quality; therefore, different agro-climatic zones will show different moisture-loss patterns. The rate of moisture loss can be taken as a quality parameter in combination with one or more parameters such as relative humidity and aeration. The moisture content of fruits and vegetables determines their freshness, so it is important to maintain the initial moisture status of produce for a prolonged period after harvest. Keeping these points in view, an effort was made to store Malta under ambient conditions. In this study, response surface methodology and experimental design were applied to optimize the independent variables and extend the shelf life of stored Malta to four months.
A Box-Behnken design with 12 factorial points and 5 replicates at the centre point was used to build a model for predicting and optimizing the storage process parameters. The independent parameters, viz., scavenger (3, 4 and 5 g), polythene thickness (75, 100 and 125 gauge) and fungicide concentration (100, 150 and 200 ppm), were selected and analyzed. A 5 g scavenger, 125 gauge polythene, and a 200 ppm fungicide solution were the optimized values for storage, which may extend shelf life up to four months.
Keywords: Malta fruit, scavenger, packaging, shelf life
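For reference, the Box-Behnken geometry described above (three factors, 12 factorial points, 5 centre replicates) can be generated in a few lines. The helper function and the coded-to-actual level mapping below are our own illustration, using the factor ranges quoted in the abstract.

```python
from itertools import combinations

def box_behnken(k, center_reps=5):
    """Coded-level Box-Behnken design: each pair of factors takes all four
    +/-1 combinations with the remaining factors at 0, plus replicated
    centre points."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(center_reps)]
    return runs

design = box_behnken(3)      # 12 factorial points + 5 centre points = 17 runs
print(len(design))

# Map coded levels to the abstract's actual factor levels: scavenger (g),
# polythene thickness (gauge), fungicide concentration (ppm).
levels = {
    "scavenger": {-1: 3, 0: 4, 1: 5},
    "thickness": {-1: 75, 0: 100, 1: 125},
    "fungicide": {-1: 100, 0: 150, 1: 200},
}
first_run = {name: levels[name][c] for name, c in zip(levels, design[0])}
```

A quadratic response surface would then be fitted to the 17 measured responses to locate the optimum, as response surface methodology prescribes.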
Procedia PDF Downloads 280
547 Predicting Recessions with Bivariate Dynamic Probit Model: The Czech and German Case
Authors: Lukas Reznak, Maria Reznakova
Abstract:
Recession of an economy has a profound negative effect on all involved stakeholders. It follows that timely prediction of recessions has been of utmost interest both in theoretical research and in practical macroeconomic modelling. The current mainstream of recession prediction is based on standard OLS models of continuous GDP using macroeconomic data. This approach is unsuitable for two reasons: the standard continuous models are proving to be obsolete, and the macroeconomic data are unreliable, often revised many years retroactively. The aim of the paper is to explore a different branch of recession forecasting theory and verify its findings on real data for the Czech Republic and Germany. In the paper, the authors present a family of discrete-choice probit models with parameters estimated by the method of maximum likelihood. In the basic form, the probits model a univariate series of recessions and expansions in the economic cycle of a given country. The majority of the paper deals with more complex model structures, namely dynamic and bivariate extensions. The dynamic structure models the autoregressive nature of recessions, taking previous economic activity into consideration to predict the development in subsequent periods. Bivariate extensions utilize information from a foreign economy by incorporating correlation of error terms and thus modelling the dependencies between the two countries. Bivariate models predict a bivariate time series of economic states in both economies and thus enhance the predictive performance. A vital enabler of timely and successful recession forecasting is reliable and readily available data. Leading indicators, namely the yield curve and stock market indices, represent an ideal data base, as this information is available in advance and does not undergo retroactive revisions.
As importantly, the combination of the yield curve and stock market indices reflects a range of macroeconomic trends and financial-market investor sentiment which influence the economic cycle. These theoretical approaches are applied to real data for the Czech Republic and Germany. Two models were identified for each country, one for in-sample and one for out-of-sample predictive purposes. All four followed a bivariate structure, and three contained a dynamic component.
Keywords: bivariate probit, leading indicators, recession forecasting, Czech Republic, Germany
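A minimal univariate version of the dynamic probit described above can be sketched as follows (Python/SciPy, simulated data with made-up parameters; the paper's bivariate extension with correlated error terms is not reproduced here). The lagged economic state y_{t-1} enters the probit index alongside a leading indicator, and the parameters are recovered by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated leading indicator (e.g. yield-curve slope) and recession states.
T = 400
x = rng.normal(size=T)
beta0, beta1, gamma = -1.0, -1.5, 1.2   # hypothetical "true" parameters
y = np.zeros(T, dtype=int)
for t in range(1, T):
    # P(recession at t) = Phi(beta0 + beta1*x_t + gamma*y_{t-1})
    p = norm.cdf(beta0 + beta1 * x[t] + gamma * y[t - 1])
    y[t] = rng.random() < p

def neg_log_lik(theta):
    b0, b1, g = theta
    idx = b0 + b1 * x[1:] + g * y[:-1]          # dynamic probit index
    p = np.clip(norm.cdf(idx), 1e-10, 1 - 1e-10)
    return -np.sum(y[1:] * np.log(p) + (1 - y[1:]) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
b0_hat, b1_hat, g_hat = fit.x
```

The estimated coefficient on the lagged state (g_hat) captures the autoregressive persistence of recessions; the bivariate model adds a second equation and an error-term correlation on top of this structure.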
Procedia PDF Downloads 249
546 The Decision-Making Process of the Central Banks of Brazil and India in Regional Integration: A Comparative Analysis of MERCOSUR and SAARC (2003-2014)
Authors: Andre Sanches Siqueira Campos
Abstract:
Central banks can play a significant role in promoting regional economic and monetary integration by strengthening payment and settlement systems. However, close coordination and cooperation require facilitating reforms at the domestic and cross-border levels in order to benchmark against international standards and commitments to the liberal order. This situation reflects the normative power of the regulatory-globalization dimension of strong states, which may drive or constrain regional integration. In the MERCOSUR and SAARC regions, central banks have launched financial initiatives that could help South America and South Asia move towards convergent integration and facilitate trade and investment connectivity. This is qualitative research based on a combination of process tracing and Qualitative Comparative Analysis (QCA), drawing on multiple forms of data from central banks, regional organisations, national governments, and financial institutions, supported by the existing literature. The aim of this research is to analyze the decision-making processes of the Central Bank of Brazil (BCB) and the Reserve Bank of India (RBI) towards regional financial cooperation by identifying connectivity instruments that foster, gridlock, or redefine cooperation. The BCB and the RBI manage the monetary policy of the largest economies of their regions, which makes regional cooperation a relevant framework for understanding how they provide an effective institutional arrangement for regional organisations to achieve some of their key policy and economic objectives. The preliminary conclusion is that both the BCB and the RBI demonstrate a reluctance to deepen regional cooperation because of existing economic, political, and institutional asymmetries.
Deepening regional cooperation is constrained by the central banks' interest in protecting their economies from risks of instability arising from the differing degrees of development among countries in their regions and from the international financial crises that have shaken the international system in the 21st century. Reluctant regional integration also preserves autonomy for national development and political ground for Brazil's and India's contestation of Global Financial Governance.
Keywords: Brazil, central banks, decision-making process, global financial governance, India, MERCOSUR, connectivity, payment system, regional cooperation, SAARC
Procedia PDF Downloads 114
545 An Institutional Mapping and Stakeholder Analysis of ASEAN’s Preparedness for Nuclear Power Disaster
Authors: Nur Azha Putra Abdul Azim, Denise Cheong, S. Nivedita
Abstract:
Currently, there are no nuclear power reactors among the Association of Southeast Asian Nations (ASEAN) member states (AMS) but there are seven operational nuclear research reactors, and Indonesia is about to construct the region’s first experimental power reactor by the end of the decade. If successful, the experimental power reactor will lay the foundation for the country’s and region’s first nuclear power plant. Despite projecting confidence during the period of nuclear power renaissance in the region in the last decade, none of the AMS has committed to a political decision on the use of nuclear energy and this is largely due to the Fukushima nuclear power accident in 2011. Of the ten AMS, Vietnam, Indonesia and Malaysia have demonstrated the most progress in developing nuclear energy based on the nuclear power infrastructure development assessments made by the International Atomic Energy Agency. Of these three states, Vietnam came closest to building its first nuclear power plant but decided to delay construction further due to safety and security concerns. Meanwhile, Vietnam along with Indonesia and Malaysia continue with their nuclear power infrastructure development and the remaining SEA states, with the exception of Brunei and Singapore, continue to build their expertise and capacity for nuclear power energy. At the current rate of progress, Indonesia is expected to make a national decision on the use of nuclear power by 2023 while Malaysia, the Philippines, and Thailand have included the use of nuclear power in their mid to long-term power development plans. Vietnam remains open to nuclear power but has not placed a timeline. The medium to short-term power development projection in the region suggests that the use of nuclear energy in the region is a matter of 'when' rather than 'if'. 
In view of the prospects for nuclear energy in Southeast Asia (SEA), this presentation reviews the literature on ASEAN radiological emergency preparedness and response (EPR) plans and examines ASEAN's disaster management and emergency framework. Through a combination of institutional mapping and stakeholder analysis methods, examined in the context of the international EPR and nuclear safety and security regimes, we identify the issues and challenges in developing a regional radiological EPR framework in SEA. We conclude with the observation that ASEAN faces serious structural, institutional and governance challenges due to the AMS' inherent political structures and history of interstate conflicts, and propose that ASEAN either enlarge the existing scope of its disaster management and response framework or establish its radiological EPR framework as a separate entity.
Keywords: nuclear power, nuclear accident, ASEAN, Southeast Asia
Procedia PDF Downloads 153
544 Attitudes Towards the Supernatural in Benjamin Britten’s The Turn of the Screw
Authors: Yaou Zhang
Abstract:
Background: Relatively little scholarly attention has been paid to productions of Benjamin Britten's chamber opera The Turn of the Screw, one of his most remarkable operas. The libretto is based on Henry James's 1898 novella of the same name, and one of the primary questions the story poses is 'how real the ghosts are', which leaves a profound ambiguity in readers' minds. Aims: This research focuses on the experience of seeing the opera on stage over several decades. This study of opera productions over time not only provides insight into how stage performances can alter audience members' perceptions of the opera in the present but also reveals a landscape of shifting aesthetics and receptions. Methods: To examine the hypotheses about interpretation and reception, qualitative analysis is used to examine the figures of the ghosts in different UK productions from 1954 to 2021, drawing on recordings, newspapers, and reviews sourced from online and physical archives. Field research was also conducted through interviews with creative teams and visits to Opera North in Leeds and the Britten-Pears Foundation. The collected data reveal the 'hidden identity' in creative teams' interpretations, social preferences, and rediscoveries that have previously remained unseen. Results: This research presents a third-position reading of Britten's Screw; it shows how attention has moved from the question 'do the ghosts really exist' to the traumatised children. Discussion: Critics and audiences have debated for decades whether the governess hallucinates the ghosts in the opera.
In recent years, however, directors of new productions have allowed themselves to go deeper into Britten's musical structure and have given the opera more room for interpretation, rather than debating whether 'ghosts actually exist' or 'the governess has psychological problems'. One can consider that the questionable actions of the children arise because they are suffering from trauma; whether the trauma comes from the ghosts, the hallucinating governess, or some prior experience, the various interpretations converge on one result: the children are the recipients of trauma. Arguably, the role of the supernatural is neither simply an element of a ghost story nor simply part of the ambiguity between the supernatural and the governess's hallucination; rather, the ghosts and the hallucinating governess can exist at the same time. The combination of the supernatural's and the governess's behaviours on stage generates a sharper and more serious angle that draws our attention to the traumatised children.
Keywords: benjamin britten, chamber opera, production, reception, staging, the turn of the screw
Procedia PDF Downloads 108
543 Mathematical Modelling of Biogas Dehumidification Using a Counterflow Heat Exchanger
Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans
Abstract:
Dehumidification of biogas at biomass plants is very important for energy-efficient burning of the biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g. chiller/heat-exchanger-based cooling, adsorption processes such as PSA, or combinations of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant downwards around it, thus creating an additional insulating layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in summer, or even to 0°C in winter, water vapor condenses. The water at the bottom of the gas shell can be collected and drained away. In addition, another upward shell layer is created on the outer side, after the condensate drainage point, to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research work deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in the two channels are conjugated when the thermal resistance between layers is low. The MATLAB programming language is used for multiphysical model development, numerical calculations and result visualization. An experimental installation on a biogas plant's vertical wall, with an additional two layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results.
Gas flow at the inlet/outlet, temperatures between the layers, and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the counterflow heat exchanger system placed on the plant's wall for various cases allows the thicknesses of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling the system's chosen configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model
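The counterflow arrangement the abstract describes can be reasoned about with the standard effectiveness-NTU relation for counterflow heat exchangers. The sketch below uses illustrative numbers only (not the plant's measured values) to show how the outlet gas temperature follows from the NTU and the capacity-rate ratio.

```python
import math

def counterflow_effectiveness(ntu, c_ratio):
    """Effectiveness of a counterflow heat exchanger (epsilon-NTU method).
    c_ratio = C_min / C_max in [0, 1]."""
    if abs(c_ratio - 1.0) < 1e-12:
        # Balanced counterflow limit (equal capacity rates).
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - c_ratio))
    return (1.0 - e) / (1.0 - c_ratio * e)

# Hypothetical conditions: warm biogas at 38 C in the inner shell, cooled
# gas returning at 5 C in the outer shell, balanced capacity rates.
ntu, c_ratio = 2.0, 1.0
eps = counterflow_effectiveness(ntu, c_ratio)        # 2/3 for NTU = 2
t_hot_in, t_cold_in = 38.0, 5.0
t_hot_out = t_hot_in - eps * (t_hot_in - t_cold_in)  # about 16 C
```

A condensation model like the paper's would then compare t_hot_out against the dew point of the gas to estimate how much water drops out in the shell.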
Procedia PDF Downloads 550
542 Meeting the Energy Balancing Needs in a Fully Renewable European Energy System: A Stochastic Portfolio Framework
Authors: Iulia E. Falcan
Abstract:
The transition of the European power sector towards a clean, renewable energy (RE) system faces the challenge of meeting power demand in times of low wind speed and low solar radiation at a reasonable cost. This is likely to be achieved through a combination of 1) energy storage technologies, 2) development of the cross-border power grid, 3) installed overcapacity of RE, and 4) dispatchable power sources such as biomass. This paper uses NASA-derived hourly data on the weather patterns of sixteen European countries over the past twenty-five years, and load data from the European Network of Transmission System Operators for Electricity (ENTSO-E), to develop a stochastic optimization model. The model aims to understand the synergies between the four classes of technologies mentioned above and to determine the optimal configuration of the energy technology portfolio. While this issue has been addressed before, it was done using deterministic models that extrapolated historic data on weather patterns and power demand, ignoring the risk of an unbalanced grid, a risk stemming from both the supply and the demand side. This paper explicitly accounts for the inherent uncertainty in the energy system transition. It articulates two levels of uncertainty: a) the inherent uncertainty in future weather patterns and b) the uncertainty of fully meeting power demand. The first level is addressed by developing probability distributions for future weather data, and thus for expected power output from RE technologies, rather than assuming known future power output. The second level is operationalized by introducing a Conditional Value at Risk (CVaR) constraint into the portfolio optimization problem. By setting the risk threshold at different levels (1%, 5% and 10%), important insights are revealed regarding the synergies of the different energy technologies, i.e., the circumstances under which they behave as complements or substitutes to each other.
The paper concludes that allowing for uncertainty in expected power output, rather than extrapolating historic data, paints a more realistic picture and reveals important departures from the results of deterministic models. In addition, explicitly acknowledging the risk of an unbalanced grid, and assigning it different thresholds, reveals non-linearity in the cost functions of different technology portfolio configurations. This finding has significant implications for the design of the European energy mix.
Keywords: cross-border grid extension, energy storage technologies, energy system transition, stochastic portfolio optimization
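The CVaR constraint mentioned above is commonly linearized with the Rockafellar-Uryasev formulation, which turns the capacity-sizing problem into a linear program over weather scenarios. The sketch below (Python/SciPy, with toy availability scenarios and made-up costs, not the paper's NASA/ENTSO-E data) minimizes capacity cost while keeping the CVaR of unserved demand below a threshold.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Scenario per-unit availability for three technologies (hypothetical):
# wind, solar, and a dispatchable source. Demand is normalized to 1.
S = 200
avail = np.column_stack([
    rng.uniform(0.0, 1.0, S),    # wind capacity factor
    rng.uniform(0.0, 0.6, S),    # solar capacity factor
    np.full(S, 0.9),             # dispatchable availability
])
cost = np.array([1.0, 1.2, 3.0])   # relative cost per unit capacity
alpha, cvar_limit = 0.95, 0.05     # CVaR of unserved demand must stay <= 0.05

# Decision variables: [w (3 capacities), zeta, u (S excess-shortfall vars)].
n = 3 + 1 + S
c = np.concatenate([cost, [0.0], np.zeros(S)])

# u_s >= shortfall_s - zeta, with shortfall_s = 1 - avail_s . w:
#   -avail_s . w - zeta - u_s <= -1
A1 = np.hstack([-avail, -np.ones((S, 1)), -np.eye(S)])
b1 = -np.ones(S)
# Rockafellar-Uryasev bound: zeta + (1 / ((1-alpha) S)) sum(u) <= cvar_limit
A2 = np.concatenate([np.zeros(3), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])[None, :]
b2 = np.array([cvar_limit])

res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=np.concatenate([b1, b2]),
              bounds=[(0, None)] * 3 + [(None, None)] + [(0, None)] * S)
w = res.x[:3]   # optimal installed capacities
```

Because the dispatchable source is expensive, the optimizer installs cheap but intermittent capacity up to the point where the tail risk of unmet demand would breach the CVaR limit, which is exactly the complement/substitute trade-off the abstract describes.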
Procedia PDF Downloads 171
541 Sequential Padding: A Method to Improve the Impact Resistance in Body Armor Materials
Authors: Ankita Srivastava, Bhupendra S. Butola, Abhijit Majumdar
Abstract:
Application of shear thickening fluid (STF) has been shown to increase the impact resistance of textile structures for use as body armor materials. In the present research, STF was applied to Kevlar woven fabric to make the structure lightweight and flexible while improving its impact resistance. It was observed that achieving a fair add-on of STF on Kevlar fabric is difficult, as Kevlar fabric comes with a PTFE pre-coating that hinders its absorbency. Hence, a method termed sequential padding was developed in the present study to improve the add-on of STF on Kevlar fabric. Contrary to the conventional process, in which Kevlar fabric is treated with STF once at a single pressure, in the sequential padding method the Kevlar fabrics were treated twice in sequence, using a combination of two pressures per sample. 200 GSM Kevlar fabrics were used in the present study. The STF was prepared by dispersing nano-silica in PEG at 70% (w/w) concentration; ethanol was added at a fixed ratio to reduce viscosity, and a high-speed homogenizer was used to prepare the dispersion. A total of nine STF-treated Kevlar fabric samples were prepared using varying combinations and sequences of three padding pressures (0.5, 1.0 and 2.0 bar). The fabrics were dried at 80°C for 40 minutes in a hot air oven to evaporate the ethanol. Untreated and STF-treated fabrics were tested for add-on%. The impact resistance of the samples was tested on a dynamic impact tester at a fixed velocity of 6 m/s. Further, to observe the impact resistance under realistic conditions, a low-velocity ballistic test at 165 m/s was also performed to confirm the results of the impact resistance test. It was observed that both add-on% and impact energy absorption of the Kevlar fabrics increase significantly with the sequential padding process compared to both the untreated fabric and the single-stage padding process.
It was also determined that impact energy absorption is significantly better in STF-treated Kevlar fabrics when the first padding pressure is higher and the second padding pressure is lower. Sequentially padded Kevlar fabric shows an almost 125% increase in ballistic impact energy absorption (40.62 J) compared to untreated fabric (18.07 J). These results reflect the fact that treatment at high pressure during the first padding is responsible for the uniform distribution of STF within the fabric structure, while the second, lower-pressure padding ensures a high add-on of STF for an overall improvement in the impact resistance of the fabric. Therefore, it is concluded that the sequential padding process may help to improve the impact performance of body armor materials based on STF-treated Kevlar fabrics.
Keywords: body armor, impact resistance, Kevlar, shear thickening fluid
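As a quick check, the quoted "almost 125%" figure follows directly from the two reported energy values:

```python
# Ballistic impact energy absorption values quoted in the abstract (joules).
untreated_j = 18.07
sequential_j = 40.62

# Percentage increase relative to the untreated fabric.
increase_pct = (sequential_j - untreated_j) / untreated_j * 100
print(round(increase_pct, 1))   # -> 124.8
```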
Procedia PDF Downloads 242
540 Anaerobic Digestion of Green Wastes at Different Solids Concentrations and Temperatures to Enhance Methane Generation
Authors: A. Bayat, R. Bello-Mendoza, D. G. Wareham
Abstract:
Two major categories of green waste are fruit and vegetable (FV) waste and garden and yard (GY) waste. Although anaerobic digestion (AD) is able to manage FV waste, there is less confidence in the conditions required for AD to handle GY waste (grass, leaves, tree and bush trimmings), mainly because GY waste contains lignin and other recalcitrant organics. GY waste in the dry state (TS ≥ 15%) can be digested at mesophilic temperatures; however, little methane data has been reported under thermophilic conditions, where conceivably better methane yields could be achieved. In addition, it is suspected that at lower solids concentrations the methane yield could be increased. As such, the aim of this research is to find the temperature and solids concentration conditions that produce the most methane, comparing two temperature regimes (mesophilic, thermophilic) and three solids states ('dry', 'semi-dry' and 'wet'). Twenty liters of GY waste was collected from a public park located in the northern district of Tehran. The clippings consisted of freshly cut grass as well as dry branches and leaves. The GY waste was chopped before being fed into a mechanical blender that reduced it to a paste-like consistency, giving an initial TS concentration of approximately 38%. Four hundred mL of anaerobic inoculum (average total solids (TS) concentration of 2.03 ± 0.131%, of which 73.4% were volatile solids (VS); soluble chemical oxygen demand (sCOD) of 4.59 ± 0.3 g/L) was mixed with the GY waste substrate paste, along with distilled water, to achieve a TS content of approximately 20%. For comparative purposes, approximately 20 liters of FV waste was ground in the same manner as the GY waste. Since FV waste has a much higher natural water content than GY waste, it was dewatered to obtain a starting TS concentration in the dry solid-state range (TS ≥ 15%); three samples were dewatered to an average starting TS concentration of 32.71%.
The inoculum was added, along with distilled water, to dilute the initial FV TS concentrations down to semi-dry (10-15%) and wet (below 10%) conditions. Twelve 1-L batch bioreactors were loaded simultaneously with either GY or FV waste at TS concentrations ranging from 3.85 ± 1.22% to 20.11 ± 1.23%. The reactors were sealed and operated for 30 days while immersed in water baths to maintain a constant temperature of 37 ± 0.5°C (mesophilic) or 55 ± 0.5°C (thermophilic). A maximum methane yield of 115.42 L methane/kg VS added was obtained for the GY thermophilic-wet AD combination; the methane yield was enhanced by 240% compared to the GY waste mesophilic-dry condition. These results confirm that high temperatures and low solids concentrations are conditions that enhance methane yield from GY waste. A similar trend was observed for the anaerobic digestion of FV waste. Furthermore, maximum VS (53%) and sCOD (84%) reductions were achieved during the AD of GY waste under the thermophilic-wet condition.
Keywords: anaerobic digestion, thermophilic, mesophilic, total solids concentration
Procedia PDF Downloads 142
539 Virtual Reality in COVID-19 Stroke Rehabilitation: Preliminary Outcomes
Authors: Kasra Afsahi, Maryam Soheilifar, S. Hossein Hosseini
Abstract:
Background: There is growing evidence that a Cerebral Vascular Accident (CVA) can be a consequence of COVID-19 infection, so understanding novel treatment approaches is important in optimizing patient outcomes. Case: This case explores the use of Virtual Reality (VR) in the treatment of a 23-year-old COVID-positive female presenting with left hemiparesis in August 2020. Imaging showed ischemic stroke of the right globus pallidus, thalamus, and internal capsule. Conventional rehabilitation was started two weeks later, with VR included. The game-based VR technology was developed for stroke patients around upper-extremity exercises and functions. Physical examination showed left hemiparesis with muscle strength 3/5 in the upper extremity and 4/5 in the lower extremity. The range of motion of the shoulder was 90-100 degrees. The speech exam showed a mild decrease in fluency, and mild dynamic asymmetry of the lower lip was seen. Babinski was positive on the left. Gait speed was decreased (75 steps per minute). Intervention: Our game-based VR system was developed from upper-extremity physiotherapy exercises for post-stroke patients, to increase active, voluntary movement of the upper-extremity joints and improve function. The conventional program was initiated with active exercises: shoulder sanding for joint ROMs, walking shoulder, shoulder wheel, and combined movements of the shoulder, elbow, and wrist joints, alternating flexion-extension and pronation-supination movements, and Pegboard and Purdue pegboard exercises. Fine-movement training included smart gloves, biofeedback, finger ladder, and writing. The difficulty of the game increased at each stage of practice as the patient's performance progressed. Outcome: After 6 weeks of treatment, gait and speech were normal and upper-extremity strength had improved to near-normal status. No adverse effects were noted.
Conclusion: This case suggests that VR is a useful tool in the treatment of a patient with COVID-19-related CVA. The safety of newly developed instruments for such cases offers new approaches to improve therapeutic outcomes and prognosis, as well as an increased satisfaction rate among patients.
Keywords: covid-19, stroke, virtual reality, rehabilitation
Procedia PDF Downloads 143
538 Deep Learning for Qualitative and Quantitative Grain Quality Analysis Using Hyperspectral Imaging
Authors: Ole-Christian Galbo Engstrøm, Erik Schou Dreier, Birthe Møller Jespersen, Kim Steenstrup Pedersen
Abstract:
Grain quality analysis is a multi-parameterized problem that includes a variety of qualitative and quantitative parameters such as grain type classification, damage type classification, and nutrient regression. Currently, assessing these parameters requires human inspection, a multitude of instruments employing a variety of sensor technologies and predictive model types, or destructive and slow chemical analysis. This paper investigates the feasibility of applying near-infrared hyperspectral imaging (NIR-HSI) to grain quality analysis. For this study, two datasets of NIR hyperspectral images in the wavelength range of 900 nm - 1700 nm have been used; both contain images of sparsely and densely packed grain kernels. The first dataset contains ~87,000 image crops of bulk wheat samples from 63 harvests where the protein value has been determined by the FOSS Infratec NOVA, the gold industry standard for protein content estimation in bulk samples of cereal grain. The second dataset consists of ~28,000 image crops of bulk grain kernels from seven different wheat varieties and a single rye variety. The first dataset poses a protein regression problem, while the second poses a variety classification problem. Deep convolutional neural networks (CNNs) have the potential to exploit spatio-spectral correlations within a hyperspectral image to estimate the qualitative and quantitative parameters simultaneously. CNNs can autonomously derive meaningful representations of the input data, reducing the need for the advanced preprocessing techniques required by classical chemometric model types such as artificial neural networks (ANNs) and partial least-squares regression (PLS-R). A comparison between different CNN architectures utilizing 2D and 3D convolution is conducted, and the results are compared to the performance of ANNs and PLS-R. Additionally, a variety of preprocessing techniques from image analysis and chemometrics are tested.
These include centering, scaling, standard normal variate (SNV), Savitzky-Golay (SG) filtering, and detrending. The results indicate that the combination of NIR-HSI and CNNs has the potential to form the foundation of an automatic system unifying qualitative and quantitative grain quality analysis within a single sensor technology and predictive model type.
Keywords: deep learning, grain analysis, hyperspectral imaging, preprocessing techniques
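Of the preprocessing techniques listed, SNV is the simplest to illustrate: each spectrum is centered and scaled by its own mean and standard deviation, which removes additive and multiplicative scatter effects between spectra. A minimal sketch with hypothetical five-band spectra:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)
    by its own mean and standard deviation."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Two hypothetical NIR spectra differing only by additive and multiplicative
# scatter; SNV maps both onto the same shape.
base = np.array([0.2, 0.4, 0.9, 0.5, 0.3])
shifted = 2.5 * base + 0.7
out = snv(np.vstack([base, shifted]))
print(np.allclose(out[0], out[1]))   # -> True
```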
Procedia PDF Downloads 100
537 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment of manufacturing technology. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. By this, machine tools can achieve a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy - especially thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, both individually and assembled. This information helps to design a machine tool that is geometrically stable under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered, enabling an optimized design of the machine tool and its components in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g. laser power, angle of the laser beam, reflection coefficients and heat transfer coefficient. Hence, a systematic approach to obtaining this matched FEM model is essential. The method has two constituent aspects: characterizing the thermal behavior of the structural components, and predicting the laser beam path to determine the relevant beam intensity on those components. To match the model, both aspects have to be combined and verified empirically. In this context, an essential component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process.
Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative counterpart of the described experimental examinations is presented. Concluding, it is shown that the method and a good understanding of its two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, help designers to minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
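The heat balance behind such thermo-elastic analysis can be illustrated with a lumped-capacitance sketch: absorbed laser power heats the component while convection removes heat. All parameter values below are hypothetical and this is not the paper's matched FEM model, only the elementary balance it refines.

```python
# Lumped-capacitance sketch (hypothetical parameters): a structural component
# absorbs a fraction of the laser power and loses heat by convection.
#   m*c*dT/dt = alpha*P - h*A*(T - T_amb)
P_laser = 1000.0    # laser power, W
alpha = 0.3         # absorptance at the given beam angle (assumed)
h = 25.0            # heat transfer coefficient, W/(m^2 K)
A = 0.5             # exposed surface area, m^2
m, c = 50.0, 460.0  # mass (kg) and specific heat of steel (J/(kg K))
T_amb = 20.0        # ambient temperature, deg C

dt, t_end = 1.0, 3600.0
T = T_amb
for _ in range(int(t_end / dt)):  # explicit Euler time stepping
    T += (alpha * P_laser - h * A * (T - T_amb)) / (m * c) * dt

# Steady state: alpha*P = h*A*(T_ss - T_amb), i.e. T_ss = 20 + 24 = 44 deg C
T_ss = T_amb + alpha * P_laser / (h * A)
print(round(T, 2), T_ss)
```

A matched FEM model resolves the same balance spatially, with the beam-path prediction supplying where the absorbed intensity acts on each component.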
Procedia PDF Downloads 265
536 Sphere in Cube Grid Approach to Modelling of Shale Gas Production Using Non-Linear Flow Mechanisms
Authors: Dhruvit S. Berawala, Jann R. Ursin, Obrad Slijepcevic
Abstract:
Shale gas is one of the most rapidly growing forms of natural gas. Unconventional natural gas deposits are difficult to characterize overall, but in general are often lower in resource concentration and dispersed over large areas. Moreover, gas is densely packed into the matrix through adsorption, which accounts for a large volume of the gas reserves. Gas production from tight shale deposits is made possible by extensive and deep well fracturing, which contacts large fractions of the formation. The conventional reservoir modelling and production forecasting methods, which rely on fluid-flow processes dominated by viscous forces, have proved to be very pessimistic and inaccurate. This paper presents a new approach to forecast shale gas production by detailed modelling of gas desorption, diffusion and non-linear flow mechanisms in combination with a statistical representation of these processes. The model represents the porous medium as a cube in which free gas is present, with a sphere inside it (SiC: Sphere in Cube model) where gas is adsorbed onto the kerogen or organic matter. Further, the sphere is considered to consist of many layers of adsorbed gas in an onion-like structure. With pressure decline, the gas desorbs first from the outermost layer of the sphere, causing a decrease in its molecular concentration. The newly available surface area and the change in concentration trigger the diffusion of gas from the kerogen. The process continues until all the gas present internally diffuses out of the kerogen, adsorbs onto the available surface area, and then desorbs into the nanopores and micro-fractures in the cube. Each SiC idealizes a gas pathway and is characterized by the sphere diameter and the length of the cube. The diameter allows modelling of gas storage, diffusion and desorption; the cube length accounts for the flow pathway in nanopores and micro-fractures.
Many of these representative but general cells of the reservoir are put together and linked to a well or hydraulic fracture. The paper quantitatively describes these processes as well as clarifies the geological conditions under which successful shale gas production could be expected. A numerical model has been derived and implemented in FORTRAN to develop a simulator for the production of shale gas, treating the spheres as a source term in each of the grid blocks. By applying SiC to field data, we demonstrate that the model provides an effective way to quickly assess gas production rates from shale formations. We also examine the effect of model input properties on gas production.
Keywords: adsorption, diffusion, non-linear flow, shale gas production
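The desorption term that drives the sphere's onion-layer release is commonly described by a Langmuir isotherm. A minimal sketch of that single ingredient follows; the Langmuir constants are illustrative, not field-calibrated, and the SiC model additionally couples this to diffusion and non-linear flow.

```python
def langmuir_volume(p, v_l=200.0, p_l=5.0):
    """Adsorbed gas content (scf/ton) at pressure p (MPa) per the Langmuir
    isotherm. v_l (Langmuir volume) and p_l (Langmuir pressure) are
    illustrative values, not calibrated to any field."""
    return v_l * p / (p_l + p)

# Gas released from the kerogen as reservoir pressure declines from 20 to 5 MPa
p_init, p_now = 20.0, 5.0
released = langmuir_volume(p_init) - langmuir_volume(p_now)
print(round(released, 1))  # 60.0 scf/ton desorbed over this drawdown
```

In the simulator this desorbed volume enters each grid block as the sphere's source term, feeding the nanopore and micro-fracture flow along the cube length.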
Procedia PDF Downloads 166
535 Flexural Properties of Typha Fibers Reinforced Polyester Composite
Authors: Sana Rezig, Yosr Ben Mlik, Mounir Jaouadi, Foued Khoffi, Slah Msahli, Bernard Durand
Abstract:
With increasing interest in environmental concerns, natural fibers are once again being considered as reinforcements for polymer composites. The main objective of this study is to explore another natural resource, Typha fiber, which is renewable, free of production cost and abundantly available in nature. The aim was to study the flexural properties of the composite resin with and without reinforcing Typha leaf and stem fibers. The specimens were made by the hand-lay-up process using a polyester matrix. In our work, we focused on the effect of various treatment conditions (sea water, alkali treatment and a combination of the two treatments), as surface modifiers, on the flexural properties of the Typha fiber reinforced polyester composites. Moreover, the weight ratio of Typha leaf or stem fibers was investigated. Besides, fibers from both the leaf and the stem of the Typha plant were used to evaluate the reinforcing effect. Another parameter, the reinforcement structure, was also investigated. In fact, a first composite was made with an air-laid nonwoven structure of fibers, and a second composite with a mixture of fibers and resin for each kind of treatment. Results show that the alkali treatment and the combined process provided better mechanical properties of the composites in comparison with fibers treated by sea water. The fiber weight ratio influenced the flexural properties of the composites. Indeed, maximum flexural strengths of 69.8 and 62.32 MPa, with flexural moduli of 6.16 and 6.34 GPa, were observed for composites reinforced with leaf and stem fibers respectively, at a 12.6% fiber weight ratio. Among the treatments carried out, the treatment using caustic soda, whether alone or after retting in sea water, shows the best results because it improves adhesion between the polyester matrix and the reinforcing fibers. SEM photographs were taken to ascertain the effects of the surface treatment of the fibers.
Varying the structure of the Typha fibers, the reinforcement used in bulk shows more effective results than that used in the nonwoven structure. In particular, flexural strength rises by about 65.32% for the composite reinforced with a mixture of 12.6% leaf fibers, and by 27.45% for the composite reinforced with a nonwoven structure of 12.6% leaf fibers. Thus, to better evaluate the effect of the fiber origin, the reinforcement structure, the treatment performed and the reinforcement factor on the performance of the composite materials, a statistical study was performed using Minitab. ANOVA was used, and the patterns of the main effects of these parameters and the interactions between them were established. In the statistical analysis, the fiber treatment and the reinforcement structure appear to be the most significant parameters.
Keywords: flexural properties, fiber treatment, structure and weight ratio, SEM photographs, Typha leaf and stem fibers
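Flexural strength and modulus of the kind reported above follow from the standard three-point bending formulas, sigma_f = 3FL/(2bd^2) and E_f = L^3*m/(4bd^3). A short sketch with a hypothetical specimen (the load, span and cross-section below are assumptions, not the study's data):

```python
def flexural_strength(F, L, b, d):
    """Three-point bending flexural strength (MPa) from peak load F (N),
    span L, width b and thickness d (all in mm)."""
    return 3 * F * L / (2 * b * d ** 2)

def flexural_modulus(slope, L, b, d):
    """Flexural modulus (MPa) from the initial load-deflection slope (N/mm)."""
    return L ** 3 * slope / (4 * b * d ** 3)

# Hypothetical specimen: 186 N peak load, 64 mm span, 16 x 4 mm cross-section
F, L, b, d = 186.0, 64.0, 16.0, 4.0
print(flexural_strength(F, L, b, d))  # 69.75 MPa, near the reported maximum
```

With a load-deflection slope of 96 N/mm, the same specimen would give a modulus of 6144 MPa (about 6.1 GPa), the order of magnitude reported for the leaf-fiber composite.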
Procedia PDF Downloads 417
534 Time-Interval between Rectal Cancer Surgery and Reintervention for Anastomotic Leakage and the Effects of a Defunctioning Stoma: A Dutch Population-Based Study
Authors: Anne-Loes K. Warps, Rob A. E. M. Tollenaar, Pieter J. Tanis, Jan Willem T. Dekker
Abstract:
Anastomotic leakage after colorectal cancer surgery remains a severe complication. Early diagnosis and treatment are essential to prevent further adverse outcomes. In the literature, it has been suggested that earlier reintervention is associated with better survival, but anastomotic leakage can occur with a highly variable time interval to the index surgery. This study aims to evaluate the time-interval between rectal cancer resection with primary anastomosis and reoperation, in relation to short-term outcomes, stratified for the use of a defunctioning stoma. Methods: Data of all primary rectal cancer patients who underwent elective resection with primary anastomosis during 2013-2019 were extracted from the Dutch ColoRectal Audit. Analyses were stratified for defunctioning stoma. Anastomotic leakage was defined as a defect of the intestinal wall or an abscess at the site of the colorectal anastomosis for which a reintervention was required within 30 days. Primary outcomes were new stoma construction, mortality, ICU admission, prolonged hospital stay and readmission. The association between time to reoperation and outcome was evaluated in three ways: per 2 days; before versus on or after postoperative day 5; and during primary admission versus readmission. Results: In total, 10,772 rectal cancer patients underwent resection with primary anastomosis. A defunctioning stoma was made in 46.6% of patients. These patients had a lower anastomotic leakage rate (8.2% vs. 11.6%, p < 0.001) and less often underwent a reoperation (45.3% vs. 88.7%, p < 0.001). Early reoperations (< 5 days) had the highest complication and mortality rates. Thereafter, the distribution of adverse outcomes was more spread over the 30-day postoperative period for patients with a defunctioning stoma. Median time-interval from primary resection to reoperation was 7 days (IQR 4-14) for defunctioning stoma patients versus 5 days (IQR 3-13) for no-defunctioning stoma patients.
The mortality rates after primary resection and after reoperation were comparable (for defunctioning vs. no defunctioning stoma, 1.0% vs. 0.7%, P=0.106, and 5.0% vs. 2.3%, P=0.107, respectively). Conclusion: This study demonstrated that early reinterventions after anastomotic leakage are associated with worse outcomes (i.e. mortality). Perhaps the combination of a physiological dip in the cellular immune response and the release of cytokines following surgery, together with a release of endotoxins caused by bacteremia originating from the leakage, leads to a more profound sepsis. Another explanation might be that early leaks are not contained to the pelvis, leading to a more profound sepsis requiring early reoperation. Leakage with or without a defunctioning stoma resulted in different types of reintervention and a different time-interval between surgery and reoperation.
Keywords: rectal cancer surgery, defunctioning stoma, anastomotic leakage, time-interval to reoperation
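The stoma versus no-stoma leakage comparison above (8.2% vs. 11.6%) is the kind of result a two-proportion z-test produces. A sketch follows, with group counts reconstructed approximately from the reported percentages; this is an illustration, not the audit's actual statistical analysis.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Counts reconstructed approximately: 46.6% of 10,772 patients had a stoma
n_stoma, n_no = 5020, 5752
x_stoma, x_no = round(0.082 * n_stoma), round(0.116 * n_no)
z, p = two_proportion_z(x_stoma, n_stoma, x_no, n_no)
print(round(z, 2), p < 0.001)
```

With these reconstructed counts the difference is far outside chance, matching the reported p < 0.001.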
Procedia PDF Downloads 138
533 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market
Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh
Abstract:
Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists reported that acrylamide forms in carbohydrate-rich foods cooked at high temperatures. Similar findings were reported by other researchers, prompting major international efforts to investigate dietary exposure and the subsequent health complications in order to properly manage this issue. In this work, we aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. Of a total of forty-three samples, acrylamide was detected in twenty-three at levels of 12.3 to 2850 µg/kg. Across the food groups, the highest concentration of acrylamide was found in coffee samples (<12.3-2850 μg/kg), followed by potato chips (655-1310 μg/kg), then biscuits (23.5-449 μg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75-126 μg/kg). Most coffee, biscuit and potato chip products contain a high acrylamide content and are also among the most commonly consumed products. Saudi adults had mean acrylamide exposures from coffee, potato, biscuit, and cereal of 0.07439, 0.04794, 0.01125, and 0.003371 µg/kg-b.w/day, respectively. The corresponding exposures of Saudi infants and children to the same types of food were 0.1701, 0.1096, 0.02572, and 0.00771 µg/kg-b.w/day, respectively. Most groups have a percentile that exceeds the tolerable daily intake (TDI) cancer value (2.6 µg/kg-b.w/day). Overall, the margin of exposure (MOE) results show that the Saudi population is at high risk of acrylamide-related disease for all food types, and there is a chance of cancer risk in all age groups (all values ˂10,000). Furthermore, it was found that for non-cancer risks, the acrylamide in all tested foods was within the safe limit (˃125), except for potato chips, for which there is a risk of disease in the population.
With potato and coffee as raw materials, additional studies were conducted to assess different factors affecting acrylamide formation in fried potato and roasted coffee, including temperature, cooking time, and additives. By systematically varying processing temperatures and times, a mitigation of acrylamide content was achieved by lowering the temperature and decreasing the cooking time. Furthermore, it was shown that the combined addition of chitosan and NaCl had a large impact on acrylamide formation.
Keywords: risk assessment, dietary exposure, MOE, acrylamide, hazard
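The margin-of-exposure screening used above can be sketched as MOE = BMDL10 / dietary exposure, with MOE < 10,000 conventionally flagged as a health concern. The BMDL10 of 0.17 mg/kg bw/day for neoplastic effects is the usual EFSA reference point and is an assumption here, as the abstract does not state its reference dose; the exposures are the adult means reported above.

```python
# Margin-of-exposure sketch: MOE = BMDL10 / dietary exposure
BMDL10 = 170.0  # µg/kg bw/day for neoplastic effects (assumed EFSA value)

# Mean adult exposures reported above, µg/kg bw/day
adult_exposure = {
    "coffee": 0.07439,
    "potato chips": 0.04794,
    "biscuits": 0.01125,
    "cereal": 0.003371,
}

total = sum(adult_exposure.values())
moe_total = BMDL10 / total
# An MOE below 10,000 is conventionally interpreted as a health concern
print(f"total exposure {total:.5f} -> MOE {moe_total:,.0f}",
      "concern" if moe_total < 10000 else "low concern")
```

Summed over these foods the adult MOE falls well below 10,000, consistent with the study's conclusion; the study's own figures rely on exposure percentiles and may differ from this per-mean sketch.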
Procedia PDF Downloads 58
532 Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton
Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla
Abstract:
An exoskeleton design for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, compatibility of the architectural design with human motion, actuation type, human-robot interaction, etc. In this paper, a passive architecture for an upper limb exoskeleton is proposed for assisting in rehabilitation tasks. Kinematic modelling is detailed for task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsional springs which are able to store and redistribute energy over the human arm joints. The elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. A hybrid combination of a 4-bar parallelogram linkage and a serial linkage was chosen, where the 4-bar parallelogram linkage with an expansion spring acts as a rigid structure providing the rotational degree-of-freedom (DOF) required for lowering and raising the arm, while the single linkage with a torsional spring allows the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during rehabilitation of a partially healthy person. Primary functional movements (activities of daily living, ADL) are routine activities that people attend to every day, such as cleaning, dressing and feeding; we focus on the feeding process to make people independent with respect to feeding tasks. The tasks target post-surgery patients under rehabilitation with less than 40% weakness. The main challenge addressed in this work is emulating the natural movement of the human arm. Human motion data are extracted through motion sensors for the targeted tasks of feeding and specific exercises.
The task-based synthesis procedure framework is discussed for the proposed architecture. The results include a simulation of the architectural concept tracking the human-arm movements, along with the kinematic and static study parameters for a standard human weight. D-H parameters are used for kinematic modelling of the hybrid mechanism, and the model is used while performing task-based optimal synthesis utilizing an evolutionary algorithm.
Keywords: passive mechanism, task-based synthesis, emulating human-motion, exoskeleton
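The D-H modelling mentioned above chains one homogeneous transform per joint. A minimal sketch follows; the two-joint planar chain and its link lengths are hypothetical stand-ins for a shoulder-elbow pair, not the paper's actual parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-joint transforms; returns the end-effector pose."""
    T = np.eye(4)
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
    return T

# Hypothetical 2R planar chain (shoulder + elbow), link lengths in metres
params = [(np.pi / 2, 0.0, 0.30, 0.0), (-np.pi / 2, 0.0, 0.25, 0.0)]
pos = forward_kinematics(params)[:3, 3]
print(np.round(pos, 3))
```

The synthesis step then searches joint and spring parameters (here, via an evolutionary algorithm) so that the end-effector tracks the recorded feeding trajectories.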
Procedia PDF Downloads 138
531 Targeting Methionine Metabolism in Gastric Cancer; Promising to Improve Chemosensitivity with Non-Heterogeneity
Authors: Nigatu Tadesse, Li Juan, Liuhong Ming
Abstract:
Gastric cancer (GC) is the fifth most common and fourth deadliest cancer in the world, with limited treatment options at the late advanced stage, in which surgical therapy is not recommended and chemotherapy remains the mainstay of treatment. However, the occurrence of chemoresistance, as well as intra-tumoral and inter-tumoral heterogeneity of the response to targeted and immunotherapy, underlines a clear unmet treatment need in gastroenterology. Several molecular and cellular alterations have been ascribed to chemoresistance in GC, including cancer stem cells (CSC) and tumor microenvironment (TME) remodeling. Cancer cells, including CSC, bear a higher metabolic demand, and major changes in the TME involve alterations of the gut microbiota interacting with nutrient metabolism. Metabolic upregulation of lipid, carbohydrate, amino acid and fatty acid biosynthesis pathways has been identified as a common hallmark in GC. Metabolic addiction to methionine occurs in many cancer cells to promote the biosynthesis of S-adenosylmethionine (SAM), a universal methyl donor molecule supporting the high rate of transmethylation in GC and promoting cell proliferation. Targeting methionine metabolism has been found to promote chemosensitivity with treatment non-heterogeneity. Methionine restriction (MR) promoted cell cycle arrest at the S/G2 phase and downregulated the resistance of GC cells to apoptosis (including ferroptosis), which suggests the potential for synergy with chemotherapies acting at the S-phase of the cell cycle as well as those inducing cell apoptosis. Accumulated evidence shows that both the biogenesis and the intracellular metabolism of exogenous methionine could be safe and effective targets for therapy, either alone or in combination with chemotherapies.
This review article provides an overview of the upregulation of the methionine biosynthesis pathway and the molecular signaling through the PI3K/Akt/mTOR-c-MYC axis that promotes metabolic reprogramming by activating the expression of the L-type amino acid transporter 1 (LAT1) and the overexpression of methionine adenosyltransferase 2A (MAT2A) for intracellular metabolic conversion of exogenous methionine to SAM in GC. It also discusses the potential of targeting these with novel therapeutic agents, such as methioninase (METase) and inhibitors of MAT2A, c-MYC and methyltransferase-like 16 (METTL16), that are currently at clinical trial development stages, along with future perspectives.
Keywords: gastric cancer, methionine metabolism, PI3K/Akt/mTORC1-c-MYC axis, gut microbiota, MAT2A, c-MYC, METTL16, methioninase
Procedia PDF Downloads 51