Search results for: double nearest proportion feature extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5544


4434 The International Legal Protection of Foreign Investment Through Bilateral Investment Treaties and Double Taxation Treaties in the Context of International Investment Law and International Tax Law

Authors: Abdulmajeed Abdullah Alqarni

Abstract:

This paper is devoted to a study of the current frameworks applicable to foreign investments at the levels of domestic and international law, with a particular focus on the legitimate balance to be achieved between the rights of the host state and the legal protections owed to foreign investors. At the wider level of analysis, the paper attempts to map and critically examine the relationship between foreign investment and economic development. In doing so, the paper offers a study in how current discourses and practices on investment law can reconcile the competing interests of developing and developed countries. The study draws on the growing economic imperative for developing nations to create a favorable investment climate capable of attracting private foreign investment. It notes that, over the past decades, an abundance of legal standards establishing substantive and procedural protections for legal forms of foreign investment in host countries has evolved and crystallized. The study then goes on to offer a substantive analysis of legal reforms at the domestic level in countries such as Saudi Arabia before going on to provide an in-depth and substantive examination of the most important instruments developed at the level of international law: bilateral investment agreements and double taxation agreements. As to its methods, the study draws on case studies and on data assessing the link between double taxation and economic development. Drawing from the extant literature and doctrinal research, and international and comparative jurisprudence, the paper excavates and critically examines contemporary definitions and norms of international investment law, many of which have been given concrete form and specificity in an ever-expanding number of bilateral and multilateral investment treaties.
By reconsidering the wider challenges of conflicts of law and jurisdiction, and the competing aims of the modern investment law regime, the study reflects on how bilateral investment treaties might succeed in achieving the dual aims of rights protection and economic sovereignty. Through its examination of the double taxation phenomenon, the study goes on to identify key practical challenges raised by the implementation of bilateral treaties whilst also assessing the sufficiency of the domestic and international legal solutions that are proposed in response. In its final analysis, the study aims to contribute to existing scholarship by assessing contemporary legal and economic barriers to the free flow of investment with due regard for the legitimate concerns and diversity of developing nations. It does so by situating its analysis of the domestic enforcement of international investment instruments in its wider historical and normative context. By focusing on the economic and legal dimensions of foreign investment, the paper also aims to offer an interdisciplinary and holistic perspective on contemporary issues and developments in investment law while offering practical reform proposals that can be used to achieve a more equitable balance between the rights and interests of states and private entities in an increasingly transnationalized sphere of investment regulation and treaty arbitration.

Keywords: foreign investment, bilateral investment treaties, international tax law, double taxation treaties

Procedia PDF Downloads 82
4433 Sentiment Classification of Documents

Authors: Swarnadip Ghosh

Abstract:

Sentiment analysis is the process of detecting the contextual polarity of text. In other words, it determines whether a piece of writing is positive, negative, or neutral. Sentiment analysis of documents holds great importance in today's world, when vast amounts of information are stored in databases and on the World Wide Web. An efficient algorithm to elicit such information would be beneficial for social, economic, as well as medical purposes. In this project, we have developed an algorithm to classify a document as positive or negative. Using our algorithm, we obtained a feature set from the data and classified the documents based on this feature set. It is important to note that, in the classification, we have not used the independence assumption that is made by many procedures such as Naive Bayes. This makes the algorithm more general in scope. Moreover, because of the sparsity and high dimensionality of such data, we did not use an empirical distribution for estimation, but developed a method based on the degree of close clustering of the data points. We have applied our algorithm to a movie review data set obtained from IMDb and obtained satisfactory results.
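The "degree of close clustering" idea can be sketched as a nearest-class-distance rule: assign a document to the class whose training points lie closest on average. The snippet below is a minimal illustration, not the authors' actual algorithm; the toy word-count matrix, the labels, and the mean-distance decision rule are all assumptions made for demonstration:

```python
import numpy as np

# Hypothetical toy data: rows are documents, columns are word-count features.
X_train = np.array([
    [3.0, 0.0, 1.0],   # positive reviews
    [4.0, 1.0, 0.0],
    [0.0, 3.0, 4.0],   # negative reviews
    [1.0, 4.0, 3.0],
])
y_train = np.array([1, 1, 0, 0])  # 1 = positive, 0 = negative

def classify(x, X, y):
    """Assign x to the class whose points cluster most closely around it,
    i.e. the class with the smaller mean Euclidean distance to x."""
    dists = np.linalg.norm(X - x, axis=1)
    mean_pos = dists[y == 1].mean()
    mean_neg = dists[y == 0].mean()
    return 1 if mean_pos < mean_neg else 0

pred = classify(np.array([3.5, 0.5, 0.5]), X_train, y_train)
```

No independence assumption across features is made here, which mirrors the property the abstract highlights.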

Keywords: sentiment, Run's Test, cross validation, higher dimensional pmf estimation

Procedia PDF Downloads 394
4432 Efficient Feature Fusion for Noise Iris in Unconstrained Environment

Authors: Yao-Hong Tsai

Abstract:

This paper presents an efficient fusion algorithm for iris images to generate stable features for recognition in an unconstrained environment. Recently, iris recognition systems have focused on real scenarios in our daily life without the subject's cooperation. Under large variation in the environment, the objective of this paper is to combine information from multiple images of the same iris. The result of image fusion is a new image which is more stable for further iris recognition than each original noisy iris image. A wavelet-based approach for multi-resolution image fusion is applied in the fusion process. Detection of the iris image is based on the AdaBoost algorithm, and a local binary pattern (LBP) histogram is then applied to texture classification with a weighting scheme. Experiments showed that the features generated by the proposed fusion algorithm can improve the performance of a verification system based on iris recognition.
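For readers unfamiliar with LBP, a basic 8-neighbour LBP histogram can be computed as follows. This is a generic sketch of the standard descriptor, not the paper's weighted variant, and the random test image is only a placeholder:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram.
    Each interior pixel is encoded by thresholding its 8 neighbours
    against the centre value; the 256-bin histogram is L1-normalised."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= centre).astype(np.uint8) << bit)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16)).astype(np.int32)
hist = lbp_histogram(img)
```

In a weighted scheme such as the one described, histograms from different image regions would be concatenated with per-region weights before classification.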

Keywords: image fusion, iris recognition, local binary pattern, wavelet

Procedia PDF Downloads 365
4431 Understanding Cognitive Fatigue From FMRI Scans With Self-supervised Learning

Authors: Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Fillia Makedon, Glenn Wylie

Abstract:

Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that records neural activations in the brain by capturing the blood oxygen level in different regions based on the task performed by a subject. Given fMRI data, the problem of predicting the state of cognitive fatigue in a person has not been investigated to its full extent. This paper proposes tackling this issue as a multi-class classification problem by dividing the state of cognitive fatigue into six different levels, ranging from no-fatigue to extreme fatigue conditions. We built a spatio-temporal model that uses convolutional neural networks (CNN) for spatial feature extraction and a long short-term memory (LSTM) network for temporal modeling of 4D fMRI scans. We also applied a self-supervised method called MoCo (Momentum Contrast) to pre-train our model on the public BOLD5000 dataset and fine-tuned it on our labeled dataset to predict cognitive fatigue. Our novel dataset contains fMRI scans from traumatic brain injury (TBI) patients and healthy controls (HCs) performing a series of N-back cognitive tasks. This method establishes a state-of-the-art technique for analyzing cognitive fatigue from fMRI data and outperforms previous approaches to this problem.
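The spatio-temporal structure described, per-timestep spatial features fed into a temporal model, can be illustrated with a deliberately tiny numerical sketch. The hand-rolled convolution-plus-pooling extractor and the single-unit tanh recurrence below are simplified stand-ins for the CNN and LSTM; all shapes and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_feature(frame, kernel):
    """Valid 2-D convolution followed by global average pooling:
    a stand-in for the CNN spatial feature extractor."""
    kh, kw = kernel.shape
    H, W = frame.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out.mean()

def rnn_over_time(features, w_in=0.5, w_rec=0.8):
    """Toy single-unit recurrence standing in for the LSTM temporal model."""
    h = 0.0
    for f in features:
        h = np.tanh(w_in * f + w_rec * h)
    return h

frames = rng.standard_normal((6, 8, 8))   # 6 time-steps of 8x8 "slices"
kernel = rng.standard_normal((3, 3))
feats = [conv2d_feature(fr, kernel) for fr in frames]
state = rnn_over_time(feats)              # final state summarises the sequence
```

In the real model, a deep CNN would produce a feature vector per fMRI volume and an LSTM would carry a learned hidden state, but the data flow is the same.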

Keywords: fMRI, brain imaging, deep learning, self-supervised learning, contrastive learning, cognitive fatigue

Procedia PDF Downloads 183
4430 Study The Role Effect of Poly Pyrrole on LiFePO4 as Positive Electrode

Authors: Atef Youssef, Marwa Mostafa Moharam

Abstract:

The effects of poly pyrrole (PP) addition on LiFePO₄ have been studied by electrochemical impedance spectroscopy (EIS), cyclic voltammetry (CV), and galvanostatic measurements. PP was prepared with LiFePO₄ in different ways: chemical dispersion, in-situ polymerization, and electrochemical polymerization. The EIS results showed that the charge transfer resistance (Rct) of LiFePO₄ was decreased by adding 10% PP polymerized in situ, to 153 Ω vs. 1660 Ω for bare LiFePO₄. The CV curves show that LiFePO₄ with 10% added PP had higher electrochemical reactivity for lithium insertion and extraction than the un-doped material. The mean redox potential is E1/2 = 3.45 V vs. Li⁺/Li. The first discharge curve of the 10% poly pyrrole doped LiFePO₄ showed a mainly flat voltage plateau over the 3.45–3.5 V range, indicating the lithium extraction and insertion reactions between LiFePO₄ and FePO₄. The specific discharge capacity of cells prepared from LiFePO₄ with 10% in-situ PP was about 210 mAh g⁻¹ vs. 65 mAh g⁻¹ for bare LiFePO₄.

Keywords: LiFePO₄, poly pyrrole addition, positive electrode, lithium battery

Procedia PDF Downloads 195
4429 Towards a Complete Automation Feature Recognition System for Sheet Metal Manufacturing

Authors: Bahaa Eltahawy, Mikko Ylihärsilä, Reino Virrankoski, Esko Petäjä

Abstract:

Sheet metal processing is automated, but the step from product models to production machine control still requires human intervention. This may cause time-consuming bottlenecks in the production process and increase the risk of human errors. In this paper, we present a system which automatically recognizes features from the CAD model of a sheet metal product. By using these features, the system produces a complete model of the particular sheet metal product. The model is then used as an input for the sheet metal processing machine. The current implementation is capable of recognizing more than 11 of the most common sheet metal structural features, and the procedure is fully automated. This provides remarkable savings in production time and protects against human errors. This paper presents the developed system architecture, the applied algorithms, and the system software implementation and testing.

Keywords: feature recognition, automation, sheet metal manufacturing, CAD, CAM

Procedia PDF Downloads 349
4428 Desirable Fatty Acids in Meat of Cattle Fed Different Levels of Lipid-Based Diets

Authors: Tiago N. P. Valente, Erico S. Lima, Roberto O. Roça

Abstract:

Introduction: Research has stimulated animal production studies on solutions to decrease the level of saturated fatty acids and increase unsaturated ones in foods of animal origin. The objective of this study was to determine the effect of the dietary inclusion of lipid-based diets on the fatty acid profiles of finishing cattle. Materials and Methods: The study was carried out on the Chapéu de Couro Farm in Aguaí/SP, Brazil. A group of 39 uncastrated Nellore cattle was used. Mean age of the animals was 36 months, and initial mean live weight was 494.1 ± 10.1 kg. Animals were randomly assigned to one of three treatments, based on dry matter: a control diet with 2.50% cottonseed, a diet with 11.50% cottonseed, and a diet with 3.13% cottonseed plus 1.77% protected lipid. The forage:concentrate ratio was 50:50 on a dry matter basis. Chopped sugar cane was used as forage. After 63 days, mean final live weight was 577.01 ± 11.34 kg. After slaughter, carcasses were identified and divided into two halves that were kept in a cold chamber for 24 hours at 2°C. Then, part of the M. longissimus thoracis of each animal was removed between the 12th and 13th rib of the left half carcass. The sample steaks were 2.5 cm thick and were identified and stored frozen in a freezer at -18°C. The analysis of methyl esters of fatty acids was carried out in a gas chromatograph. Desirable fatty acids (FADes) were determined as the sum of unsaturated fatty acids and stearic acid (C18:0). Results and Discussion: No differences (P>0.05) were found between the diets in the proportion of FADes in the meat of the animals in this study, according to the lipid sources used. The inclusion of protected fat or cottonseed in the diet did not change the proportion of FADes in the meat.
The mean proportions of FADes in the meat in the present study were: pentadecenoic acid (C15:1 = 0.29%), palmitoleic acid (C16:1 = 4.26%), heptadecenoic acid (C17:1 = 0.07%), oleic acid (C18:1n9c = 37.32%), γ-linolenic acid (0.94%) and α-linolenic acid (1.04%), elaidic acid (C18:1n9t = 0.50%), eicosatrienoic acid (C20:3n3 = 0.03%), eicosapentaenoic acid (C20:5n3 = 0.04%), erucic acid (C22:1n9 = 0.89%), docosadienoic acid (C22:2 = 0.04%) and stearic acid (C18:0 = 21.53%). Conclusions: Adding cottonseed or protected lipid to the diet did not affect the profile of desirable fatty acids in the meat. Acknowledgements: IFGoiano, FAPEG and CNPq (Brazil).

Keywords: beef quality, cottonseed, protected fat, unsaturated fatty acids

Procedia PDF Downloads 286
4427 Review of Different Machine Learning Algorithms

Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui

Abstract:

Classification is a data mining technique based on machine learning (ML) algorithms. It is used to classify individual items in a collection of information into a set of predefined modules or groups. Web mining is also a part of this family of data mining methods. The main purpose of this paper is to analyse and compare the performance of the Naïve Bayes algorithm, decision tree, k-nearest neighbour (KNN), artificial neural network (ANN), and support vector machine (SVM). This paper describes these different ML algorithms with their advantages and disadvantages and also defines open research issues.
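Two of the compared classifiers, KNN and Naïve Bayes, can be sketched from scratch on toy data to make the comparison concrete. This is an illustrative comparison on assumed synthetic data, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data: class 0 around (0,0), class 1 around (3,3).
X0 = rng.normal(0.0, 1.0, size=(40, 2))
X1 = rng.normal(3.0, 1.0, size=(40, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

def knn_predict(Xtr, ytr, Xte, k=5):
    """k-nearest-neighbour majority vote."""
    preds = []
    for x in Xte:
        idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
        preds.append(np.bincount(ytr[idx]).argmax())
    return np.array(preds)

def gaussian_nb_predict(Xtr, ytr, Xte):
    """Gaussian Naïve Bayes: per-class diagonal Gaussians (independence assumed)."""
    stats = {}
    for c in np.unique(ytr):
        Xc = Xtr[ytr == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(Xtr))
    preds = []
    for x in Xte:
        best, best_ll = None, -np.inf
        for c, (mu, var, prior) in stats.items():
            ll = np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                              + (x - mu) ** 2 / var)
            if ll > best_ll:
                best, best_ll = c, ll
        preds.append(best)
    return np.array(preds)

acc_knn = (knn_predict(X, y, X) == y).mean()
acc_nb = (gaussian_nb_predict(X, y, X) == y).mean()
```

On well-separated data like this, both methods score highly; the interesting comparisons arise on overlapping or high-dimensional data, which is where the surveyed algorithms diverge.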

Keywords: data mining, web mining, classification, ML algorithms

Procedia PDF Downloads 293
4426 The Influence of the Geogrid Layers on the Bearing Capacity of Layered Soils

Authors: S. A. Naeini, H. R. Rahmani, M. Hossein Zade

Abstract:

Many classical bearing capacity theories assume that the natural soil layers are homogeneous when determining the bearing capacity of the soil. However, in many practical projects, we encounter multi-layer soils. Geosynthetics as reinforcement materials have been extensively used in the construction of various structures. In this paper, numerical analysis of the Plate Load Test (PLT) using ABAQUS software in double-layered soils with different thicknesses of sandy and gravelly layers reinforced with geogrid was considered. The PLT is one of the common field methods for calculating parameters such as soil bearing capacity, evaluating compressibility, and determining the modulus of subgrade reaction. In fact, the influence of the geogrid layers on the bearing capacity of the layered soils is investigated. Finally, the most appropriate mode for the distance and number of reinforcement layers is determined. Results show that using three layers of geogrid with a spacing of 0.3 times the width of the loading plate has the highest efficiency in the bearing capacity of double-layer (sand and gravel) soils. Also, the significant increase in bearing capacity between unreinforced and reinforced soil with three layers of geogrid occurs when the upper layer (gravel) thickness is equal to the loading plate width.

Keywords: bearing capacity, reinforcement, geogrid, plate load test, layered soils

Procedia PDF Downloads 169
4425 Developing Thai-UK Double Degree Programmes: An Exploratory Study Identifying Challenges, Competing Interests and Risks

Authors: Joy Tweed, Jon Pike

Abstract:

In Thailand, a 4.0 policy has been initiated that is designed to prepare and train an appropriate workforce to support the move to a value-based economy. One aspect of support for this policy is a project to encourage the creation of double degree programmes, specifically between Thai and UK universities. This research into the project, conducted with its key players, explores the factors that can either enable or hinder the development of such programmes. It is an area that has received little research attention to date. Key findings focus on differences in quality assurance requirements, attitudes to benefits, risks, and committed levels of institutional support, thus providing valuable input into future policy making. The Transnational Education (TNE) Development Project was initiated in 2015 by the British Council, in conjunction with the Office for Higher Education Commission (OHEC), Thailand. The purpose of the project was to facilitate opportunities for Thai Universities to partner with UK Universities so as to develop double degree programme models. In this arrangement, the student gains both a UK and a Thai qualification, spending time studying in both countries. Twenty-two partnerships were initiated via the project. Utilizing a qualitative approach, data sources included participation in TNE project workshops, peer reviews, and over 20 semi-structured interviews conducted with key informants within the participating UK and Thai universities. Interviews were recorded, transcribed, and analysed for key themes. The research has revealed that the strength of the relationship between the two partner institutions is critical. Successful partnerships are often built on previous personal contact, have senior-level involvement and are strengthened by partnership on different levels, such as research, student exchange, and other forms of mobility. 
The support of the British Council was regarded as a key enabler in developing these types of projects for those universities that had not been involved in TNE previously. The involvement of industry is apparent in programmes that have high scientific content but is not well developed in other subject areas. Factors that hinder the development of partnership programmes include the approval processes and quality requirements of each institution. Significant differences in fee levels between Thai and UK universities provide a challenge, and attempts to bridge them require goodwill on the part of the latter that may be difficult to realise. This research indicates the key factors to which attention needs to be given when developing a TNE programme. Early attention to these factors can reduce the likelihood that the partnership will fail to develop. Representatives in both partner universities need to understand their respective processes of development and approval. The research has important practical implications for policy-makers and planners involved with TNE, not only in relation to the specific TNE project but also more widely in relation to the development of TNE programmes in other countries and other subject areas. Future research will focus on assessing the success of the double degree programmes generated by the TNE Development Project from the perspective of universities, policy-makers, and industry partners.

Keywords: double-degree, internationalization, partnerships, Thai-UK

Procedia PDF Downloads 100
4424 Modification of Toothpaste Formula Using Pineapple Cobs and Eggshell Waste as a Way to Decrease Dental Caries

Authors: Achmad Buhori, Reza Imam Pratama, Tissa Wiraatmaja, Wanti Megawati

Abstract:

Data from many countries indicate that there is a marked increase in dental caries. The increase in caries appears to occur in lower socioeconomic groups. It is possible that the benefits of prevention of dental caries are not reaching these groups. However, there is a way to decrease dental caries by adding 5% bromelain and calcium as active agents in toothpaste. Bromelain can break the glutamine-alanine bond and the arginine-alanine bond, constituents of the amino acids that cause dental plaque, which is one of the factors behind dental caries. Calcium helps rebuild the teeth by strengthening and repairing enamel. Bromelain can be obtained from the extraction of pineapple (Ananas comosus) cobs (88.86-94.22% bromelain recovery during extraction, based on the enzyme unit), and calcium can be taken from eggshell (95% of dry eggshell consists of calcium). The aim of this experiment is to make a toothpaste which contains bromelain and calcium as an effective, cheap, and healthy way to decrease dental caries around the world.

Keywords: bromelain, calcium, dental caries, dental plaque, toothpaste

Procedia PDF Downloads 263
4423 1H-NMR Spectra of Diesel-Biodiesel Blends to Evaluate the Quality and Determine the Adulteration of Biodiesel with Vegetable Oil

Authors: Luis F. Bianchessi, Gustavo G. Shimamoto, Matthieu Tubino

Abstract:

The use of biodiesel has spread in Brazil and all over the world through the trading of pure biodiesel (B100). In Brazil, the diesel oil currently being sold is a blend containing 7% biodiesel (B7). In this context, it is necessary to develop methods capable of identifying this blend composition, especially regarding the quality of the biodiesel used for making these blends. In this study, hydrogen nuclear magnetic resonance spectra (1H-NMR) are proposed as a means of identifying and confirming the quality of type B10 blends (10% biodiesel and 90% diesel). Furthermore, the presence of vegetable oils, which may stem from fuel adulteration or be evidence of a low degree of transesterification conversion during the synthesis of B100, may also be identified. Mixtures of diesel, vegetable oils, and their respective biodiesels were prepared. Soybean oil and macauba kernel oil were used as raw material. The diesel proportion remained fixed at 90%. The other proportion (10%) was varied in terms of vegetable oil and biodiesel. The 1H-NMR spectra were obtained for each one of the mixtures, in order to find a correlation between the spectra and the amount of biodiesel, as well as the amount of residual vegetable oil. The ratio of the integral of the methylenic hydrogen H-2 of glycerol (exclusive to vegetable oil) with respect to the integral of the olefinic hydrogens (present in vegetable oil and biodiesel) was obtained. These ratios were correlated with the percentage of vegetable oil in each mixture, from 0% to 10%. The obtained correlation could be described by linear relationships with R² of 0.9929 for soybean biodiesel and 0.9982 for macauba kernel biodiesel. Preliminary results show that the technique can be used to monitor biodiesel quality in commercial diesel-biodiesel blends, besides indicating possible adulteration.
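The calibration idea, regressing the glycerol-H2 to olefinic-hydrogen integral ratio against the vegetable oil percentage, can be sketched with assumed numbers. The data points below are hypothetical, chosen only to show how the linear fit and R² would be computed:

```python
import numpy as np

# Hypothetical calibration: vegetable-oil percentage in the mixture vs. the
# glycerol-H2 / olefinic-hydrogen integral ratio (assumed near-linear data).
oil_pct = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
ratio   = np.array([0.001, 0.021, 0.039, 0.062, 0.080, 0.101])

# Least-squares straight line through the calibration points.
slope, intercept = np.polyfit(oil_pct, ratio, 1)

# Coefficient of determination for the fit.
pred = slope * oil_pct + intercept
ss_res = np.sum((ratio - pred) ** 2)
ss_tot = np.sum((ratio - ratio.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
```

With a calibration line in hand, an unknown blend's integral ratio can be inverted to estimate its residual vegetable-oil content.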

Keywords: biodiesel, diesel, biodiesel quality, adulteration

Procedia PDF Downloads 616
4422 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years. It has become more important to understand customers' needs in this strong market, especially for customers who are looking to change their service providers. So, churn prediction is now a mandatory requirement for retaining those customers. Machine learning can be utilized to accomplish this. Churn prediction has become a very important topic in terms of machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important to building an effective churn prediction model. This paper aims to predict churn and identify factors of customers' churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. Then, this study compared the performance of four different machine learning algorithms on the Orange dataset: logistic regression, random forest, decision tree, and gradient boosting. Evaluation of the performance was conducted by using the F1-score and ROC-AUC. The results of this study compare favourably with existing models. The results showed that gradient boosting with the feature selection technique performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
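The preprocessing stages emphasized above, normalization and feature selection, can be sketched as follows. The variance and correlation filters shown are generic examples of such techniques, not the study's exact procedure, and the customer matrix is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical usage matrix: 100 customers x 5 features; feature 3 is constant
# and feature 4 duplicates feature 0, so both are candidates for removal.
X = rng.standard_normal((100, 5))
X[:, 3] = 7.0
X[:, 4] = X[:, 0]

def minmax_normalise(X):
    """Scale each column to [0, 1]; constant columns map to 0."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    span = np.where(mx - mn == 0, 1.0, mx - mn)
    return (X - mn) / span

def select_features(X, var_tol=1e-12, corr_tol=0.999):
    """Drop zero-variance columns and near-duplicate (highly correlated) ones."""
    keep = [j for j in range(X.shape[1]) if X[:, j].var() > var_tol]
    selected = []
    for j in keep:
        dup = any(abs(np.corrcoef(X[:, j], X[:, i])[0, 1]) > corr_tol
                  for i in selected)
        if not dup:
            selected.append(j)
    return selected

Xn = minmax_normalise(X)
cols = select_features(X)   # surviving feature indices
```

The reduced, normalized matrix would then feed the classifiers compared in the study (logistic regression, random forest, decision tree, gradient boosting).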

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 130
4421 Interaction of Metals with Non-Conventional Solvents

Authors: Evgeny E. Tereshatov, C. M. Folden

Abstract:

Ionic liquids and deep eutectic mixtures represent so-called non-conventional solvents. The former, composed of discrete ions, is a salt with a melting temperature below 100 °C. The latter, consisting of hydrogen bond donors and acceptors, is a mixture of at least two compounds, resulting in a melting temperature depression in comparison with those of the individual moieties. These systems can also be water-immiscible, which makes them applicable to metal extraction. This work will cover interactions of In, Tl, Ir, and Rh in hydrochloric acid media with eutectic mixtures, and of Er, Ir, and At in a gas phase with chemically modified α-detectors. The purpose is to study chemical systems based on non-conventional solvents in terms of their interaction with metals. Once promising systems are found, the next step is to modify the surface of the α-detectors used in online element production at cyclotrons to give the detectors chemical selectivity. Initially, the metal interactions are studied by means of the liquid-liquid extraction technique. Then appropriate molecules are chemisorbed on a surrogate surface first to assess the coating quality. Finally, a detector is covered with the same molecule, and the metal sorption on such detectors is studied in the online regime. It was found that chemical treatment of the surface can result in 99% coverage with monolayer formation. This surface is chemically active and can adsorb metals from hydrochloric acid solutions. Similarly, a detector surface was modified and tested during cyclotron-based experiments. Thus, a procedure for detector functionalization has been developed, and this opens an interesting opportunity for studying the chemisorption of elements that do not have stable isotopes.

Keywords: mechanism, radioisotopes, solvent extraction, gas phase sorption

Procedia PDF Downloads 99
4420 Enhancing the Interpretation of Group-Level Diagnostic Results from Cognitive Diagnostic Assessment: Application of Quantile Regression and Cluster Analysis

Authors: Wenbo Du, Xiaomei Ma

Abstract:

With the empowerment of Cognitive Diagnostic Assessment (CDA), various domains of language testing and assessment have been investigated to extract more diagnostic information. What is noticeable is that most of the extant empirical CDA-based research puts much emphasis on individual-level diagnostic purposes, with very little concerned with learners' group-level performance. Even though personalized diagnostic feedback is the unique feature that differentiates CDA from other assessment tools, group-level diagnostic information cannot be overlooked, in that it might be more practical in a classroom setting. Additionally, the group-level diagnostic information obtained via current CDA often results in a "flat pattern", that is, the mastery/non-mastery of all tested skills accounts for the two highest proportions. In that case, the outcome offers little benefit beyond the original total score. To address these issues, the present study attempts to apply cluster analysis for group classification and quantile regression analysis to pinpoint learners' performance at different proficiency levels (beginner, intermediate, and advanced), and thus to enhance the interpretation of the CDA results extracted from a group of EFL learners' reading performance on a diagnostic reading test designed by the PELDiaG research team from a key university in China. The results show that the EM method in cluster analysis yields more appropriate classification results than those of CDA, and that quantile regression analysis paints a more insightful picture of the characteristics of learners with different reading proficiencies. The findings are helpful and practical for instructors wishing to refine EFL reading curricula and instructional plans tailored to the group classification results and the quantile regression analysis. Meanwhile, these statistical methods could also make up for the deficiencies of CDA and push forward the development of language testing and assessment in the future.
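The group-classification step can be illustrated with a simplified stand-in: 1-D k-means to split learners into three proficiency groups, followed by per-group quantiles in place of a full quantile regression (and in place of the EM clustering the study actually uses). All scores below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical diagnostic scores for 300 learners on one reading skill.
scores = np.concatenate([
    rng.normal(40, 5, 100),   # beginner-like group
    rng.normal(60, 5, 100),   # intermediate-like group
    rng.normal(80, 5, 100),   # advanced-like group
])

def kmeans_1d(x, k=3, iters=50):
    """Minimal 1-D k-means to split learners into proficiency groups,
    initialised at spread-out quantiles of the data."""
    centres = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        centres = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centres

labels, centres = kmeans_1d(scores)
# Per-group medians summarise performance at each proficiency level.
q50 = [float(np.quantile(scores[labels == j], 0.5)) for j in range(3)]
```

A full analysis would regress skill mastery on covariates at several quantiles per group rather than report a single median, but the grouping logic is the same.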

Keywords: cognitive diagnostic assessment, diagnostic feedback, EFL reading, quantile regression

Procedia PDF Downloads 145
4419 Comparison Physicochemical Properties of Hexane Extracted Aniseed Oil from Cold Press Extraction Residue and Cold Press Aniseed Oil

Authors: Derya Ören, Şeyma Akalın

Abstract:

The cold press technique is a traditional method of obtaining oil. The cold-pressing procedure involves neither heat nor chemical treatment, so the technique has a low oil yield, and the cold pressed herbal material residue still contains some oil. In this study, the oil remaining in cold pressed aniseed was extracted with hexane and analysed to determine its physicochemical properties and quality parameters. It was found that the aniseed after the cold press process contains 10% oil. For this extracted oil, the free fatty acid (FFA) value is 2.1 mg KOH/g and the peroxide value is 7.6 meqO₂/kg. For cold pressed aniseed oil, the free fatty acid (FFA) value is 2.1 mg KOH/g and the peroxide value is 4.5 meqO₂/kg, respectively. The fatty acid composition was also analysed, and it was found that both of these oils have the same fatty acid composition. The main fatty acids are oleic, linoleic, and palmitic acids.

Keywords: aniseed oil, cold press, extraction, residue

Procedia PDF Downloads 399
4418 Fast Tumor Extraction Method Based on Nl-Means Filter and Expectation Maximization

Authors: Sandabad Sara, Sayd Tahri Yassine, Hammouch Ahmed

Abstract:

The development of science has allowed computer scientists to contribute to medicine and bring aid to radiologists, as we present in this article. Our work focuses on the detection and localization of tumor areas in the human brain; this is completely automatic, without any human intervention. Faced with the huge volume of MRI scans to be processed per day, the radiologist can spend hours and hours of tremendous effort. This burden has become less heavy with the automation of this step. In this article, we present an automatic and effective tumor detection method. This work consists of two steps: the first is image filtering using the NL-means filter; the second is applying the expectation maximization (EM) algorithm to retrieve the tumor mask from the brain MRI, after which the tumor area is extracted using the mask obtained in the second step. To prove the effectiveness of this method, multiple evaluation criteria will be used, so that we can compare our method to extraction methods frequently used in the literature.
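The mask-recovery step can be sketched with a small two-component Gaussian-mixture EM on pixel intensities. The NL-means filtering stage is omitted here, the synthetic image stands in for a denoised MRI slice, and the two-class EM is a simplified version of the described method:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "brain slice": dark background with a bright blob as the "tumor".
img = rng.normal(0.2, 0.05, size=(32, 32))
img[10:18, 12:20] = rng.normal(0.8, 0.05, size=(8, 8))

def em_two_gaussians(x, iters=50):
    """EM for a 2-component 1-D Gaussian mixture over pixel intensities."""
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities of each component for each pixel.
        pdf = (pi / np.sqrt(2 * np.pi * var)) \
            * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
        pi = n / len(x)
    return mu, var, pi

x = img.ravel()
mu, var, pi = em_two_gaussians(x)
bright = int(np.argmax(mu))
# Tumor mask: pixels whose nearest fitted mean is the bright component.
mask = (np.abs(x[:, None] - mu).argmin(axis=1) == bright).reshape(img.shape)
```

On real MRI data the mixture would typically need more components and the NL-means step matters, since EM on noisy intensities produces ragged masks.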

Keywords: MRI, EM algorithm, brain, tumor, NL-means

Procedia PDF Downloads 328
4417 Arabic Light Stemmer for Better Search Accuracy

Authors: Sahar Khedr, Dina Sayed, Ayman Hanafy

Abstract:

Arabic is one of the most ancient and critical languages in the world. It has more than 250 million native speakers, and more than twenty countries have Arabic as one of their official languages. In the past decade, we have witnessed a rapid evolution in smart devices, social networks, and the technology sector, which led to the need to provide tools and libraries that properly tackle the Arabic language in different domains. Stemming is one of the most crucial linguistic fundamentals. It is used in many applications, especially in the information extraction and text mining fields. The motivation behind this work is to enhance the Arabic light stemmer to serve the data mining industry and leverage it in an open source community. The presented implementation enhances the Arabic light stemmer by utilizing and enhancing an algorithm that provides an extension for a new set of rules and patterns, accompanied by an adjusted procedure. This study demonstrates a significant enhancement in search accuracy, with an average 10% improvement in comparison with previous works.
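A light stemmer of the kind described strips frequent prefixes and suffixes under a minimum-length guard, without full morphological analysis. The rule lists below are a small illustrative subset, not the paper's actual rules or patterns:

```python
# Illustrative light-stemming rules (a simplified subset for demonstration):
# longest prefixes are tried first, then suffixes, each stripped at most once.
PREFIXES = ["وال", "بال", "كال", "فال", "ال", "و"]
SUFFIXES = ["ات", "ون", "ين", "ها", "ية", "ة"]

def light_stem(word, min_len=3):
    """Strip one matching prefix and one matching suffix, keeping the
    remaining stem at least min_len characters long."""
    for p in PREFIXES:
        if word.startswith(p) and len(word) - len(p) >= min_len:
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= min_len:
            word = word[:-len(s)]
            break
    return word

stem = light_stem("المعلمون")   # "the teachers" -> stem "معلم"
```

Real light stemmers add ordering constraints and exception lists on top of this pattern, which is where the accuracy gains reported above come from.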

Keywords: Arabic data mining, Arabic information extraction, Arabic light stemmer, Arabic stemmer

Procedia PDF Downloads 302
4416 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection Using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies face global competition and enormous cost pressure. Machine learning applications can help reduce production costs and create added value. Predictive quality secures product quality through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods can process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Machine learning applications therefore require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, the comparability of production conditions within certain time periods can be identified by applying a concept drift method. Furthermore, a classification model is developed to evaluate feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be reduced significantly without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, an AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from assembly, and hydraulic measurement data from end-of-line testing.
In addition, the most suitable methods are selected, and accurate quality predictions are achieved.
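Since the Bosch production data are proprietary, the sketch below shows the AdaBoost-with-decision-stumps idea on synthetic data, using accumulated stump weights as a crude per-feature importance score. It is a minimal reimplementation for illustration, not the authors' model.

```python
import numpy as np

def stump_predict(x, thr, sign):
    # one-feature decision stump: sign * (+1 if x > thr else -1)
    return sign * np.where(x > thr, 1, -1)

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost; returns the ensemble and per-feature
    importance (sum of the stump weights alpha per feature)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble, importance = [], np.zeros(d)
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    err = w[stump_predict(X[:, j], thr, sign) != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * stump_predict(X[:, j], thr, sign))
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
        importance[j] += alpha
    return ensemble, importance

def predict(ensemble, X):
    agg = sum(a * stump_predict(X[:, j], t, s) for a, j, t, s in ensemble)
    return np.sign(agg)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.where(X[:, 2] > 0.1, 1, -1)   # only feature 2 is informative
ens, imp = adaboost_stumps(X, y)
```

Ranking features by `imp` and keeping only the stable, high-scoring ones mirrors the feature-selection step described in the abstract.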

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 157
4415 Non-Uniform Filter Banks-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface

Authors: Ping Tan, Xiaomeng Su, Yi Shen

Abstract:

Motion intention in a motor imagery brain-computer interface is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When the subject imagines moving different limbs or different body parts, the rhythm components and bandwidth change, and this varies from person to person. Finding each subject's effective sensorimotor frequency band is therefore directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a minimum distance to Riemannian mean (MDRM) classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using multiple band-pass filters; the spatial covariance matrices of each frequency band signal are then computed as feature vectors. These feature vectors are classified by the MDRM method, and cross-validation is employed to obtain the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using MDRM. Experiments on the BCI Competition IV 2a dataset show that the proposed method is superior to other classification methods.
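A minimal sketch of the MDRM step on toy covariance matrices follows. It uses the affine-invariant Riemannian distance and, for brevity, a log-Euclidean mean as a stand-in for the iterative Karcher mean; the channel counts, epoch lengths, and class structure are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def spd_logm(C):                      # matrix log of an SPD matrix
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD covariances."""
    w, V = np.linalg.eigh(A)
    A_isqrt = (V * (1.0 / np.sqrt(w))) @ V.T
    M = A_isqrt @ B @ A_isqrt
    return np.sqrt((np.log(np.linalg.eigvalsh(M)) ** 2).sum())

def log_euclidean_mean(covs):
    # log-Euclidean mean: a cheap stand-in for the true Riemannian
    # (Karcher) mean, which needs an iterative fixed-point scheme
    L = np.mean([spd_logm(C) for C in covs], axis=0)
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

def mdrm_classify(trial_cov, class_means):
    return int(np.argmin([airm_distance(trial_cov, M) for M in class_means]))

def sample_cov(chan_scale):           # toy covariance of a band-passed epoch
    X = rng.normal(size=(8, 500)) * chan_scale[:, None]
    return X @ X.T / 500

scales = [np.ones(8), np.linspace(1.0, 2.0, 8)]   # two imagined classes
means = [log_euclidean_mean([sample_cov(s) for _ in range(20)])
         for s in scales]
pred = mdrm_classify(sample_cov(scales[1]), means)
```

In the filter-bank setting described above, one such classifier is evaluated per frequency band, and cross-validation picks the bands whose covariances discriminate best.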

Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean

Procedia PDF Downloads 112
4414 New Off-Line SPE-GC-MS/MS Method for Determination of Mineral Oil Saturated Hydrocarbons/Mineral Oil Hydrocarbons in Animal Feed, Foods, Infant Formula and Vegetable Oils

Authors: Ovanes Chakoyan

Abstract:

Mineral oil hydrocarbons (MOH), which consist of mineral oil saturated hydrocarbons (MOSH) and mineral oil aromatic hydrocarbons (MOAH), are present in various products such as vegetable oils, animal feed, foods, and infant formula. Contamination of foods with mineral oil hydrocarbons is of particular concern for MOAH, which exhibit carcinogenic, mutagenic, and hormone-disruptive effects. Identifying toxic substances among the many thousands comprising mineral oils in food samples is a difficult analytical challenge. A method based on an offline solid phase extraction approach coupled with gas chromatography-triple quadrupole mass spectrometry (GC-MS/MS) was developed for the determination of MOSH/MOAH in various products such as vegetable oils, animal feed, foods, and infant formula. A glass solid phase extraction cartridge was loaded with 7 g of activated silica gel impregnated with 10% silver nitrate for the removal of olefins and lipids. The MOSH and MOAH fractions were eluted with hexane and with hexane:dichloromethane:toluene, respectively. Each eluate was concentrated to 50 µl in toluene and injected in splitless mode into the GC-MS/MS. Accuracy of the method was estimated by measuring the recovery of spiked oil samples at 2.0, 15.0, and 30.0 mg kg⁻¹; recoveries varied from 85 to 105%. The method was applied to different types of samples (sunflower meal, chocolate chips, Santa milk chocolate, biscuits, infant milk, cornflakes, refined sunflower oil, crude sunflower oil), detecting MOSH up to 56 mg/kg and MOAH up to 5 mg/kg. The limit of quantification (LOQ) of the proposed method was estimated at 0.5 mg/kg and 0.3 mg/kg for MOSH and MOAH, respectively.
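The accuracy figures above follow the standard spike-recovery arithmetic: recovery (%) = measured concentration / spiked level × 100. A hedged sketch with invented measurement values, not the study's data:

```python
# Spike-recovery check against the 85-105 % acceptance window used in
# the abstract; the "measured" numbers are hypothetical.
def recovery_pct(measured_mg_kg, spiked_mg_kg):
    return 100.0 * measured_mg_kg / spiked_mg_kg

spike_levels = [2.0, 15.0, 30.0]     # mg/kg, as in the study
measured = [1.9, 14.1, 31.2]         # hypothetical results
recoveries = [recovery_pct(m, s) for m, s in zip(measured, spike_levels)]
all_within = all(85 <= r <= 105 for r in recoveries)
```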

Keywords: MOSH, MOAH, GC-MS/MS, foods, solid phase extraction

Procedia PDF Downloads 74
4413 In situ Stabilization of Arsenic in Soils with Birnessite and Goethite

Authors: Saeed Bagherifam, Trevor Brown, Chris Fellows, Ravi Naidu

Abstract:

Over the last century, rapid urbanization, industrial emissions, and mining activities have resulted in widespread contamination of the environment by heavy metal(loid)s. Arsenic (As) is a toxic metalloid belonging to group 15 of the periodic table; it occurs naturally at low concentrations in soils and the earth's crust, although concentrations can be significantly elevated in natural systems as a result of dispersion from anthropogenic sources, e.g., mining activities. Bioavailability is the fraction of a contaminant in soil that is available for uptake by plants, food chains, and humans, and therefore presents the greatest risk to terrestrial ecosystems. Numerous attempts have been made to establish in situ and ex situ technologies for the remediation of arsenic-contaminated soils. In situ stabilization techniques are based on the deactivation or chemical immobilization of metalloid(s) in soil by means of soil amendments, which reduce the bioavailability (for biota) and bioaccessibility (for humans) of metalloids through the formation of low-solubility products or precipitates. This study investigated the effectiveness of two synthetic manganese and iron oxides (birnessite and goethite) for the stabilization of As in a soil spiked with 1000 mg kg⁻¹ of As and treated with 10% dosages of the amendments. Birnessite was made using HCl and KMnO₄, and goethite was synthesized by the dropwise addition of KOH into Fe(NO₃)₃ solution. The contaminated soils were subjected to a series of chemical extraction studies, including sequential extraction (BCR method), single-step extractions with distilled (DI) water and 2 M HNO₃, and the simplified bioaccessibility extraction test (SBET) for estimating the bioaccessible fraction of As in two soil fractions (< 250 µm and < 2 mm). Concentrations of As in the samples were measured by inductively coupled plasma mass spectrometry (ICP-MS).
The results showed that amending the soil with birnessite reduced the bioaccessibility of As by up to 92% in both soil fractions. Furthermore, the single-step extractions revealed that the application of birnessite and goethite reduced the DI water- and HNO₃-extractable amounts of arsenic by 75, 75, 91, and 57%, respectively. Moreover, the sequential extraction studies showed that both birnessite and goethite dramatically reduced the exchangeable fraction of As in soils, while the amounts of recalcitrant fractions were higher in birnessite- and goethite-amended soils. Overall, the results revealed that the application of both birnessite and goethite significantly reduced the bioavailability and the exchangeable fraction of As in contaminated soils; birnessite and goethite might therefore be considered promising amendments for the stabilization and remediation of As-contaminated soils.

Keywords: arsenic, bioavailability, in situ stabilisation, metalloid(s) contaminated soils

Procedia PDF Downloads 131
4412 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from homicides vary: some are complete and some incomplete, depending on the cause of death or the form of the homicide. For example, some corpses are cut into pieces, some are concealed by dumping into a river, some are buried, and some are burned to destroy evidence. If a corpse is incomplete, personal identification becomes difficult because some tissues and bones have been destroyed. To determine the sex of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and time-consuming, so other identification techniques are used instead. The first widely used technique is examination of bone features. In general, evidence from the corpse, such as pieces of bone, especially the skull and pelvis, can be used to determine sex. To use this technique, forensic scientists require observational skill in order to distinguish male from female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine sex fairly accurately (an apparent accuracy rate of 90% or more), its crucial disadvantage is that only certain parts of the skeleton can be used, such as the supraorbital ridge, nuchal crest, temporal bone, mandible, and chin. Therefore, the skeletal remains used must be complete. The other technique widely used for sex determination in forensic science and archaeology is skeletal measurement. Its advantage is that several positions on one bone can be used, even if the bone is incomplete. In this study, classification and cluster analysis are applied to this technique, including k-th nearest neighbor classification, classification trees, Ward linkage clustering, k-means clustering, and two-step clustering.
The data contain 507 individuals and 9 skeletal (diameter) measurements, and the performance of the five methods is investigated by considering the apparent error rate (APER). The results indicate that the two-step cluster and k-th nearest neighbor methods are suitable for determining sex from human skeletal remains, as they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the classification tree, Ward linkage cluster, and k-means cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate with the holdout procedure or using misclassification costs, and different methods may lead to different conclusions.
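The APER computation itself is straightforward: score the classifier on the same data it was built from. The sketch below does this for a plain k-nearest-neighbour classifier, with synthetic stand-ins for the 507 × 9 measurement matrix (the real data are not reproduced here).

```python
import numpy as np

def knn_predict(X_train, y_train, X, k=5):
    """Plain k-nearest-neighbour majority vote (Euclidean distance)."""
    preds = []
    for x in X:
        idx = np.argsort(((X_train - x) ** 2).sum(axis=1))[:k]
        vals, counts = np.unique(y_train[idx], return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

def apparent_error_rate(y_true, y_pred):
    # APER: fraction misclassified when the classifier is scored on
    # the data it was built from -- an optimistic estimate, which is
    # exactly the caveat the abstract raises about holdout estimates
    return float(np.mean(y_true != y_pred))

rng = np.random.default_rng(0)
# toy stand-in for the skeletal diameter measurements of two sexes
males = rng.normal(1.0, 0.3, size=(60, 9))
females = rng.normal(0.0, 0.3, size=(60, 9))
X = np.vstack([males, females])
y = np.array([1] * 60 + [0] * 60)
aper = apparent_error_rate(y, knn_predict(X, y, X))
```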

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 246
4411 Reconfigurable Consensus Achievement of Multi Agent Systems Subject to Actuator Faults in a Leaderless Architecture

Authors: F. Amirarfaei, K. Khorasani

Abstract:

In this paper, reconfigurable consensus achievement for a team of agents with marginally stable linear dynamics and a single input channel is considered. The control algorithm is based on a first-order linear protocol. After a loss-of-effectiveness (LOE) fault occurs in one of the actuators, the control gain is redesigned, using the imperfect information about actuator effectiveness provided by the fault detection and identification module, so that consensus is still reached. The idea is based on modeling the change in effectiveness as a change of the Laplacian matrix. Then, as special cases of this class of systems, teams of single integrators and of double integrators are considered, and their behavior subject to an LOE fault is analyzed. The well-known relative-measurements consensus protocol is applied to leaderless teams of single-integrator and double-integrator systems, and the Gershgorin disk theorem is employed to determine whether the fault affects system stability and team consensus achievement. The analyses show that a loss-of-effectiveness fault in the actuator(s) of integrator systems affects neither system stability nor consensus achievement.
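A small simulation sketch of the single-integrator claim follows: scaling one agent's input channel by a loss-of-effectiveness factor rescales a row of the Laplacian, yet agreement is still reached. The graph topology, initial states, and fault magnitude are invented for illustration.

```python
import numpy as np

def consensus_sim(L, x0, eff=1.0, faulty=None, dt=0.01, steps=5000):
    """Euler-simulate xdot = -B L x for single integrators, where B
    scales the faulty agent's input channel by its remaining
    effectiveness (the LOE fault model)."""
    n = len(x0)
    B = np.eye(n)
    if faulty is not None:
        B[faulty, faulty] = eff          # loss-of-effectiveness fault
    x = x0.astype(float).copy()
    for _ in range(steps):
        x = x - dt * B @ L @ x
    return x

# Laplacian of an undirected path graph on 4 agents
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
x0 = np.array([3.0, -1.0, 2.0, 0.5])

healthy = consensus_sim(L, x0)
faulted = consensus_sim(L, x0, eff=0.4, faulty=2)
# both runs still converge to agreement, echoing the conclusion that
# an LOE fault on an integrator agent does not destroy consensus
```

Note that the fault changes the consensus value (the faulted run no longer averages the initial states), which is why the gain redesign in the paper matters even though stability is preserved.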

Keywords: multi-agent system, actuator fault, stability analysis, consensus achievement

Procedia PDF Downloads 328
4410 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer

Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang

Abstract:

In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate in a home-built hot-wall CVD reactor. Defect-selective etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are acquired to verify the IGSF's type. The CL spectrum shows the maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV, and 2.410 eV. The exciton gap Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism is proposed for the formation of the 2SSF. The steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers high.
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, at one intersection, the upper two bilayers of the four-bilayer step grow down and block the lower two, generating an IGSF. Second, the step flow grows over the IGSF successively and forms an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, Egx is estimated to be 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.

Keywords: cathodoluminescence, defect-selective etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide

Procedia PDF Downloads 309
4409 New Method for the Determination of Montelukast in Human Plasma by Solid Phase Extraction Using Liquid Chromatography Tandem Mass Spectrometry

Authors: Vijayalakshmi Marella, Nageswara Rao Pilli

Abstract:

This paper describes a simple, rapid, and sensitive liquid chromatography-tandem mass spectrometry assay for the determination of montelukast in human plasma, using montelukast-d6 as an internal standard. The analyte and the internal standard were extracted from 50 µL of human plasma via a solid phase extraction technique without evaporation, drying, or reconstitution steps. Chromatographic separation was achieved on a C18 column using a mixture of methanol and 5 mM ammonium acetate (80:20, v/v) as the mobile phase at a flow rate of 0.8 mL/min. Good linearity was obtained during the entire course of validation. Method validation was performed as per FDA guidelines, and the results met the acceptance criteria. A run time of 2.5 min per sample made it possible to analyze more samples in less time, thus increasing productivity. The proposed method was found to be applicable to clinical studies.
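As a hedged illustration of the linearity check in such validations, the sketch below fits peak-area ratio (analyte / internal standard) against nominal concentration and inspects r². The calibration points are invented, not the study's data.

```python
# Ordinary least-squares calibration line with coefficient of
# determination, computed from first principles.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

conc = [1, 5, 10, 50, 100, 250]                   # ng/mL, hypothetical
ratio = [0.011, 0.052, 0.099, 0.51, 1.02, 2.49]   # hypothetical ratios
slope, intercept, r2 = linear_fit(conc, ratio)
```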

Keywords: montelukast, tandem mass spectrometry, montelukast-d6, FDA guidelines

Procedia PDF Downloads 311
4408 Formulation of Suppositories Using Allanblackia Floribunda Butter as a Base

Authors: Mary Konadu

Abstract:

The rectal route of drug administration is becoming attractive to formulators because it can avoid hepatic first-pass effects, decrease gastrointestinal side effects, and avoid undesirable effects of meals on drug absorption. Suppositories are recognized as an alternative to the oral route when the patient is comatose or unable to swallow, or when the drug produces nausea or vomiting. Effective drug delivery with appropriate pharmaceutical excipients is key to producing clinically useful preparations. The high cost of available excipients, coupled with other disadvantages, has led to the exploration of potential excipients from natural sources. Allanblackia floribunda butter, a naturally occurring lipid, is used for medicinal, culinary, and cosmetic purposes. Different extraction methods (solvent (hexane) extraction, traditional/hot water extraction, and cold/screw press extraction) were employed to extract the oil. The different extracts of A. floribunda oil were analyzed for their physicochemical properties and mineral content. The oil was used as a base to formulate paracetamol and diclofenac suppositories, and quality control tests were carried out on the formulated suppositories. The percentage oil yields for the hexane, hot water, and cold press extracts were 50.40 ± 0.00, 37.36 ± 0.00, and 20.48 ± 0.00, respectively. The acid value, saponification value, iodine value, and free fatty acid content were 1.159 ± 0.065, 208.51 ± 8.450, 49.877 ± 0.690, and 0.583 ± 0.032, respectively, for the hexane extract; 3.480 ± 0.055, 204.672 ± 2.863, 49.04 ± 0.76, and 1.747 ± 0.028, respectively, for the hot water/traditional extract; and 4.43 ± 0.055, 192.05 ± 1.56, 49.96 ± 0.29, and 2.23 ± 0.03, respectively, for the cold press extract. Calcium, sodium, magnesium, potassium, and iron were the minerals found in the A. floribunda butter extracts.
The uniformity of weight, hardness, disintegration time, and uniformity of content were found to be within acceptable ranges, and the melting point ranges of all the suppositories were satisfactory. The cumulative drug release (%) of the paracetamol suppositories at 45 minutes was 90.19 ± 0.00 (hot water extract), 93.75 ± 0.00 (cold press extract), and 98.16 ± 0.00 (hexane extract). The diclofenac sodium suppositories had cumulative percentage releases of 81.60 ± 0.00 (hot water extract), 95.33 ± 0.00 (cold press extract), and 99.20 ± 0.00 (hexane extract). The physicochemical parameters obtained in this study show that Allanblackia floribunda seed oil is edible and can be used as a suppository base. The suppository formulation was successful, and the quality control tests conformed to pharmacopoeial standards.
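The acid and saponification values reported above come from the standard titration formulas (56.1 is the molar mass of KOH in g/mol). A hedged helper with invented titration volumes, chosen only to land near the reported magnitudes:

```python
KOH_MM = 56.1  # g/mol, molar mass of KOH

def acid_value(v_koh_ml, normality, sample_g):
    # mg of KOH needed to neutralise the free fatty acids in 1 g of oil
    return KOH_MM * v_koh_ml * normality / sample_g

def saponification_value(v_blank_ml, v_sample_ml, normality, sample_g):
    # mg of KOH needed to saponify 1 g of oil (HCl back-titration)
    return KOH_MM * (v_blank_ml - v_sample_ml) * normality / sample_g

av = acid_value(v_koh_ml=2.1, normality=0.1, sample_g=10.0)
sv = saponification_value(25.0, 10.2, 0.5, 2.0)
```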

Keywords: Allanblackia floribunda, paracetamol, diclofenac, suppositories

Procedia PDF Downloads 118
4407 Evaluation of Robust Feature Descriptors for Texture Classification

Authors: Jia-Hong Lee, Mei-Yi Wu, Hsien-Tsung Kuo

Abstract:

Texture is an important characteristic of real and synthetic scenes. Texture analysis plays a critical role in inspecting surfaces and provides important techniques for a variety of applications. Although several descriptors have been presented to extract texture features, the development of object recognition remains a difficult task due to the complex aspects of texture. Recently, robust and scale-invariant image features such as SIFT, SURF, and ORB have been used successfully in image retrieval and object recognition. In this paper, we compare the performance of these feature descriptors for texture classification using k-means clustering. Different classifiers, including k-NN, naive Bayes, back-propagation neural network, decision tree, and KStar, were applied to three texture image sets: UIUCTex, KTH-TIPS, and Brodatz. Experimental results reveal that SIFT achieves the best average accuracy on UIUCTex and KTH-TIPS, while SURF has the advantage on the Brodatz texture set. Among all the classifiers used, the back-propagation neural network works best on the test set classification.
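A compact sketch of the bag-of-visual-words pipeline that such comparisons rest on: local descriptors are quantised with k-means and images are classified by their nearest word-count histogram. Real SIFT/SURF/ORB extraction needs an image library, so random vectors stand in for descriptors here, and 1-NN stands in for the paper's classifier suite.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # plain Lloyd's k-means building the visual vocabulary
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers

def bow_histogram(descriptors, centers):
    # quantise descriptors against the vocabulary and build a
    # normalised visual-word histogram per image
    words = ((descriptors[:, None] - centers) ** 2).sum(-1).argmin(1)
    h = np.bincount(words, minlength=len(centers)).astype(float)
    return h / h.sum()

rng = np.random.default_rng(1)
def fake_image(cls):
    # stand-in "descriptors": the two texture classes cluster around
    # different centres in an 8-D descriptor space
    base = np.zeros(8) if cls == 0 else np.ones(8) * 3
    return base + rng.normal(scale=0.3, size=(50, 8))

train = [(fake_image(c), c) for c in [0, 1] * 10]
vocab = kmeans(np.vstack([d for d, _ in train]), k=6)
hists = [(bow_histogram(d, vocab), c) for d, c in train]

query = bow_histogram(fake_image(1), vocab)
nearest = min(hists, key=lambda hc: ((hc[0] - query) ** 2).sum())
```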

Keywords: texture classification, texture descriptor, SIFT, SURF, ORB

Procedia PDF Downloads 364
4406 A Quality Index Optimization Method for Non-Invasive Fetal ECG Extraction

Authors: Lucia Billeci, Gennaro Tartarisco, Maurizio Varanini

Abstract:

Fetal cardiac monitoring by fetal electrocardiogram (fECG) can provide significant clinical information about the health of the fetus. Despite this potential, the use of fECG in clinical practice has so far been quite limited due to the difficulty of measuring it. Recovering the fECG from signals acquired non-invasively with electrodes placed on the maternal abdomen is challenging, because the abdominal signals are a mixture of several components and the fetal one is very weak. This paper presents an approach to fECG extraction from abdominal maternal recordings that exploits the pseudo-periodicity of the fetal ECG. It consists of devising a quality index (fQI) for the fECG and finding the linear combinations of pre-processed abdominal signals that maximize this fQI (quality index optimization, QIO). It aims to improve on the most commonly adopted methods for fECG extraction, which are usually based on estimating and canceling the maternal ECG (mECG). The procedure for fECG extraction and fetal QRS (fQRS) detection is completely unsupervised and based on the following steps: signal pre-processing; mECG extraction and maternal QRS detection; mECG component approximation and canceling by weighted principal component analysis; and fECG extraction by fQI maximization followed by fetal QRS detection. The proposed method was compared with our previously developed procedure, which obtained the highest score at the PhysioNet/Computing in Cardiology Challenge 2013. That procedure was based on removing an mECG estimate obtained by principal component analysis (PCA) from the abdominal signals and applying independent component analysis (ICA) to the residual signals. Both methods were developed and tuned using 69 one-minute abdominal recordings with fetal QRS annotations from dataset A of the PhysioNet/Computing in Cardiology Challenge 2013.
The QIO-based and ICA-based methods were compared on two databases of abdominal maternal ECG available on the PhysioNet site. The first is the Abdominal and Direct Fetal Electrocardiogram Database (ADdb), which contains fetal QRS annotations and thus allows a quantitative performance comparison; the second is the Non-Invasive Fetal Electrocardiogram Database (NIdb), which does not contain fetal QRS annotations, so the comparison between the two methods can only be qualitative. On the annotated ADdb, the QIO method provided the performance indexes Sens = 0.9988, PPA = 0.9991, F1 = 0.9989, overcoming the ICA-based one, which provided Sens = 0.9966, PPA = 0.9972, F1 = 0.9969. The comparison on NIdb was performed by defining an index of quality for the fetal RR series; this index was higher for the QIO-based method than for the ICA-based one in 35 out of 55 records of the NIdb. The QIO-based method gave very high performance on both databases. These results support applying the algorithm in a fully unsupervised way in wearable devices for self-monitoring of fetal health.
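A toy sketch of the fQI idea follows: score a channel combination by its autocorrelation peak at lags compatible with a plausible fetal heart-rate range, then search over channel weights. Random search stands in for the paper's optimizer, and the synthetic three-channel mixture is entirely invented.

```python
import numpy as np

def quality_index(sig, fs, hr_range=(110, 180)):
    """Toy fECG quality index: peak normalised autocorrelation at lags
    compatible with the fetal heart-rate range (pseudo-periodicity)."""
    s = sig - sig.mean()
    ac = np.correlate(s, s, mode="full")[len(s) - 1:]
    ac = ac / ac[0]
    lo, hi = int(fs * 60 / hr_range[1]), int(fs * 60 / hr_range[0])
    return ac[lo:hi].max()

def qio_combine(channels, fs, n_trials=200, seed=0):
    # crude random search over unit-norm channel weights, standing in
    # for the paper's optimisation of the linear combination
    rng = np.random.default_rng(seed)
    best_w, best_q = None, -np.inf
    for _ in range(n_trials):
        w = rng.normal(size=channels.shape[0])
        w /= np.linalg.norm(w)
        q = quality_index(w @ channels, fs)
        if q > best_q:
            best_w, best_q = w, q
    return best_w, best_q

fs = 250
t = np.arange(0, 6, 1 / fs)
fetal = np.sin(2 * np.pi * 1.2 * t) ** 20        # spiky ~144 bpm train
mix = np.array([1.0, -0.5, 0.2])                 # invented channel gains
channels = (np.random.default_rng(2).normal(scale=0.3, size=(3, len(t)))
            + np.outer(mix, fetal))
w_best, q_best = qio_combine(channels, fs)
```

The optimized combination scores a markedly higher quality index than any single noisy channel, which is the property the QIO step exploits after mECG cancellation.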

Keywords: fetal electrocardiography, fetal QRS detection, independent component analysis (ICA), optimization, wearable

Procedia PDF Downloads 274
4405 Differential Impacts of Whole-Growth-Duration Warming on the Grain Yield and Quality between Early and Late Rice

Authors: Shan Huang, Guanjun Huang, Yongjun Zeng, Haiyuan Wang

Abstract:

The impacts of whole-growth-duration warming on grain yield and quality in double rice cropping systems remain largely unknown. In this study, a two-year field warming experiment covering the whole growth duration was conducted with two inbred indica rice cultivars (Zhongjiazao 17 and Xiangzaoxian 45) for the early season and two hybrid indica rice cultivars (Wanxiangyouhuazhan and Tianyouhuazhan) for the late season. The results showed that whole-growth warming did not affect early rice yield but significantly decreased late rice yield; the decrease was caused by reduced grain weight, which may be related to increased plant respiration and reduced translocation of dry matter accumulated during the pre-heading phase under warming. Whole-growth warming improved the milling quality of late rice but decreased that of early rice; moreover, the chalky rice rate and chalkiness degree were increased by 20.7% and 33.9% for early rice and by 37.6% and 51.6% for late rice, respectively. We found that the crude protein content of milled rice was significantly increased by warming in both early and late rice, which would deteriorate eating quality. In addition, compared with the control treatment, the setback of late rice was significantly reduced by 17.8% under warming, while that of early rice was not significantly affected. These results suggest that the negative impacts of whole-growth warming on grain quality may be more severe in early rice than in late rice. Therefore, adaptation in both rice breeding and agronomic practices is needed to alleviate the effects of climate warming on double rice cropping systems. Climate-smart agricultural practices ought to be implemented to mitigate the detrimental effects of warming on rice grain quality.
For instance, fine-tuning the application rate and timing of inorganic nitrogen fertilizers, together with introducing organic amendments and cultivating heat-tolerant rice varieties, can help reduce the negative impact of rising temperatures on rice quality. Furthermore, to understand comprehensively the influence of climate warming on rice grain quality, future research should encompass a wider range of rice cultivars and experimental sites.

Keywords: climate warming, double rice cropping, dry matter, grain quality, grain yield

Procedia PDF Downloads 30