Search results for: entity extraction
1639 Green Revolution and Reckless Use of Water and Its Implication on Climate Change Leading to Desertification: Situation of Karnataka, India
Authors: Arun Das
Abstract:
One of the basic objectives of independent India five decades ago was to meet the increasing demand for food from its growing population. Self-sufficiency in food production was attained through the launch of the green revolution program, whose repercussions were not realized at the time. Many projects were undertaken; in particular, major and minor irrigation projects were executed to harness river water in the dryland regions of Karnataka. In the elevated topographical lands, extraction of underground water was a solace given by the government to protect the interests of dryland farmers whose land did not fall under the command area: free borewell digging, pump sets, and electricity were provided. Thus, self-sufficiency was achieved. Contrary to this, continuous long-term extraction of water for agriculture from borewells and in the irrigated tracts has led to a two-way effect: first, soil leaching (alkalinity and salinity); second, depletion of underground water to incredible depths has pushed the natural process toward irreparable damage, such that the land can no longer support even tiny plants like grass, discouraging human and animal habitation. Both processes are silently turning the southwestern, central, northeastern, and northwestern regions of Karnataka into desert. The grave situation of Karnataka's green revolution is addressed in this paper to warn against the reckless use of water, and some suggestions are recommended based on ground information.
Keywords: alkalinity, desertification, green revolution, salinity, water
Procedia PDF Downloads 283
1638 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method
Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat
Abstract:
Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color, texture, etc. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical method of bridging the semantic gap is automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. Firstly, images are segmented using maximum intra-cluster variance and the Firefly algorithm, a swarm-based approach with high convergence speed and low computation cost that searches for optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain representative features. After that, the images are annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning, with high precision and low complexity. Experiments are performed using the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
Keywords: feature extraction, feature selection, image annotation, classification
Procedia PDF Downloads 586
1637 Optimization Based Extreme Learning Machine for Watermarking of an Image in DWT Domain
Authors: Ram Pal Singh, Vikash Chaudhary, Monika Verma
Abstract:
In this paper, we propose an implementation of an optimization-based Extreme Learning Machine (ELM) for watermarking the B-channel of a color image in the discrete wavelet transform (DWT) domain. ELM, a regularization algorithm, is based on generalized single-hidden-layer feed-forward neural networks (SLFNs); however, the hidden-layer parameters, generally called the feature mapping in the context of ELM, need not be tuned every time. This paper presents the watermark embedding and extraction processes with the help of ELM, and the results are compared with machine learning models already used for watermarking. Here, a cover image is divided into a suitable number of non-overlapping blocks of the required size, and DWT is applied to each block to transform it into the low-frequency sub-band domain. Basically, ELM provides a unified learning platform with a feature mapping, that is, a mapping between the hidden layer and output layer of the SLFN, which is applied for watermark embedding and extraction in a cover image. ELM has widespread application, from binary and multiclass classification to regression and function estimation. Unlike SVM-based algorithms, which achieve suboptimal solutions with high computational complexity, ELM can provide better generalization performance with very low complexity. The efficacy of the optimization-based ELM algorithm is measured by quantitative and qualitative parameters on a watermarked image, even when the image is subjected to different types of geometrical and conventional attacks.
Keywords: BER, DWT, extreme learning machine (ELM), PSNR
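The core of ELM as described above, a random, untuned hidden layer followed by a least-squares solve for the output weights only, can be sketched as follows. This is a minimal illustration on toy regression data, not the paper's watermarking pipeline; the layer size and regularization constant are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data standing in for block features
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X.sum(axis=1))

# ELM: the hidden layer is random and never tuned
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
b = rng.normal(size=n_hidden)                # random biases
H = np.tanh(X @ W + b)                       # hidden-layer feature mapping

# Only the output weights are learned, via regularized least squares
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

pred = H @ beta
mse = float(np.mean((pred - y) ** 2))
print(round(mse, 4))
```

Because the only trained parameters come from one linear solve, training cost stays far below iterative SVM or neural-network fitting, which is the complexity advantage the abstract refers to.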
Procedia PDF Downloads 311
1636 The Extraction of Sage Essential Oil and the Improvement of Sleeping Quality for Female Menopause by Sage Essential Oil
Authors: Bei Shan Lin, Tzu Yu Huang, Ya Ping Chen, Chun Mel Lu
Abstract:
This research is divided into two parts. The first part adopts supercritical carbon dioxide fluid extraction to extract sage essential oil (Salvia officinalis) and examines the differences when the procedure is run under different pressure conditions, along with the composition of the extracted oil. The second part discusses the effect of aromatherapy with the extracted sage essential oil on improving sleeping quality for women in menopause. The extracted sage substance was tested by DPPH radical inhibition to identify its antioxidant capacity, and its components were analyzed by gas chromatography-mass spectrometry. The two pressure conditions yielded different results: at 3000 psi, the extract had an IC50 of 180.94 mg/L, indicating stronger antioxidant capacity than the IC50 of 657.43 mg/L obtained at 1800 psi, and the extraction yield at 3000 psi was 1.05%, higher than the 0.68% obtained at 1800 psi. The experimental data also show that the substance extracted at 3000 psi contains more materials than the one extracted at 1800 psi. The main overlapping materials are compounds of cyclic ethers, flavonoids, and terpenes; cyclic ethers and flavonoids have soothing and calming functions and can be applied to relieve cramps and eliminate menopause disorders. The second part of the research applies the extracted sage essential oil in aromatherapy for women in menopause and discusses its effect on improving sleeping quality. The research adopts the Swedish upper-back massage approach, evaluates sleeping quality with the Pittsburgh Sleep Quality Index, and detects changes with a heart rate variability apparatus. The experimental group, which received aromatherapy with the extracted sage essential oil, showed better results than the control group in SDNN, low-frequency, and high-frequency measures. According to the statistical analysis of the Pittsburgh Sleep Quality Index, the intervention improved sleep quality, proving that the extracted sage essential oil has a significant effect on increasing parasympathetic nerve activity and is able to improve sleeping quality for women in menopause.
Keywords: supercritical carbon dioxide fluid extraction, Salvia officinalis, aromatherapy, Swedish massage, Pittsburgh sleep quality index, heart rate variability, parasympathetic nerves
Procedia PDF Downloads 120
1635 An Exhaustive All-Subsets Examination of Trade Theory on WTO Data
Authors: Masoud Charkhabi
Abstract:
We examine trade theory with the following motivation. The full set of World Trade Organization data is organized into region-year pairs, each treated as a different entity. Topological Data Analysis reveals that among the 16 regions and 240 region-year pairs there exists a distinguishable group of region-period pairs. The generally accepted periods of shifts from dissimilar-dissimilar to similar-similar trade in goods among regions are examined from this new perspective. The period breaks are treated as cumulative and are flexible. This type of all-subsets analysis is motivated by computer science and is made possible with lossy compression and graph theory. The results question many patterns in similar-similar to dissimilar-dissimilar trade. They also show indications of economic shifts that only later become evident in other economic metrics.
Keywords: econometrics, globalization, network science, topological data analysis, trade theory, visualization, world trade
Procedia PDF Downloads 372
1634 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition
Authors: A. Shoiynbek, K. Kozhakhmet, P. Menezes, D. Kuanyshbay, D. Bayazitov
Abstract:
Speech emotion recognition (SER) has received increasing research interest in recent years. Most research has used emotional speech collected under controlled conditions, with recordings made by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: (1) the emotions are not natural, which means machines learn to recognize fake emotions; (2) the emotions are very limited in quantity and poor in variety of speaking; (3) SER is language-dependent; (4) consequently, each time researchers want to start work on SER, they need to find a good emotional database for their language. In this paper, we propose an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describe the sequence of actions of the proposed approach. One of the first objectives in this sequence is the speech detection task. The paper gives a detailed description of a speech detection model based on a fully connected deep neural network for the Kazakh and Russian languages. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To illustrate the working capacity of the developed model, we have performed an analysis of speech detection and extraction on real tasks.
Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset
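The paper's DNN detector itself is not reproduced here; as a minimal stand-in for the speech-detection step, a frame-energy voice activity sketch can mark which frames of a clip contain signal. The frame length, sampling rate, and threshold below are all assumptions, not the paper's settings.

```python
import numpy as np

def energy_vad(signal, frame_len=400, threshold_ratio=0.1):
    """Mark frames as speech when their energy exceeds a fraction
    of the loudest frame's energy (a crude stand-in for the DNN)."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).mean(axis=1)
    return energy > threshold_ratio * energy.max()

# Synthetic clip: 1 s silence, 1 s of a 440 Hz tone, 1 s silence
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
clip = np.concatenate([np.zeros(sr), tone, np.zeros(sr)])

flags = energy_vad(clip)
print(int(flags.sum()), "of", len(flags), "frames flagged as speech")
```

A real detector would replace the energy rule with the trained fully connected network operating on MFCC features of each frame.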
Procedia PDF Downloads 101
1633 Comparison of the Polyphenolic Profile of a Berry from Two Different Sources, Using an Optimized Extraction Method
Authors: G. Torabian, A. Fathi, P. Valtchev, F. Dehghani
Abstract:
The superior polyphenol content of Sambucus nigra berries has high health potential for the production of nutraceutical products. Numerous factors influence the polyphenol content of the final products, including the berries' source and the subsequent processing steps. The aim of this study is to compare the polyphenol content of berries from two different sources and to optimise the polyphenol extraction process from elderberries. Berries from source B showed more acceptable physical properties than those from source A; a single berry from source B was double the size and weight (both wet and dry) of a source A berry. Despite the appropriate physical characteristics of source B berries, their polyphenolic profile was inferior: source A berries had 2.3-fold higher total anthocyanin content and nearly twice the total phenolic content and total flavonoid content of source B. Moreover, the results show that almost 50 percent of the phenolic content of the berries is entrapped within their skin and pulp and potentially cannot be extracted by press juicing. To address this challenge and to increase the total polyphenol yield of the extract, we used a cold-shock blade grinding method to break the cell walls. The results show that using cultivars with higher phenolic content, as well as using the whole fruit including juice, skin, and pulp, can increase polyphenol yield significantly and thus may boost the potential of using elderberries in therapeutic products.
Keywords: different sources, elderberry, grinding, juicing, polyphenols
Procedia PDF Downloads 294
1632 Thermochemical Modelling for Extraction of Lithium from Spodumene and Prediction of Promising Reagents for the Roasting Process
Authors: Allen Yushark Fosu, Ndue Kanari, James Vaughan, Alexandre Changes
Abstract:
Spodumene is a lithium-bearing mineral of great interest due to the increasing demand for lithium in emerging electric and hybrid vehicles. The conventional method of processing the mineral requires an inevitable thermal transformation of the α-phase to the β-phase, followed by roasting with suitable reagents to produce lithium salts for downstream processes. The selection of an appropriate roasting reagent is key to the success of the process and the overall lithium recovery. Several studies have been conducted to identify good reagents for process efficiency, leading to sulfation, alkaline, chlorination, fluorination, and carbonizing methods of lithium recovery from the mineral. HSC Chemistry is a thermochemical software package that can be used to model metallurgical process feasibility and predict possible reaction products prior to experimental investigation. The software was employed to investigate and explain the characteristics of the various reagents employed in the literature for spodumene roasting up to 1200°C. The simulation indicated that all reagents used for sulfation and alkaline roasting were feasible in the direction of lithium salt production. Chlorination was only feasible when Cl2 and CaCl2 were used as chlorination agents, but not NaCl or KCl. Depending on the lithium salt formed during carbonizing and fluorination, the process was either spontaneous or non-spontaneous throughout the temperature range investigated. The HSC software was further used to simulate and predict some promising reagents which may be equally good for roasting the mineral for efficient lithium extraction but have not yet been considered by researchers.
Keywords: thermochemical modelling, HSC chemistry software, lithium, spodumene, roasting
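The feasibility test behind such simulations is the sign of the Gibbs energy change, ΔG = ΔH − TΔS: a roasting reaction is spontaneous when ΔG < 0, and a reaction that is non-spontaneous at room temperature may become spontaneous at roasting temperature. A minimal sketch of that check (the enthalpy and entropy values are illustrative placeholders, not HSC data for any actual spodumene reaction):

```python
def gibbs_energy(dH_kJ, dS_J_per_K, T_K):
    """dG = dH - T*dS, with dH in kJ/mol and dS in J/(mol*K)."""
    return dH_kJ - T_K * dS_J_per_K / 1000.0

# Hypothetical endothermic, entropy-raising roasting reaction
dH, dS = 120.0, 150.0
for T in (298, 800, 1473):  # 25 °C up to roughly 1200 °C
    dG = gibbs_energy(dH, dS, T)
    print(T, "K:", round(dG, 1), "kJ/mol,",
          "spontaneous" if dG < 0 else "non-spontaneous")
```

For such a reaction the simulation would report feasibility only above the crossover temperature T = ΔH/ΔS, which is the kind of trend HSC plots across the 25 to 1200°C range.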
Procedia PDF Downloads 159
1631 Postcolonialism and Feminist Dialogics: Re-Imaging Cultural Exclusion in the Nigerian Feminist Fiction
Authors: Muhammad Dahiru
Abstract:
A contestable polemic in postcolonialism is the Western universalist conception of the people of a vast continent such as Africa as homogeneous. Quite often, the postcolonial African woman is seen as a single entity in Western cultural and literary feminist theorisations. The debate between so-called Western feminist scholarship and postcolonial/third-world feminists that began in the late 1980s focuses on this universalisation of women's concerns as monolithic. This article argues that the universalising assumption that all women share similar concerns, not only in Africa as a continent but even in Nigeria as a country, is misleading because of cultural differences. The article is a dialogic reading of Nigerian literature, arguing that there is no culturally normative perspective on Nigerian feminist fiction because of the multifaceted and multicultural concerns of women writers from the country's different cultural regions. The article concludes that this can better be read and appreciated through the lens of M. M. Bakhtin's theory of dialogism.
Keywords: cultural exclusion, dialogics, Nigerian feminist fiction, postcolonialism
Procedia PDF Downloads 207
1630 One-Class Support Vector Machine for Sentiment Analysis of Movie Review Documents
Authors: Chothmal, Basant Agarwal
Abstract:
Sentiment analysis classifies a given review document as a positive or negative polar document. Sentiment analysis research has increased tremendously in recent times due to its large number of applications in industry and academia. Sentiment analysis models can be used to determine the opinion of a user towards any entity or product; e-commerce companies, for example, can use such models to improve their products on the basis of users' opinions. In this paper, we propose a new one-class Support Vector Machine (one-class SVM) based sentiment analysis model for movie review documents. In the proposed approach, we initially extract features from one class of documents, and then use the one-class SVM model to test whether a given new document lies within the model or is an outlier. Experimental results show the effectiveness of the proposed sentiment analysis model.
Keywords: feature selection methods, machine learning, NB, one-class SVM, sentiment analysis, support vector machine
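The train-on-one-class, flag-outliers idea can be sketched with scikit-learn's OneClassSVM. The Gaussian feature vectors below are toy stand-ins for extracted document features, and the kernel and nu values are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# The model is trained only on one class ("positive" documents) and
# then asked whether new documents fit that class.
pos_train = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
pos_test = rng.normal(loc=0.0, scale=1.0, size=(20, 5))
outliers = rng.normal(loc=6.0, scale=1.0, size=(20, 5))  # far from the class

clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(pos_train)

# predict() returns +1 for inliers (same class) and -1 for outliers
inlier_rate = float((clf.predict(pos_test) == 1).mean())
outlier_rate = float((clf.predict(outliers) == -1).mean())
print(inlier_rate, outlier_rate)
```

In the paper's setting, the outlier decision plays the role of the polarity decision: documents that do not fit the modeled class are assigned to the other polarity.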
Procedia PDF Downloads 517
1629 Clinical Validation of an Automated Natural Language Processing Algorithm for Finding COVID-19 Symptoms and Complications in Patient Notes
Authors: Karolina Wieczorek, Sophie Wiliams
Abstract:
Introduction: Patient data is often collected in Electronic Health Record (EHR) systems for purposes such as providing care as well as reporting data. This information can be re-used to validate data models in clinical trials or in epidemiological studies. Manual validation of automated tools is vital to pick up errors in processing and to provide confidence in the output. Mentioning a disease in a discharge letter does not necessarily mean that a patient suffers from this disease; many letters discuss a diagnostic process, different tests, or whether a patient has a certain disease. The COVID-19 dataset in this study used natural language processing (NLP), an automated algorithm which extracts information related to COVID-19 symptoms, complications, and medications prescribed within the hospital. Free-text clinical patient notes are rich sources of information which contain patient data not captured in a structured form, hence the use of named entity recognition (NER) to capture additional information. Methods: Patient data (discharge summary letters) were exported and screened by an algorithm to pick up relevant terms related to COVID-19. A list of 124 Systematized Nomenclature of Medicine (SNOMED) Clinical Terms was provided in Excel with corresponding IDs. Two independent medical student researchers were provided with this dictionary of SNOMED terms to refer to when screening the notes; they worked on two separate datasets, called "A" and "B", respectively. Notes were screened to check whether the correct term had been picked up by the algorithm and to ensure that negated terms were not picked up. Results: Implementation in the hospital began on March 31, 2020, and the first EHR-derived extract was generated for use in an audit study on June 04, 2020.
The dataset has contributed to large, priority clinical trials (including the International Severe Acute Respiratory and Emerging Infection Consortium (ISARIC), by bulk upload to REDCap research databases) and to local research and audit studies. Successful sharing of EHR-extracted datasets requires communicating the provenance and quality, including completeness and accuracy, of the data. The validation of the algorithm gave the following results: precision 0.907, recall 0.416, and F-score 0.570. The percentage enhancement from NLP-extracted terms compared to regular data extraction alone was low (0.3%) for relatively well-documented data such as previous medical history, but higher for complications (16.6%), presenting illness (29.53%), chronic procedures (30.3%), and acute procedures (45.1%). Conclusions: This automated NLP algorithm is shown to be useful in facilitating patient data analysis and has the potential to be used in larger-scale clinical trials to assess potential study exclusion criteria for participants in the development of vaccines.
Keywords: automated, algorithm, NLP, COVID-19
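The reported metrics follow the standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN), and F-score as their harmonic mean. A sketch of the computation, with confusion counts that are hypothetical and merely chosen so the rounded precision and recall come out near the reported 0.907 and 0.416:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts, not the study's raw numbers
p, r, f = prf(tp=127, fp=13, fn=178)
print(round(p, 3), round(r, 3), round(f, 3))
```

The pattern visible here, high precision with low recall, matches the study's finding: terms the algorithm extracts are usually correct, but many mentioned terms are missed.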
Procedia PDF Downloads 102
1628 Unveiling Bengali Women’s Appreciation of Modernizing Japan
Authors: Lopamudra Malek
Abstract:
It is well known that Japan was closed to the outside world until 1853, and that Commodore Matthew Calbraith Perry played a pivotal role in Japan's abrupt exposure to modernization and to facing the real world as an Asian entity. As Japan opened its doors to the world, Indians, and in particular four women from Bengal, visited Japan: Hariprova Takeda, Sarojnalini Dutta, Santa Devi, and Parul Devi. All of them came from different backgrounds, yet there were some bewildering similarities in their depictions. How they conveyed their exposure to modernizing Japan is the aim of the research. It should be mentioned that two of them were directly influenced by Rabindranath Tagore. The methodology followed in this research relies on secondary source materials, such as books and articles. Japan was changing itself relentlessly towards modernization and westernization, and these four women witnessed the changing Japan; how that change is reflected in their write-ups and autobiographies is the fundamental part of the research. As all of them were women, they compared themselves with Japanese women. The finding of the research is that, astonishingly, all of them perceived Japan as a country where women had more financial sovereignty and freedom of thought compared to India in those days.
Keywords: empowerment, Japan, modernization, women
Procedia PDF Downloads 214
1627 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform
Authors: S. Hutasavi, D. Chen
Abstract:
The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide an opportunity, with their accessibility and computational power, for those countries to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery based on GEE facilities. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), Built-up Index (BUI), and Modified Built-up Index (MBUI), and applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after adding nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
Keywords: built-up area extraction, google earth engine, adaptive thresholding method, rapid mapping
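The underlying spectral index is simple to compute outside GEE as well. A sketch of NDBI on toy surface-reflectance values; the band choice follows Landsat 8's convention (NIR = band 5, SWIR1 = band 6), but the pixel values themselves are made up:

```python
import numpy as np

def ndbi(swir, nir):
    """Normalized Difference Built-up Index: built-up surfaces reflect
    more strongly in SWIR than NIR, so NDBI > 0 suggests built-up cover."""
    swir = np.asarray(swir, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (swir - nir) / (swir + nir)

# Toy reflectance pairs: a built-up pixel and a vegetated pixel
swir = np.array([0.30, 0.15])
nir = np.array([0.20, 0.35])
print(ndbi(swir, nir))
```

BUI and MBUI build on the same per-pixel arithmetic, combining NDBI with vegetation and bare-soil indices before the thresholding step.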
Procedia PDF Downloads 125
1626 Optimizing Human Diet Problem Using Linear Programming Approach: A Case Study
Authors: P. Priyanka, S. Shruthi, N. Guruprasad
Abstract:
Health is a common theme in most cultures; in fact, all communities have their own concepts of health as part of their culture, yet health continues to be a neglected entity. Planning of the human diet should be done very carefully, by selecting food items or groups of food items and deciding the composition involved. Low price and good taste of foods are regarded as the two major factors for optimal human nutrition. Linear programming techniques have been used extensively for human diet formulation for a good number of years. Through the process, we mainly apply the simplex method, a very useful tool based on the elementary row operations of linear algebra, together with the other rules the method prescribes, to solve the problem. This study is an attempt to develop a programming model for optimal planning and the best use of nutrient ingredients.
Keywords: diet formulation, linear programming, nutrient ingredients, optimization, simplex method
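A tiny instance of the diet problem can be sketched with SciPy's simplex-family LP solver. The two foods, their prices, and the nutrient requirements below are hypothetical, chosen only to show the structure: minimize cost subject to minimum nutrient intakes.

```python
import numpy as np
from scipy.optimize import linprog

cost = [2.0, 3.0]                      # price per unit of food 1, food 2
nutrients = np.array([[400, 200],      # calories per unit of each food
                      [10, 30]])       # protein per unit of each food
minimums = np.array([2000, 55])        # daily calorie and protein requirements

# linprog expects A_ub @ x <= b_ub, so each ">= minimum" constraint
# is negated into "-nutrients @ x <= -minimums"
res = linprog(c=cost,
              A_ub=-nutrients, b_ub=-minimums,
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, round(res.fun, 2))
```

The optimum sits at the vertex where both nutrient constraints are tight, which is exactly the behavior the simplex method exploits when walking between basic feasible solutions.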
Procedia PDF Downloads 558
1625 Integration of Internet-Accessible Resources in the Field of Mobile Robots
Authors: B. Madhevan, R. Sakkaravarthi, R. Diya
Abstract:
The number and variety of mobile robot applications are increasing day by day, both in industry and in our daily lives. First developed as tools, mobile robots can nowadays be integrated as entities in Internet-accessible resources (IaR). The present work is organized around four potential resources: cloud computing, the Internet of Things, big data analysis, and co-simulation. The focus lies on integrating, analyzing, and discussing the need for integrating Internet-accessible resources, the challenges deriving from such integration, and how these issues have been tackled. Hence, the research investigates the concepts of Internet-accessible resources from the perspective of autonomous mobile robots, with an overview of the performance of currently available database systems. IaR, a worldwide network of interconnected objects, can be considered an evolutionary step for mobile robots and constitutes an integral part of the future Internet, with data analysis spanning both physical and virtual things.
Keywords: internet-accessible resources, cloud computing, big data analysis, internet of things, mobile robot
Procedia PDF Downloads 389
1624 Effect of Extraction Methods on the Fatty Acids and Physicochemical Properties of Serendipity Berry Seed Oil
Authors: Olufunmilola A. Abiodun, Adegbola O. Dauda, Ayobami Ojo, Samson A. Oyeyinka
Abstract:
Serendipity berry (Dioscoreophyllum cumminsii Diels) is a tropical dioecious rainforest vine native to tropical Africa. The vine grows during the rainy season and is used mainly as a sweetener. The sweetener in the berry is known as monellin, which is sweeter than sucrose; it is extracted from the fruits, and the seed is discarded. The discarded seeds contain bitter principles but give a high yield of oil. Serendipity oil was extracted using three methods (n-hexane, expression, and expression/n-hexane), and the fatty acids and physicochemical properties of the oil obtained were determined. The oil obtained was clear and liquid, with an odour similar to hydrocarbons. The oil yields were 38.59, 12.34, and 49.57% for the hexane, expression, and expression-hexane methods, respectively: the seed gave a high yield of oil, especially using the combination of expression and hexane, and a low yield using expression alone. The refractive index values obtained were 1.443, 1.442, and 1.478 for the hexane, expression, and expression-hexane methods, respectively. The peroxide value obtained for expression-hexane was higher than those for hexane and expression. The viscosities of the oils were 125.8, 128.76, and 126.87 cm³/s for the hexane, expression, and expression-hexane methods, respectively, showing that the oil from the expression method was more viscous than the other oils. The major fatty acids in serendipity seed oil were, in decreasing order, oleic acid (62.81%), linoleic acid (22.65%), linolenic acid (6.11%), palmitic acid (5.67%), and stearic acid (2.21%); oleic acid, a monounsaturated fatty acid, had the highest value. Total unsaturated fatty acids were 91.574, 92.256, and 90.426% for hexane, expression, and expression-hexane, respectively. The combination of expression and hexane extraction produced a high yield of serendipity oil, which could be refined for food and non-food applications.
Keywords: serendipity seed oil, expression method, fatty acid, hexane
Procedia PDF Downloads 273
1623 Techno-Economic Analysis (TEA) of Circular Economy Approach in the Valorisation of Pig Meat Processing Wastes
Authors: Ribeiro A., Vilarinho C., Luisa A., Carvalho J.
Abstract:
The pig meat industry generates large volumes of by- and co-products, such as blood, bones, skin, trimmings, organs, viscera, and skulls, during slaughtering and meat processing, and these must be treated and disposed of ecologically. The yield of these by-products has been reported to account for about 10% to 15% of the value of the live animal in developed countries, although animal by-products account for about two-thirds of the animal after slaughter. The principal wastes produced throughout the pig meat value chain were selected for further valorization: pig manure, pig bones, fats, skins, pig hair, wastewater, wastewater sludges, and other animal subproducts of type III. According to the potential valorization options, these wastes will be converted into biomethane, fertilizers (phosphorus and digestate), hydroxyapatite, and protein hydrolysates (keratin and collagen). This work includes comprehensive techno-economic analyses (TEA) for each valorization route or applied technology. Metrics such as Net Present Value (NPV), Internal Rate of Return (IRR), and payback period were used to evaluate economic feasibility. For biogas production, the scenarios using pig manure, wastewater sludges, and mixed grass and leguminous wastes showed high economic feasibility, with a positive payback period, NPV, and IRR. The optimal scenario, combining pig manure with mixed grass and leguminous wastes, had a payback period of 1.2 years and produced 427,6269 m³ of biomethane annually. Regarding the chemical extraction of phosphorus and nitrogen, the results proved that the process is economically unviable due to negative cash flows, despite high recovery rates.
The TEA of hydrolysis and extraction of keratin hydrolysates indicates that a unit processing and valorizing 10 tons of pig hair per year for the production of keratin hydrolysate has an NPV of 907,940 €, an IRR of 13.07%, and a payback period of 5.41 years, all of which suggest a highly promising project to explore in the future. By contrast, the results for hydrolysis and extraction of collagen hydrolysates showed a process that is economically unviable, with negative cash flows in all scenarios due to the high fat content of the raw materials; the valorization of 10 tons of pig skin had a negative cash flow of 453,743.88 €. The TEA results for the extraction and purification of hydroxyapatite from pig bones with pyrolysis indicate that a unit processing and valorizing 10 tons of pig bones per year for the production of hydroxyapatite has an NPV of 1,274,819.00 €, an IRR of 65.43%, and a payback period of 1.5 years over a timeline of 10 years with a discount rate of 10%. These valorization routes and the circular economy and bio-refinery approach offer significant contributions to sustainable bio-based operations within the agri-food industry. This approach transforms waste into valuable resources, enhancing both environmental and economic outcomes and contributing to a more sustainable and circular bioeconomy.
Keywords: techno-economic analysis (TEA), pig meat processing wastes, circular economy, bio-refinery
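The indicators used throughout this TEA can be recomputed from a cash-flow series. The investment and annual returns below are hypothetical, not figures from the paper; they only illustrate how NPV (at a 10% discount rate, as in the study) and the payback period are obtained.

```python
def npv(rate, cash_flows):
    """Net present value of a cash-flow series, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_years(cash_flows):
    """First year in which cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None

# Hypothetical project: one upfront investment, then 10 equal annual returns
flows = [-100_000] + [25_000] * 10
print(round(npv(0.10, flows), 2), payback_years(flows))
```

The IRR reported alongside these metrics is the discount rate at which this NPV function crosses zero, typically found numerically by root-finding on `npv`.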
Procedia PDF Downloads 15
1622 Kitchenary Metaphors in Hindi-Urdu: A Cognitive Analysis
Authors: Bairam Khan, Premlata Vaishnava
Abstract:
The ability to conceptualize one entity in terms of another allows us to communicate through metaphors. This central feature of human cognition has evolved with the development of language, and the processing of metaphors is effortless, requiring no conscious appraisal. South Asians, like other speech communities, have been using kitchenary [culinary] metaphors in a simple yet interesting way and are known for bringing them into new and unique constellations wherever they are. This composite feature of our language is used to communicate in a precise and compact manner and maneuvers the expression. The present study explores the role of kitchenary metaphors in the making and shaping of idioms by applying Cognitive Metaphor Theories. Drawing on examples from a corpus of adverts and print and electronic media, the study looks at the metaphorical language used by real people in real situations. The overarching theme throughout the study is that kitchenary metaphors are powerful tools of expression in Hindi-Urdu.
Keywords: cognitive metaphor theories, kitchenary metaphors, Hindi-Urdu print and electronic media, grammatical structure of kitchenary metaphors of Hindi-Urdu
Procedia PDF Downloads 93
1621 Automatic Classification of Lung Diseases from CT Images
Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari
Abstract:
Pneumonia is a lung disease that creates congestion in the chest, and severe congestion in such pneumonic conditions can lead to loss of life. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. Early prediction and classification of such lung diseases help to reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes raw computerized tomography (CT) scans of the patient's chest as input and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT images; this CNN model ensures feature learning with highly effective 1D feature extraction for each input CT image. The output of the 2D CNN model is then normalized using the Min-Max technique. The second step of the proposed hybrid model involves training and classification using different classifiers. Simulation outcomes on a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.
Keywords: CT scan, COVID-19, deep learning, image processing, lung disease classification
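As a toy stand-in for the pipeline described (not the paper's HDLA), a single hand-written 2D convolution producing a 1D feature vector, followed by Min-Max normalization, illustrates the feature-extraction and normalization steps; the kernel and image are hypothetical:

```python
import numpy as np

def conv_feature(image, kernel):
    """One valid 3x3 convolution + ReLU, flattened to a 1D feature vector --
    a toy stand-in for the 2D CNN feature extractor."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return np.maximum(out, 0).ravel()  # ReLU, then flatten

def min_max_normalize(features, eps=1e-8):
    """Scale a feature vector's components into [0, 1] (Min-Max technique)."""
    f = np.asarray(features, dtype=float)
    f_min = f.min(axis=-1, keepdims=True)
    f_max = f.max(axis=-1, keepdims=True)
    return (f - f_min) / (f_max - f_min + eps)

# Hypothetical 4x4 "CT patch" and averaging kernel
patch = np.ones((4, 4))
features = conv_feature(patch, np.ones((3, 3)) / 9.0)
normalized = min_max_normalize(np.array([0.0, 5.0, 10.0]))
```

In a real CAD system the convolutional features would come from a trained deep network, and the normalized vectors would feed the downstream classifiers.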
Procedia PDF Downloads 155
1620 Rapid Identification and Diagnosis of the Pathogenic Leptospiras through Comparison among Culture, PCR and Real Time PCR Techniques from Samples of Human and Mouse Feces
Authors: S. Rostampour Yasouri, M. Ghane, M. Doudi
Abstract:
Leptospirosis is one of the most significant infectious and zoonotic diseases, with global spread. The disease causes economic losses and human fatalities in various countries, including the northern provinces of Iran. The aim of this research is to identify and compare rapid diagnostic techniques for pathogenic leptospiras, considering the multifaceted clinical manifestation of the disease and the premature death of patients. In the spring and summer of 2020-2022, 25 fecal samples were collected from suspected leptospirosis patients and 25 fecal samples from mice residing in the rice fields and factories of Tonekabon city. Samples were prepared by centrifugation and passage through membrane filters. The culture technique used liquid and solid EMJH media during one month of incubation at 30°C, after which the media were examined microscopically. DNA extraction was conducted with an extraction kit, and leptospiras were identified by PCR and Real-time PCR (SYBR Green) techniques using the lipL32-specific primer. Among the patients, 11 samples (44%) and 8 samples (32%) were determined to be pathogenic Leptospira by Real-time PCR and PCR, respectively. Among the mice, 9 samples (36%) and 3 samples (12%) were determined to be pathogenic Leptospira by the same techniques, respectively. Although culture is considered the gold standard technique, it is not fast, owing to the slow growth of pathogenic Leptospira and the lack of colony formation of some species. Real-time PCR allowed rapid diagnosis with much higher accuracy than PCR, because PCR could not reliably identify samples with a lower microbial load.
Keywords: culture, pathogenic leptospiras, PCR, real time PCR
Procedia PDF Downloads 85
1619 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets
Authors: Kothuri Sriraman, Mattupalli Komal Teja
Abstract:
In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, whether of irregular, digit, or character shape. Objects and internal objects are quite difficult to extract when the image structure contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects with the SASK algorithm, which focuses mainly on recognizing the number of internal objects in a given image, shadow-free and error-free. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally the hull detection system. Detecting sub-regional hulls can increase machine-learning capability in character detection, and the method can also be extended to hull recognition in irregularly shaped objects, such as black holes with their intensities in space exploration. Layered hulls are those having structured layers inside, which is useful in military services and traffic control for identifying the number of vehicles or persons. The proposed SASK algorithm is thus helpful in identifying such regions and can be useful in the subsequent decision process (to clear traffic, or to identify the number of persons on the opponent's side in a war).
Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm
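The hull-detection step can be illustrated with a generic convex-hull routine (Andrew's monotone chain); the paper's SASK algorithm itself is not published, so this is only an analogous sketch of how an outer hull is extracted from boundary points:

```python
def convex_hull(points):
    """Andrew's monotone chain convex hull over 2D points (tuples).
    A generic stand-in for the hull-detection step, not the SASK algorithm."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for seq, out in ((pts, lower), (reversed(pts), upper)):
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
    # Concatenate, dropping each chain's last point (it repeats the other's first)
    return lower[:-1] + upper[:-1]

# A square with one interior point: the hull keeps only the four corners
boundary = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5)]
print(convex_hull(boundary))
```

Counting sub-regional (internal) hulls would then amount to repeating such a hull extraction on each internal cluster found by the clustering stage.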
Procedia PDF Downloads 349
1618 A Robust Spatial Feature Extraction Method for Facial Expression Recognition
Authors: H. G. C. P. Dinesh, G. Tharshini, M. P. B. Ekanayake, G. M. R. I. Godaliyadda
Abstract:
This paper presents a new spatial feature extraction method based on principal component analysis (PCA) and Fisher Discriminant Analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification but also reduces the dimensionality of the feature space of the pattern samples. In this method, each grayscale image is first considered in its entirety as the measurement matrix. Then, the principal components (PCs) of the row vectors of this matrix, and the variance of these row vectors along the PCs, are estimated; this ensures the preservation of the spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector is constructed for a given image. Finally, FDA is used to define a set of bases in a reduced-dimension subspace such that optimal clustering is achieved: FDA defines an inter-class scatter matrix and an intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. To match a test image against the training set, a cosine-similarity-based Bayesian classification is used. The proposed method was tested on the Cohn-Kanade database and the JAFFE database. It was observed that the proposed method, which incorporates spatial information to construct an optimal feature space, outperforms the standard PCA- and FDA-based methods.
Keywords: facial expression recognition, principal component analysis (PCA), Fisher discriminant analysis (FDA), eigen-filter, cosine similarity, Bayesian classifier, F-measure
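A simplified sketch of the PCA projection and cosine-similarity matching stages follows (the FDA step and the Bayesian formulation are omitted for brevity, and the data and class labels are hypothetical):

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]           # top-k eigenvectors as columns
    return Xc @ W, W

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_class(test_vec, train_vecs, labels):
    """Assign the label of the training sample most similar in cosine terms --
    a simplification of the cosine-similarity-based Bayesian classifier."""
    sims = [cosine_similarity(test_vec, t) for t in train_vecs]
    return labels[int(np.argmax(sims))]

# Hypothetical 2D feature vectors for two expression classes
train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = ["happy", "sad", ]  # illustrative names only
labels = ["happy", "happy", "sad", "sad"]
print(nearest_class(np.array([0.95, 0.05]), train, labels))
```

In the full method, `pca_project` would be followed by an FDA transform before the cosine matching is applied.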
Procedia PDF Downloads 425
1617 Kitchenary Metaphors in Hindi-Urdu: A Cognitive Analysis
Authors: Bairam Khan, Premlata Vaishnava
Abstract:
The ability to conceptualize one entity in terms of another allows us to communicate through metaphors. This central feature of human cognition has evolved with the development of language, and the processing of metaphors is effortless, requiring no conscious appraisal. South Asians, like other speech communities, have been using kitchenary [culinary] metaphors in a simple yet interesting way and are known for bringing them into new and unique constellations wherever they are. This composite feature of our language is used to communicate in a precise and compact manner and maneuvers the expression. The present study explores the role of kitchenary metaphors in the making and shaping of idioms by applying Cognitive Metaphor Theories. Drawing on examples from a corpus of adverts and print and electronic media, the study looks at the metaphorical language used by real people in real situations. The overarching theme throughout the study is that kitchenary metaphors are powerful tools of expression in Hindi-Urdu.
Keywords: cognitive metaphor theory, source domain, target domain, signifier-signified, kitchenary, ethnocultural elements of South Asia and the Hindi-Urdu language
Procedia PDF Downloads 77
1616 The Long-Term Effects of Immediate Implantation, Early Implantation and Delayed Implantation at Aesthetics Area
Authors: Xing Wang, Lin Feng, Xuan Zou, Hongchen Liu
Abstract:
Immediate implantation after tooth extraction is considered the ideal way to retain the alveolar bone, but some scholars believe the aesthetic results of early implantation are more reliable. In this retrospective study, 89 patients were followed for up to 5 years. Assessment indicators included implant survival (peri-implant infection, implant loosening, shedding, crowns and occlusion), aesthetics (color and fullness of the gums, papilla height, probing depth, X-ray alveolar crest height, the patient's own aesthetic satisfaction, doctors' aesthetic scores), repair of defects around the implant (changes in peri-implant bone height and thickness, whether autologous bone grafts were used, whether absorbable or non-absorbable repair materials were used), treatment time, cost, and the use of antibiotics. The results demonstrated no significant difference in the long-term success rates of immediate implantation, early implantation, and delayed implantation (p > 0.05). However, the immediate implantation group could achieve better aesthetic results after two years (p < 0.05), though with an increased risk of complications and failures (p < 0.05). High-risk indicators include gingival recession, labial bone wall damage, thin gingival biotypes, poor implant position, and poor occlusal restoration. Whichever implantation method was selected, the extraction method and bone augmentation technique were observed to be significant factors in the aesthetic outcome (p < 0.05).
Keywords: immediate implantation, long-term effects, aesthetics area, dental implants
Procedia PDF Downloads 356
1615 In Vitro Antioxidant and Cytotoxic Activities Against Human Oral Cancer and Human Laryngeal Cancer of Limonia acidissima L. Bark Extracts
Authors: Kriyapa Lairungruang, Arunporn Itharat
Abstract:
Limonia acidissima L. (LA) (common name: wood apple; Thai name: ma-khwit) is a medicinal plant that has long been used in Thai traditional medicine. Its bark is used for the treatment of diarrhea, abscesses, wound healing, and inflammation, and it is also used in oral cancer. This research therefore aimed to investigate the antioxidant and cytotoxic activities of LA bark extracts produced by various extraction methods. Different extraction procedures were used to extract LA bark for biological activity testing: boiling in water, maceration with 95% ethanol, maceration with 50% ethanol, and boiling in water of the residues from the 95% and 50% ethanolic macerations. All extracts were tested for antioxidant activity using the DPPH radical scavenging assay, and for cytotoxic activity against human laryngeal epidermoid carcinoma (HEp-2) cells and human oral epidermoid carcinoma (KB) cells using the sulforhodamine B (SRB) assay. The 95% ethanolic extract of LA bark showed the highest antioxidant activity, with an EC50 value of 29.76±1.88 µg/ml. For cytotoxic activity, the 50% ethanolic extract showed the best activity against HEp-2 and KB cells, with IC50 values of 9.55±1.68 and 18.90±0.86 µg/ml, respectively. This study demonstrated that the 95% ethanolic extract of LA bark has moderate antioxidant activity, while the 50% ethanolic extract provides potent cytotoxic activity against HEp-2 and KB cells. These results confirm the traditional use of LA for the treatment of oral cancer and laryngeal cancer, and also support its ongoing use.
Keywords: antioxidant activity, cytotoxic activity, laryngeal epidermoid carcinoma, Limonia acidissima L., oral epidermoid carcinoma
Procedia PDF Downloads 478
1614 Tumor Size and Lymph Node Metastasis Detection in Colon Cancer Patients Using MR Images
Authors: Mohammadreza Hedyehzadeh, Mahdi Yousefi
Abstract:
Colon cancer is one of the most common cancers, and its prevalence is predicted to increase due to poor eating habits. Nowadays, because of people's busy lifestyles, the consumption of fast food is increasing; therefore, the diagnosis and treatment of this disease are of particular importance. To determine the best treatment approach for each specific colon cancer patient, the oncologist needs to know the stage of the tumor. The most common method of determining the tumor stage is the TNM staging system. In this system, M indicates the presence of metastasis, N indicates the extent of spread to the lymph nodes, and T indicates the size of the tumor. Clearly, determining all three of these parameters requires an imaging method, and the gold standard imaging protocols for this purpose are CT and PET/CT. In CT imaging, the use of X-rays raises the cancer risk and the absorbed dose of the patient, while the PET/CT method suffers from a lack of access to the device due to its high cost. Therefore, in this study, we aimed to estimate the tumor size and the extent of its spread to the lymph nodes using MR images. More than 1300 MR images were collected from the TCIA portal, and in the first (pre-processing) step, histogram equalization was applied to improve image quality and the images were resized to a uniform size. Two expert radiologists, who have worked on colon cancer cases for more than 21 years, segmented the images and extracted the tumor region. The next steps are feature extraction from the segmented images and classification of the data into three classes: T0N0, T3N1, and T3N2. In this article, the VGG-16 convolutional neural network has been used to perform both of the above-mentioned tasks, i.e., feature extraction and classification. This network has 13 convolution layers for feature extraction and three fully connected layers with the softmax activation function for classification.
To validate the proposed method, 10-fold cross-validation was used: the data were randomly divided into three parts, training (70% of the data), validation (10% of the data), and the rest for testing. This was repeated 10 times; each time, the accuracy, sensitivity, and specificity of the model were calculated, and the average of the ten repetitions is reported as the result. The accuracy, specificity, and sensitivity of the proposed method on the test dataset were 89.09%, 95.8%, and 96.4%. Compared to previous studies, the use of a safe imaging technique (MRI) and the avoidance of predefined hand-crafted imaging features to determine the stage of colon cancer patients are some of the study's advantages.
Keywords: colon cancer, VGG-16, magnetic resonance imaging, tumor size, lymph node metastasis
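The reported accuracy, sensitivity, and specificity can be computed from the predictions in a one-vs-rest fashion over the three TNM classes; a minimal sketch, with hypothetical labels (the VGG-16 model itself would come from a deep-learning framework), is:

```python
import numpy as np

def per_class_metrics(y_true, y_pred, classes):
    """Overall accuracy, plus one-vs-rest sensitivity and specificity per class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    accuracy = float((y_true == y_pred).mean())
    metrics = {}
    for c in classes:
        tp = np.sum((y_true == c) & (y_pred == c))
        tn = np.sum((y_true != c) & (y_pred != c))
        fp = np.sum((y_true != c) & (y_pred == c))
        fn = np.sum((y_true == c) & (y_pred != c))
        metrics[c] = {
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # recall
            "specificity": tn / (tn + fp) if tn + fp else 0.0,
        }
    return accuracy, metrics

# Hypothetical fold: four test cases, one misclassified
acc, m = per_class_metrics(
    ["T0N0", "T0N0", "T3N1", "T3N2"],
    ["T0N0", "T3N1", "T3N1", "T3N2"],
    ["T0N0", "T3N1", "T3N2"],
)
```

In the 10-fold setup described above, these per-fold values would simply be averaged over the ten repetitions.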
Procedia PDF Downloads 59
1613 Color Image Compression/Encryption/Contour Extraction using 3L-DWT and SSPCE Method
Authors: Ali A. Ukasha, Majdi F. Elbireki, Mohammad F. Abdullah
Abstract:
Data security is needed in data transmission, storage, and communication. This paper is divided into two parts. The work deals with color images, which are decomposed into red, green, and blue channels. The blue and green channels are compressed using a 3-level discrete wavelet transform. The Arnold transform is used to change the locations of the red channel's pixels as an image scrambling process. All channels are then encrypted separately using a key image of the same size as the original, generated using private keys and modulo operations. XOR and modulo operations are performed between the encrypted channel images to change the pixel values. Contours extracted from the recovered color images can be obtained, with an accepted level of distortion, using the single step parallel contour extraction (SSPCE) method. Experiments have demonstrated that the proposed algorithm can fully encrypt 2D color images and completely reconstruct them without any distortion. It is also shown that the algorithm has extremely strong security against attacks such as salt-and-pepper noise and JPEG compression, proving that color images can be protected at a higher security level. The presented method has an easy hardware implementation and is suitable for multimedia protection in real-time applications such as wireless networks and mobile phone services.
Keywords: SSPCE method, image compression, salt and pepper attacks, bitplane decomposition, Arnold transform, color image, wavelet transform, lossless image encryption
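The scrambling and XOR steps can be sketched as follows; the cat-map matrix [[1, 1], [1, 2]] is the standard Arnold transform on a square channel, and the key image here is simply any same-sized byte array (the paper's key-generation scheme is not reproduced):

```python
import numpy as np

def arnold_scramble(channel, iterations=1):
    """Arnold cat map: (x, y) -> (x + y, x + 2y) mod N on an N x N channel.
    A bijection (det = 1), so pixel values are only relocated, never lost."""
    n = channel.shape[0]
    out = channel
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def xor_encrypt(channel, key_image):
    """XOR each pixel with a same-sized key image; applying it twice decrypts."""
    return np.bitwise_xor(channel, key_image)

# Hypothetical 4x4 red channel and key image
red = np.arange(16, dtype=np.uint8).reshape(4, 4)
key = np.random.default_rng(0).integers(0, 256, size=(4, 4), dtype=np.uint8)
cipher = xor_encrypt(arnold_scramble(red, iterations=2), key)
```

Because both steps are exactly invertible, decryption is lossless, matching the distortion-free reconstruction the abstract reports.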
Procedia PDF Downloads 518
1612 Hereditary Angioedema: Case Presentation and Review of Anaesthetic Implications
Authors: Joshua Chew, Vesa Cheng, David Thomson
Abstract:
Background: Hereditary angioedema (HAE), or C1 esterase deficiency, is a relatively rare entity with the potential for significant anesthetic complications. Methods: A literature review was performed of published cases of surgery in patients with HAE. Results were limited to English-language reports, and cases were examined for management strategies and the successful prevention of acute attacks. Results: The literature revealed C1 esterase inhibitors as the most commonly used agents for surgical prophylaxis. Other therapeutic targets described included kallikrein inhibitors and bradykinin B2 receptor antagonists. Conclusions: Therapeutic targets that exist for the management of acute attacks in HAE have been successfully employed in the setting of surgery. The data are currently limited and cannot yet serve as a firm evidence base, but the outcomes seen so far are positive and reassuring for the prospective anesthetic management of this potentially fatal condition.
Keywords: anesthesia, C1 esterase deficiency, hereditary angioedema, surgical prophylaxis
Procedia PDF Downloads 404
1611 Speech Detection Model Based on Deep Neural Networks Classifier for Speech Emotions Recognition
Authors: Aisultan Shoiynbek, Darkhan Kuanyshbay, Paulo Menezes, Akbayan Bekarystankyzy, Assylbek Mukhametzhanov, Temirlan Shoiynbek
Abstract:
Speech emotion recognition (SER) has received increasing research interest in recent years. It is common practice to utilize emotional speech collected under controlled conditions, recorded by actors imitating and artificially producing emotions in front of a microphone. There are four issues with that approach: the emotions are not natural, meaning that machines learn to recognize fake emotions; the emotions are very limited in quantity and poor in speaking variety; there is some language dependency in SER; and consequently, each time researchers want to start work on SER, they need to find a good emotional database in their language. This paper proposes an approach to creating an automatic tool for speech emotion extraction based on facial emotion recognition and describes the sequence of actions involved. One of the first objectives in this sequence is the speech detection issue. The paper provides a detailed description of a speech detection model based on a fully connected deep neural network for Kazakh and Russian. Despite the high speech detection results for Kazakh and Russian, the described process is suitable for any language. To investigate the working capacity of the developed model, an analysis of speech detection and extraction on real tasks has been performed.
Keywords: deep neural networks, speech detection, speech emotion recognition, Mel-frequency cepstrum coefficients, collecting speech emotion corpus, collecting speech emotion dataset, Kazakh speech dataset
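The forward pass of a small fully connected speech/non-speech classifier over a single MFCC frame can be sketched as below; the layer sizes and weights are hypothetical placeholders, not those of the trained Kazakh/Russian model:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def speech_probability(mfcc_frame, weights):
    """Forward pass of a small fully connected network: MFCC frame -> P(speech).
    `weights` is a list of (W, b) pairs; all but the last layer use ReLU,
    and the final layer ends in a sigmoid for binary speech/non-speech output."""
    h = np.asarray(mfcc_frame, dtype=float)
    for W, b in weights[:-1]:
        h = relu(W @ h + b)
    W, b = weights[-1]
    return float(sigmoid(W @ h + b).ravel()[0])

# Hypothetical architecture: 13 MFCCs -> 4 hidden units -> 1 output
rng = np.random.default_rng(42)
weights = [
    (rng.normal(size=(4, 13)), np.zeros(4)),
    (rng.normal(size=(1, 4)), np.zeros(1)),
]
p = speech_probability(rng.normal(size=13), weights)  # a value in (0, 1)
```

In the full pipeline, frames scored above a threshold would be kept as speech and passed on to the emotion-recognition stage.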
Procedia PDF Downloads 26
1610 Physico-Mechanical Behavior of Indian Oil Shales
Authors: K. S. Rao, Ankesh Kumar
Abstract:
The search for alternative energy sources to petroleum has intensified because of increasing demand and the depletion of petroleum reserves. The importance of oil shales as an economically viable substitute has therefore grown manyfold in the last 20 years, and technologies like hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for their formation include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms. These rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach to oil extraction is drastically breaking the bonds of the organics, which involves a retorting process. The two approaches to retorting are surface retorting and in-situ processing, of which in-situ processing is the more environmentally friendly. Its three steps are fracturing, injection to achieve communication, and fluid migration to the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400°C, the kerogen decomposes into oil, gas, and residual carbon in a process referred to as pyrolysis. It is therefore very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observations that these rocks behave anisotropically, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources.
Knowing the engineering behavior under the above conditions will allow us to simulate deep-ground retorting conditions numerically and experimentally. Many researchers have investigated the effect of organic content on the engineering behavior of oil shale, but the coupled effect of the organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time, and studies have indicated that these coals and carbonaceous shales constitute the principal source rocks that have generated the hydrocarbons produced from the region. Rock cores of representative samples were collected by on-site drilling, as coring in the laboratory is very difficult due to the rock's highly anisotropic nature. Different tests were performed to understand the petrology of these samples, and chemical analyses were done to quantify the organic content exactly. The mechanical properties of these rocks were investigated at different anisotropy angles, and the results obtained from petrology and chemical analysis were correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that the organic content is negatively correlated with tensile strength, compressive strength, and modulus of elasticity.
Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior
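The reported negative correlation between organic content and mechanical strength can be checked with a Pearson correlation on measured pairs; the values below are hypothetical illustrations, not the study's data:

```python
import numpy as np

# Hypothetical measurements: organic (kerogen) content (%) vs tensile strength (MPa)
organic_content = np.array([2.0, 5.0, 8.0, 12.0, 18.0, 25.0])
tensile_strength = np.array([9.5, 8.7, 7.9, 6.5, 4.8, 3.1])

# Pearson correlation coefficient between the two series
r = np.corrcoef(organic_content, tensile_strength)[0, 1]
print(f"Pearson r = {r:.3f}")  # strongly negative, consistent with the reported trend
```

The same computation, repeated for compressive strength and modulus of elasticity against organic content, would quantify the negative correlations the study describes.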
Procedia PDF Downloads 347