Search results for: extracting numerals
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 394

364 Comparative Study between Two Methods for Extracting Pomegranate Juice and Their Effect on Product Quality

Authors: Amani Aljahani

Abstract:

The purpose of the study was to identify the physical and chemical properties of pomegranate juices and to evaluate their sensory quality. The samples were collected from local markets and included four types of pomegranate produced in the western and southern regions of the kingdom. The juices were extracted by manual squeezing and by centrifugal force, and were analyzed periodically for their content of organic acids, total acidity, glucose and fructose, total sugars, and anthocyanins. A panel of 30 judges evaluated the juices for color, smell, taste, consistency, and general acceptance using a scale prepared for that purpose. Results showed that pomegranate juices were acidic in nature (pH between 3.56 and 4.27). The major organic acids were citric, tartaric, malic, and oxalic acids; total organic acidity was between 596.32 and 763.49 ng/100 ml and increased over storage time, whereas total acidity remained almost stable over time except for the southern-produced juice. The major monosaccharides in pomegranate juices were glucose and fructose, and their concentrations varied with storage. On average, glucose concentration was between 6.68 and 7.71 g/100 ml, while fructose concentration was between 6.72 and 7.98 g/100 ml. Total sugar content averaged 16% and dropped with storage. Anthocyanin concentration increased after five hours of storage, then dropped and stabilized over time regardless of the method of treatment. In addition, sensory evaluation showed general acceptance of the juices in terms of color, flavor, and consistency, but the preferred juice was the western kind extracted by squeezing.

Keywords: extracting, pomegranate, juice, quality

Procedia PDF Downloads 324
363 Green Extraction of Patchoulol from Patchouli Leaves Using Ultrasound-Assisted Ionic Liquids

Authors: G. C. Jadeja, M. A. Desai, D. R. Bhatt, J. K. Parikh

Abstract:

Green extraction techniques are fast making their way into various industrial sectors due to stringent governmental regulations banning the use of toxic chemicals and to increasing health and environmental awareness. The present work describes an ionic-liquid-based sonication method for selectively extracting patchoulol from the leaves of patchouli. 1-Butyl-3-methylimidazolium tetrafluoroborate ([Bmim]BF4) and N,N,N,N’,N’,N’-hexaethyl-butane-1,4-diammonium dibromide (a dicationic ionic liquid, DIL) were selected for extraction. Ultrasound-assisted ionic liquid extraction was employed, considering the concentration of ionic liquid (4–8 %, w/w), ultrasound power (50–150 W for [Bmim]BF4 and 20–80 W for DIL), temperature (30–50 °C) and extraction time (30–50 min) as the major parameters influencing the yield of patchoulol. Using the Taguchi method, the parameters were optimized, and analysis of variance (ANOVA) was performed to find the most influential factor in the selected extraction method. In the case of [Bmim]BF4, the optimum conditions were found to be 4 % (w/w) ionic liquid concentration, 50 W power, 30 °C temperature and an extraction time of 30 min; the yield obtained under these conditions was 3.99 mg/g. In the case of DIL, the optimum conditions were 6 % (w/w) ionic liquid concentration, 80 W power, 30 °C temperature and an extraction time of 40 min, for which the yield obtained was 4.03 mg/g. Temperature was found to be the most significant factor in both cases. Extraction time was an insignificant parameter when extracting the product using [Bmim]BF4, while in the case of DIL, power was found to be the least significant factor affecting the process. Thus, a green method of recovering patchoulol is proposed.
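
To make the optimization step above concrete, the following is a minimal sketch of how a Taguchi "larger-is-better" signal-to-noise ratio can be computed per factor level before ANOVA ranks the factors; the factor levels and replicate yields are illustrative stand-ins, not the authors' data.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi 'larger-is-better' signal-to-noise ratio for replicate yields y."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicate yields (mg/g) for three temperature levels.
yields_by_temperature = {
    30: [3.95, 4.02],
    40: [3.60, 3.71],
    50: [3.20, 3.28],
}

for level, y in yields_by_temperature.items():
    print(f"temperature {level} C: S/N = {sn_larger_is_better(y):.2f} dB")
```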

Keywords: green extraction, ultrasound, patchoulol, ionic liquids

Procedia PDF Downloads 330
362 Variability Management of Contextual Feature Model in Multi-Software Product Line

Authors: Muhammad Fezan Afzal, Asad Abbas, Imran Khan, Salma Imtiaz

Abstract:

The Software Product Line (SPL) paradigm is used for the development of a family of software products that share common and variable features. A feature model is a domain artifact of SPL that consists of common and variable features with predefined relationships and constraints. Multiple SPLs, such as those for mobile phones and tablets, contain a number of similar common and variable features. Reusing common and variable features from the different domains of an SPL is a complex task due to the external relationships and constraints of features in the feature model. To increase the reusability of feature model resources from domain engineering, it is necessary to manage the commonality of features at the level of SPL application development. In this research, we propose an approach that combines multiple SPLs into a single domain and converts them to a common feature model. Extracting the common features from different feature models is more effective and reduces cost and time to market for application development. For extracting features from multiple SPLs, the proposed framework consists of three steps: 1) find the variation points, 2) find the constraints, and 3) combine the feature models into a single feature model on the basis of the variation points and constraints. By using this approach, the reusability of features from multiple feature models can be increased. The impact of this research is to reduce development cost and time to market and to increase the number of products derived from the SPL.
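
As a concrete illustration of the three-step combination, the sketch below merges two toy feature models using plain Python sets; the feature names, the simple "requires" constraint format, and the helper function are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical feature models from two SPLs: features plus "requires" constraints.
spl_phone = {"features": {"call", "sms", "camera", "gps"},
             "constraints": {("camera", "storage")}}
spl_tablet = {"features": {"call", "camera", "wifi", "storage"},
              "constraints": {("camera", "storage"), ("wifi", "call")}}

def combine_feature_models(models):
    all_features = set().union(*(m["features"] for m in models))
    common = set.intersection(*(m["features"] for m in models))
    # Step 1: variation points = features not shared by every SPL.
    variation_points = all_features - common
    # Step 2: collect the cross-tree constraints from all models.
    constraints = set().union(*(m["constraints"] for m in models))
    # Step 3: combine everything into a single feature model.
    return {"common": common, "variation_points": variation_points,
            "constraints": constraints}

merged = combine_feature_models([spl_phone, spl_tablet])
print(merged)
```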

Keywords: software product line, feature model, variability management, multi-SPLs

Procedia PDF Downloads 39
361 A Corpus-Based Diachronic Study on Indefinite Pronominal Anaphora in English

Authors: Qiong Hu

Abstract:

From Old English to Modern English, the gender category has changed from a grammatical gender system to a natural gender system. The word classes that reflect gender have narrowed from pronouns, adjectives, and numerals in Old English to only pronouns in Modern English. In present-day English, the third-person singular pronouns are the only paradigm that keeps gender intact. 'He' and 'they' used as epicene pronouns are one of the two most common phenomena of gender disagreement (the other being uses that go against natural gender). Considering the convenience of corpus concordance, epicene pronoun usage is selected for this study, in which the anaphors are restricted to possessives (e.g., his, their) and the antecedents are restricted to compound indefinite pronouns (e.g., someone, somebody). Factors like written form (e.g., someone vs. some one), the semantics of the prefixes (e.g., some- vs. any-) and suffixes (e.g., -one vs. -body), as well as frequency, are taken into consideration. Statistics indicate that 'their' is increasingly used as the epicene pronoun while 'his' declines (when both written forms are considered). This is influenced by social factors such as the feminist movement, as well as by the semantics and frequency of the antecedents. 'Their' (plural) used in anaphoric reference to various indefinite pronouns (singular in form) can also be treated as number variation in third-person pronouns, and the trend of 'their' replacing 'his' can also be treated as a change in the number category. Among the different candidates for the gender-neutral function, 'their' proves to be the most promising one based on the diachronic data. This does not rule out new competitors in the future, which remains to be seen.
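
A minimal sketch of the kind of corpus count such a study relies on, assuming a plain-text corpus and a regular-expression window from a compound indefinite pronoun to the nearest possessive anaphor; the example sentences and the single-sentence window are illustrative assumptions.

```python
import re
from collections import Counter

corpus = [
    "Someone left their umbrella in the hall.",
    "If anybody calls, take his name and number.",
    "Everyone should bring their own notebook.",
]

antecedents = r"(?:someone|somebody|anyone|anybody|everyone|everybody|no one|nobody)"
anaphors = r"(his|their)"
# Antecedent followed by a possessive anaphor within the same sentence (a crude window).
pattern = re.compile(rf"\b{antecedents}\b[^.?!]*?\b{anaphors}\b", re.IGNORECASE)

counts = Counter(m.group(1).lower() for s in corpus for m in pattern.finditer(s))
print(counts)  # e.g. Counter({'their': 2, 'his': 1})
```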

Keywords: language variation and change, epicene pronouns, gender, number

Procedia PDF Downloads 160
360 A U-Net Based Architecture for Fast and Accurate Diagram Extraction

Authors: Revoti Prasad Bora, Saurabh Yadav, Nikita Katyal

Abstract:

In the context of educational data mining, the use case of extracting information from images containing both text and diagrams is of high importance. Hence, document analysis requires extracting diagrams from such images and processing the text and diagrams separately. To the authors' best knowledge, none of the many approaches for extracting tables, figures, etc., satisfies the need for real-time processing with the high accuracy required in many applications. In the education domain, diagrams can have varied characteristics, e.g., line-based content such as geometric diagrams, chemical bonds, and mathematical formulas. There are two broad categories of approaches that try to solve similar problems, namely traditional computer vision based approaches and deep learning approaches. The traditional computer vision based approaches mainly leverage connected components and distance-transform based processing and hence perform well only in very limited scenarios. The existing deep learning approaches leverage either YOLO or Faster R-CNN architectures and suffer from a performance-accuracy tradeoff. This paper proposes a U-Net based architecture that formulates diagram extraction as a segmentation problem. The proposed method provides similar accuracy with a much faster extraction time compared to the mentioned state-of-the-art approaches. Further, the segmentation mask in this approach allows the extraction of diagrams of irregular shapes.
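
As an illustration of the segmentation formulation, the sketch below defines a small U-Net-style encoder-decoder in PyTorch that predicts a per-pixel diagram mask; the depth, channel counts, and input size are illustrative choices and not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level U-Net that predicts a binary 'diagram vs. background' mask."""
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(1, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = double_conv(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)
        self.head = nn.Conv2d(16, 1, 1)   # per-pixel diagram logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

model = TinyUNet()
page = torch.rand(1, 1, 256, 256)           # a grayscale document image (dummy data)
mask_logits = model(page)                    # (1, 1, 256, 256) segmentation logits
target = (torch.rand_like(mask_logits) > 0.5).float()
loss = nn.BCEWithLogitsLoss()(mask_logits, target)
print(mask_logits.shape, loss.item())
```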

Keywords: computer vision, deep-learning, educational data mining, faster-RCNN, figure extraction, image segmentation, real-time document analysis, text extraction, U-Net, YOLO

Procedia PDF Downloads 101
359 Functionalized Magnetic Iron Oxide Nanoparticles for Extraction of Protein and Metal Nanoparticles from Complex Fluids

Authors: Meenakshi Verma, Mandeep Singh Bakshi, Kultar Singh

Abstract:

Magnetic nanoparticles have received considerable attention in view of their diverse applications, which arise primarily from their response to an external magnetic field. The magnetic behaviour of magnetic nanoparticles (NPs) helps them in numerous ways. The most important among these is the ease with which they can be purified and separated from the media in which they are present merely by applying an external magnetic field. This exceptional ease of separation from aqueous media enables magnetic NPs to be used for extracting/removing metal pollutants from complex aqueous media. Functionalized magnetic NPs can be used to extract metallic impurities if these are favourably adsorbed on the NP surfaces. We have successfully used magnetic NPs as vehicles for removing gold and silver NPs from complex fluids. The NPs loaded with gold and silver NP pollutant fractions have been easily removed from the aqueous media by applying an external magnetic field. Similarly, we have used magnetic NPs for extracting protein from complex media, followed by repeated washing with pure water to eliminate unwanted surface-adsorbed components for quantitative estimation. The purified, protein-loaded magnetic NPs are best analyzed with SDS-PAGE, not only for characterization but also for separating the protein fractions. A collective review of the results indicates that we have synthesized surfactant-coated iron oxide NPs and functionalized them with selected materials. These surface-active magnetic NPs work very well for the extraction of metallic NPs from the aqueous bulk and make the whole process environmentally sustainable. In addition, magnetic NP-Au/Ag/Pd hybrids have excellent protein-extracting properties. They make it much easier to extract the impurities as well as protein fractions under an external magnetic field, without any complex conventional purification methods.

Keywords: magnetic nanoparticles, protein, functionalized, extraction

Procedia PDF Downloads 76
358 Approaches to Tsunami Mitigation and Prevention: Explaining Architectural Strategies for Reducing Urban Risk

Authors: Hedyeh Gamini, Hadi Abdus

Abstract:

A tsunami, as a natural disaster, is composed of waves that are usually caused by severe movements of the sea floor. Although a tsunami and its consequences cannot be prevented, by examining past tsunamis, extracting key points on how to deal with such an event, and learning from it, a positive step can be taken to reduce the vulnerability of human settlements and the risk of this phenomenon in architecture and urbanism. The method is a review that examines written documents and reliable internet sources related to managing and reducing the vulnerability of human settlements in the face of a tsunami. This paper explores the tsunamis in Indonesia (2004), Sri Lanka (2004), and Japan (2011); one of the study objectives has been to understand how these events were dealt with and to extract key points and lessons from them in terms of reducing the vulnerability of human settlements. Finally, strategies to prevent and reduce the vulnerability of communities at risk of tsunamis are offered in terms of architecture and urban planning. According to what is obtained from the study of these recent tsunamis, the authorities' handling of them, the crisis management, and the manner of construction, it can be concluded that there are generally four ways to reduce the vulnerability of human settlements against a tsunami: (1) constructing tall buildings with openings on the first floor so that water can flow easily underneath, oriented so that water passes easily along the sides; (2) constructing multi-purpose centers that can be used for vertical evacuation during disasters; (3) constructing buildings in core forms with a diagonal orientation to the coastline; and (4) building physical barriers (natural and synthetic) such as water dams, earth mounds, sea walls, and coastal forests.

Keywords: tsunami, architecture, reducing vulnerability, human settlements, urbanism

Procedia PDF Downloads 365
357 How Educational Settings Can Influence Development of Creativity through Play in Young Children

Authors: D. M. W. Munasinghe

Abstract:

This study focuses on how teachers view and use play to influence creativity in preschool children. Play is strongly featured in most discussions about creativity in young children. However, it was noted through direct observation that most preschool teachers are not concerned with promoting play to develop the child's creativity. Therefore, this study attempts to investigate how teachers use play for the development of creativity in the preschool environment. The survey method was used as the research design, and interviews, observations, and document perusal were used as data collection methods. The sample consisted of 20 preschools from selected administrative divisions in the Colombo district. It was revealed that a majority of preschool teachers used folk games as a means of involving children in play. Teachers assume that this type of guided play will motivate the child to learn new words, aid memorization, and provide enjoyment. Eighty percent of the preschool teachers used the play equipment installed on the preschool premises to encourage children to get involved in activities aimed at promoting the physical development of the child. In 40% of the preschools visited, it was noticed that when children were given their break, they created their own forms of free play and enjoyed themselves thoroughly in the little time available to them. About 20% of preschool teachers promoted imaginative play with their preschoolers. There were also situations where the role of play was interpreted negatively by teachers who assigned the children to copy letters and numerals during the time assigned for play, which has a negative impact on the child's creativity. In conclusion, it was felt that teachers do not make the best use of the opportunity to harness the child's enthusiasm to stimulate his or her creative actions, and that there is no suitable environment to develop creativity through play.

Keywords: creativity, preschool children, preschool environment, play method

Procedia PDF Downloads 362
356 Extracting the Antioxidant Compounds of Medicinal Plant Limoniastrum guyonianum

Authors: Assia Belfar, Mohamed Hadjadj, Messaouda Dakmouche, Zineb Ghiaba, Mahdi Belguidoum

Abstract:

Introduction: This study aims at phytochemical screening, extraction of the active compounds, and estimation of the antioxidant effectiveness of the desert medicinal plant Limoniastrum guyonianum (Zeïta) from southern Algeria. Methods: Total phenolic content and total flavonoid content were determined using the Folin-Ciocalteu and aluminum chloride colorimetric methods, respectively. The total antioxidant capacity was estimated by the following methods: DPPH (1,1-diphenyl-2-picrylhydrazyl) radical scavenging and the reducing power assay. Results: Phytochemical screening of the plant part revealed the presence of phenols, saponins, flavonoids, and tannins, while alkaloids and terpenoids were absent. The methanolic extract of L. guyonianum was extracted successively with ethyl acetate and butanol. Extraction yields varied widely, ranging from 1.315% to 4.218%; the butanol fraction had the highest yield. The highest content of phenols was recorded in the butanol fraction (311.81 ± 0.02 mg GAE/g DW), and the highest content of flavonoids was also found in the butanol fraction (9.58 ± 0.33 mg QE/g DW). The IC50 for inhibition of the DPPH radical in the ethyl acetate fraction was 0.05 ± 0.01 µg/ml, equal in effectiveness to BHT. All extracts showed good ferric reducing power, the highest being in the butanol fraction (16.16 ± 0.05 mM). Conclusions: This study demonstrated that the methanolic extract of L. guyonianum contains a considerable quantity of phenolic compounds and possesses good antioxidant activity. It can be used as an easily accessible source of natural antioxidants, as a possible food supplement, and in the pharmaceutical industry.

Keywords: flavonoid compound, l. guyonianum, medicinal plants, phenolic compounds, phytochemical screening

Procedia PDF Downloads 275
355 Investigating the Energy Harvesting Potential of a Pitch-Plunge Airfoil Subjected to Fluctuating Wind

Authors: Magu Raam Prasaad R., Venkatramani Jagadish

Abstract:

Recent studies in the literature have shown that randomly fluctuating wind flows can give rise to a distinct regime of pre-flutter oscillations called intermittency. Intermittency is characterized by the presence of sporadic bursts of high-amplitude oscillations interspersed amidst low-amplitude aperiodic fluctuations. The focus of this study is on investigating the energy harvesting potential of these intermittent oscillations. The available literature has by and large devoted its attention to extracting energy from flutter oscillations. The possibility of harvesting energy from pre-flutter regimes has remained largely unexplored. However, extracting energy from violent flutter oscillations can be severely detrimental to the structural integrity of airfoil structures. Consequently, investigating the relatively stable pre-flutter responses for energy extraction applications is of practical importance. The present study is devoted to addressing these concerns. A pitch-plunge airfoil with cubic hardening nonlinearity in the plunge and pitch degrees of freedom is considered. The input flow fluctuations are modelled using a sinusoidal term with randomly perturbed frequencies. An electromagnetic coupling is provided to the pitch-plunge equations, such that energy from the wind-induced vibrations of the structural response is extracted. With the mean flow speed as the bifurcation parameter, a fourth-order Runge-Kutta based time-marching algorithm is used to solve the governing aeroelastic equations with electromagnetic coupling. The energy harnessed from the intermittency regime is presented, and the results are discussed in comparison to those obtained from the flutter regime. The insights from this study could be useful in the health monitoring of aeroelastic structures.
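
To show the shape of such a solver, the sketch below integrates a simplified single-degree-of-freedom plunge oscillator with cubic hardening and an electromagnetic load, forced by a sinusoid with a randomly perturbed frequency, using a fourth-order Runge-Kutta scheme; the governing equations and all parameter values are illustrative simplifications, not the authors' aeroelastic model.

```python
import numpy as np

# Illustrative parameters: plunge oscillator with cubic hardening plus an
# electromagnetic harvester dissipating energy in a load resistor.
m, c, k, k3 = 1.0, 0.05, 1.0, 5.0        # mass, damping, linear and cubic stiffness
theta, R, L = 0.5, 1.0, 0.5              # coupling coefficient, load resistance, coil inductance
A, omega0, sigma = 0.2, 1.0, 0.1         # gust amplitude, mean frequency, frequency jitter

def rhs(y, force):
    x, xdot, i = y                       # plunge, plunge rate, coil current
    xddot = (force - c * xdot - k * x - k3 * x**3 - theta * i) / m
    idot = (theta * xdot - R * i) / L
    return np.array([xdot, xddot, idot])

def rk4_step(y, force, dt):
    s1 = rhs(y, force)
    s2 = rhs(y + 0.5 * dt * s1, force)
    s3 = rhs(y + 0.5 * dt * s2, force)
    s4 = rhs(y + dt * s3, force)
    return y + dt / 6.0 * (s1 + 2 * s2 + 2 * s3 + s4)

rng = np.random.default_rng(0)
dt, y, phase, energy = 0.01, np.zeros(3), 0.0, 0.0
for step in range(60000):
    # sinusoidal gust whose frequency is randomly perturbed (held fixed within a step)
    phase += (omega0 + sigma * rng.standard_normal()) * dt
    force = A * np.sin(phase)
    y = rk4_step(y, force, dt)
    energy += R * y[2]**2 * dt           # energy dissipated in the load resistor
print(f"harvested energy over {60000*dt:.0f} s (arbitrary units): {energy:.4f}")
```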

Keywords: aeroelasticity, energy harvesting, intermittency, randomly fluctuating flows

Procedia PDF Downloads 162
354 Aural Skills Pedagogy for Students with Absolute Pitch

Authors: Rika Uchida

Abstract:

In teaching sophomore-level aural skills, I have dealt with students with absolute pitch who do poorly in my courses, particularly in harmonic dictation. They can identify triads; however, identifying the quality of seventh chords or chromatic chords poses serious challenges. Most often, they need to spell all the pitches before identifying the chord qualities and Roman numerals. Growing up in a country where acquiring absolute pitch is considered essential, I started my early music training with the fixed-do system at age three and learned all my music with solfege. When I was assigned as a TA in aural skills courses at graduate school in the US, I had to learn relative pitch quickly. My survival method was listening to music with absolute pitch first, then quickly "translating" it to relative pitch. In teaching my courses, I have been using chord progressions (5-8 chords total) in which students are asked to sing chord arpeggiations with movable-do solfege. I use the same progressions for harmonic dictation, in the hope that students learn to incorporate singing and listening skills by working with the same materials. This method has proven successful for most students; in particular, it has helped students with absolute pitch to hear chord quality and function. Although the original progressions are written with C as the tonic, students can identify chords in harmonic dictation in other keys as well. In short, I believe singing chord progressions with movable-do arpeggiation helps students with absolute pitch to improve their hearing of the function and quality of chords in harmonic dictation.

Keywords: aural skills pedagogy, music theory, absolute pitch, harmonic dictation

Procedia PDF Downloads 109
353 Addressing Scheme for IoT Network Using IPv6

Authors: H. Zormati, J. Chebil, J. Bel Hadj Taher

Abstract:

The goal of this paper is to present an addressing scheme that allows for assigning a unique IPv6 address to each node in an Internet of Things (IoT) network. This scheme guarantees uniqueness by extracting the clock skew of each communication device and converting it into an IPv6 address. Simulation analysis confirms that the presented scheme provides reductions in terms of energy consumption, communication overhead, and response time compared to four studied addressing schemes: Strong DAD, LEADS, SIPA, and CLOSA.
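
The following is a minimal sketch of one way a measured clock skew could be folded into the 64-bit interface identifier of an IPv6 address; the documentation prefix, the skew values, and the hashing step are illustrative assumptions and not necessarily the scheme evaluated in the paper.

```python
import hashlib
import ipaddress
import struct

def skew_to_ipv6(clock_skew_ppm: float, prefix: str = "2001:db8::/64") -> ipaddress.IPv6Address:
    """Fold a measured clock skew (in ppm) into the 64-bit interface identifier."""
    net = ipaddress.IPv6Network(prefix)
    # Hash the raw skew measurement to spread nearby skews across the identifier space.
    digest = hashlib.sha256(struct.pack("!d", clock_skew_ppm)).digest()
    iid = int.from_bytes(digest[:8], "big")          # 64 bits for the host part
    return ipaddress.IPv6Address(int(net.network_address) | iid)

# Example: two devices with slightly different skews get distinct addresses.
print(skew_to_ipv6(12.374))
print(skew_to_ipv6(12.381))
```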

Keywords: addressing, IoT, IPv6, network, nodes

Procedia PDF Downloads 262
352 Level Set Based Extraction and Update of Lake Contours Using Multi-Temporal Satellite Images

Authors: Yindi Zhao, Yun Zhang, Silu Xia, Lixin Wu

Abstract:

The contours and areas of water surfaces, especially lakes, often change due to natural disasters and construction activities. Extracting and updating water contours from satellite images using image processing algorithms is an effective approach. However, producing optimal water surface contours that are close to the true boundaries is still a challenging task. This paper compares the performance of three different level set models, including the Chan-Vese (CV) model, the signed pressure force (SPF) model, and the region-scalable fitting (RSF) energy model, for extracting lake contours. Experimental testing indicates that the RSF model, in which a region-scalable fitting energy functional is defined and incorporated into a variational level set formulation, is superior to CV and SPF, and it can produce desirable contour lines when there are "holes" in the water regions, such as islands in a lake. Therefore, the RSF model is applied to extracting lake contours from Landsat satellite images. Four temporal Landsat satellite images from the years 2000, 2005, 2010, and 2014 are used in our study. All of them were acquired in May, with the same path/row (121/036) covering Xuzhou City, Jiangsu Province, China. Firstly, the near-infrared (NIR) band is selected for water extraction. Image registration is conducted on the NIR bands of the different temporal images for information update, and linear stretching is also applied in order to distinguish water from other land cover types. Then, for the first temporal image, acquired in 2000, lake contours are extracted via the RSF model with initialization from user-defined rectangles. Afterwards, using the lake contours extracted from the previous temporal image as the initial values, lake contours are updated for the current temporal image by means of the RSF model. Meanwhile, the changed and unchanged lakes are also detected. The results show that great changes have taken place in two lakes, i.e., Dalong Lake and Panan Lake, and that RSF can effectively extract and update lake contours using multi-temporal satellite images.
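
As an illustration of the level-set workflow, the sketch below runs the morphological Chan-Vese model from scikit-image on a synthetic NIR-like band and extracts boundary polylines from the resulting mask; note that this uses the CV model (not the RSF model favoured in the paper) and synthetic data, purely for illustration.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese
from skimage.measure import find_contours

# Synthetic stand-in for a stretched NIR band: dark water body on a brighter background.
rng = np.random.default_rng(1)
nir = rng.normal(0.7, 0.05, (200, 200))
yy, xx = np.mgrid[0:200, 0:200]
nir[(yy - 100) ** 2 + (xx - 100) ** 2 < 60 ** 2] = 0.2   # the "lake"

# Evolve a level set for 100 iterations from a checkerboard initialization.
water_mask = morphological_chan_vese(nir, 100, init_level_set="checkerboard", smoothing=2)

# Extract contour polylines from the binary mask for mapping/update steps.
# (Which label corresponds to water should be checked against the mean NIR value.)
contours = find_contours(water_mask.astype(float), 0.5)
print(f"{len(contours)} boundary contour(s) extracted from the level-set mask")
```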

Keywords: level set model, multi-temporal image, lake contour extraction, contour update

Procedia PDF Downloads 335
351 Antibacterial and Antioxidant Properties of Total Phenolics from Waste Orange Peels

Authors: Kanika Kalra, Harmeet Kaur, Dinesh Goyal

Abstract:

Total phenolics were extracted from waste orange peels by solvent extraction and alkali hydrolysis. The most efficient solvents for extracting phenolic compounds from the waste biomass were, in order, methanol (60%) > dimethyl sulfoxide > ethanol (60%) > distilled water. The extraction yields were significantly affected by the solvents (ethanol, methanol, and dimethyl sulfoxide) due to their varying polarity and concentrations. Extraction with 60% methanol yielded the highest phenolics (in terms of gallic acid equivalent (GAE) per gram of biomass) in orange peels. The alkali-hydrolyzed extract from orange peels contained 7.58 ± 0.33 mg GAE g⁻¹. Using the solvent extraction technique, 60% methanol was comparatively the best-suited solvent for extracting polyphenolic compounds and gave the maximum yield of 4.68 ± 0.47 mg GAE g⁻¹ in orange peel extracts. DPPH radical scavenging activity and reducing power of the orange peel extracts were also determined; the 60% methanolic extract showed the highest antioxidant activity (85.50 ± 0.009% for DPPH), and the dimethyl sulfoxide (DMSO) extract gave the highest value (1.75 ± 0.01%) for reducing power. Characterization of the polyphenolic compounds was done using Fourier transform infrared (FTIR) spectroscopy. The solvent and alkali-hydrolyzed extracts were evaluated for antibacterial activity using the agar well diffusion method against Gram-positive Bacillus subtilis MTCC441 and Gram-negative Escherichia coli MTCC729. The methanolic extract at a 300 µl dose showed an inhibition zone of around 16.33 ± 0.47 mm against Bacillus subtilis, whereas for Escherichia coli it was comparatively smaller. A broth-based turbidimetric assay revealed the antibacterial effect of different volumes of orange peel extract against both organisms.
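
Two routine calculations sit behind figures like these: total phenolic content read off a gallic acid calibration curve, and DPPH scavenging expressed as percent inhibition. The sketch below shows both; the calibration slope, intercept, and absorbance readings are made-up illustrative values.

```python
def total_phenolics_mg_gae_per_g(absorbance, slope, intercept,
                                 extract_volume_ml, sample_mass_g, dilution=1.0):
    """Total phenolic content (mg GAE per g biomass) from a gallic acid
    standard curve A = slope * C + intercept, with C in mg/ml."""
    conc_mg_per_ml = (absorbance - intercept) / slope * dilution
    return conc_mg_per_ml * extract_volume_ml / sample_mass_g

def dpph_inhibition_percent(a_control, a_sample):
    """DPPH radical scavenging activity as percent inhibition of absorbance."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical readings
print(total_phenolics_mg_gae_per_g(absorbance=0.52, slope=0.11, intercept=0.02,
                                   extract_volume_ml=10, sample_mass_g=1.0))
print(dpph_inhibition_percent(a_control=0.85, a_sample=0.12))
```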

Keywords: orange peels, total phenolic content, antioxidant, antibacterial

Procedia PDF Downloads 36
350 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in its practical integration into medical solutions, for example, extracting clinical concepts from medical texts such as medical conditions, medications, treatments, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps. Step 1: the user injects a large in-domain text corpus (e.g., PubMed); the system then builds a contextual model containing vector representations of concepts in the corpus in an unsupervised manner (e.g., Phrase2Vec). Step 2: the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide 'dry mouth,' 'itchy skin,' and 'blurred vision'); the system then matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3: in production, there is a need to extract medical concepts from unseen medical text. The system extracts key phrases from the new text, matches them against the complete set of terms from step 2, and the most semantically similar ones are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: "The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign of diabetes." Our evaluations show promising results for extracting concepts from medical corpora. The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts and to automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
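
A minimal sketch of the seed-set expansion in step 2, using gensim's Word2Vec over a toy tokenized corpus in place of a Phrase2Vec model trained on PubMed; the corpus sentences, seed terms, and model settings are illustrative assumptions.

```python
from gensim.models import Word2Vec

# Toy in-domain corpus (pre-tokenized, phrases joined with underscores);
# a real system would train on a large corpus such as PubMed.
corpus = [
    ["patient", "reports", "dry_mouth", "and", "blurred_vision"],
    ["itchy_skin", "and", "dry_mouth", "are", "common", "symptoms"],
    ["weight_loss", "fatigue", "and", "blurred_vision", "may", "indicate", "diabetes"],
    ["fatigue", "and", "weight_loss", "reported", "alongside", "itchy_skin"],
] * 50   # repeat so the tiny vocabulary gets non-trivial statistics

model = Word2Vec(corpus, vector_size=32, window=3, min_count=1, epochs=20, seed=7)

seed_terms = ["dry_mouth", "itchy_skin", "blurred_vision"]
# Expand the seed set with the most semantically similar terms in the corpus.
expanded = model.wv.most_similar(positive=seed_terms, topn=5)
print(expanded)
```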

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 106
349 Reduplication in Dhiyan: An Indo-Aryan Language of Assam

Authors: S. Sulochana Singha

Abstract:

Dhiyan or Dehan is the name of the community and language spoken by the Koch-Rajbangshi people of the Barak Valley of Assam. Ethnically, they are Mongoloid, and their language belongs to the Indo-Aryan language family. However, Dhiyan is absent from existing classifications of Indo-Aryan languages, so its classification under the Indo-Aryan language family is based entirely on typological features shared with other Indo-Aryan languages. Typologically, Dhiyan is an agglutinating language, and it shares many features of Indo-Aryan languages, such as the presence of aspirated voiced stops, absence of tone, verb-person agreement, adjectives as a distinct word class, prominent tense, and subject-object-verb word order. Reduplication is a productive word-formation process in Dhiyan. It also expresses plurality, intensification, and distributivity. In general, reduplication in Dhiyan can occur at the morphological or lexical level. Morphological reduplication in Dhiyan involves expressives, which include onomatopoeia, sound symbolism, ideophones, and imitatives. Lexical reduplication in the language can be formed by echo formation and word reduplication. Echo formation in Dhiyan involves partial repetition of the base word, with either consonant alternation or vowel alternation. Consonant alternation is mainly found in onset position, while vowel alternation is mainly found in open syllables, particularly the final syllable. Word reduplication involves reduplication of nouns, interrogatives, adjectives, and numerals, which can further be class-changing or class-maintaining reduplication. The process of reduplication can be partial or complete, whether lexical or morphological. The present paper is an attempt to describe some aspects of the formation, function, and usage of reduplication in Dhiyan, which is mainly spoken in ten villages on the eastern side of the Barak River in the Cachar District of Assam.

Keywords: Barak-Valley, Dhiyan, Indo-Aryan, reduplication

Procedia PDF Downloads 188
348 The Effect of Different Concentrations of Extracting Solvent on the Polyphenolic Content and Antioxidant Activity of Gynura procumbens Leaves

Authors: Kam Wen Hang, Tan Kee Teng, Huang Poh Ching, Chia Kai Xiang, H. V. Annegowda, H. S. Naveen Kumar

Abstract:

Gynura procumbens (G. procumbens) leaves, commonly known as 'sambung nyawa' in Malaysia, come from a well-known medicinal plant commonly used in folk medicine for controlling blood glucose and cholesterol levels as well as treating cancer. These medicinal properties are believed to be related to the polyphenolic content of the G. procumbens extract; therefore, optimization of its extraction process is vital to obtain the highest possible antioxidant activities. The current study was conducted to investigate the effect of different concentrations of the extracting solvent (ethanol) on the amount of polyphenolic content and the antioxidant activities of G. procumbens leaf extract. The concentrations of ethanol used were 30-70%, with the temperature and time kept constant at 50°C and 30 minutes, respectively, using ultrasound-assisted extraction. The polyphenolic content of these extracts was quantified by the Folin-Ciocalteu colorimetric method, and the results were expressed as milligram gallic acid equivalent (mg GAE)/g. The phosphomolybdenum method and 1,1-diphenyl-2-picrylhydrazyl (DPPH) radical scavenging assays were used to investigate the antioxidant properties of the extract, and the results were expressed as milligram ascorbic acid equivalent (mg AAE)/g and effective concentration (EC50), respectively. Among the three different (30%, 50%, and 70%) concentrations of ethanol studied, the 50% ethanolic extract showed a total phenolic content of 31.565 ± 0.344 mg GAE/g and a total antioxidant activity of 78.839 ± 0.199 mg AAE/g, while the 30% ethanolic extract showed 29.214 ± 0.645 mg GAE/g and 70.701 ± 1.394 mg AAE/g, respectively. With respect to the DPPH radical scavenging assay, the 50% ethanolic extract exhibited a slightly lower EC50 (314.3 ± 4.0 μg/ml) than the 30% ethanolic extract (340.4 ± 5.3 μg/ml). Of all the tested extracts, the 70% ethanolic extract exhibited the significantly (p < 0.05) highest total phenolic content (38.000 ± 1.009 mg GAE/g) and total antioxidant capacity (95.874 ± 2.422 mg AAE/g) and demonstrated the lowest EC50 in the DPPH assay (244.2 ± 5.9 μg/ml). Excellent correlations were observed between total phenolic content, total antioxidant capacity, and DPPH radical scavenging activity (R² = 0.949 and R² = 0.978, respectively). It was concluded from this study that 70% ethanol should be used as the optimal-polarity solvent to obtain G. procumbens leaf extract with maximum polyphenolic content and antioxidant properties.

Keywords: antioxidant activity, DPPH assay, Gynura procumbens, phenolic compounds

Procedia PDF Downloads 377
347 Extracting Opinions from Big Data of Indonesian Customer Reviews Using Hadoop MapReduce

Authors: Veronica S. Moertini, Vinsensius Kevin, Gede Karya

Abstract:

Customer reviews are collected by many kinds of e-commerce websites selling products, services, hotel rooms, tickets, and so on. Each website collects its own customer reviews. The reviews can be crawled, collected from those websites, and stored as big data. Text analysis techniques can then be used to analyze these data to produce summarized information, such as customer opinions. These opinions can be published by independent service-provider websites and used to help customers choose the most suitable products or services. As the opinions are analyzed from big data of reviews originating from many websites, the results are expected to be more trusted and accurate. Indonesian customers write reviews in the Indonesian language, which comes with its own structures and uniqueness. We found that most of the reviews are expressed in "daily language", which is informal, does not follow correct grammar, and contains many abbreviations, slang, and non-formal words. Hadoop is an emerging platform aimed at storing and analyzing big data in distributed systems. A Hadoop cluster consists of master and slave nodes/computers operated in a network. Hadoop comes with a distributed file system (HDFS) and the MapReduce framework for supporting parallel computation. However, MapReduce is weak (i.e., inefficient) for iterative computations; specifically, the cost of reading/writing data (I/O cost) is high. Given this fact, we conclude that MapReduce is best adapted to "one-pass" computation. In this research, we develop an efficient technique for extracting or mining opinions from big data of Indonesian reviews, based on MapReduce with one-pass computation. In designing the algorithm, we avoid iterative computation and instead adopt a "look-up table" technique. The stages of the proposed technique are: (1) crawling the review data from websites; (2) cleaning and finding root words in the raw reviews; (3) computing the frequency of the meaningful opinion words; (4) analyzing customers' sentiments towards defined objects. The experiments for evaluating the performance of the technique were conducted on a Hadoop cluster with 14 slave nodes. The results show that the proposed technique (stages 2 to 4) discovers useful opinions, is capable of processing big data efficiently, and is scalable.
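
A minimal sketch of the one-pass frequency-counting stage (stage 3) written as a Hadoop Streaming job in Python; the stop-word list and the idea of normalizing words against a pre-loaded root-word look-up table are illustrative assumptions about the authors' technique.

```python
#!/usr/bin/env python3
# Hadoop Streaming opinion-word frequency job; run e.g. with
#   hadoop jar hadoop-streaming.jar -mapper "freq.py map" -reducer "freq.py reduce" ...
import sys

STOPWORDS = {"yang", "dan", "di", "ke", "ini", "itu"}   # illustrative Indonesian stop words

def mapper():
    for line in sys.stdin:
        for word in line.lower().split():
            word = word.strip(".,!?\"'")
            # In the paper's pipeline, a root-word look-up table would normalize slang
            # and affixed forms here before counting; this sketch only drops stop words.
            if word and word not in STOPWORDS:
                print(f"{word}\t1")

def reducer():
    current, total = None, 0
    for line in sys.stdin:               # Hadoop delivers keys sorted, so we can stream
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```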

Keywords: big data analysis, Hadoop MapReduce, analyzing text data, mining Indonesian reviews

Procedia PDF Downloads 181
346 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena

Authors: Mohammad Zavid Parvez, Manoranjan Paul

Abstract:

A seizure prediction method is proposed that extracts global features, using phase correlation between adjacent epochs to detect relative changes, and local features, using fluctuation/deviation within an epoch to capture fine changes in different EEG signals. A classifier and a regularization technique are applied for the reduction of false alarms and improvement of the overall prediction accuracy. The experiments show that the proposed method outperforms state-of-the-art methods and provides high prediction accuracy (i.e., 97.70%) with a low false alarm rate, using EEG signals from different brain locations in a benchmark data set.
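
A minimal sketch of the global feature: phase correlation between two adjacent EEG epochs computed through the normalized cross-power spectrum, with the within-epoch standard deviation standing in for the paper's fluctuation/deviation feature; the synthetic signals and sampling rate are illustrative.

```python
import numpy as np

def phase_correlation(epoch_a, epoch_b):
    """Normalized cross-power spectrum between two equal-length epochs; the
    sharpness/location of its peak indicates how similar the epochs are."""
    fa, fb = np.fft.rfft(epoch_a), np.fft.rfft(epoch_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    return np.fft.irfft(cross, n=len(epoch_a))

rng = np.random.default_rng(0)
fs, seconds = 256, 4                         # illustrative sampling rate and epoch length
t = np.arange(fs * seconds) / fs
epoch1 = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
epoch2 = np.sin(2 * np.pi * 10 * (t - 0.05)) + 0.3 * rng.standard_normal(t.size)

pc = phase_correlation(epoch1, epoch2)
print("peak location (samples):", int(np.argmax(pc)), "peak value:", float(pc.max()))

# A simple local feature: fluctuation/deviation within a single epoch.
print("within-epoch deviation:", float(np.std(epoch1)))
```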

Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation

Procedia PDF Downloads 440
345 Towards Updating a Road Map Solution: Use of Information Obtained by the Extraction of Road Network and Its Nodes from a Satellite Image

Authors: Z. Nougrara, J. Meunier

Abstract:

In this paper, we present a new approach for extracting roads, their network, and its nodes from satellite images representing regions in Algeria. Our approach builds on our previous research work and is founded on information theory and mathematical morphology. We therefore define objects as sets of pixels and study the shape of these objects and the relations that exist between them. The main interest of this study is to solve the problem of automatic mapping from satellite images, so that the geographical representation derived from the images is as close as possible to reality.

Keywords: nodes, road network, satellite image, updating a road map

Procedia PDF Downloads 393
344 Extracting Attributes for Twitter Hashtag Communities

Authors: Ashwaq Alsulami, Jianhua Shao

Abstract:

Organisations often need to understand discussions on social media, such as what the trending topics are and the characteristics of the people engaged in the discussion. A number of approaches have been proposed to extract attributes that would characterise a discussion group. However, these approaches are largely based on supervised learning, and as such they require a large amount of labelled data. We propose an approach in this paper that does not require labelled data but relies on lexical sources to detect meaningful attributes for online discussion groups. Our findings show an acceptable level of accuracy in detecting attributes for Twitter discussion groups.

Keywords: attributed community, attribute detection, community, social network

Procedia PDF Downloads 127
343 A Concept of Data Mining with XML Document

Authors: Akshay Agrawal, Anand K. Srivastava

Abstract:

The increasing amount of XML datasets available to casual users increases the necessity of investigating techniques to extract knowledge from these data. Data mining is widely applied in the database research area in order to extract frequent correlations of values from both structured and semi-structured datasets. The increasing availability of heterogeneous XML sources has raised a number of issues concerning how to represent and manage these semi-structured data. In recent years, due to the importance of managing these resources and extracting knowledge from them, many methods have been proposed in order to represent and cluster them in different ways.

Keywords: XML, similarity measure, clustering, cluster quality, semantic clustering

Procedia PDF Downloads 346
342 Lentil Protein Fortification in Cranberry Squash

Authors: Sandhya Devi A

Abstract:

The protein content of cranberry squash (protein: 0 g) may be increased by extracting protein from lentils (9 g); lentil protein is particularly linked to a lower risk of developing heart disease. Protein may be extracted from lentil flour using the technique of alkaline extraction. Alkaline extraction of protein from lentil flour was optimized utilizing the response surface approach in order to maximize both protein content and yield. Cranberry squash may then be fortified by preparing a protein syrup and processing it into the squash.

Keywords: alkaline extraction, cranberry squash, protein fortification, response surface methodology

Procedia PDF Downloads 80
341 Extraction of Squalene from Lebanese Olive Oil

Authors: Henri El Zakhem, Christina Romanos, Charlie Bakhos, Hassan Chahal, Jessica Koura

Abstract:

Squalene is a valuable component of olive oil, composed of 30 carbon atoms, and is mainly used in cosmetic materials. The main concern of this article is to study the squalene content of Lebanese olive oil and to compare it with results reported for foreign oils. To our knowledge, extraction of squalene from Lebanese olive oil has not been conducted before. Three different techniques were studied, and experiments were performed on three brands of olive oil: Al Wadi Al Akhdar, Virgo Bio, and Boulos. The techniques performed are fractional crystallization, Soxhlet extraction, and esterification. By comparing the results, it is found that the Lebanese oil contains squalene and that the Soxhlet method is the most effective of the three, extracting about 6.5E-04 grams of squalene per gram of olive oil.

Keywords: squalene, extraction, crystallization, Soxhlet

Procedia PDF Downloads 492
340 A Comparative Study of Multi-SOM Algorithms for Determining the Optimal Number of Clusters

Authors: Imèn Khanchouch, Malika Charrad, Mohamed Limam

Abstract:

The interpretation of the quality of clusters and the determination of the optimal number of clusters are still crucial problems in clustering. In this paper, we focus on the multi-SOM clustering method, which overcomes the problem of extracting the number of clusters from the SOM map through the use of a clustering validity index. We then test multi-SOM using real and artificial data sets with different evaluation criteria not used previously, such as the Davies-Bouldin index, the Dunn index, and the silhouette index. The developed multi-SOM algorithm is compared to the k-means and BIRCH methods. Results show that it is more efficient than these classical clustering methods.
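
As an illustration of how validity indices select the number of clusters, the sketch below scores candidate cluster counts on an artificial data set with the silhouette and Davies-Bouldin indices; it uses k-means rather than SOM prototypes and is not the authors' multi-SOM algorithm.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Artificial data set with a known structure of 4 clusters.
X, _ = make_blobs(n_samples=600, centers=4, cluster_std=0.8, random_state=0)

# Score candidate numbers of clusters with two of the validity indices mentioned above.
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}  "
          f"Davies-Bouldin={davies_bouldin_score(X, labels):.3f}")
# The optimal k maximizes the silhouette index and minimizes the Davies-Bouldin index.
```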

Keywords: clustering, SOM, multi-SOM, DB index, Dunn index, silhouette index

Procedia PDF Downloads 565
339 Determination of Concentrated State Using Multiple EEG Channels

Authors: Tae Jin Choi, Jong Ok Kim, Sang Min Jin, Gilwon Yoon

Abstract:

Analysis of EEG brainwaves provides information on mental or emotional states. One particular state that can have various applications in human-machine interfaces (HMI) is concentration. Eight-channel EEG signals were measured and analyzed, and the concentration index was compared between resting and concentrating periods. Among the eight channels, the locations on the frontal lobe (Fp1 and Fp2) showed a clear increase of the concentration index during concentration regardless of subject. The remaining six channels produced conflicting observations depending on the subject. At this time, it is not clear whether individual differences or the way subjects concentrated produced these results for the remaining six channels. Nevertheless, Fp1 and Fp2 are expected to be promising locations for extracting control signals for HMI applications.
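
The abstract does not define its concentration index; one commonly used proxy is a band-power ratio such as beta power relative to theta-plus-alpha power. The sketch below computes that proxy on synthetic signals and is purely an illustrative assumption, not the index used in the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    band = (freqs >= low) & (freqs < high)
    return np.trapz(psd[band], freqs[band])

def concentration_proxy(signal, fs):
    """Beta / (theta + alpha) power ratio -- one common attention proxy (assumed)."""
    theta = band_power(signal, fs, 4, 8)
    alpha = band_power(signal, fs, 8, 13)
    beta = band_power(signal, fs, 13, 30)
    return beta / (theta + alpha)

fs = 256
t = np.arange(fs * 10) / fs
rng = np.random.default_rng(0)
# Synthetic "resting" (alpha-dominant) and "focused" (beta-dominant) channels.
resting = np.sin(2*np.pi*10*t) + 0.2*np.sin(2*np.pi*20*t) + 0.5*rng.standard_normal(t.size)
focused = 0.4*np.sin(2*np.pi*10*t) + np.sin(2*np.pi*20*t) + 0.5*rng.standard_normal(t.size)
print("resting:", round(concentration_proxy(resting, fs), 3))
print("focused:", round(concentration_proxy(focused, fs), 3))
```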

Keywords: concentration, EEG, human machine interface, biophysical

Procedia PDF Downloads 453
338 A Method to Evaluate and Compare Web Information Extractors

Authors: Patricia Jiménez, Rafael Corchuelo, Hassan A. Sleiman

Abstract:

Web mining is gaining importance at an increasing pace. Currently, there are many complementary research topics under this umbrella. Their common theme is that they all focus on applying knowledge discovery techniques to data gathered from the Web. Sometimes, these data are relatively easy to gather, chiefly when they come from server logs. Unfortunately, there are cases in which the data to be mined are the data displayed on a web document. In such cases, it is necessary to apply a pre-processing step to first extract the information of interest from the web documents. Such pre-processing steps are performed using so-called information extractors, which are software components typically configured by means of rules tailored to extracting the information of interest from a web page and structuring it according to a pre-defined schema. Paramount to getting good mining results is that the technique used to extract the source information is exact, which requires evaluating and comparing the different proposals in the literature from an empirical point of view. According to Google Scholar, about 4,200 papers on information extraction have been published during the last decade. Unfortunately, they were not evaluated within a homogeneous framework, which makes it difficult to compare them empirically. In this paper, we report on an original information extraction evaluation method. Our contribution is three-fold: a) this is the first attempt to provide an evaluation method for proposals that work on semi-structured documents; the little existing work on this topic focuses on proposals that work on free text, which has little to do with extracting information from semi-structured documents; b) it provides a method that relies on statistically sound tests to support the conclusions drawn; previous work does not provide clear guidelines or recommend statistically sound tests, but rather surveys the many features to take into account as well as related work; c) we provide a novel method to compute the performance measures for unsupervised proposals, which would otherwise require the intervention of a user to compute them using the annotations on the evaluation sets and the information extracted. Our contributions will help researchers in this area make sure that they have advanced the state of the art not only conceptually but also from an empirical point of view; they will also help practitioners make informed decisions on which proposal is the most adequate for a particular problem. This conference is a good forum to discuss our ideas so that we can spread them to help improve the evaluation of information extraction proposals and gather valuable feedback from other researchers.
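
A minimal sketch of the standard performance measures such an evaluation method would aggregate per extractor and per data set, before any statistical testing; the gold and extracted records below are hypothetical.

```python
def precision_recall_f1(extracted, gold):
    """Precision, recall, and F1 for an information extractor, computed from
    the sets of extracted records and gold-standard records."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical extraction result for one web document.
gold = {("Hotel Sol", "price", "120"), ("Hotel Sol", "rating", "4.5"), ("Hotel Mar", "price", "95")}
extracted = {("Hotel Sol", "price", "120"), ("Hotel Mar", "price", "90")}
print(precision_recall_f1(extracted, gold))   # (0.5, 0.333..., 0.4)
```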

Keywords: web information extractors, information extraction evaluation method, Google scholar, web

Procedia PDF Downloads 226
337 Assessing Building Rooftop Potential for Solar Photovoltaic Energy and Rainwater Harvesting: A Sustainable Urban Plan for Atlantis, Western Cape

Authors: Adedayo Adeleke, Dineo Pule

Abstract:

The ongoing load-shedding in most parts of South Africa, combined with climate change causing severe drought conditions in Cape Town, has left electricity consumers seeking alternative sources of power and water. Solar energy, which is abundant in most parts of South Africa and is regarded as a clean and renewable source of energy, allows for the generation of electricity via solar photovoltaic systems. Rainwater harvesting is the collection and storage of rainwater from building rooftops, allowing people without access to water supplies to collect it. The lack of dependable energy and water sources must be addressed by shifting to solar energy via solar photovoltaic systems and to rainwater harvesting. Before this can be done, the potential of building rooftops must be assessed to determine whether solar energy and rainwater harvesting can meet or significantly contribute to the electricity and water demands of the Atlantis industrial areas. This research project presents methods and approaches for automatically extracting building rooftops in the Atlantis industrial areas and evaluating their potential for solar photovoltaic and rainwater harvesting systems using Light Detection and Ranging (LiDAR) data and aerial imagery. The four objectives were to: (1) identify an optimal method of extracting building rooftops from aerial imagery and LiDAR data; (2) identify a suitable solar radiation model that can provide a global solar radiation estimate of the study area; (3) estimate the solar photovoltaic potential over building rooftops; and (4) estimate the amount of rainwater that can be harvested from the building rooftops in the study area. Mapflow, a plugin for QGIS (Quantum Geographic Information System), was used to automatically extract building rooftops from the aerial imagery. The mean annual rainfall in Cape Town was obtained from a 29-year rainfall record (1991-2020) and used to calculate the amount of rainwater that can be harvested from building rooftops. The potential for rainwater harvesting and solar photovoltaic systems was assessed, and it can be concluded that there is potential for these systems, but only to supplement the existing resource supply and offer relief in times of drought and load-shedding.
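
The two rooftop-potential estimates reduce to simple products. The sketch below shows the usual formulas: harvestable rainwater as roof area times rainfall times a runoff coefficient, and PV yield as usable roof area times irradiation times module efficiency times a performance ratio; all coefficients and the example rooftop are illustrative assumptions, not values from the study.

```python
def annual_rainwater_harvest_kl(roof_area_m2, annual_rainfall_mm, runoff_coeff=0.9):
    """Harvestable rainwater in kilolitres per year: area x rainfall x runoff coefficient."""
    return roof_area_m2 * (annual_rainfall_mm / 1000.0) * runoff_coeff   # m^3 == kL

def annual_pv_yield_kwh(roof_area_m2, annual_irradiation_kwh_m2, usable_fraction=0.7,
                        panel_efficiency=0.20, performance_ratio=0.8):
    """Rooftop PV yield: usable area x irradiation x module efficiency x performance ratio."""
    return (roof_area_m2 * usable_fraction * annual_irradiation_kwh_m2
            * panel_efficiency * performance_ratio)

# Hypothetical 2,000 m^2 industrial rooftop.
print(f"{annual_rainwater_harvest_kl(2000, 500):.0f} kL/year of rainwater")
print(f"{annual_pv_yield_kwh(2000, 1900):,.0f} kWh/year of PV generation")
```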

Keywords: roof potential, rainwater harvesting, urban plan, roof extraction

Procedia PDF Downloads 90
336 The Platform for Digitization of Georgian Documents

Authors: Erekle Magradze, Davit Soselia, Levan Shughliashvili, Irakli Koberidze, Shota Tsiskaridze, Victor Kakhniashvili, Tamar Chaghiashvili

Abstract:

Since the beginning of active publishing in Georgia, voluminous printed material has accumulated, the digitization of which is an important task. Digitized materials will be available to the public, and it will be possible to search for text in them and conduct various kinds of factual research. Digitizing scanned documents means scanning the documents, extracting text from the scans, and processing the text with a corresponding language model to detect inaccuracies and grammatical errors. Implementing these stages requires a unified, scalable, and automated platform, where the digital service developed for each stage performs the task assigned to it; at the same time, it is possible to develop these services dynamically so that there is no interruption in the operation of the platform.

Keywords: NLP, OCR, BERT, Kubernetes, transformers

Procedia PDF Downloads 114
335 Methodologies for Deriving Semantic Technical Information Using Unstructured Patent Text Data

Authors: Jaehyung An, Sungjoo Lee

Abstract:

Patent documents constitute an up-to-date and reliable source of knowledge reflecting technological advances, so patent analysis has been widely used for the identification of technological trends and the formulation of technology strategies. However, identifying technological information from patent data entails limitations such as high cost, complexity, and inconsistency, because it relies on expert knowledge. To overcome these limitations, researchers have applied quantitative analysis based on keyword techniques. Using such a method, one can capture a technological implication, particularly from patent documents, or extract keywords that indicate the important contents. However, it only uses a simple counting of keyword frequency, so it cannot take into account the semantic relationships between keywords or semantic information such as how the technologies are used in their technology area and how they affect other technologies. To automatically analyze unstructured technological information in patents and extract this semantic information, the text should be transformed into an abstracted form that includes the key technological concepts. The specific sentence structure 'SAO' (subject, action, object) has emerged as a representation of such 'key concepts' and can be extracted by NLP (natural language processing). An SAO structure can be organized in a problem-solution format if the action-object (AO) states the problem and the subject (S) forms the solution. In this paper, we propose a new methodology that extracts SAO structures through technical-element extraction rules. Although sentence structures in patent texts have a unique format, prior studies have depended on general NLP applied to common documents such as newspapers, research papers, and Twitter mentions, so they cannot take into account the specific sentence structure types of patent documents. To overcome this limitation, we identified the unique form of patent sentences and defined the SAO structures in patent text data. There are four types of technical elements: technology adoption purpose, application area, tool for technology, and technical components. These four types of sentence structures from patents have their own specific word structure, determined by the location or sequence of parts of speech in each sentence. Finally, we developed algorithms for extracting SAOs, and the results offer insight into the technology innovation process by providing different perspectives on technology.
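
As an illustration of SAO extraction, the sketch below pulls generic subject-verb-object triples from a sentence with spaCy's dependency parser; it does not implement the paper's patent-specific technical-element rules, and it assumes the small English model is installed.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes `python -m spacy download en_core_web_sm`

def extract_sao(sentence):
    """Pull (subject, action, object) triples from a sentence via dependency parsing."""
    triples = []
    for token in nlp(sentence):
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj", "attr")]
            for s in subjects:
                for o in objects:
                    triples.append((s.text, token.lemma_, o.text))
    return triples

claim = "The sensor module measures engine temperature with a thermocouple."
print(extract_sao(claim))   # e.g. [('module', 'measure', 'temperature')]
```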

Keywords: NLP, patent analysis, SAO, semantic-analysis

Procedia PDF Downloads 241