Search results for: small text extraction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7657

7297 Small Entrepreneurs as Creators of Chaos: Increasing Returns Requires Scaling

Authors: M. B. Neace, Xin Gao

Abstract:

Small entrepreneurs are ubiquitous. Regardless of location, their success depends on several behavioral characteristics and market conditions. In this concept paper, we extend this paradigm to include elements from the science of chaos. Our observations, research findings, literature search, and intuition lead us to the proposition that all entrepreneurs seek increasing returns, as did the many small entrepreneurs we have interviewed over the years. There will be a few whose initial perturbations create tsunami-like waves of increasing returns over time, resulting in very large market consequences: the butterfly effect. When small entrepreneurs perturb the marketplace and their initial efforts take root, a series of phase-space transitions begins to occur. They sustain the stream of increasing returns by scaling up. Chaos theory contributes to our understanding of this phenomenon. Sustaining and nourishing the increasing returns of small entrepreneurs as complex adaptive systems requires scaling. In this paper, we focus on the most critical element of the small-entrepreneur scaling process: the mindset of the owner-operator.

Keywords: entrepreneur, increasing returns, scaling, chaos

Procedia PDF Downloads 435
7296 A Discussion on Electrically Small Antenna Property

Authors: Riki H. Patel, Arpan Desia, Trushit Upadhayay

Abstract:

The demand for compact antennas has been ever increasing since the inception of wireless communication devices. In the age of wireless communication, the requirement for miniaturized antennas is quite high. Antenna dimensions are often decided by application requirements rather than by practical antenna constraints, and the tradeoff between antenna size and efficiency and other antenna parameters remains a debatable issue. The article presents a detailed review of the fundamentals of electrically small antennas and their potential applications. In addition, the constraints and challenges of electrically small antennas are also presented.

Keywords: bandwidth, communication, electrically small antenna, communication engineering

Procedia PDF Downloads 504
7295 Extractive Desulfurization of Fuels Using Choline Chloride-Based Deep Eutectic Solvents

Authors: T. Zaki, Fathi S. Soliman

Abstract:

A desulfurization process is required by most, if not all, refineries to achieve ultra-low-sulfur fuel containing less than 10 ppm sulfur. Many research works and effective technologies have been studied to achieve deep desulfurization in a moderate reaction environment, such as adsorptive desulfurization (ADS), oxidative desulfurization (ODS), biodesulfurization, and extractive desulfurization (EDS). Extractive desulfurization using deep eutectic solvents (DESs) is considered a simple, cheap, highly efficient, and environmentally friendly process. In this work, four DESs were designed and synthesized. Choline chloride (ChCl) was selected as a typical hydrogen bond acceptor (HBA), and ethylene glycol (EG), glycerol (Gl), urea (Ur), and thiourea (Tu) were selected as hydrogen bond donors (HBD), from which the series of deep eutectic solvents was synthesized. The experimental data showed that the synthesized DESs exhibited desulfurization affinity towards thiophene species in cyclohexane solvent. Ethylene glycol molecules showed a greater affinity than choline chloride for forming hydrogen bonds with thiophene. Accordingly, the ethylene glycol/choline chloride DES had the highest extraction efficiency.

Keywords: DES, desulfurization, green solvent, extraction

Procedia PDF Downloads 255
7294 Construction and Analysis of Tamazight (Berber) Text Corpus

Authors: Zayd Khayi

Abstract:

This paper deals with the construction and analysis of a Tamazight text corpus. The grammatical structure of Tamazight remains poorly understood, and the lack of a comparative grammar leads to linguistic issues. To fill this gap, at least in part, we constructed a diachronic corpus of the Tamazight language and developed a program tool to analyze it. The work addresses different aspects of Tamazight and the dialects used in North Africa, specifically in Morocco, focusing on three Moroccan dialects: Tamazight, Tarifiyt, and Tachlhit. The Latin script was a good choice because of the many sources available in it. The corpus is based on the grammatical parameters and features of the language. The text collection contains more than 500 texts covering a long historical period; it is free, and it will be useful for further investigations. The texts were transformed into XML for standardization, and the corpus counts more than 200,000 words. Based on linguistic rules and statistical methods, an original user interface and software prototype were developed by combining web design technologies and Python. The interface provides users with the ability to distinguish easily between feminine/masculine nouns and verbs, and it is available in three languages: TMZ, FR, and EN. The selected texts were not initially categorized; this work was done manually, since within corpus linguistics there is currently no commonly accepted approach to the classification of texts. The texts are distinguished into ten categories. To describe and represent the texts in the corpus, we elaborated the XML structure according to the TEI recommendations. The search function retrieves particular word types, such as feminine/masculine nouns and verbs. Nouns are divided into two gender forms.
The neutral form of a word corresponds to the masculine, while the feminine is indicated by a double t-t affix (the prefix t- and the suffix -t), e.g., Tarbat (girl), Tamtut (woman), Taxamt (tent), and Tislit (bride). However, some words' feminine form contains only the prefix t- and the suffix -a, e.g., Tasa (liver), tawja (family), and tarwa (progenitors). Generally, Tamazight masculine words have prefixes that distinguish them from other words, for instance 'a', 'u', 'i', e.g., Asklu (tree), udi (cheese), ighef (head). Verbs in the corpus are given for the first person singular and plural, with the suffixes 'agh', 'ex', 'egh', e.g., 'ghrex' (I study), 'fegh' (I go out), 'nadagh' (I call). The program tool provides the following features over the corpus: a list of all tokens, a list of unique words, lexical diversity measures, and various grammatical queries. To conclude, this corpus has so far focused on a small group of Tamazight parts of speech: verbs and nouns. Work on adjectives, pronouns, adverbs, and other categories is ongoing.
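As a loose illustration (not the authors' actual tool), the affix rules described above can be sketched as a tiny classifier for Latin-script Tamazight nouns; the function name and the rule ordering are our own assumptions:

```python
# Illustrative sketch of the gender affix rules described in the abstract.
def noun_gender(word: str) -> str:
    w = word.lower()
    # Feminine circumfix t-...-t, e.g. tarbat (girl), tislit (bride)
    if w.startswith("t") and w.endswith("t") and len(w) > 2:
        return "feminine"
    # Feminine variant: prefix t- with suffix -a, e.g. tasa (liver), tarwa
    if w.startswith("t") and w.endswith("a") and len(w) > 2:
        return "feminine"
    # Masculine nouns typically begin with a-, u-, or i-, e.g. asklu, udi, ighef
    if w[:1] in ("a", "u", "i"):
        return "masculine"
    return "unknown"

print(noun_gender("Tarbat"))  # feminine
print(noun_gender("Asklu"))   # masculine
```

Real Tamazight morphology is of course richer than these heuristics; the corpus interface presumably combines such rules with lexical information.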

Keywords: Tamazight (Berber) language, corpus linguistic, grammar rules, statistical methods

Procedia PDF Downloads 40
7293 Improving Topic Quality of Scripts by Using Scene Similarity Based Word Co-Occurrence

Authors: Yunseok Noh, Chang-Uk Kwak, Sun-Joong Kim, Seong-Bae Park

Abstract:

Scripts are one of the basic text resources for understanding broadcasting contents. Since broadcast media wields great influence over the public, tools for understanding broadcasting contents are in high demand. Topic modeling is a method for obtaining a summary of broadcasting contents from their scripts. Generally, scripts represent contents descriptively with directions and speeches, and they also provide scene segments that can be seen as semantic units. Therefore, a script can be topic modeled by treating a scene segment as a document. However, because scripts consist mainly of speech, relatively few word co-occurrences are observed within scene segments, which inevitably degrades the quality of topics learned by statistical methods. To tackle this problem, we propose a method of learning with additional word co-occurrence information obtained using scene similarities. The main idea is that knowing two or more texts to be topically related helps learn higher-quality topics; in turn, higher-quality topics yield more accurate judgments of whether two texts are related. In this paper, we regard two scene segments as related if their topical similarity is high enough, and we treat words as co-occurring if they appear together in topically related scene segments. In the experiments, we show that the proposed method generates higher-quality topics from Korean drama scripts than the baselines.
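A minimal sketch of the core idea, with assumed stand-ins: toy scene texts, TF-IDF cosine as the scene-similarity measure, and an arbitrary 0.2 threshold (the paper derives relatedness from topical similarity instead):

```python
# Hypothetical sketch: treat scene segments as documents, then add word
# co-occurrence counts between scenes whose similarity is high enough.
from itertools import combinations
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

scenes = [
    "detective enters the dark office",
    "the detective searches the office desk",
    "children play outside in the park",
]
sim = cosine_similarity(TfidfVectorizer().fit_transform(scenes))

threshold = 0.2  # assumed cut-off for "topically related"
extra_cooc = Counter()
for i, j in combinations(range(len(scenes)), 2):
    if sim[i, j] >= threshold:
        # Every cross-scene word pair of the two related scenes co-occurs
        for wi in set(scenes[i].split()):
            for wj in set(scenes[j].split()):
                extra_cooc[(wi, wj)] += 1

# Pairs such as ("detective", "office") gain co-occurrence mass even when
# the words appear in different (but related) scene segments.
```

The augmented counts in `extra_cooc` would then be supplied to the topic model alongside the ordinary within-segment co-occurrences.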

Keywords: broadcasting contents, scripts, text similarity, topic model

Procedia PDF Downloads 294
7292 Importance of Punctuation in Communicative Competence

Authors: Khayriniso Bakhtiyarovna Ganiyeva

Abstract:

The article explores the significance of punctuation in achieving communicative competence. It underscores that effective communication goes beyond simply using punctuation correctly: for the successful completion of a communicative activity, what matters is not that the writer uses punctuation marks correctly but that they achieve the goal of expressing a certain meaning. Agreement between the writer and the reader in their mutual understanding of the text is of primary importance. It should also be taken into account that situational communication provides special informative content and expressiveness of speech. The norms of the situation are determined by the nature of the information in the text, and punctuation marks used in accordance with the norm perform logical-semantic, highlighting, expressive-emotional, and signaling functions. It is a mistake to classify signs subject to the norm of the situation as created by the author, because they functionally reflect the general stylistic features of different texts. Such signs are among the common signs that are codified only by the semantics and structure of the created text.

Keywords: communicative-pragmatic approach, expressiveness of speech, stylistic features, comparative analysis

Procedia PDF Downloads 38
7291 Using Visualization Techniques to Support Common Clinical Tasks in Clinical Documentation

Authors: Jonah Kenei, Elisha Opiyo

Abstract:

Electronic health records (EHRs), as repositories of patient information, are nowadays the most commonly used technology to record, store, and review patient clinical records and perform other clinical tasks. However, the accurate identification and retrieval of relevant information from clinical records is difficult because clinical documents are unstructured and lack a clear organization. Medical practice therefore faces a challenge due to the rapid growth of health information in EHRs, mostly in narrative text form, and it is becoming important to manage the growing amount of data for a single patient effectively. There is thus a requirement to visualize EHRs in a way that aids physicians in clinical tasks and medical decision-making. Applying text visualization techniques to unstructured clinical narrative texts is a new area of research that aims to provide better information extraction and retrieval to support clinical decision support in scenarios where the amount of data generated continues to grow. Clinical datasets in EHRs offer great potential for training accurate statistical models to classify facets of information, which can then be used to improve patient care and outcomes; however, the unstructured nature of clinical texts is a common problem in many clinical note datasets. This paper examines the issue of taking raw clinical texts and mapping them into meaningful structures that can support healthcare professionals utilizing narrative texts. Our work is the result of a collaborative design process that was aided by empirical data collected through formal usability testing.

Keywords: classification, electronic health records, narrative texts, visualization

Procedia PDF Downloads 96
7290 Text as Reader Device Improving Subjectivity on the Role of Attestation between Interpretative Semiotics and Discursive Linguistics

Authors: Marco Castagna

Abstract:

The proposed paper inquires into the relation between text and reader, focusing on the concept of ‘attestation’. Indeed, despite being widely accepted in semiotic research, even today the concept of text remains uncertainly defined. It seems undeniable that what is called ‘text’ offers an image of internal cohesion and coherence that makes it possible to analyze it as an object. Nevertheless, this same object becomes problematic when it is pragmatically activated by the act of reading. In fact, like the T.A.R.D.I.S., the unique space-time vehicle used by the well-known BBC character Doctor Who in his adventures, every text appears to its readers not only “bigger inside than outside” but also as offering spaces that change according to the traveller standing in it. In a few words, this singular condition raises questions about the gnosiological relation between text and reader. How can a text be considered the ‘same’ even if it can be read in different ways by different subjects? How can readers be provided in advance with the knowledge required for ‘understanding’ a text, yet at the same time learn something more from it? To explain this singular condition, it seems useful to start thinking about text as a device more than an object. In other words, this unique status becomes clearer when ‘text’ ceases to be considered a box designed to move meaning from a sender to a recipient (marking the semiotic priority of the “code”) and starts to be recognized as a performative meaning hypothesis, discursively configured by one or more forms and empirically perceivable by means of one or more substances. Thus, a text appears as a “semantic hanger”, potentially offered to the “unending deferral of the interpretant”, and from time to time fixed as an “instance of Discourse”.
In this perspective, every reading can be considered an answer to the continuous request for confirming or denying the meaning configuration (the meaning hypothesis) expressed by the text. Finally, ‘attestation’ is exactly what regulates this dynamic of request and answer, through which the reader is able to confirm his previous hypotheses about reality or acquire new ones.

Keywords: attestation, meaning, reader, text

Procedia PDF Downloads 222
7289 Prevalence of Lower Third Molar Impactions and Angulations Among Yemeni Population

Authors: Khawlah Al-Khalidi

Abstract:

The purpose of this study was to investigate the prevalence of lower third molars in a sample of patients from Ibb University Affiliated Hospital, to study and categorise their position using the Pell and Gregory classification, and to look into a possible correlation between their position and the indication for extraction. Materials and methods: This is a retrospective, observational study of a sample of 200 patients aged 16 to 21 from Ibb University Affiliated Hospital, including patient record validation and orthopantomography performed at screening appointments. Results and discussion: Males make up 63% of the sample, while people aged 19 to 20 make up 41.2%. Lower third molars were found in 365 instances, accounting for 91% of the sample under study. According to the Pell and Gregory classification, the most common position is IIB, with 37%, followed by IIA with 21%; less common classes are IIIA, IC, and IIIC, with 1%, 3%, and 3%, respectively. It was possible to determine that 56% of the lower third molars in the sample were recommended for extraction at the screening consultation. Finally, there are differences in third molar location and angulation. There was, however, a link between the space available for third molar eruption and the need for tooth extraction.

Keywords: lower third molar, extraction, Pell and Gregory classification, lower third molar impaction

Procedia PDF Downloads 28
7288 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System

Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa

Abstract:

Speaker identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification (CISI) system based on a Multiple Classifier System (MCS) is proposed, using the Mel Frequency Cepstrum Coefficient (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initializing the GMM parameters estimated in the EM step improved the convergence rate and system performance. The system also uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identifications. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
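A hedged sketch of the GMM half of such a pipeline, with random arrays standing in for per-frame MFCC vectors (MFCC extraction, VAD, VQ, and LBG initialization are omitted, and the speaker names are invented):

```python
# Enrollment/identification sketch: one GMM per speaker, fitted with EM;
# the speaker whose model gives the highest average log-likelihood wins.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in enrollment data: (frames x 13) arrays playing the role of MFCCs
train = {
    "alice": rng.normal(0.0, 1.0, size=(500, 13)),
    "bob":   rng.normal(3.0, 1.0, size=(500, 13)),
}

# k-means initialization here; the paper studies LBG initialization instead
models = {spk: GaussianMixture(n_components=4, random_state=0).fit(X)
          for spk, X in train.items()}

def identify(frames):
    # score() returns the mean per-frame log-likelihood under each model
    return max(models, key=lambda spk: models[spk].score(frames))

test_frames = rng.normal(3.0, 1.0, size=(200, 13))
print(identify(test_frames))  # prints "bob"
```

In a full system, the VQ classifier would vote alongside the GMMs, with the relative index arbitrating disagreements as described above.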

Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)

Procedia PDF Downloads 285
7287 Intrusion Detection System Using Linear Discriminant Analysis

Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou

Abstract:

Most existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and less accurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. The second uses the pseudoinverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for classification. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the PCA+LDA method clearly outperforms the pseudoinverse LDA method when large training data are available.
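The first solution above (PCA before LDA, then KNN) can be sketched with scikit-learn; the digits dataset merely stands in for KDDcup99/NSL-KDD, and the component counts are illustrative assumptions:

```python
# PCA -> LDA -> KNN pipeline: PCA first reduces dimensionality so the
# LDA within-class scatter matrix is no longer singular (the SSS problem),
# then KNN classifies samples in the LDA-projected space.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = make_pipeline(PCA(n_components=30),
                    LinearDiscriminantAnalysis(),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```

The pseudoinverse variant would instead replace the inverse of the within-class scatter matrix with its Moore-Penrose pseudoinverse inside the LDA solver.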

Keywords: LDA, Pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99

Procedia PDF Downloads 208
7286 Recycling of Spent Mo-Co Catalyst for the Recovery of Molybdenum Using Cyphos IL 104

Authors: Harshit Mahandra, Rashmi Singh, Bina Gupta

Abstract:

Molybdenum is widely used in thermocouples, in the anticathodes of X-ray tubes, and in the production of steel alloys. Molybdenum compounds are extensively used as catalysts in petroleum-refining industries for hydrodesulphurization. The activity of the catalysts decreases gradually with time, and they are dumped as hazardous waste due to contamination with toxic materials during the process. These spent catalysts can serve as a secondary source for metal recovery and help to resolve environmental and economic issues. In the present study, the extraction and separation of molybdenum from a Mo-Co spent catalyst leach liquor containing 0.870 g L⁻¹ Mo, 0.341 g L⁻¹ Co, 0.422 ×10⁻¹ g L⁻¹ Fe and 0.508 g L⁻¹ Al in 3 mol L⁻¹ HCl have been investigated using solvent extraction. The extracted molybdenum was finally recovered as molybdenum trioxide. The leaching conditions were 3 mol L⁻¹ HCl, 90 °C, a solid-to-liquid ratio (w/v) of 1.25%, and a reaction time of 60 minutes; 96.45% of the molybdenum was leached under these conditions. For the extraction of molybdenum from the leach liquor, Cyphos IL 104 [trihexyl(tetradecyl)phosphonium bis(2,4,4-trimethylpentyl)phosphinate] in toluene was used as the extractant. Around 91% of the molybdenum was extracted with 0.02 mol L⁻¹ Cyphos IL 104, and 75% of the molybdenum was stripped from the loaded organic phase with 2 mol L⁻¹ HNO₃ at A/O = 1/1. McCabe-Thiele diagrams were drawn to determine the number of stages required for the extraction and stripping of molybdenum; they indicate that two stages are required for both extraction and stripping at A/O = 1/1, which was also confirmed by countercurrent simulation studies. Around 98% of the molybdenum was extracted in two countercurrent extraction stages with no co-extraction of cobalt and aluminum. Iron was removed from the loaded organic phase by scrubbing with 0.01 mol L⁻¹ HCl. Quantitative recovery of molybdenum was achieved in three countercurrent stripping stages at A/O = 1/1.
Molybdenum trioxide was obtained from the strip solution and was characterized by XRD, FE-SEM, and EDX techniques. Molybdenum trioxide, owing to its distinctive electrochromic, thermochromic, and photochromic properties, is used as a smart material for sensors, lubricants, and Li-ion batteries. It finds application in various processes such as methanol oxidation, metathesis, propane oxidation, and hydrodesulphurization, and it can also be used as a precursor for the synthesis of MoS₂ and MoSe₂.

Keywords: Cyphos IL 104, molybdenum, spent Mo-Co catalyst, recovery

Procedia PDF Downloads 181
7285 Extrapulmonary Gastrointestinal Small Cell Carcinoma: A Single Institute Experience of 14 Patients from a Low Middle Income Country

Authors: Awais Naeem, Osama Shakeel, Faizan Ullah, Abdul Wahid Anwer

Abstract:

Introduction: To study the clinicopathological factors, diagnostic features, and survival of extra-pulmonary small cell carcinoma. Methodology: All patients diagnosed with extra-pulmonary small cell carcinoma between 1995 and 2017 were included in the study. Demographic variables and clinicopathological factors were collected; the management of the disease was recorded, along with short- and long-term oncological outcomes. All data were entered and analyzed in SPSS version 21. Results: A total of 14 patients were included in the study. Median age was 53.42 ± 16.1 years. There were 5 male and 9 female patients. The most common presentation was dysphagia among the patients with esophageal small cell carcinoma, while the other patients had abdominal pain. The mean duration of symptoms was 4.23 ± 2.91 months. The most common site was the esophagus (n=6), followed by the gall bladder (n=3). Almost all of the patients received chemoradiotherapy, and the majority presented with extensive disease. Five patients (35.7%) died during the follow-up period, two (14.3%) were alive, and the rest were lost to follow-up. Mean follow-up was 22.92 months and median follow-up was 15 months. Conclusion: Extra-pulmonary small cell carcinoma is rare and needs to be managed aggressively. All patients should be treated with both systemic and local therapies.

Keywords: small cell carcinoma of esophagus, extrapulmonary small cell carcinoma, small cell carcinoma of gall bladder, small cell carcinoma of rectum, small cell carcinoma of stomach

Procedia PDF Downloads 136
7284 From Binary Solutions to Real Bio-Oils: A Multi-Step Extraction Story of Phenolic Compounds with Ionic Liquid

Authors: L. Cesari, L. Canabady-Rochelle, F. Mutelet

Abstract:

The thermal conversion of lignin produces bio-oils that contain many compounds with high added value, such as phenolic compounds. In order to extract these compounds efficiently, the possible use of the choline bis(trifluoromethylsulfonyl)imide [Choline][NTf2] ionic liquid was explored. To this end, a multi-step approach was implemented. First, binary (phenolic compound + solvent) and ternary (phenolic compound + solvent + ionic liquid) solutions were investigated. Eight binary systems of phenolic compound and water were studied at atmospheric pressure and quantified using the turbidity method and UV spectroscopy. Ternary systems (phenolic compound + water + [Choline][NTf2]) were investigated at room temperature and atmospheric pressure; after stirring, the solutions were left to settle, and a sample of each phase was collected and analyzed by gas chromatography with an internal standard. These results were used to determine the interaction parameters of thermodynamic models. Then, extractions were performed on synthetic solutions to determine the influence of several operating conditions (temperature, kinetics, amount of [Choline][NTf2]). With this knowledge, it was possible to design and simulate an extraction process composed of one extraction column and one flash. Finally, the extraction efficiency of [Choline][NTf2] was quantified with real bio-oils from lignin pyrolysis. Qualitative and quantitative analyses were performed using gas chromatography coupled with mass spectrometry and flame ionization detection. The experimental measurements show that the extraction of phenolic compounds is efficient at room temperature, is quick, and does not require a large amount of [Choline][NTf2]. Moreover, simulations of the extraction process demonstrate that the [Choline][NTf2] process requires less energy than one based on an organic solvent, and the efficiency of [Choline][NTf2] was confirmed in real situations with the experiments on lignin pyrolysis bio-oils.

Keywords: bio-oils, extraction, lignin, phenolic compounds

Procedia PDF Downloads 88
7283 Microwave-Assisted Alginate Extraction from Portuguese Saccorhiza polyschides – Influence of Acid Pretreatment

Authors: Mário Silva, Filipa Gomes, Filipa Oliveira, Simone Morais, Cristina Delerue-Matos

Abstract:

Brown seaweeds are abundant along the Portuguese coastline and represent an almost unexploited marine economic resource. One of the most common species, easily available for harvesting on the northwest coast, is Saccorhiza polyschides, which grows on the lower shore and coastal rocky reefs. It is used almost exclusively by local farmers as a natural fertilizer, but it contains a substantial amount of valuable compounds, particularly alginates, natural biopolymers of high interest for many industrial applications. Alginates are natural polysaccharides present in the cell walls of brown seaweed; they are highly biocompatible, with particular properties that make them of high interest for the food, biotechnology, cosmetics, and pharmaceutical industries. Conventional extraction processes are based on thermal treatment; they are lengthy and consume large amounts of energy and solvents. In recent years, microwave-assisted extraction (MAE) has shown enormous potential to overcome the major drawbacks of conventional (thermal and/or solvent-based) plant material extraction techniques, and it has been successfully applied to the extraction of agar, fucoidans, and alginates. In the present study, the acid pretreatment of the brown seaweed Saccorhiza polyschides for subsequent MAE of alginate was optimized. Seaweeds were collected in northwest Portuguese coastal waters of the Atlantic Ocean between May and August 2014. Experimental design was used to assess the effect of temperature and acid pretreatment time on alginate extraction. Response surface methodology allowed the determination of the optimum MAE conditions: 40 mL of 0.1 M HCl per g of dried seaweed with constant stirring at 20 °C for 14 h. The optimal acid pretreatment conditions significantly enhanced the MAE of alginates from Saccorhiza polyschides, thus contributing to the development of a viable, more environmentally friendly alternative to conventional processes.
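For illustration only, the response-surface step can be sketched on made-up data: fit a full quadratic model of yield in temperature and pretreatment time, then read the optimum off a grid. The synthetic yield function is our own assumption, centered near the reported optimum (20 °C, 14 h):

```python
# Toy response-surface fit: quadratic model of yield vs. two factors.
import numpy as np

rng = np.random.default_rng(1)
T = rng.uniform(10, 40, 60)   # temperature, °C
t = rng.uniform(2, 24, 60)    # acid pretreatment time, h
# Assumed ground-truth surface peaking at (20 °C, 14 h), plus noise
yield_ = 50 - 0.05 * (T - 20) ** 2 - 0.1 * (t - 14) ** 2 + rng.normal(0, 0.5, 60)

# Least-squares fit of a full quadratic surface: 1, T, t, T*t, T^2, t^2
A = np.column_stack([np.ones_like(T), T, t, T * t, T**2, t**2])
coef, *_ = np.linalg.lstsq(A, yield_, rcond=None)

# Locate the fitted optimum on a dense grid
Tg, tg = np.meshgrid(np.linspace(10, 40, 200), np.linspace(2, 24, 200))
G = np.column_stack([np.ones(Tg.size), Tg.ravel(), tg.ravel(),
                     (Tg * tg).ravel(), (Tg**2).ravel(), (tg**2).ravel()])
best = np.argmax(G @ coef)
print(f"fitted optimum ≈ {Tg.ravel()[best]:.0f} °C, {tg.ravel()[best]:.0f} h")
```

The actual study would fit such a model to measured alginate yields from the experimental design rather than to a simulated surface.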

Keywords: acid pretreatment, alginate, brown seaweed, microwave-assisted extraction, response surface methodology

Procedia PDF Downloads 350
7282 Use RP-HPLC To Investigate Factors Influencing Sorghum Protein Extraction

Authors: Khaled Khaladi, Rafika Bibi, Hind Mokrane, Boubekeur Nadjemi

Abstract:

Sorghum (Sorghum bicolor (L.) Moench) is an important cereal crop grown in the semi-arid tropics of Africa and Asia due to its drought tolerance. Sorghum grain has a protein content varying from 6 to 18%, with an average of 11%. Sorghum proteins can be broadly classified into prolamin and non-prolamin proteins. Kafirins, the major storage proteins, are classified as prolamins; as such, they contain high levels of proline and glutamine and are soluble in non-polar solvents such as aqueous alcohols. Kafirins account for 77 to 82% of the protein in the endosperm, whereas non-prolamin proteins (namely albumins, globulins, and glutelins) make up about 30% of the proteins. To optimize the extraction of sorghum proteins, several variables were examined: detergent type and concentration, reducing agent type and concentration, and buffer pH and concentration. Samples were quantified and characterized by RP-HPLC.

Keywords: sorghum, protein extraction, detergent, food science

Procedia PDF Downloads 295
7281 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs, and the steep computational cost of evaluating these integrals poses a major numerical challenge for the efficient implementation of quantum chemistry software. This work presents a moment-based machine-learning approach for the efficient evaluation of electron-repulsion integrals. The integrals were approximated using linear combinations of a small number of moments, and machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest with recursive feature elimination was used to identify promising features; it performed best at learning the sign of each coefficient but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, together with an iterative feature-masking approach that compresses the input vector by identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results compared to a single network.
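The core idea of representing each integral as a linear combination of a few moments can be illustrated with a plain least-squares baseline standing in for the paper's random-forest and neural-network estimators. All moments, coefficients, and noise levels below are synthetic.

```python
import numpy as np

# Synthetic setup: each "integral" is a linear combination of 6 moments
rng = np.random.default_rng(1)
n_samples, n_moments = 200, 6
moments = rng.normal(size=(n_samples, n_moments))

true_coeffs = np.array([0.9, -0.4, 0.0, 0.25, 0.0, -0.1])
integrals = moments @ true_coeffs + rng.normal(0, 0.01, n_samples)

# least squares recovers the coefficients of the linear combination;
# the paper instead learns them (sign and magnitude separately) with
# random forests and neural networks
est_coeffs, *_ = np.linalg.lstsq(moments, integrals, rcond=None)
print(np.sign(est_coeffs[np.abs(true_coeffs) > 0]))
```

On this easy synthetic problem the signs (the subtask the random forest handled best) and magnitudes are both recovered; the ML machinery in the paper targets the harder realistic case where the relationship is learned from expensive reference calculations.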

Keywords: quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction

Procedia PDF Downloads 84
7280 Making Sense of Places: A Comparative Study of Three Contexts in Thailand

Authors: Thirayu Jumsai Na Ayudhya

Abstract:

Studies of what architecture means to people in their everyday lives have lacked a contextualized and holistic theoretical framework. This article succinctly presents a theoretical framework obtained from a comparative study of how people experience everyday architecture in three different contexts: 1) the Bangkok CBD, 2) Phuket island old town, and 3) Nan province old town. The ways people make sense of everyday architecture can be addressed in four super-ordinate themes: (1) building in urban (text), (2) building in (text), (3) building in human (text), and (4) building in time (text). In this article, these super-ordinate themes were verified by examining whether they recur in the three studied contexts. In each context, participants were divided into two groups: 1) local people and 2) visitors. Participants were asked to photograph everyday architecture during their daily routine and then took part in elicitation interviews based on the photographs they had produced. Interpretative phenomenological analysis (IPA) was adopted to interpret the interview data. Sub-themes emerging in each context were then cross-compared among the three contexts. It was found that the four super-ordinate themes recur, with additional distinctive sub-themes. Further studies in other contexts with different socio-political, economic, and cultural conditions are recommended to complete the theoretical framework.

Keywords: sense of place, the everyday architecture, architectural experience, the everyday

Procedia PDF Downloads 135
7279 Effect of Honey on Rate of Healing of Socket after Tooth Extraction in Rabbits

Authors: Deependra Prasad Sarraf, Ashish Shrestha, Mehul Rajesh Jaisani, Gajendra Prasad Rauniar

Abstract:

Background: Honey is the world’s oldest known wound dressing, yet its wound-healing properties are still not fully established. Concerns about antibiotic resistance and a renewed interest in natural remedies have prompted a resurgence of research into the antimicrobial and wound-healing properties of honey. Evidence from animal studies and some trials has suggested that honey may accelerate wound healing in burns, infected wounds, and open wounds. None of these reports has documented the effect of honey on the healing of the socket after tooth extraction. Therefore, the present experimental study was planned to evaluate the efficacy of honey in the healing of the socket after tooth extraction in rabbits. Materials and Methods: An experimental study was conducted in six New Zealand White rabbits. The first premolar tooth on both sides of the lower jaw was extracted under anesthesia produced by ketamine and xylazine, followed by application of honey to one socket (test group) and normal saline to the opposite socket (control group). The intervention was continued for two more days. On the 7th day, a biopsy was taken from the extraction site and histopathological examination was done. Student’s t-test was used for comparison between the groups, and differences were considered statistically significant at a p-value less than 0.05. Results: There was a significant difference between the control and test groups in fibroblast proliferation (p = 0.0019) and bony trabeculae formation (p = 0.0003). Inflammatory cells were observed in both groups, with no significant difference (p = 1.0). The overlying epithelium was hyperplastic in both groups. Conclusion: The study showed that local application of honey promoted rapid healing, particularly by increasing fibroblast proliferation and bony trabeculae formation.

Keywords: honey, extraction wound, Nepal, healing

Procedia PDF Downloads 272
7278 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is the key to solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Furthermore, a larger number of features results in high computational complexity, while too few features compromise performance. In this paper, a novel approach to selecting an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the deep autoencoder network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing gradient problem and issues with normalization of the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set to a smaller one, 4 hidden layers are used in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all features are combined into the responses of the hidden layers. The results reveal that higher accuracy can be achieved using the optimally reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is further validated against two methods recently reported in the literature, and the comparison reveals that the proposed method achieves far better classification accuracy than either.
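The autoencoder step can be sketched minimally: a one-hidden-layer network trained to reconstruct its input, so the hidden activations become the compressed feature set. This is a toy stand-in for the HO-DAE, which uses 4 hidden layers and a meta-heuristic search rather than the plain gradient descent used here; the "feature vectors" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, code = 256, 20, 4

# correlated synthetic features so a 4-dimensional code can capture them
latent = rng.normal(size=(n, code))
mix = rng.normal(size=(code, d))
X = latent @ mix + 0.01 * rng.normal(size=(n, d))

W1 = 0.1 * rng.normal(size=(d, code))   # encoder weights
W2 = 0.1 * rng.normal(size=(code, d))   # decoder weights
lr = 0.01

def mse(A, B):
    return float(np.mean((A - B) ** 2))

start = mse(X, np.tanh(X @ W1) @ W2)
for _ in range(2000):
    H = np.tanh(X @ W1)        # encoder output: the reduced feature set
    Xh = H @ W2                # decoder reconstruction
    err = Xh - X
    gW2 = H.T @ err / n
    gH = err @ W2.T * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ gH / n
    W1 -= lr * gW1
    W2 -= lr * gW2

end = mse(X, np.tanh(X @ W1) @ W2)
print(start, "->", end)  # reconstruction MSE drops during training
```

In the paper, minimizing this same reconstruction MSE is done by a meta-heuristic search instead, precisely to sidestep the vanishing-gradient issue that afflicts deeper gradient-trained stacks.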

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 96
7277 Machine Learning Automatic Detection on Twitter Cyberbullying

Authors: Raghad A. Altowairgi

Abstract:

With the widespread adoption of social media platforms, young people tend to use them extensively as their first means of communication because of their ease and modernity. But these platforms often create fertile ground for bullies to practice aggressive behavior against their victims. Platform usage cannot be reduced, but intelligent mechanisms can be implemented to reduce the abuse, and this is where machine learning comes in: understanding and classifying text can help minimize acts of cyberbullying. Artificial intelligence techniques have matured into applied tools for addressing the phenomenon of cyberbullying. In this research, machine learning models are built to classify text into two classes: cyberbullying and non-cyberbullying. The data are preprocessed in four stages: removing characters that provide no meaningful information to the models, tokenization, removing stop words, and lowercasing the text. BoW and TF-IDF are used as the main features for five classifiers: logistic regression, Naïve Bayes, random forest, XGBoost, and CatBoost. They score 92%, 90%, 92%, 91%, and 86%, respectively.
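The four preprocessing stages and the TF-IDF featurisation described above can be sketched with the standard library alone; the resulting vectors are what the five classifiers would consume. The example texts and the tiny stop-word list are illustrative, not from the study's dataset.

```python
import math
import re
from collections import Counter

STOP_WORDS = {"a", "an", "the", "is", "are", "you", "so"}

def preprocess(text):
    text = text.lower()                      # stage 4: lowercase
    text = re.sub(r"[^a-z\s]", " ", text)    # stage 1: drop noise characters
    tokens = text.split()                    # stage 2: tokenize
    return [t for t in tokens if t not in STOP_WORDS]  # stage 3: stop words

corpus = [
    "You are so stupid!!!",         # cyberbullying-like example
    "Great game today, friend :)",  # non-cyberbullying example
    "Nobody likes you, loser",      # cyberbullying-like example
]
docs = [preprocess(t) for t in corpus]

def tfidf(docs):
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))   # document frequency
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({w: (tf[w] / len(d)) * math.log(n / df[w])
                        for w in tf})
    return vectors

vectors = tfidf(docs)
print(vectors[0])
```

A bag-of-words variant simply keeps the raw counts `tf[w]` instead of the weighted values.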

Keywords: cyberbullying, machine learning, Bag-of-Words, term frequency-inverse document frequency, natural language processing, Catboost

Procedia PDF Downloads 105
7276 Kinetic and Removable of Amoxicillin Using Aliquat336 as a Carrier via a HFSLM

Authors: Teerapon Pirom, Ura Pancharoen

Abstract:

Amoxicillin is an antibiotic widely used to treat various infections in both human beings and animals. However, when amoxicillin is released into the environment it becomes a major problem, causing bacterial resistance to these drugs and failure of antibiotic treatment. Liquid membranes are of great interest as a promising method for the separation and recovery of target ions from aqueous solutions, since the carriers used in the transport mechanism provide high selectivity and rapid transport of the desired ions. The simultaneous extraction and stripping in a single liquid membrane unit operation is particularly attractive. It is therefore practical to apply liquid membranes, particularly the hollow fiber supported liquid membrane (HFSLM), to industrial applications: the HFSLM is a separation process with lower capital and operating costs, low energy consumption, long extractant lifetime, high selectivity, and high fluxes compared with solid membranes, and its simple design is amenable to scaling up. The extraction and recovery of amoxicillin through the HFSLM using Aliquat 336 as a carrier were explored experimentally. The important variables affecting the transport of amoxicillin, viz. extractant concentration and operating time, were investigated. The highest extraction percentage of 85.35% and stripping percentage of 80.04% were achieved under the best conditions: 6 mmol/L Aliquat 336 and an operating time of 100 min. The extraction reaction order (n) and the extraction reaction rate constant (kf) were found to be 1.00 and 0.0344 min-1, respectively.
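Since the reported reaction order is n = 1, the feed concentration decays as C(t) = C0·exp(-kf·t), and kf is recovered as the slope of ln(C0/C) versus t. The sketch below generates noiseless synthetic concentrations from the reported kf = 0.0344 min⁻¹ and fits the slope back; a real analysis would fit measured concentrations instead.

```python
import numpy as np

kf_true = 0.0344                 # min^-1, value reported in the abstract
t = np.arange(0, 101, 10.0)      # sampling times, min
C0 = 1.0                         # initial concentration (arbitrary units)
C = C0 * np.exp(-kf_true * t)    # first-order decay of the feed phase

# for order n = 1, ln(C0/C) = kf * t, so kf is the slope of a line fit
kf_est = np.polyfit(t, np.log(C0 / C), 1)[0]
print(round(kf_est, 4))  # 0.0344
```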

Keywords: aliquat336, amoxicillin, HFSLM, kinetic

Procedia PDF Downloads 252
7275 Evaluation Means in English and Russian Academic Discourse: Through Comparative Analysis towards Translation

Authors: Albina Vodyanitskaya

Abstract:

Given the culture- and language-specific nature of evaluation, this phenomenon is widely studied around the linguistic world and may be regarded as a challenge for translators. Evaluation penetrates all levels of a scientific text, influencing its composition and the reader’s attitude towards the information presented. One of the most challenging and rarely studied phenomena is the individual style of the scientific writer, which is mostly reflected in the use of evaluative language means. The evaluative and expressive potential of a scientific text is becoming an increasingly welcoming area for researchers, which stems from the shift towards the anthropocentric paradigm in linguistics. Other reasons include the cognitive and psycholinguistic processes that accompany knowledge acquisition, the genre-determined nature of a scientific text, and the increasing public concern about the quality of scientific papers. Another important issue is that linguists all over the world still argue about the definition of evaluation and its functions in the text. The author analyzes various approaches to the study of evaluation and scientific texts. A comparative analysis of English and Russian dissertations and other scientific papers with regard to evaluative language means reveals major differences and similarities between the English and Russian scientific styles. Though standardized and genre-specific, English scientific texts contain more figurative and expressive evaluative means than Russian ones, which should be taken into account when translating scientific papers. The processes that evaluation undergoes when expressed by means of a target language are also analyzed. The author offers a target-language-dependent strategy for the translation of evaluation in English and Russian scientific texts.
The findings may contribute to the theory and practice of translation and can increase scientific writers’ awareness of inter-language and intercultural differences in evaluative language means.

Keywords: academic discourse, evaluation, scientific text, scientific writing, translation

Procedia PDF Downloads 332
7274 Segmentation of Arabic Handwritten Numeral Strings Based on Watershed Approach

Authors: Nidal F. Shilbayeh, Remah W. Al-Khatib, Sameer A. Nooh

Abstract:

Arabic offline handwriting recognition is considered one of the most challenging topics. Arabic handwritten numeral strings are used to automate systems that deal with numbers, such as postal codes, bank account numbers, and numbers on car plates. Segmentation of connected numerals is the main bottleneck in a handwritten numeral recognition system; improving it can, in turn, increase the speed and efficiency of the recognition system. In this paper, we propose algorithms for automatic segmentation and feature extraction of Arabic handwritten numeral strings based on the watershed approach. The algorithms have been designed and implemented to achieve the main goal of segmenting and extracting a string of numeral digits written by hand, especially in the courtesy amount field of bank checks. The segmentation algorithm partitions the string into multiple regions that can be associated with the properties of one or more criteria. The numeral extraction algorithm then separates the numeral string into individual digits. Both the segmentation and feature extraction algorithms have been tested successfully and efficiently on all types of numerals.
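A full watershed transform is beyond a short sketch, but the simpler idea it generalizes (cutting a numeral string where the ink "valley" between digits lies) can be shown with vertical projection profiles. This is an explicitly simplified stand-in for the paper's watershed approach, and only handles digits that do not overlap horizontally; the tiny binary "image" is synthetic.

```python
import numpy as np

# 3x9 binary image containing two separate "digits" of ink
img = np.array([
    [0, 1, 1, 0, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 0, 1, 0, 0],
])

profile = img.sum(axis=0)   # ink pixels per column
ink = profile > 0           # columns that contain any ink

# group consecutive ink columns into per-digit segments,
# cutting at columns where the profile falls to zero
segments, start = [], None
for col, has_ink in enumerate(ink):
    if has_ink and start is None:
        start = col
    elif not has_ink and start is not None:
        segments.append((start, col - 1))
        start = None
if start is not None:
    segments.append((start, len(ink) - 1))

print(segments)  # [(1, 2), (5, 7)]
```

The watershed approach extends this by treating the image (or its distance transform) as a topographic surface and flooding from markers, which also splits digits that touch.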

Keywords: handwritten numerals, segmentation, courtesy amount, feature extraction, numeral recognition

Procedia PDF Downloads 364
7273 The Syntactic Features of Islamic Legal Texts and Their Implications for Translation

Authors: Rafat Y. Alwazna

Abstract:

Certain religious texts are deemed part of legal texts that are characterised by high sensitivity and sacredness. Amongst such religious texts are Islamic legal texts, which are replete with Islamic legal terms designating particular legal concepts peculiar to the Islamic legal system and legal culture. From the syntactic perspective, however, Islamic legal texts prove lengthy, condensed and convoluted, with little use of punctuation but with extensive use of subordination and coordination, which separate the main verb from the subject and, of course, carry a heavy load of legal detail. The present paper seeks to examine the syntactic features of Islamic legal texts by analysing a short text of Islamic jurisprudence in an attempt to explore the syntactic features that characterise this type of legal text. A translation of this text into legal English is then exercised to find the translation implications that emerge from the English translation. Based on these implications, the paper compares and contrasts the syntactic features of Islamic legal texts with those of legal English texts. Finally, the present paper argues that there are a number of syntactic features of Islamic legal texts, such as nominalisation, passivisation, little use of punctuation, the use of the Arabic cohesive device, etc., which are also possessed by English legal texts, except for the last feature and with some variations. The paper also claims that when rendering an Islamic legal text into legal English, certain implications emerge, such as the necessity of a sentence break, the omission of the cohesive device concerned, and an increase in the use of nominalisation, passivisation, passive participles, and so on.

Keywords: English legal texts, Islamic legal texts, nominalisation, participles, passivisation, syntactic features, translation implications

Procedia PDF Downloads 198
7272 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, non-small cell lung carcinoma (NSCLC). In particular, we introduce a texture analysis approach, Laws’ texture filters, used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated; there were 14 patients for each tumor stage (I-II, III, or IV). Approximately 45% of the patients had adenocarcinoma (ADC) and approximately 55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed to extract 51 features using first-order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws’ texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used for automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-versus-one SVM. Texture analysis of FDG-PET images might thus be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and to classify tumor stage and subtype automatically.
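One of the feature families named above, the gray-level co-occurrence matrix, is easy to sketch: count how often pairs of gray levels occur at a fixed pixel offset, normalise to probabilities, and derive scalar texture features from the matrix. The 4x4 "image", the 4 gray levels, and the single horizontal offset are illustrative; the study computes 51 such features in MATLAB over real PET images.

```python
import numpy as np

img = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
])
levels = 4

# GLCM for offset (0, 1): one pixel to the right
glcm = np.zeros((levels, levels))
for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    glcm[i, j] += 1          # count each horizontally adjacent pair
glcm /= glcm.sum()           # normalise to joint probabilities

# two classic texture features derived from the GLCM
rows, cols = np.indices(glcm.shape)
contrast = float(((rows - cols) ** 2 * glcm).sum())
energy = float((glcm ** 2).sum())
print(contrast, energy)
```

Features like these (per offset and direction) are what the SFS step then ranks and feeds to the k-NN and SVM classifiers.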

Keywords: cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis

Procedia PDF Downloads 303
7271 Communication through Technology: SMS Taking Most of the Time Impacting the Standard English

Authors: Nazia Sulemna, Sadia Gul

Abstract:

With the advent of mobile phones, text messaging has become a popular medium of communication, and its users multiply with every passing day. Its use is not limited to informal communication but extends to formal communication as well. Students are avid users of mobile phones and of SMS. The present study shows that students use SMS for a number of reasons and spend a good amount of time on it, which results in typographical features, graphones, and rebus writing. Data were collected through questionnaires, and the study concludes that the effect is evident among L2 users, including in their exam writing.

Keywords: text messaging, technology, exams, formal writing

Procedia PDF Downloads 720
7270 Evaluation of Features Extraction Algorithms for a Real-Time Isolated Word Recognition System

Authors: Tomyslav Sledevič, Artūras Serackis, Gintautas Tamulevičius, Dalius Navakauskas

Abstract:

This paper presents a comparative evaluation of feature extraction algorithms for a real-time isolated word recognition system based on an FPGA. The Mel-frequency cepstral, linear frequency cepstral, linear predictive, and linear predictive cepstral coefficients were implemented in a hardware/software co-design. The proposed system was investigated in speaker-dependent mode on 100 different Lithuanian words. The robustness of the feature extraction algorithms was tested by recognizing speech records at different signal-to-noise ratios. Experiments on clean records show the highest accuracy for the Mel-frequency cepstral and linear frequency cepstral coefficients. For records with a 15 dB signal-to-noise ratio, the linear predictive cepstral coefficients give the best results. The hardware and software parts of the system are clocked at 50 MHz and 100 MHz, respectively. For classification, a pipelined dynamic time warping core was implemented. The proposed word recognition system satisfies real-time requirements and is suitable for applications in embedded systems.
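The dynamic time warping core used for classification can be sketched in plain software (the paper implements it as a pipelined hardware core). In the real system the sequences would be frames of MFCC/LFCC/LPCC feature vectors; the 1-D sequences below are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of match / insertion / deletion
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return float(D[n, m])

ref = [0.0, 1.0, 2.0, 1.0, 0.0]
same_shifted = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same word, slower onset
other = [2.0, 2.0, 2.0, 2.0, 2.0]

print(dtw_distance(ref, same_shifted), dtw_distance(ref, other))
```

A word is recognized by computing this distance against each stored template and picking the smallest; the warping absorbs differences in speaking rate, which is why the shifted version of the same "word" scores 0 here.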

Keywords: isolated word recognition, features extraction, MFCC, LFCC, LPCC, LPC, FPGA, DTW

Procedia PDF Downloads 473
7269 Zonal and Sequential Extraction Design for Large Flat Space to Achieve Perpetual Tenability

Authors: Mingjun Xu, Man Pun Wan

Abstract:

This study proposes an effective smoke control strategy for a large flat space with a low ceiling to meet the requirement of perpetual tenability. In such a space the smoke reservoir is very shallow, and it is difficult to perpetually constrain the smoke within a limited area. A series of numerical tests was conducted to determine the smoke control strategy. A zonal design, comprising the fire zone and its two adjacent zones, was proposed and validated as effective for controlling smoke. Once a fire occurs in a compartment, the Engineered Smoke Control (ESC) system is activated in these three zones, and the smoke can be perpetually constrained within them. To further improve extraction efficiency, sequential activation of the ESC system across the three zones proved more efficient than simultaneous activation. The proposed zonal and sequential extraction design can also reduce the mechanical extraction flow rate by up to 40.7% compared with the conventional method, making it considerably more economical.
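The zonal activation rule above can be expressed as a small control function: given the fire zone in a row of smoke zones, return the zones to extract from, fire zone first and adjacent zones following in sequence. The zone numbering and the exact activation order are hypothetical; the abstract specifies the three-zone set and sequential operation but not the ordering.

```python
def esc_activation_sequence(fire_zone, n_zones):
    """Zones to activate for a fire in `fire_zone`, fire zone first,
    then its in-bounds neighbours (assumed ordering)."""
    if not 0 <= fire_zone < n_zones:
        raise ValueError("unknown zone")
    neighbours = [z for z in (fire_zone - 1, fire_zone + 1)
                  if 0 <= z < n_zones]
    return [fire_zone] + neighbours

print(esc_activation_sequence(3, 8))  # [3, 2, 4]
print(esc_activation_sequence(0, 8))  # [0, 1] - edge zone has one neighbour
```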

Keywords: performance-based design, perpetual tenability, smoke control, fire plume

Procedia PDF Downloads 50
7268 Curved Rectangular Patch Array Antenna Using Flexible Copper Sheet for Small Missile Application

Authors: Jessada Monthasuwan, Charinsak Saetiaw, Chanchai Thongsopa

Abstract:

This paper presents the development and design of a curved rectangular patch array antenna for a small missile application. The design uses 0.1 mm flexible copper sheet for the front and back layers and a 1.8 mm PVC substrate as the middle layer. The study used a small missile model 122 mm in diameter, with a speed of Mach 1.1, operating in the 2.4 GHz ISM frequency range. The curved antenna can be installed on a cylindrical object such as a missile, so the proposed design is small, lightweight, low-cost, and structurally simple. The antenna was designed and analyzed through simulation in CST Microwave Studio and confirmed by measurements on a prototype. The proposed antenna has a bandwidth covering 2.35-2.48 GHz, a return loss below -10 dB, and a gain of 6.5 dB. It can be applied effectively to a small guided missile.

Keywords: rectangular patch arrays, small missile antenna, antenna design and simulation, cylinder PVC tube

Procedia PDF Downloads 291