Search results for: classification of matter
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3877

1777 Decision Tree Analysis of Risk Factors for Intravenous Infiltration among Hospitalized Children: A Retrospective Study

Authors: Soon-Mi Park, Ihn Sook Jeong

Abstract:

This retrospective study aimed to identify risk factors for intravenous (IV) infiltration in hospitalized children. The participants were 1,174 children in the test sample and 424 children in the validation sample, all of whom were admitted to a general hospital, received peripheral intravenous injection therapy at least once, and had complete records. Data were summarized as frequencies and percentages or as means and standard deviations, and decision tree analysis was used to screen for the most important risk factors for IV infiltration in hospitalized children. The decision tree analysis showed that the most important risk factors for IV infiltration, in both the test and validation samples, were the use of ampicillin/sulbactam, the IV insertion site (lower extremities), and the medical department (internal medicine). The correct classification rate was 92.2% in the test sample and 90.1% in the validation sample. More careful attention should be paid to patients who are administered ampicillin/sulbactam, have an IV site in the lower extremities, or have internal medical problems, in order to prevent or detect infiltration.
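The screening idea behind decision tree analysis can be sketched in a few lines: at each node, the tree picks the risk factor whose split most reduces Gini impurity in the outcome. The sketch below uses invented toy records, not the study's data, and only the single-split step of the algorithm.

```python
# Minimal sketch of how a decision tree screens a binary risk factor:
# choose the split that most reduces Gini impurity. Records are synthetic.

def gini(labels):
    """Gini impurity of a list of 0/1 outcome labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def gini_gain(records, outcomes, factor):
    """Impurity reduction from splitting on a binary risk factor."""
    left = [o for r, o in zip(records, outcomes) if r[factor]]
    right = [o for r, o in zip(records, outcomes) if not r[factor]]
    n = len(outcomes)
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / n
    return gini(outcomes) - weighted

# Hypothetical patient records: each maps risk factors to True/False.
records = [
    {"ampicillin_sulbactam": True,  "lower_extremity_iv": True},
    {"ampicillin_sulbactam": True,  "lower_extremity_iv": False},
    {"ampicillin_sulbactam": True,  "lower_extremity_iv": True},
    {"ampicillin_sulbactam": False, "lower_extremity_iv": True},
    {"ampicillin_sulbactam": False, "lower_extremity_iv": False},
    {"ampicillin_sulbactam": False, "lower_extremity_iv": False},
]
infiltration = [1, 1, 1, 0, 0, 0]  # 1 = infiltration occurred

gains = {f: gini_gain(records, infiltration, f) for f in records[0]}
best_factor = max(gains, key=gains.get)
```

With this toy data the antibiotic factor separates the outcomes perfectly, so it is ranked first, mirroring how the tree surfaces the most important risk factor.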

Keywords: decision tree analysis, intravenous infiltration, child, validation

Procedia PDF Downloads 174
1776 The Client-Supplier Relationship in Managing Innovation: Delineating Defence Industry First Mover Challenges within the Government Contract Competition

Authors: Edward Pol

Abstract:

All companies are confronted with the need to innovate in order to meet market demands. In doing so, they face the dilemma of whether to aim to be first into the market with a new innovative product or to deliberately wait and learn from a pioneer's mistakes, potentially avoiding higher risks. It is therefore important to understand critically, from a first-mover advantage and disadvantage perspective, the decision-making implications of a defence industry transformation brought on by an innovative paradigm shift. This paper argues that industry characteristics matter, especially the role the clients play in the innovation process and their level of influence. Through qualitative case study research, this inquiry focuses on first-mover advantages and disadvantages with a view to establishing practical and value-added academic findings in industries where the clients play an active role in cooperation with supplier innovation. The resulting findings will help managers mitigate risk when introducing innovative technology. A selection of defence industry innovations is specifically chosen because the client-supplier relationship typically differs from that in traditional first-mover research. The case studies reference vertical-takeoff-and-landing defence equipment innovations.

Keywords: innovation, pioneer, first-mover advantage, first-mover disadvantage, risk

Procedia PDF Downloads 189
1775 Framework for Detecting External Plagiarism from Monolingual Documents: Use of Shallow NLP and N-Gram Frequency Comparison

Authors: Saugata Bose, Ritambhra Korpal

Abstract:

The internet has increased copy-paste behaviour among students as well as researchers, leading to plagiarized documents of varying severity. For this reason, much research has focused on detecting plagiarism automatically. In this paper, an initiative is discussed in which Natural Language Processing (NLP) techniques and supervised machine learning algorithms are combined to detect plagiarized texts. The major emphasis is on constructing a framework that successfully detects external plagiarism in monolingual texts. To detect plagiarism, an n-gram frequency comparison approach has been implemented in the model framework. The framework is based on 120 characteristics extracted while pre-processing the documents with NLP techniques. Filter metrics were then applied to select the most relevant characteristics, and a supervised classification algorithm was used to classify the documents into four levels of plagiarism. A confusion matrix was built to estimate the false positives and false negatives. The plagiarism framework achieved a very high accuracy score.
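The n-gram frequency comparison step can be illustrated with a containment score: the fraction of a suspicious document's word n-grams that also occur in a source document. This is only the overlap step, not the paper's full 120-feature framework, and the texts are invented.

```python
# Illustrative sketch of word n-gram frequency comparison for external
# plagiarism detection: a high containment score flags likely copying.

from collections import Counter

def word_ngrams(text, n=3):
    """Lowercased word n-grams with their frequencies."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def containment(suspicious, source, n=3):
    """Fraction of the suspicious document's n-grams found in the source."""
    s, src = word_ngrams(suspicious, n), word_ngrams(source, n)
    total = sum(s.values())
    if total == 0:
        return 0.0
    shared = sum(min(c, src[g]) for g, c in s.items())
    return shared / total

source = "the quick brown fox jumps over the lazy dog near the river bank"
copied = "the quick brown fox jumps over the lazy dog"
original = "completely unrelated sentence about machine learning models"

score_copied = containment(copied, source)
score_original = containment(original, source)
```

The copied text scores 1.0 (every trigram is found in the source) while the unrelated text scores 0.0; real systems would threshold such scores before passing documents to the classifier.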

Keywords: lexical matching, shallow NLP, supervised machine learning algorithm, word n-gram

Procedia PDF Downloads 356
1774 Mineralogy and Classification of Altered Host Rocks in the Zaghia Iron Oxide Deposit, East of Bafq, Central Iran

Authors: Azat Eslamizadeh, Neda Akbarian

Abstract:

The Zaghia iron ore, 15 km east of the town of Bafq, is located in the Precambrian formation of Central Iran in the form of a small local deposit. Volcano-sedimentary rocks of Precambrian-Cambrian age, belonging to the Rizu series, are spread through the region. A substantial portion of the deposit is covered by alluvial deposits. The rocks hosting the Zaghia iron ore mainly comprise rhyolitic tuffs along with clastic and carbonate sediments, including sandstone, limestone, dolomite, and conglomerate; these are somewhat metamorphosed and appear as slate and phyllite. Moreover, the carbonate rocks occur as a skarn compound of tremolite-bearing marble with magnetite-hematite mineralization. The basic igneous rocks have been dramatically altered into green rocks consisting of actinolite-tremolite and chlorite along with some iron (magnetite + martite). The youngest units of the ore-bearing rocks in the area are dolerite-diabase dikes, which cut the rhyolitic tuffs and carbonate rocks.

Keywords: Zaghia, iron ore deposit, mineralogy, petrography, Bafq, Iran

Procedia PDF Downloads 523
1773 An Online Adaptive Thresholding Method to Classify Google Trends Data Anomalies for Investor Sentiment Analysis

Authors: Duygu Dere, Mert Ergeneci, Kaan Gokcesu

Abstract:

Google Trends data has gained increasing popularity in applications of behavioral finance, decision science and risk management. Because of Google's wide range of use, the Trends statistics provide significant information about investor sentiment and intention, which can be used as decisive factors in corporate and risk management. However, an anomaly, i.e., a significant increase or decrease in a certain query, cannot be detected by state-of-the-art computational methods because of the random baseline noise of the Trends data, which is modelled as additive white Gaussian noise (AWGN). Since the baseline noise power changes gradually over time, an adaptive thresholding method is required to track and learn the baseline noise for correct classification. To this end, we introduce an online method to classify meaningful deviations in Google Trends data. Through extensive experiments, we demonstrate that our method can successfully classify various anomalies across many different data sets.
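The general idea of online adaptive thresholding can be sketched as follows: track the baseline mean and noise power with exponentially weighted moving averages, and flag a sample as an anomaly when it deviates by more than k adapted standard deviations. This is a hedged illustration of the concept, not the paper's exact method; the series, warm-up length and k are invented.

```python
# Hedged sketch of online adaptive thresholding for anomaly classification.
# The baseline estimates adapt only on non-anomalous samples, so the
# threshold tracks gradual changes in the noise while spikes stay flagged.

import math

class AdaptiveThreshold:
    def __init__(self, alpha=0.1, k=4.0, warmup=5):
        self.alpha = alpha    # learning rate for the moving estimates
        self.k = k            # threshold in adapted standard deviations
        self.warmup = warmup  # samples used only to learn the baseline
        self.n = 0
        self.mean = None
        self.var = 0.0

    def update(self, x):
        """Return True if x is classified as an anomaly."""
        self.n += 1
        if self.mean is None:          # first sample initializes baseline
            self.mean = x
            return False
        std = math.sqrt(self.var) if self.var > 0 else 1.0
        is_anomaly = self.n > self.warmup and abs(x - self.mean) > self.k * std
        if not is_anomaly:             # adapt only on baseline samples
            d = x - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return is_anomaly

detector = AdaptiveThreshold()
series = [10, 11, 9, 10, 12, 10, 11, 9, 10, 60]   # spike at the end
flags = [detector.update(x) for x in series]
```

On this toy series only the final spike is flagged; the small fluctuations are absorbed into the learned baseline.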

Keywords: adaptive data processing, behavioral finance, convex optimization, online learning, soft minimum thresholding

Procedia PDF Downloads 166
1772 Recovery of Dredged Sediments With Lime or Cement as Platform Materials for Use in a Roadway

Authors: Abriak Yassine, Zri Abdeljalil, Benzerzour Mahfoud, Hadj Sadok Rachid, Abriak Nor-Edine

Abstract:

This study first examines the potential for reusing dredged sediments treated with lime or cement in the subgrade and base layers of a roadway, and then analyses the mineral changes caused by the addition of lime or cement as reflected in the mechanical results of the stabilised sediments. After determining the quantity of lime or cement required to stabilise the sediment, the compaction characteristics were studied using the modified Proctor method, and the evolution of the compaction parameters, namely the optimum water content and the maximum dry density, was determined. Mechanical performance was assessed through compressive strength, elastic modulus and tensile strength. The strength of the formulations treated with the cement addition (ROLAC®645) increases with the quantity of ROLAC®645. The tensile strengths and elastic moduli were used to assess the potential of each formulation as a road construction material using a classification diagram. The results show that the various formulations with ROLAC®645 may be employed in subgrades and foundation layers for roads.

Keywords: cement, dredged sediment, foundation layer, resistance

Procedia PDF Downloads 97
1771 Decomposition of Funds Transfer Pricing Components in Islamic Bank: The Exposure Effect of Shariah Non-Compliant Event Rectification Process

Authors: Azrul Azlan Iskandar Mirza

Abstract:

The purpose of Funds Transfer Pricing (FTP) in an Islamic bank is to promote prudent liquidity risk-taking behavior by business units. The acquirer of stable deposits is rewarded, whilst a business unit that generates long-term assets is charged for the added liquidity funding risk. In the end, FTP promotes risk-adjusted pricing by incorporating profit rate risk and liquidity risk components in product pricing. However, in the case of a Shariah non-compliant event (SNCE), FTP components are examined in the rectification plan, especially when Islamic banks need to purify the non-compliant income. The finding shows that the choice between actual and provisional cost leads to differing decisions among Shariah committees in Islamic banks. This paper reviews each FTP component to ensure that the classification into actual and provisional costs reflects the decision in the rectification process for SNCE. This will benefit future decisions and their consistency across Islamic banks.

Keywords: fund transfer pricing, Islamic banking, Islamic finance, shariah non-compliant event

Procedia PDF Downloads 193
1770 Second-Order Complex Systems: Case Studies of Autonomy and Free Will

Authors: Eric Sanchis

Abstract:

Although there is no definitive consensus on a precise definition of a complex system, it is generally considered that a system is complex by nature. The work presented here illustrates a different point of view: a system becomes complex only with regard to the question posed to it, i.e., with regard to the problem that has to be solved. A complex system is a couple (question, object). Because the number of questions that can be posed to a given object is potentially substantial, complexity does not present a uniform face. Two types of complex systems are clearly identified: first-order complex systems and second-order complex systems. First-order complex systems physically exist. They are well known because they have been studied by the scientific community for a long time. In second-order complex systems, complexity results from the system's composition and articulation, which are partially unknown. For some of these systems, there is no evidence of their existence. Vagueness is the keyword characterizing this kind of system. Autonomy and free will, two mental productions of the human cognitive system, can be identified as second-order complex systems. A classification based on the structure of properties makes it possible to discriminate complex properties from the others and to model this kind of second-order complex system. The final outcome is an implementable synthetic property that distinguishes the solid aspects of the actual property from those that are uncertain.

Keywords: autonomy, free will, synthetic property, vaporous complex systems

Procedia PDF Downloads 202
1769 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. The estimated total lesion load (ml) and the number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsFMRI scans were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied an independent component analysis (ICA) with the GIFT toolbox, using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language.
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensorimotor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network that best discriminated between controls and early MS was the sensorimotor network I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
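The feature-selection step, rank the features by discriminative power and retrain on the top one, can be sketched with much simpler machinery than RF or SVM. The toy below ranks synthetic "network signals" by a class-separation score and classifies with a nearest-class-mean rule on the best feature; it illustrates the workflow, not the study's pipeline, and the data are invented.

```python
# Simplified sketch of feature selection followed by retraining on the
# top-ranked feature, using a separation score and a nearest-class-mean
# classifier instead of RF / RFE-SVM.

def mean(xs):
    return sum(xs) / len(xs)

def separation_score(feature_values, labels):
    """|mean difference| between classes, scaled by pooled mean spread."""
    a = [v for v, y in zip(feature_values, labels) if y == 0]
    b = [v for v, y in zip(feature_values, labels) if y == 1]
    spread = mean([abs(v - mean(a)) for v in a] +
                  [abs(v - mean(b)) for v in b]) or 1.0
    return abs(mean(a) - mean(b)) / spread

# Rows = subjects, columns = mean network signals (synthetic values).
X = [[0.10, 5.0], [0.20, 4.8], [0.15, 5.2],   # controls (label 0)
     [0.90, 5.1], [0.80, 4.9], [0.95, 5.0]]   # patients (label 1)
y = [0, 0, 0, 1, 1, 1]

n_features = len(X[0])
scores = [separation_score([row[j] for row in X], y)
          for j in range(n_features)]
best = max(range(n_features), key=lambda j: scores[j])

# Nearest-class-mean classifier using only the best feature.
m0 = mean([row[best] for row, label in zip(X, y) if label == 0])
m1 = mean([row[best] for row, label in zip(X, y) if label == 1])
predictions = [int(abs(row[best] - m1) < abs(row[best] - m0)) for row in X]
```

Here the first feature separates the groups while the second is pure noise, so the ranking selects feature 0 and the single-feature classifier labels every subject correctly, mirroring how one discriminant network can carry most of the accuracy.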

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 240
1768 Enhancing the Recruitment Process through Machine Learning: An Automated CV Screening System

Authors: Kaoutar Ben Azzou, Hanaa Talei

Abstract:

Human resources is an important department in every organization, as it manages the life cycle of employees from recruitment and training to retirement or termination of contracts. The recruitment process starts with a job opening, followed by a selection of the best-fit candidates from among all applicants. Matching the best profile to a job position traditionally requires manually reviewing many CVs, which takes hours of work and can sometimes lead to choosing a profile that is not the best. The work presented in this paper aims to reduce the workload of HR personnel by automating the preliminary stages of the candidate screening process, thereby fostering a more streamlined recruitment workflow. It introduces an automated system designed to help with the recruitment process by scanning candidates' CVs, extracting pertinent features, and employing machine learning algorithms to decide the most fitting job profile for each candidate. Our work employs natural language processing (NLP) techniques to identify and extract key features, such as education, work experience, and skills, from the unstructured text of a CV. The system then uses these features to match candidates with job profiles, leveraging the power of classification algorithms.
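The extract-then-match idea can be illustrated with a deliberately tiny sketch: tokenize the raw CV text and score each job profile by the fraction of its skill keywords found in the CV. The profile names, skill lists and CV text below are invented; a real system would use richer NLP features and a trained classifier as the abstract describes.

```python
# Toy sketch of automated CV screening: keyword extraction from
# unstructured text plus overlap scoring against hypothetical profiles.

import re

JOB_PROFILES = {            # hypothetical profiles and their key skills
    "data_scientist": {"python", "statistics", "machine", "learning"},
    "web_developer": {"javascript", "html", "css", "react"},
}

def extract_tokens(cv_text):
    """Lowercased word tokens from unstructured CV text."""
    return set(re.findall(r"[a-z]+", cv_text.lower()))

def best_profile(cv_text):
    """Profile with the highest fraction of matched skill keywords."""
    tokens = extract_tokens(cv_text)
    scores = {job: len(skills & tokens) / len(skills)
              for job, skills in JOB_PROFILES.items()}
    return max(scores, key=scores.get), scores

cv = """Education: MSc Computer Science.
Experience: built machine learning pipelines in Python,
applied statistics to customer data."""
job, scores = best_profile(cv)
```

For this CV the data-scientist profile matches all four keywords and wins; in the full system, such match features would feed a classifier rather than a hand-written rule.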

Keywords: automated recruitment, candidate screening, machine learning, human resources management

Procedia PDF Downloads 54
1767 Pedagogical Content Knowledge for Nature of Science: In Search for a Meaning for the Construct

Authors: Elaosi Vhurumuku

Abstract:

During the past twenty years, there has been increased interest among science educators in researching and developing teachers' pedagogical content knowledge for teaching the nature of science (PCKNOS). While there has been this surge of interest in the idea of PCKNOS, there has not been a common understanding among NOS researchers of how exactly the PCKNOS concept should be construed. In this paper, we analyse and evaluate published accredited journal articles on PCKNOS research. We also draw from our teaching experiences. The major points of focus are the researchers' presentations of subject matter knowledge for the nature of science (SMKNOS) and their centres of attention regarding the elements of PCKNOS. Our content analysis, cluster analysis, and evaluation of the studies on PCKNOS reveal that most researchers have presented SMKNOS in the form of a heuristic or a set of heuristics (targeted NOS ideas) to be mastered by teachers or learners. Furthermore, we found that most of the researchers' attention has been on developing and recommending teacher pedagogical practices for teaching NOS. From this, we synthesize and propose a subject knowledge content structure and a pedagogical approach that we believe are relevant and appropriate for secondary school and science teacher education if the goal of science education for scientific literacy is to be achieved. The justification of our arguments is rooted in tracing and unpacking the origins and meaning of pedagogical content knowledge (PCK). From our analysis, synthesis, and evaluation, as well as our teaching experiences, we distil and construct a meaning for the PCKNOS construct.

Keywords: pedagogical content knowledge, teaching, nature of science, construct, subject matter knowledge

Procedia PDF Downloads 95
1766 Performance Measurement of Logistics Systems for Thailand's Wholesale and Retail Industries by Data Envelopment Analysis

Authors: Pornpimol Chaiwuttisak

Abstract:

The study aims to compare the performance of logistics for Thailand's wholesale and retail trade industries (except motor vehicles, motorcycles, and stalls) by using data envelopment analysis (DEA). The Thailand Standard Industrial Classification of 2009 (TSIC-2009) categorizes these industries into sub-group no. 45: wholesale and retail trade (except for the repair of motor vehicles and motorcycles), sub-group no. 46: wholesale trade (except motor vehicles and motorcycles), and sub-group no. 47: retail trade (except motor vehicles and motorcycles). The data used in the study were collected by the National Statistical Office, Thailand. The study used four input factors: the number of companies, the number of logistics personnel, the logistics training cost, and outsourced logistics management. The output factor was the percentage of enterprises having inventory management. The results showed that the average relative efficiency was 27.87 percent for small-sized enterprises and 49.68 percent for medium-sized enterprises.
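The core of DEA, relative efficiency against the best performer, reduces in the single-input, single-output case to scaling each unit's output/input ratio by the best ratio, so frontier units score 1.0. The multi-input model of the study requires solving a linear program per unit; the sketch below shows only the one-dimensional special case, with invented figures.

```python
# Minimal sketch of DEA relative efficiency for the single-input,
# single-output case. The general model solves one LP per decision unit.

def dea_efficiency(units):
    """Relative efficiency of each unit, given (input, output) pairs."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical enterprises: input = logistics personnel,
# output = % of enterprises with inventory management.
units = {
    "small_A": (10.0, 30.0),
    "small_B": (20.0, 40.0),
    "medium_A": (25.0, 75.0),
}
efficiency = dea_efficiency(units)
```

Units on the frontier (the best output-per-input ratio) score 1.0, and every other score is read as "fraction of frontier performance", which is how the 27.87 and 49.68 percent averages should be interpreted.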

Keywords: DEA, wholesale and retail, logistics, Thailand

Procedia PDF Downloads 414
1765 Characterization of the Corn Cob to Assess Its Potential as a Source of Biosilica for Use in Sustainable Cementitious Mixtures

Authors: Sandra C. L. Dorea, Joann K. Whalen, Yixin Shao, Oumarou Savadogo

Abstract:

The major challenge for industries that rely on fossil fuels in manufacturing processes or to provide goods and services is to lower their CO2 emissions, as is the case for the manufacture of Portland cement. Feasible materials for this purpose include agro-industrial or agricultural wastes, which are termed 'biosilica' since the silica is contained in a biological matrix (biomass). Corn cob (CC) has characteristics that make it a good candidate as a biosilica source: 1) corn is an abundant grain crop produced around the world; 2) more production means more residue is left in the field and available for use. This work aims to evaluate CC collected from different farms in Canada during the corn harvest in order to see whether the samples can be used together as a biosilica source. The raw CC was characterized physically, chemically, and thermally, and the moisture content, granulometry, and morphology were also analyzed. The ash content measured was 2.1%. Thermogravimetric analysis (TGA) and its derivative (DTG) evaluated the weight loss of CC as the temperature varied between 30°C and 800°C in an N2 atmosphere. The chemical composition and the presence of silica revealed that the different sources of CC do not affect its basic chemical composition, which means that this kind of waste can be used as a source of biosilica no matter where it comes from. This biosilica can then partially replace Portland cement, yielding sustainable cementitious mixtures and contributing to the reduction of CO2 emissions.

Keywords: biosilica, characterization, corn cob, sustainable cementitious materials

Procedia PDF Downloads 261
1764 Image Segmentation: New Methods

Authors: Flaurence Benjamain, Michel Casperance

Abstract:

We present in this paper, first, a comparative study of three mathematical theories for achieving the fusion of information sources. This study aims to identify the characteristics inherent in the theory of possibilities, the theory of belief functions (DST), and the theory of plausible and paradoxical reasoning, in order to establish a strategy of choice that allows us to adopt the most appropriate theory for solving a fusion problem, taking into account the acquired information and the imperfections that accompany it. Using the new theory of plausible and paradoxical reasoning, also called Dezert-Smarandache Theory (DSmT), to fuse multi-source information requires, as a first step, the generation of composite events, which is in general difficult. Thus, we present in this paper a new approach to constructing pertinent paradoxical classes based on gray-level histograms, which also allows the cardinality of the hyper-powerset to be reduced. Secondly, we developed a new technique for ordering and coding generalized focal elements. This method is exploited, in particular, to calculate the Dezert-Smarandache cardinality. We then describe an experiment on the classification of a remote-sensing image that illustrates the given methods, and we compare the result obtained with DSmT to those resulting from the use of DST and the theory of possibilities.

Keywords: segmentation, image, approach, vision computing

Procedia PDF Downloads 272
1763 Using Geopolymer Technology for the Stabilization and Reutilization of Slag with Expansive Behavior

Authors: W. H. Lee, T. W. Cheng, K. Y. Lin, S. W. Huang, Y. C. Ding

Abstract:

Basic oxygen furnace (BOF) slag and electric arc furnace (EAF) slag are by-products of iron making and steel making; each is produced at over 100 million tons annually in Taiwan. Both types of slag have excellent engineering properties, such as high hardness and density, high compressive strength, and a low abrasion ratio, and can replace natural aggregate in building materials. However, both BOF and EAF slag suffer from an expansion problem because they contain free lime. The purpose of this study was to stabilize BOF and EAF slag using geopolymer technology, in the hope of preventing and solving the expansion problem. The experimental results showed that geopolymer technology can successfully solve and prevent the expansion problem. The main properties of the stabilized slags are analyzed with regard to their use as building materials, and an autoclave is used to study the volume stability of the specimens. The compressive strength of geopolymer mortar with BOF/EAF slag reached over 21 MPa after curing for 28 days. After the autoclave test, the volume expansion did not exceed 0.2%, and the compressive strength grew to over 35 MPa. These results were successfully applied at a ready-mixed concrete plant, with the same outcomes as at laboratory scale. They give encouragement that stabilized and reutilized BOF/EAF slag could feasibly replace natural fine aggregate through geopolymer technology.

Keywords: BOF slag, EAF slag, autoclave test, geopolymer

Procedia PDF Downloads 132
1762 Durability of a Cementitious Matrix Based on Treated Sediments

Authors: Mahfoud Benzerzour, Mouhamadou Amar, Amine Safhi, Nor-Edine Abriak

Abstract:

Significant volumes of sediment are dredged annually in France and all over the world. These materials may in fact be used beneficially as supplementary cementitious material. This paper studies the durability of a new cement matrix based on marine sediment dredged from the harbor of Dunkirk (northern France). Several techniques were used to characterize the raw sediment, including physical properties, chemical analyses, and mineralogy. The XRD analysis revealed quartz, calcite, and kaolinite as the main mineral phases. In order to eliminate organic matter and activate some of those minerals, the sediment was calcined at a temperature of 850°C for 1 h. Four blended mortars were then formulated by mixing a Portland cement (CEM I 52.5 N) with the calcined sediment as a partial cement substitute (0%, 10%, 20%, and 30%), and reference mortars based on the blended cement were prepared. This reuse cannot be substantiated as efficient without a durability study. For this purpose, the following tests were conducted on those mortars: mercury porosimetry, water-accessible porosity, chloride permeability, freezing and thawing, external sulfate attack, alkali-aggregate reaction, and compressive and bending strength. The results of most of those tests showed that the mortar containing 10% of the treated sediment is as efficient and durable as the reference mortar itself, which suggests that the presence of the calcined sediment improves the general behavior of the mortar.

Keywords: sediment, characterization, calcination, substitution, durability

Procedia PDF Downloads 255
1761 An Investigation of Current Potato Nitrogen Fertility Programs' Contribution to Ground Water Contamination

Authors: Brian H. Marsh

Abstract:

Nitrogen fertility is an important component of optimum potato yield and quality. Best management practices for N applications are necessary to achieve these goals without applying excess N, which may contribute to groundwater contamination. Eight potato fields in the southern San Joaquin Valley were sampled for nitrogen inputs and uptake, tuber and vine dry matter, and residual soil nitrate-N. The fields had substantial soil nitrate-N prior to the potato crop. Nitrogen fertilizer was applied prior to planting and in irrigation water as needed, based on in-season petiole sampling in accordance with published recommendations. Average total nitrogen uptake was 237 kg ha-1 on a 63.5 Mg ha-1 tuber yield, and nitrogen use efficiency was very good at 81 percent. Sixty-nine percent of the plant nitrogen was removed in tubers. Soil nitrate-N increased 14 percent from pre-plant to post-harvest, averaged across all fields, and was generally situated in the upper soil profile. Irrigation timing and the amount applied did not move water into the lower profile, except at a single location where nitrate also moved into the lower soil profile. Pre-plant soil analysis provides important information for N management. Rotation crops with deeper rooting would be able to utilize nitrogen that remained in the soil profile.

Keywords: potato, nitrogen fertilization, irrigation management, leaching potential

Procedia PDF Downloads 458
1760 Design and Development of an Algorithm for Prioritizing Test Cases Using a Neural Network as Classifier

Authors: Amit Verma, Simranjeet Kaur, Sandeep Kaur

Abstract:

Test Case Prioritization (TCP) has gained widespread acceptance as it often results in good-quality software free from defects. Due to the increasing rate of faults in software, traditional prioritization techniques result in increased cost and time. The main challenge in TCP is the difficulty of manually validating the priorities of different test cases, given the large size of test suites, and little emphasis has been placed on automating the TCP process. The objective of this paper is to determine the priorities of test cases using an artificial neural network, which predicts the correct priorities with the help of the back-propagation algorithm. In the proposed work, priorities are assigned to test cases based on their frequency. After the priorities are assigned, the ANN predicts whether the correct priority has been assigned to each test case and raises an interrupt when a wrong priority is assigned. Classifiers are used to classify the test cases by priority. The proposed algorithm is effective, as it reduces complexity, is robust and efficient, and automates the process of prioritizing test cases.
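The smallest possible version of this idea is a single sigmoid neuron trained by gradient descent (the one-layer limit of back-propagation) that learns whether a frequency-based priority should be "high". The frequencies, labels and hyperparameters below are invented for illustration; the paper's network and feature set are richer.

```python
# Sketch: one sigmoid neuron trained by gradient descent learns to flag
# high-priority test cases from their (scaled) execution frequency.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Feature: execution frequency of a test case, scaled to [0, 1].
# Label: 1 if the test case should receive high priority.
freqs  = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   0,   1,   1,   1,   1]

w, b, lr = 0.0, 0.0, 1.0
for _ in range(5000):                      # gradient descent epochs
    for x, y in zip(freqs, labels):
        p = sigmoid(w * x + b)
        grad = p - y                       # dLoss/dz for log loss
        w -= lr * grad * x
        b -= lr * grad

predicted = [int(sigmoid(w * x + b) > 0.5) for x in freqs]
```

After training, the neuron's decision boundary sits between the low- and high-frequency groups, so a mismatch between its prediction and an assigned priority can serve as the "interrupt" signal the abstract describes.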

Keywords: test case prioritization, classification, artificial neural networks, TF-IDF

Procedia PDF Downloads 393
1759 Kinetics, Equilibrium and Thermodynamic Studies on Adsorption of Reactive Blue 29 from Aqueous Solution Using Activated Tamarind Kernel Powder

Authors: E. D. Paul, A. D. Adams, O. Sunmonu, U. S. Ishiaku

Abstract:

Activated tamarind kernel powder (ATKP) was prepared from tamarind fruit (Tamarindus indica) and utilized for the removal of Reactive Blue 29 (RB29) from its aqueous solution. The powder was activated using 4 N nitric acid (HNO₃). The adsorbent was characterised using infrared spectroscopy and bulk density, ash content, pH, moisture content, and dry matter content measurements. The effects of various parameters, including temperature, pH, adsorbent dosage, ion concentration, and contact time, were studied. Four different equilibrium isotherm models were tested on the experimental data, and the Temkin isotherm model fitted the data best. The pseudo-first-order and pseudo-second-order kinetic models were also fitted, with the pseudo-second-order model best describing the experimental data. The thermodynamic parameters showed that the adsorption of Reactive Blue 29 onto activated tamarind kernel powder is a physical process, feasible and spontaneous, exothermic in nature, with decreased randomness at the solid/solution interface during adsorption. Therefore, activated tamarind kernel powder has proven to be a very good adsorbent for the removal of Reactive Blue 29 dye from industrial wastewater.
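The pseudo-second-order fit mentioned above uses the linearized form t/qt = 1/(k2·qe²) + t/qe: plotting t/qt against t gives a line with slope 1/qe and intercept 1/(k2·qe²). The worked sketch below generates synthetic data from the model itself (assumed qe = 50 mg/g, k2 = 0.01 g/(mg·min), not the paper's fitted values), so the least-squares line recovers the assumed parameters.

```python
# Worked sketch of the pseudo-second-order kinetic fit via the
# linearized form t/qt = 1/(k2*qe^2) + t/qe.

qe_true, k2_true = 50.0, 0.01
times = [1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 60.0]
# Integrated pseudo-second-order form: qt = k2*qe^2*t / (1 + k2*qe*t)
qt = [k2_true * qe_true**2 * t / (1 + k2_true * qe_true * t) for t in times]

# Least-squares line through the points (t, t/qt).
xs = times
ys = [t / q for t, q in zip(times, qt)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

qe_fit = 1 / slope                   # slope = 1/qe
k2_fit = 1 / (intercept * qe_fit**2)  # intercept = 1/(k2*qe^2)
```

With real data the quality of this line (its R²) is what justifies calling the kinetics pseudo-second-order rather than pseudo-first-order.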

Keywords: tamarind kernel powder, reactive blue 29, isotherms, kinetics

Procedia PDF Downloads 244
1758 Polarity Classification of Social Media Comments in Turkish

Authors: Migena Ceyhan, Zeynep Orhan, Dimitrios Karras

Abstract:

People in modern societies continuously share their experiences, emotions, and thoughts in different areas of life. The information reaches almost everyone in real time and can have an important impact in shaping people’s way of living. This phenomenon is well recognized and advantageously used by market representatives trying to earn the most from this medium. Given the abundance of information, people and organizations are looking for efficient tools that filter the countless data into important information ready to analyze. This paper is a modest contribution to this field, describing the process of automatically classifying social media comments in the Turkish language as positive or negative. Once the data are gathered and preprocessed, feature sets of selected single words or groups of words are built according to the characteristics of the language used in the texts. These features are later used to train and test a system with different machine learning algorithms (Naïve Bayes, Sequential Minimal Optimization, J48, and Bayesian Linear Regression). The resulting high accuracies can be important feedback for decision-makers seeking to improve their business strategies accordingly.
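
Of the algorithms listed, Naïve Bayes is the simplest to illustrate. A minimal bag-of-words sketch with Laplace smoothing (an illustration only, not the authors' pipeline or feature set) might look like:

```python
import math
from collections import Counter

class NaiveBayesPolarity:
    """Multinomial Naive Bayes with Laplace smoothing for
    positive/negative comment classification."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.priors = Counter(labels)            # class document counts
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, lab in zip(docs, labels):
            self.word_counts[lab].update(doc)
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        n_docs = sum(self.priors.values())
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            lp = math.log(self.priors[c] / n_docs)
            for w in doc:  # add-one smoothing handles unseen words
                lp += math.log((self.word_counts[c][w] + 1) /
                               (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```

In practice the feature sets described in the abstract (selected words or word groups) would replace the raw token lists used here.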

Keywords: feature selection, machine learning, natural language processing, sentiment analysis, social media reviews

Procedia PDF Downloads 145
1757 Hyperspectral Mapping Methods for Differentiating Mangrove Species along Karachi Coast

Authors: Sher Muhammad, Mirza Muhammad Waqar

Abstract:

It is necessary to monitor and identify mangrove types and their spatial extent near coastal areas because mangroves play an important role in the coastal ecosystem and in environmental protection. This research aims at identifying and mapping mangrove types along the Karachi coast, ranging from 24.79 to 24.85 degrees latitude and 66.91 to 66.97 degrees longitude, using hyperspectral remote sensing data and techniques. An image acquired in February 2012 by the Hyperion sensor has been used for this research. Image preprocessing included geometric and radiometric correction, followed by Minimum Noise Fraction (MNF) and Pixel Purity Index (PPI) transformations. The output of MNF and PPI was analyzed by visualizing it in n dimensions for end-member extraction. Well-distributed clusters on the n-dimensional scatter plot were selected with the region of interest (ROI) tool as end members. These end members were used as input to the classification techniques applied to identify and map mangrove species: Spectral Angle Mapper (SAM), Spectral Feature Fitting (SFF), and Spectral Information Divergence (SID). Only two types of mangroves, namely Avicennia marina (white mangroves) and Avicennia germinans (black mangroves), have been observed throughout the study area.
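
Of the three classifiers, SAM has the simplest form: it treats each spectrum as a vector and assigns a pixel to the end member whose reference spectrum makes the smallest angle with it. A minimal sketch (illustrative only, not the processing chain actually used in the study):

```python
import math

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference
    end-member spectrum; a smaller angle means a better match."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm = (math.sqrt(sum(p * p for p in pixel)) *
            math.sqrt(sum(r * r for r in reference)))
    # clamp to guard against floating-point drift outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def sam_classify(pixel, endmembers):
    """Assign the pixel to the end member with the smallest spectral angle."""
    return min(endmembers,
               key=lambda name: spectral_angle(pixel, endmembers[name]))
```

Because the angle ignores vector magnitude, SAM is insensitive to overall illumination differences, which is why it is popular for mapping vegetation types.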

Keywords: mangrove, hyperspectral, hyperion, SAM, SFF, SID

Procedia PDF Downloads 361
1756 English for Specific Purposes: Its Definition, Characteristics, and the Role of Needs Analysis

Authors: Karima Tayaa, Amina Bouaziz

Abstract:

The rapid expansion of scientific fields and the growth of communication technology have increased the use of English as an international language throughout the world. Hence, over the past few decades, many researchers have emphasized how the teaching and learning of English as a foreign or additional language can best help students to perform successfully. English for specific purposes is today among the most global of language-teaching disciplines, practised in virtually every country in the world. ESP (English for Specific Purposes) involves teaching and learning the specific skills and language needed by particular learners for a particular purpose. The P in ESP is always a professional purpose: a set of skills that learners currently need in their work or will need in their professional careers. ESP originated in the 1960s and has grown to become one of the most prominent areas of English language teaching today. Moreover, ESP learners are usually adults who have some acquaintance with English and learn the language in order to communicate and to perform particular professional tasks. Related activities are based on specific purposes and needs, and are integrated into subject-matter areas important to the learners. Unlike general English, in which general language courses are taught and all four language skills are equally stressed, in ESP a needs analysis determines which language skills are most needed by the learners, and the syllabus is designed accordingly. This paper looks into the origin, characteristics, and development of ESP and the difference between ESP and general English. Finally, it critically reviews the role of needs analysis in ESP.

Keywords: English language teaching, English for general purposes, English for specific purposes, needs analysis

Procedia PDF Downloads 403
1755 Land Suitability Analysis for Maize Production in Egbeda Local Government Area of Oyo State Using GIS Techniques

Authors: Abegunde Linda, Adedeji Oluwatayo, Tope-Ajayi Opeyemi

Abstract:

Maize constitutes a major agrarian product for the vast population, but despite its economic importance, it has not been produced in quantities that meet the economic needs of the country. Achieving optimum yield in maize can be meaningfully supported by land suitability analysis in order to guarantee self-sufficiency and optimize future production. This study examines land suitability for maize production through the analysis of physico-chemical variations in soil properties over space within a Geographic Information System (GIS) framework. The parameters selected include slope, land use, and the physical and chemical properties of the soil. Landsat imagery was used to categorize land use, Shuttle Radar Topography Mission (SRTM) data generated the slope, and soil samples were analyzed for their physical and chemical components. Suitability was categorized as highly, moderately, or marginally suitable based on the Food and Agriculture Organization (FAO) classification, using the Analytical Hierarchy Process (AHP) technique in GIS. This result can be used by small-scale farmers for efficient decision making in the allocation of land for maize production.
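
The AHP step combines expert pairwise comparisons of the criteria (slope, land use, soil properties) into weights. A minimal sketch of the common column-normalization approximation, together with the usual consistency check, follows; the matrix values used in the test are illustrative, not the study's actual judgments:

```python
def ahp_weights(matrix):
    """Approximate AHP priority vector: normalize each column of the
    pairwise-comparison matrix, then average across each row."""
    n = len(matrix)
    col_sums = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    return [sum(matrix[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def consistency_ratio(matrix, weights):
    """CR = CI / RI; values below 0.1 are conventionally acceptable."""
    n = len(matrix)
    if n <= 2:
        return 0.0  # 1x1 and 2x2 matrices are always consistent
    lam = sum(sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
              for i in range(n)) / n          # estimate of lambda_max
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # random index table
    return ci / ri
```

The resulting weights are then applied to the reclassified criterion layers in a weighted overlay to produce the suitability map.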

Keywords: AHP, GIS, MCE, suitability, Zea mays

Procedia PDF Downloads 395
1754 Grammatical and Lexical Cohesion in the Japan’s Prime Minister Shinzo Abe’s Speech Text ‘Nihon wa Modottekimashita’

Authors: Nadya Inda Syartanti

Abstract:

This research aims to identify, classify, and descriptively analyze the aspects of grammatical and lexical cohesion in the speech text of Japan’s Prime Minister Shinzo Abe entitled Nihon wa Modotte kimashita, delivered in Washington, DC, the United States, on February 23, 2013, as the research data source. The method used is qualitative research, applying description through words by analyzing the aspects of grammatical and lexical cohesion proposed by Halliday and Hasan (1976). The aspects of grammatical cohesion consist of reference (personal, demonstrative, and interrogative pronouns), substitution, ellipsis, and conjunction, while lexical cohesion consists of reiteration (repetition, synonymy, antonymy, hyponymy, meronymy) and collocation. Data classification is based on these six aspects of cohesion. Through them, this research tries to determine the frequency of use of grammatical and lexical cohesion in Shinzo Abe’s speech text. The results of this research are expected to help overcome the difficulty of understanding speech texts in Japanese. Therefore, this research can serve as a reference for learners, researchers, and anyone interested in the field of discourse analysis.

Keywords: cohesion, grammatical cohesion, lexical cohesion, speech text, Shinzo Abe

Procedia PDF Downloads 160
1753 Deep-Learning Coupled with Pragmatic Categorization Method to Classify the Urban Environment of the Developing World

Authors: Qianwei Cheng, A. K. M. Mahbubur Rahman, Anis Sarker, Abu Bakar Siddik Nayem, Ovi Paul, Amin Ahsan Ali, M. Ashraful Amin, Ryosuke Shibasaki, Moinul Zaber

Abstract:

Thomas Friedman, in his famous book, argued that the world in this 21st century is flat and will continue to become flatter. This is attributed to rapid globalization and the interdependence of humanity, which have engendered a tremendous inflow of human migration towards urban spaces. In order to keep the urban environment sustainable, policy makers need to plan based on extensive analysis of the urban environment. With the advent of high-definition satellite images, high-resolution data, computational methods such as deep neural network analysis, and hardware capable of high-speed processing, urban planning is seeing a paradigm shift. Legacy data on urban environments are now being complemented with high-volume, high-frequency data. However, the first step in understanding urban space lies in a useful categorization of the space that is usable for data collection, analysis, and visualization. In this paper, we propose a pragmatic categorization method that is readily usable for machine analysis and show the applicability of the methodology in a developing-world setting. Categorization to plan sustainable urban spaces should encompass buildings and their surroundings. However, the state of the art is mostly dominated by classification of building structures, building types, etc., and largely represents the developed world. Hence, these methods and models are not sufficient for developing countries such as Bangladesh, where the surrounding environment is crucial for the categorization. Moreover, these categorizations propose small-scale classifications, which give limited information, have poor scalability and are slow to compute in real time. Our proposed method is divided into two steps: categorization and automation. We categorize the urban area in terms of informal and formal spaces, taking the surrounding environment into account. A 50 km × 50 km Google Earth image of Dhaka, Bangladesh was visually annotated and categorized by an expert, and consequently a map was drawn.
The categorization is based broadly on two dimensions: the state of urbanization and the architectural form of the urban environment. Consequently, the urban space is divided into four categories: 1) highly informal areas; 2) moderately informal areas; 3) moderately formal areas; and 4) highly formal areas. In total, sixteen sub-categories were identified. For semantic segmentation and automatic categorization, Google’s DeepLabv3+ model was used. The model uses atrous convolution to analyze different layers of texture and shape, which allows us to enlarge the field of view of the filters to incorporate larger context. Imagery encompassing 70% of the urban space was used to train the model, and the remaining 30% was used for testing and validation. The model is able to segment with 75% accuracy and 60% mean Intersection over Union (mIoU). In this paper, we propose a pragmatic categorization method that is readily applicable for automatic use in both developing- and developed-world contexts. The method can be augmented for real-time socio-economic comparative analysis among cities and can be an essential tool for policy makers planning future sustainable urban spaces.
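
The mIoU figure reported above is computed per class and then averaged. A minimal sketch over flattened label arrays (illustrative; the actual evaluation operates on full segmentation maps) is:

```python
def mean_iou(pred, truth, num_classes):
    """Mean Intersection-over-Union across classes, skipping classes
    absent from both prediction and ground truth."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        if union:  # ignore classes with no pixels at all
            ious.append(inter / union)
    return sum(ious) / len(ious)
```

Because mIoU penalizes both false positives and false negatives per class, it is usually lower than pixel accuracy, as in the 60% versus 75% figures reported here.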

Keywords: semantic segmentation, urban environment, deep learning, urban building, classification

Procedia PDF Downloads 188
1752 Long Short-Term Memory Based Model for Modeling Nicotine Consumption Using an Electronic Cigarette and Internet of Things Devices

Authors: Hamdi Amroun, Yacine Benziani, Mehdi Ammi

Abstract:

In this paper, we want to determine whether an accurate prediction of nicotine concentration can be obtained by using a network of smart objects and an e-cigarette. The approach consists of, first, recognizing factors influencing smoking cessation, such as physical activity and participants’ behaviors (using both a smartphone and a smartwatch), and then predicting the configuration of the e-cigarette (in terms of nicotine concentration, power, and resistance). The study uses a network of commonly connected objects: a smartwatch, a smartphone, and an e-cigarette carried by the participants during an uncontrolled experiment. The data obtained from the sensors in the three devices were used to train a long short-term memory (LSTM) algorithm. Results show that our LSTM-based model allows predicting the configuration of the e-cigarette in terms of nicotine concentration, power, and resistance with root mean square error percentages of 12.9%, 9.15%, and 11.84%, respectively. This study can help to better control nicotine consumption and offer an intelligent configuration of the e-cigarette to users.
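
The error figures quoted above are root mean square errors expressed as percentages. One common way to compute such a normalized RMSE is to divide by the mean of the observed values; this is an assumption about the exact normalization, which the abstract does not specify:

```python
import math

def rmse_percent(actual, predicted):
    """Root-mean-square error expressed as a percentage of the
    mean of the actual values."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    return 100.0 * math.sqrt(mse) / (sum(actual) / len(actual))
```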

Keywords: IoT, activity recognition, automatic classification, unconstrained environment

Procedia PDF Downloads 223
1751 Development and Performance of Aerobic Granular Sludge at Elevated Temperature

Authors: Mustafa M. Bob, Siti Izaidah Azmi, Mohd Hakim Ab Halim, Nur Syahida Abdul Jamal, Aznah Nor-Anuar, Zaini Ujang

Abstract:

In this research, the formation and development of aerobic granular sludge (AGS) for domestic wastewater treatment in hot climate conditions was studied using a sequencing batch reactor (SBR). The performance of the developed AGS in the removal of organic matter and nutrients from wastewater was also investigated. The operation of the reactor was based on a sequencing batch system with a complete cycle time of 3 hours that included feeding, aeration, settling, discharging and idling. The reactor was seeded with sludge collected from the municipal wastewater treatment plant in Madinah city, Saudi Arabia, and operated at a temperature of 40 °C using synthetic wastewater as influent. Results showed that granular sludge developed after an operation period of 30 days. The developed granular sludge had good settling ability, with the average size of the granules ranging from 1.03 to 2.42 mm. The removal efficiencies of chemical oxygen demand (COD), ammonia nitrogen (NH3-N) and total phosphorus (TP) were 87.31%, 91.93% and 61.25%, respectively. These results show that AGS can be developed at elevated temperatures and that it is a promising technique for treating domestic wastewater in hot, low-humidity climate conditions such as those encountered in Saudi Arabia.

Keywords: aerobic granular sludge, hot climate, sequencing batch reactor, domestic wastewater treatment

Procedia PDF Downloads 356
1750 Assessing the Role of Failed-ADR in Civil Litigation

Authors: Masood Ahmed

Abstract:

There is a plethora of literature (including judicial and extra-judicial comments) concerning the virtues of alternative dispute resolution processes within the English civil justice system. Lord Woolf in his Access to Justice Report ushered in a new pro-ADR philosophy and this was reinforced by Sir Rupert Jackson in his review of civil litigation costs. More recently, Briggs LJ, in his review of the Chancery Court, reiterated the significant role played by ADR and the need for better integration of ADR processes within the Chancery Court. His Lordship also noted that ADR which had failed to produce a settlement (i.e. a failed-ADR) continued to play a significant role in contributing to a ‘substantial narrowing of the issues or increased focus on the key issues’ which were ‘capable of assisting both the parties and the court in the economical determination of the dispute at trial.’ With the assistance of empirical data, this paper investigates the nature of failed-ADR and, in particular, assesses the effectiveness of failed-ADR processes as a tool in: (a) narrowing the legal and/or factual issues which may assist the courts in more effective and efficient case management of the dispute; (b) assisting the parties in the future settlement of the matter. This paper will also measure the effectiveness of failed-ADR by considering the views and experiences of legal practitioners who have engaged in failed-ADR.

Keywords: English civil justice system, alternative dispute resolution processes, civil court process, empirical data from legal profession regarding failed ADR

Procedia PDF Downloads 464
1749 Forensic Speaker Verification in Noisy Environmental by Enhancing the Speech Signal Using ICA Approach

Authors: Ahmed Kamil Hasan Al-Ali, Bouchra Senadji, Ganesh Naik

Abstract:

We propose a system robust to real environmental noise and channel mismatch for forensic speaker verification. The method is based on suppressing various types of real environmental noise using an independent component analysis (ICA) algorithm. The enhanced speech signal is then processed with mel-frequency cepstral coefficients (MFCC) or MFCC feature warping to extract the essential characteristics of the speech signal. Channel effects are reduced using an intermediate vector (i-vector) and probabilistic linear discriminant analysis (PLDA) approach for classification. The proposed algorithm is evaluated using an Australian forensic voice comparison database combined with car, street and home noises from QUT-NOISE at signal-to-noise ratios (SNR) ranging from -10 dB to 10 dB. Experimental results indicate that MFCC feature warping with ICA achieves reductions in equal error rate of about 48.22%, 44.66%, and 50.07% over MFCC feature warping alone when the test speech signals are corrupted with random sessions of street, car, and home noise at -10 dB SNR.
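
The equal error rate used to evaluate such systems is the operating point where the false-acceptance and false-rejection rates coincide. A minimal sketch of estimating it from verification scores (illustrative only; practical evaluations interpolate on a DET curve):

```python
def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep candidate thresholds over the observed
    scores and return the point where the false-acceptance rate (FAR)
    and false-rejection rate (FRR) are closest."""
    best = (1.0, 1.0)  # (|FAR - FRR|, EER)
    for thr in sorted(set(genuine) | set(impostor)):
        far = sum(s >= thr for s in impostor) / len(impostor)
        frr = sum(s < thr for s in genuine) / len(genuine)
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]
```

Here `genuine` holds scores for same-speaker trials and `impostor` for different-speaker trials; a lower EER indicates a better verification system.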

Keywords: noisy forensic speaker verification, ICA algorithm, MFCC, MFCC feature warping

Procedia PDF Downloads 406
1748 Cancer and Disability: A Psychosocial Approach in Puerto Rican Women as Cancer Survivors

Authors: Hector Jose Velazquez-Gonzalez, Norma Maldonado-Santiago, Laura Pietri-Gomez

Abstract:

Cancer is one of the leading causes of death in the world, and most of those affected are women. In Puerto Rico, there is a permanent controversy over the conceptualization of what really constitutes a disability, and over when a chronic illness, such as cancer, should be considered a disability. The aim of the research was to identify functional limitation in 50 women who are cancer survivors and, in turn, to understand the meanings that 6 of these women attributed to cancer, with a focus on functionality. We conducted mixed-methods research based on surveys and narratives. We administered the World Health Organization Disability Assessment Schedule, version 2.0, which obtained a Cronbach’s alpha of .949 on the general scale and from .773 to .956 on the six domains. The domain that obtained the highest average was social participation (M = 33.89, SD = 20.434), but it was not significant in the disability percentage; nor was the disability percentage significant in the other five domains. Regarding meanings, we conducted semi-structured interviews with 6 participants. None of them referred to cancer as a disability, nor did they know that in Puerto Rico cancer is considered a disability by law. However, the participants agreed that cancer, during treatment and subsequent to it, has significant effects in terms of functional limitations (fatigue, pain, cognitive limitations, and weakness, among others). Psycho-oncological practice should encourage constant assessment of functionality to identify the needs that emerge from an oncological diagnosis, and psychosocial intervention should therefore be considered critical in cancer treatment to promote better quality of life and well-being in persons with cancer.

Keywords: cancer, Puerto Rico, disability, psychosocial approach

Procedia PDF Downloads 276