Search results for: user generated content
10394 Modeling Intention to Use 3PL Services: An Application of the Theory of Planned Behavior
Authors: Nasrin Akter, Prem Chhetri, Shams Rahman
Abstract:
The present study tested Ajzen’s Theory of Planned Behavior (TPB) model to explain the formation of business customers’ intention to use 3PL services in Bangladesh. The findings show that the TPB model has a good fit to the data. Based on theoretical support and suggested modification indices, a refined TPB model was then developed which provides better predictive power for intention. Consistent with the theory, the results of a structural equation analysis revealed that the intention to use 3PL services is predicted by attitude and subjective norms but not by perceived behavioral control. Further investigation indicated that the paths between attitude and intention, and between subjective norms and intention, did not differ statistically between 3PL users and non-users. The findings of this research provide an evidence base for formulating business strategies to increase the use of 3PL services in Bangladesh, enhance productivity and gain economic efficiency.
Keywords: Bangladesh, intention, third-party logistics, Theory of Planned Behavior
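As a rough illustration of the structural relationships reported above, the sketch below regresses an intention score on attitude, subjective norms and perceived behavioural control using invented construct scores; it is a much-simplified stand-in for the study's structural equation model, and every value and coefficient in it is an assumption, not the study's data.

```python
# Simplified stand-in for the TPB structural model: ordinary least squares on
# synthetic construct scores (all values below are invented for illustration).
import numpy as np

rng = np.random.default_rng(3)
n = 250
attitude = rng.normal(size=n)
subjective_norms = rng.normal(size=n)
perceived_control = rng.normal(size=n)
# Invented data-generating process echoing the reported finding: PBC has no effect
intention = 0.6 * attitude + 0.4 * subjective_norms + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), attitude, subjective_norms, perceived_control])
betas, *_ = np.linalg.lstsq(X, intention, rcond=None)
for name, b in zip(["intercept", "attitude", "subjective norms", "PBC"], betas):
    print(f"{name:17s} beta = {b:+.3f}")
```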
Procedia PDF Downloads 581
10393 The Impact of the Economic Crisis in the European Identity
Authors: Sofía Luna, Carla González Salamanca
Abstract:
The 2008 economic crisis had huge implications for Europe. In this continent, the repercussions of the crisis were not only economic but also political and institutional. The economic stress generated changes in citizens’ perceptions, their attitudes and the confidence placed in political institutions. The loss of confidence is present not only in the debtor countries but also in European economic powers such as Germany and France. This research explains how the economic crisis had an impact on identity and on the population’s attitudes, and how this fuelled the rise of extreme right-wing parties. In addition, it defines the different types of attitudes and support that exist towards these political and economic institutions. The results of this investigation show that, beyond its economic implications, the downturn caused institutional, social and political difficulties for the Union. Moreover, the support and attitudes of the population were severely strained because confidence in political institutions decreased. Furthermore, a rise in the otherness sentiment was shown; in other words, the distinction between “us” and “them” increased, with repercussions for the collective European identity. Additionally, there was a resurgence of national identities that contributed to the rise of extreme right-wing parties. In conclusion, the 2008 economic crisis not only caused economic stress but also generated a political, social and institutional crisis in Europe.
Keywords: Europe, identity, economic crisis, otherness sentiment
Procedia PDF Downloads 498
10392 Evaluation of Salt Content in Bread and the Amount Intake by Hypertensive Patients in the Algiers Region
Authors: S. Lanasri, A. Boudjerrane, R. Belgherbi, O. Hadjoudj
Abstract:
Introduction: Bread is the most popular food in Algeria. The aim of this study was to examine the consumption of salt from bread by hypertensive patients. Materials and methods: Sixty breads were collected from different artisan bakeries in Algiers; each sample was mixed in warm distilled water until homogeneous and then filtered. Analysis of the salt content was carried out according to the Mohr titration method. We calculated the amount of salt in bread consumed by 100 hypertensive patients using a questionnaire about the average amount of bread eaten per day. Results: The salt content of the bread was 3.4 g ± 0.37 NaCl/100 g. The average amount of salt consumed per day by patients from bread alone was 3.82 g ± 3.8, with a maximum of 17 g per day. Only 38.18% of patients consume salt-free bread, even though 95% knew that excess salt intake can complicate hypertension. Conclusion: This study showed that bread is a major contributor to salt intake by Algerian hypertensive patients.
Keywords: salt, bread, hypertensive patients, Algiers
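To make the two calculations above concrete, the following sketch shows how a Mohr titration result converts to g NaCl per 100 g of bread and how daily intake from bread follows from a reported bread quantity; the titration volumes, concentrations and bread quantity used here are hypothetical examples, not the study's measurements.

```python
# Illustrative sketch (not the authors' code): NaCl content from a Mohr titration
# and the resulting daily salt intake from bread. All numeric inputs are hypothetical.

MOLAR_MASS_NACL = 58.44  # g/mol

def nacl_g_per_100g(titrant_volume_ml, titrant_molarity, extract_volume_ml,
                    aliquot_volume_ml, sample_mass_g):
    """NaCl (g per 100 g of sample) from the AgNO3 volume used on one aliquot."""
    moles_cl_in_aliquot = titrant_volume_ml / 1000.0 * titrant_molarity
    moles_cl_in_sample = moles_cl_in_aliquot * extract_volume_ml / aliquot_volume_ml
    grams_nacl = moles_cl_in_sample * MOLAR_MASS_NACL
    return grams_nacl / sample_mass_g * 100.0

def daily_salt_from_bread(bread_g_per_day, nacl_per_100g):
    """Grams of salt ingested per day from bread alone."""
    return bread_g_per_day / 100.0 * nacl_per_100g

salt_content = nacl_g_per_100g(5.8, 0.1, 100.0, 10.0, 10.0)   # hypothetical titration
print(f"{salt_content:.2f} g NaCl/100 g bread")
print(f"{daily_salt_from_bread(250, 3.4):.2f} g salt/day from 250 g of bread")
```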
Procedia PDF Downloads 151
10391 User-Friendly Task Creation Using a CAD Integrated Robotic System on a Real Workcell
Authors: Alireza Changizi, Arash Rezaei, Jamal Muhammad, Jyrki Latokartano, Minna Lanz
Abstract:
Offline programming (OLP) is a relatively new method of robot programming that is now widely used in industry. It is a simulation-based method that produces the robot motion code according to a virtual world built in simulation software. In this project, Delmia V5 was used as the simulation software. First, the work cell components were modelled in Catia V5, imported into a process file in Delmia and placed roughly to form the virtual work cell. The robot was then added to the work cell from the Delmia library. The work cell was calibrated against the real-world work cell to obtain accurate code. Tool calibration is the first step of the calibration scheme, after which the work cell equipment can be calibrated using the six-point calibration method. Finally, the generated code needs to be reformed to match the instruction set of the related controller. In the last stage, the I/O were set to accomplish robot cooperation and synchronize their motions. The pros and cons are also discussed to clarify the presented results, which show the feasibility of the method and its effect on production line efficiency.
Keywords: robotic, automated, production, offline programming, CAD
Procedia PDF Downloads 388
10390 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pasta were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. Calibration and validation sets were designed for the conception and evaluation of the adequacy of the method in the moisture content range of 10 to 15 percent (w.b.) of the pasta. The prediction models, based on partial least squares (PLS) regression, were developed in the near-infrared region. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE) and the number of PLS factors were considered for the selection among three pre-processing methods (vector normalization, minimum-maximum normalization and multiplicative scatter correction). Spectra of pasta samples were treated with the different mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by the FT-NIR method had a very good correlation with the values determined via traditional methods (R2 = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775). The MMN pre-processing method was found most suitable, and a maximum coefficient of determination (R2) value of 0.9875 was obtained for the calibration model developed.
Keywords: FT-NIR, pasta, moisture determination, food engineering
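The calibration step described above can be sketched as follows: min-max normalisation of each spectrum followed by PLS regression and a cross-validated RMSE (RMSECV). This is an illustrative reconstruction, not the authors' code; synthetic spectra and an arbitrary number of PLS factors stand in for the real FT-NIR data and model selection.

```python
# Minimal PLS calibration sketch with min-max normalised synthetic "spectra".
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 150, 500                  # calibration set size from the abstract
moisture = rng.uniform(10, 15, n_samples)            # moisture content, % wet basis
spectra = np.outer(moisture, rng.normal(size=n_wavenumbers)) \
          + rng.normal(scale=0.5, size=(n_samples, n_wavenumbers))

# Min-max normalisation (MMN) of each spectrum, the pre-processing reported as best
mins = spectra.min(axis=1, keepdims=True)
maxs = spectra.max(axis=1, keepdims=True)
spectra_mmn = (spectra - mins) / (maxs - mins)

pls = PLSRegression(n_components=8)                  # the number of PLS factors is a choice
predicted = cross_val_predict(pls, spectra_mmn, moisture, cv=10).ravel()
rmsecv = float(np.sqrt(np.mean((moisture - predicted) ** 2)))
print("R2     :", round(r2_score(moisture, predicted), 4))
print("RMSECV :", round(rmsecv, 4))
```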
Procedia PDF Downloads 258
10389 Defining a Holistic Approach for Model-Based System Engineering: Paradigm and Modeling Requirements
Authors: Hycham Aboutaleb, Bruno Monsuez
Abstract:
Current systems complexity has reached a degree that requires addressing conception and design issues while taking into account all the necessary aspects. Therefore, one of the main challenges is the way complex systems are specified and designed. The exponentially growing effort, cost and time investment of complex systems in the modeling phase emphasize the need for a paradigm, a framework and an environment to handle the system model complexity. For that, it is necessary to understand the expectations of the human user of the model and his limits. This paper presents a generic framework for designing complex systems, highlights the requirements a system model needs to fulfill to meet human user expectations, and defines the refined functional as well as non-functional requirements that modeling tools need to meet to be useful in model-based system engineering.
Keywords: system modeling, modeling language, modeling requirements, framework
Procedia PDF Downloads 532
10388 Analysis of the Significance of Multimedia Channels Using Sparse PCA and Regularized SVD
Authors: Kourosh Modarresi
Abstract:
The abundance of media channels and devices has given users a variety of options to extract, discover, and explore information in the digital world. Since there is often a long and complicated path that a typical user may take before performing any (significant) action (such as purchasing goods and services), it is critical to know how each node (media channel) in the user’s path has contributed to the final action. In this work, the significance of each media channel is computed using statistical analysis and machine learning techniques. More specifically, “Regularized Singular Value Decomposition” and “Sparse Principal Component Analysis” have been used to compute the significance of each channel toward the final action. The results of this work are a considerable improvement over present approaches.
Keywords: multimedia attribution, sparse principal component, regularization, singular value decomposition, feature significance, machine learning, linear systems, variable shrinkage
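The building blocks named above can be illustrated with a short sketch that applies sparse PCA and truncated SVD to a synthetic user-by-channel touch matrix and reads a simple channel weight off the loadings; the paper's actual attribution formula is not given here, so the scoring rule, channel names and data below are assumptions for illustration only.

```python
# Illustration of the decomposition building blocks on an invented touch matrix.
import numpy as np
from sklearn.decomposition import SparsePCA, TruncatedSVD

rng = np.random.default_rng(1)
channels = ["search", "display", "email", "social", "video"]
touches = rng.poisson(lam=[3.0, 1.0, 0.5, 2.0, 0.2],
                      size=(1000, len(channels))).astype(float)

spca = SparsePCA(n_components=2, alpha=1.0, random_state=1).fit(touches)
svd = TruncatedSVD(n_components=2, random_state=1).fit(touches)

# One simple significance proxy: magnitude of each channel's loading on the
# leading components (an assumed scoring rule, not the paper's formula).
spca_score = np.abs(spca.components_).sum(axis=0)
svd_score = np.abs(svd.components_).sum(axis=0)
for name, s1, s2 in zip(channels, spca_score, svd_score):
    print(f"{name:8s}  sparse-PCA weight={s1:.3f}  SVD weight={s2:.3f}")
```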
Procedia PDF Downloads 309
10387 Women Right in Islam and Misconceptions: A Critical Study
Authors: Abubakar Ibrahim Usman, Mustapha Halilu
Abstract:
The provisions of rights to women in Islam have generated, and continue to generate, a tense and serious debate among Muslims and non-Muslims alike. Muslims argue that Islam grants rights to womenfolk, but their actions, cultural/traditional practices and treatment often suggest otherwise; non-Muslims, on the other hand, hold a different view, saying that Islam has never made such provision. One can hardly blame this misconception, given the wide spectrum of treatment given to women in many Muslim societies, which has generated, fuelled and geared the misconceptions and the ceaseless barrage of sensational articles, movies and negative portrayals of Islam today. It has to be kept in mind that the actions and crimes of some Muslims (who are mostly a minority) do not represent the teachings and precepts of Islam, just as one cannot blame the parents of a child whose actions fall short of his home background.
Keywords: Islam, women rights, cultural practices, religion
Procedia PDF Downloads 442
10386 Tool for Metadata Extraction and Content Packaging as Endorsed in OAIS Framework
Authors: Payal Abichandani, Rishi Prakash, Paras Nath Barwal, B. K. Murthy
Abstract:
Information generated from various computerization processes is a potentially rich source of knowledge for its designated community. Passing this information from generation to generation without modifying its meaning is a challenging activity. To preserve and archive the data for future generations, it is essential to prove the authenticity of the data. This can be achieved by extracting metadata from the data, which can prove authenticity and create trust in the archived data. A subsequent challenge is technology obsolescence. Metadata extraction and standardization can be effectively used to resolve and tackle this problem. Metadata can broadly be categorized at two levels, i.e. technical and domain level. Technical metadata provide the information that can be used to understand and interpret the data record, but this level of metadata alone is not sufficient to create trustworthiness. We have developed a tool which extracts and standardizes the technical as well as domain-level metadata. This paper describes the different features of the tool and how we have developed it.
Keywords: digital preservation, metadata, OAIS, PDI, XML
Procedia PDF Downloads 393
10385 The Effectiveness of Using MS SharePoint for the Curriculum Repository System
Authors: Misook Ahn
Abstract:
This study examines the Institutional Curriculum Repository (ICR) developed with MS SharePoint. The purpose of using MS SharePoint is to organize, share, and manage the curriculum data. The ICR aims to build a centralized curriculum infrastructure, preserve all curriculum materials, and provide academic service to users (faculty, students, or other agencies). The ICR collection includes core language curriculum materials developed by each language school: foreign language textbooks, language survival kits, and audio files currently in or not in use at the schools. All core curriculum materials with audio and video files have been coded, collected, and preserved in the ICR. All metadata for the collected curriculum materials have been input by language, code, year, book type, level, user, version, and current status (in use/not in use). A qualitative content analysis, including the survey data, is used to evaluate the effectiveness of using MS SharePoint for the repository system. This study explains how to manage and preserve curriculum materials with MS SharePoint, along with challenges and suggestions for further research. This study will be beneficial to other universities or organizations considering archiving or preserving educational materials.
Keywords: digital preservation, ms sharepoint, repository, curriculum materials
Procedia PDF Downloads 105
10384 A Survey of Online User Perspectives and Age Profile in an Undergraduate Fundamental Business Technology Course
Authors: Danielle Morin, Jennifer D. E. Thomas, Raafat G. Saade, Daniela Petrachi
Abstract:
Over the past few decades, more and more students have chosen to enroll in online classes instead of attending in-class lectures. While past studies consider students’ attitudes towards online education and how their grades differ from in-class lectures, the profile of the online student remains a blur. To shed light on this, an online survey was administered to about 1,500 students enrolled in an undergraduate Fundamental Business Technology course at a Canadian university. The survey comprised questions on students’ demographics, their reasons for choosing online courses, their expectations of the course, and the communication channels they use for the course with fellow students and with the instructor. This paper focuses on the research question: do the perspectives of online students concerning the online experience, in general, and the course in particular, differ according to age profile? After several statistical analyses, it was found that age does have an impact on the reasons why students select online classes instead of in-class ones. For example, the perception that an online course might be easier than in-class delivery was a more important reason for younger students than for older ones. Similarly, the influence of friends is much more important for younger students than for older students. Similar results were found when analyzing students’ expectations about the online course and their use of communication tools. Overall, the age profile of online users had an impact on reasons, expectations and means of communication in an undergraduate Fundamental Business Technology course. It remains to be seen whether this holds true across other courses, graduate and undergraduate.
Keywords: communication channels, fundamentals of business technology, online classes, pedagogy, user age profile, user perspectives
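A typical way to test whether reasons for choosing online classes depend on age group is a chi-square test of independence on the cross-tabulated survey responses; the sketch below shows that kind of analysis with invented counts, not the survey's data.

```python
# Hypothetical chi-square test of independence between age group and stated reason.
import numpy as np
from scipy.stats import chi2_contingency

# rows: age groups; columns: reason = [perceived easier, influence of friends, flexibility]
observed = np.array([
    [60, 45, 80],   # under 22
    [40, 20, 95],   # 22-30
    [15,  5, 70],   # over 30
])
chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```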
Procedia PDF Downloads 250
10383 Energy Storage in the Future of Ethiopia Renewable Electricity Grid System
Authors: Dawit Abay Tesfamariam
Abstract:
Ethiopia’s Climate-Resilient Green Economy strategy focuses mainly on the generation and utilization of renewable energy (RE). The data collected in 2016 by Ethiopian Electric Power (EEP) indicate that the intermittent RE sources on the grid, solar and wind energy, accounted for only 8% of the total energy produced. On the other hand, the EEP electricity generation plan for 2030 indicates that 36% of the energy generation share will be covered by solar and wind sources. Thus, a case study was initiated to model and compute the balance and consumption of electricity in three different scenarios, 2016, 2025 and 2030, using the EnergyPLAN model (EPM). Initially, the model was validated using the 2016 annual power generation data in order to conduct the EPM analysis for the two predictive scenarios. The EPM simulation for 2016 showed that there was no significant excess power generated; hence, the model’s results are in line with the actual 2016 output. The EPM was then applied to analyze the role of energy storage in RE in the Ethiopian grid system. The results of the EPM simulation showed that there will be excess production of 402 MW on average and 7,963 MW at maximum in 2025. The excess power was dominant in all months except the three rainy months of the year (June, July, and August). Consequently, based on the validated outcomes of the EPM, there is good reason to consider alternatives for the utilization of the excess energy and the storage of RE. Thus, from the scenarios and model results obtained, it is realistic to infer that, if the excess power is utilized with a storage mechanism that can stabilize the grid system, the extra RE generated can be exported to support the economy. Therefore, researchers must continue to upgrade current and upcoming energy storage systems to keep pace with the RE potential that can be generated.
Keywords: renewable energy, storage, wind, energyplan
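The balance computed in such scenarios can be illustrated with a toy sketch: hourly excess electricity is generation minus demand, from which an annual average and maximum excess are reported; the demand and generation profiles below are synthetic, and the sketch is not EnergyPLAN itself.

```python
# Toy hourly electricity balance: report average and maximum excess generation.
import numpy as np

rng = np.random.default_rng(5)
hours = 8760
demand = 2500 + 500 * np.sin(np.linspace(0, 2 * np.pi, hours)) \
         + rng.normal(0, 100, hours)                  # MW, synthetic load profile
generation = 2600 + 800 * rng.random(hours)           # MW, synthetic RE-heavy supply

excess = np.clip(generation - demand, 0, None)
print(f"average excess: {excess.mean():.0f} MW, maximum excess: {excess.max():.0f} MW")
```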
Procedia PDF Downloads 81
10382 Disaggregation the Daily Rainfall Dataset into Sub-Daily Resolution in the Temperate Oceanic Climate Region
Authors: Mohammad Bakhshi, Firas Al Janabi
Abstract:
High-resolution rain data are very important as input to hydrological models. Among the methods for generating high-resolution rainfall data, temporal disaggregation was chosen for this study. The paper attempts to generate rainfall at three different resolutions (4-hourly, hourly and 10-minute) from daily data for a record period of around 20 years. The process was done with the DiMoN tool, which is based on the random cascade model and the method of fragments. Differences between the observed and simulated rain datasets are evaluated with a variety of statistical and empirical methods: the Kolmogorov-Smirnov test (K-S), usual statistics, and exceedance probability. The tool worked well at preserving the daily rainfall values on wet days; however, the generated rain is accumulated in shorter time periods, producing stronger storms. It is demonstrated that the difference between the generated and observed cumulative distribution function curves of the 4-hourly datasets passes the K-S test criteria, while for the hourly and 10-minute datasets the p-value should be employed to show that their differences are reasonable. The results are encouraging considering the overestimation of generated high-resolution rainfall data.
Keywords: DiMoN Tool, disaggregation, exceedance probability, Kolmogorov-Smirnov test, rainfall
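The Kolmogorov-Smirnov comparison mentioned above can be sketched with SciPy's two-sample test between observed and disaggregated 4-hourly depths; the exponential series below are synthetic stand-ins for the real rainfall data, and the sketch is not part of the DiMoN tool.

```python
# Two-sample K-S test between observed and simulated 4-hourly rainfall depths.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
observed_4h = rng.exponential(scale=1.2, size=5000)    # observed 4-hourly depths (mm), synthetic
simulated_4h = rng.exponential(scale=1.3, size=5000)   # cascade-disaggregated depths (mm), synthetic

statistic, p_value = ks_2samp(observed_4h, simulated_4h)
print(f"K-S statistic = {statistic:.3f}, p-value = {p_value:.4f}")
if p_value > 0.05:
    print("Distributions are not significantly different at the 5% level")
```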
Procedia PDF Downloads 201
10381 Subject Teachers’ Perception of the Changing Role of Language in the Curriculum of Secondary Education
Authors: Moldir Makenova
Abstract:
Alongside the implementation of trilingual education in schools, the Ministry of Education and Science of the Republic of Kazakhstan updated the school curriculum in 2013 to include a Content and Language Integrated Learning (CLIL) approach. In this regard, some transition issues have arisen, such as unprepared teachers, a need for more awareness of the CLIL approach, and teaching resources. Some teachers view it as a challenge due to its combination of both content and language. This often creates anxiety among teachers who are knowledgeable about their subject areas in Kazakh or Russian but struggle to deliver the subject’s content in English. Thus, with this new teaching approach, teachers have to decide what role language plays and work out how language works in the CLIL classroom. This study aimed to explore how teachers experience the changing role of language in the curriculum, what challenges they face related to CLIL implementation, and how their language proficiency influences their teaching practices. A qualitative comparative case study was conducted in X Lyceum and a mainstream school piloting CLIL. Data collection was carried out via semi-structured interviews, classroom observations, and document analysis. Eight content teachers from these two schools were chosen as the target group of this study. Subject teachers, rather than language teachers, were chosen as the target group to grasp how the language-related issues in the new curriculum are interpreted by educators who do not necessarily identify themselves as language experts at the outset. The findings showed that mainstream teachers prioritize content over language because, as content teachers, knowledge of content is more essential for them than language. In contrast, most X Lyceum teachers balance language and content and additionally showed a preference for supporting an ‘English language only’ policy among 10th-11th graders. Moreover, due to their low level of English proficiency, mainstream teachers highlighted the necessity of CLIL training and further collaboration with language teachers. This study will be beneficial for teachers and policy-makers, enabling them to solve the issues mentioned above related to the implementation of CLIL. Larger-scale research conducted in the future would further inform its successful deployment country-wide.
Keywords: role of language, trilingual education, updated curriculum, teacher practices
Procedia PDF Downloads 71
10380 Thermo-Mechanical Properties of PBI Fiber Reinforced HDPE Composites: Effect of Fiber Length and Composition
Authors: Shan Faiz, Arfat Anis, Saeed M. Al-Zarani
Abstract:
High-density polyethylene (HDPE) and polybenzimidazole (PBI) fiber composites were prepared by melt blending in a twin-screw extruder (TSE). The thermo-mechanical properties of PBI-fiber-reinforced HDPE composite samples (1%, 4% and 8% fiber content) with fiber lengths of 3 mm and 6 mm were investigated using a differential scanning calorimeter (DSC), a universal testing machine (UTM), a rheometer and scanning electron microscopy (SEM). The effect of fiber content and fiber length on the thermo-mechanical properties of the HDPE-PBI composites was studied. The DSC analysis showed a decrease in the crystallinity of the HDPE-PBI composites with increasing fiber loading; the maximum decrease observed was 12% at 8% fiber loading. The thermal stability was found to increase with the addition of fiber; T50% was notably increased by 40 °C for both grades of HDPE at 8% fiber content. The mechanical properties were not much affected by the increase in fiber content. The optimum tensile strength was achieved at 4% fiber content, where a slight increase of 9% in tensile strength was observed. No noticeable change was observed in flexural strength. In the rheology study, the complex viscosities of the HDPE-PBI composites were higher than that of the HDPE matrix and increased substantially with even the minimum PBI fiber loading, i.e. 1%. We found that the addition of the PBI fiber resulted in a modest improvement in the thermal stability and mechanical properties of the prepared composites.
Keywords: PBI fiber, high density polyethylene, composites, melt blending
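The DSC crystallinity values implied above are usually obtained as Xc = ΔHm / (w · ΔH100%), where ΔH100% is the melting enthalpy of fully crystalline polyethylene; the short sketch below works through that formula with hypothetical melting enthalpies and an assumed literature value of 293 J/g, not the study's measurements.

```python
# Worked DSC crystallinity calculation with hypothetical enthalpies.
DELTA_H_100 = 293.0  # J/g, assumed literature value for 100 % crystalline PE

def crystallinity(delta_h_melting, hdpe_weight_fraction):
    """Degree of crystallinity (%) normalised to the HDPE fraction of the composite."""
    return delta_h_melting / (hdpe_weight_fraction * DELTA_H_100) * 100.0

neat_hdpe = crystallinity(180.0, 1.00)       # hypothetical melting enthalpy for neat HDPE
composite_8pct = crystallinity(149.0, 0.92)  # hypothetical enthalpy at 8 % PBI fibre
drop = (neat_hdpe - composite_8pct) / neat_hdpe * 100
print(f"neat: {neat_hdpe:.1f} %, 8 % fibre: {composite_8pct:.1f} %, relative drop: {drop:.1f} %")
```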
Procedia PDF Downloads 366
10379 A Computational Fluid Dynamics Study of Turbulence Flow and Parameterization of an Aerofoil
Authors: Mohamed Z. M. Duwahir, Shian Gao
Abstract:
The main objective of this project was to introduce and test a new scheme for the parameterization of subsonic aerofoils, using a function called the Shape Function. Python programming was used to create a user-interactive environment for aerofoil geometry generation using the NACA and Shape Function methodologies. Two aerofoils, NACA 0012 and NACA 1412, were generated using this function. The accuracy of the Shape Function scheme was tested by linear least-squares fitting in Python and by CFD modelling of the aerofoils in Fluent. NACA 0012 (a symmetrical aerofoil) was better approximated using the Shape Function than NACA 1412 (a cambered aerofoil). The second part of the project involved comparing two turbulence models, k-ε and Spalart-Allmaras (SA), in Fluent by modelling the aerofoils NACA 0012 and NACA 1412 at a Reynolds number of 3 × 10⁶. It was shown that SA modelling is better for aerodynamic purposes. The computed coefficient of lift (Cl) and coefficient of drag (Cd) were compared with empirical wind tunnel data for a range of angles of attack (AOA). As a further step, this project involved drawing and meshing 3D wings in Gambit. The 3D wing flow was solved and compared with 2D aerofoil section experimental results and wind tunnel data.
Keywords: CFD simulation, shape function, turbulent modelling, aerofoil
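In the spirit of the scheme described above, the sketch below generates the NACA 0012 half-thickness distribution and fits it with a simple square-root-weighted polynomial "shape function" by linear least squares; the ansatz, polynomial degree and code are illustrative assumptions, not the project's actual implementation.

```python
# NACA 0012 thickness distribution fitted by a least-squares "shape function" ansatz.
import numpy as np

def naca4_thickness(x, t=0.12):
    """Half-thickness of a NACA 4-digit aerofoil at chordwise positions x in [0, 1]."""
    return 5 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                    + 0.2843 * x**3 - 0.1015 * x**4)

x = np.linspace(0.0, 1.0, 200)
y = naca4_thickness(x)                      # NACA 0012 upper surface (symmetric aerofoil)

# Assumed shape-function form: y ~ sqrt(x) * polynomial(x), fitted by linear least squares
degree = 4
basis = np.sqrt(x)[:, None] * np.vander(x, degree + 1, increasing=True)
coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
fit = basis @ coeffs
print("coefficients:", np.round(coeffs, 4))
print("max fitting error:", float(np.max(np.abs(fit - y))))
```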
Procedia PDF Downloads 358
10378 Nutritional Characteristics, Mineral contents, Amino acid Composition and Phytochemical Analysis of Eryngium alpinium Leaf Protein Concentrates
Authors: Owonikoko A. D., Odoje O. F.
Abstract:
A fresh sample of Eryngium alpinum was purchased and processed into leaf protein concentrates with a view to evaluating its nutritional potential, mineral composition, amino acid characteristics and phytochemical constituents using standard analytical methods. The proximate composition of the leaf protein concentrates revealed moisture content 5.35±0.21 g/100 g, ash 11.37±0.43 g/100 g, crude protein 48.17±0.46 g/100 g, crude fat 15.38±0.07 g/100 g, crude fibre 3.05±0.46 g/100 g, and nitrogen-free extract 16.68±0.30 g/100 g. The mineral content was: Na 51.88±0.23 mg/100 g, K 65.40±0.32 mg/100 g, Ca 86.89±0.46 mg/100 g, Mg 49.27±0.42 mg/100 g, Zn 0.62±0.03 mg/100 g, Fe 6.65±0.43 mg/100 g, Mn 0.96±0.54 mg/100 g, Cd 0.28±0.04 mg/100 g, P 8.55±0.97 mg/100 g, while selenium, lead and mercury were not detected in the sample, indicating that it poses no risk of metal poisoning. The phytochemical results showed phytate 18.34±0.36 mg/100 g and flavonoid 0.25±0.41 mg/100 g. The sample contains both essential and non-essential amino acids, with the highest value for glutamic acid (12.26) and the lowest for tryptophan (1.05). The composition of the leaf protein concentrate shows that the sample is fit for dietary consumption and could also be processed for use as a food additive.
Keywords: mineral composition, phytochemical analysis, leaf protein concentrates, eryngium alpinum
Procedia PDF Downloads 109
10377 The Development of an Automated Computational Workflow to Prioritize Potential Resistance Variants in HIV Integrase Subtype C
Authors: Keaghan Brown
Abstract:
The prioritization of drug resistance mutations impacting protein folding or protein-drug and protein-DNA interactions within macromolecular systems is critical to the success of treatment regimens. With a continual increase in computational tools to assess these impacts, scalability and reproducibility have become essential components of computational analysis and experimental research. Here we introduce a bioinformatics pipeline that combines several structural analysis tools in a simplified workflow, optimizing the available computational hardware and software to automatically ease the flow of data transformations. Utilizing pre-established software tools, it was possible to develop a pipeline with a set of pre-defined functions that automates the introduction of mutations into the HIV-1 integrase protein structure, calculates the gain and loss of polar interactions, and calculates the change in protein folding energy. Additionally, an automated molecular dynamics analysis was implemented, which reduces the constant need for user input and output management. The resulting pipeline, Automated Mutation Introduction and Analysis (AMIA), is an open-source set of scripts designed to introduce mutations and analyse their effects on the static protein structure as well as on the multi-conformational states obtained from molecular dynamics simulations. The workflow allows the user to visualize all outputs in a user-friendly manner, thereby enabling the prioritization of variant systems for experimental validation.
Keywords: automated workflow, variant prioritization, drug resistance, HIV Integrase
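A schematic outline of the kind of orchestration described above is sketched below; the function names, signatures and placeholder bodies are our assumptions for illustration and are not the published AMIA scripts.

```python
# Schematic mutation-introduction-and-scoring pipeline (stub functions only).
from dataclasses import dataclass

@dataclass
class VariantResult:
    mutation: str
    delta_fold_energy: float      # change in folding energy (arbitrary units)
    polar_contacts_gained: int
    polar_contacts_lost: int

def introduce_mutation(structure_path: str, mutation: str) -> str:
    """Write a mutated copy of the integrase structure and return its path (stub)."""
    return f"{structure_path}.{mutation}.pdb"

def score_variant(mutant_path: str, mutation: str) -> VariantResult:
    """Stub for the stability and polar-interaction analysis step."""
    return VariantResult(mutation, delta_fold_energy=0.0,
                         polar_contacts_gained=0, polar_contacts_lost=0)

def prioritise(wild_type: str, mutations: list[str]) -> list[VariantResult]:
    """Run every mutation through the pipeline and rank by predicted destabilisation."""
    results = [score_variant(introduce_mutation(wild_type, m), m) for m in mutations]
    return sorted(results, key=lambda r: r.delta_fold_energy, reverse=True)

if __name__ == "__main__":
    # Hypothetical inputs: a structure file name and a few resistance-associated mutations
    for result in prioritise("integrase_subtypeC.pdb", ["G140S", "Q148H", "E92Q"]):
        print(result)
```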
Procedia PDF Downloads 77
10376 Searching Linguistic Synonyms through Parts of Speech Tagging
Authors: Faiza Hussain, Usman Qamar
Abstract:
Synonym-based searching is recognized to be a complicated problem, as text mining from the unstructured data of the web is challenging. Finding useful information which matches the user’s need among a bulk of web pages is a cumbersome task. In this paper, a novel and practical synonym retrieval technique is proposed to address this problem. For the replacement of semantics, user intent is taken into consideration in realizing the technique. Parts-of-speech tagging is applied for pattern generation from the query, and a thesaurus formed for this experiment is used. Compared with non-context-based searching, context-based searching proved to be a more efficient approach when dealing with linguistic semantics. This approach is very beneficial for intent-based searching. Finally, results and future dimensions are presented.
Keywords: natural language processing, text mining, information retrieval, parts-of-speech tagging, grammar, semantics
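The core idea can be sketched as follows: tag the query with parts of speech and pull part-of-speech-consistent synonyms from WordNet, which here stands in for the custom thesaurus mentioned in the abstract; this is an assumed illustration, not the authors' implementation.

```python
# POS-aware synonym expansion of a query using NLTK and WordNet.
import nltk
from nltk.corpus import wordnet

# One-time setup may be needed, e.g. nltk.download() for the tokenizer,
# the averaged perceptron tagger and the WordNet corpus.

def to_wordnet_pos(penn_tag):
    """Map a Penn Treebank tag to a WordNet part-of-speech constant."""
    if penn_tag.startswith("J"):
        return wordnet.ADJ
    if penn_tag.startswith("V"):
        return wordnet.VERB
    if penn_tag.startswith("N"):
        return wordnet.NOUN
    if penn_tag.startswith("R"):
        return wordnet.ADV
    return None

def synonyms_for_query(query):
    expansions = {}
    for word, tag in nltk.pos_tag(nltk.word_tokenize(query)):
        pos = to_wordnet_pos(tag)
        if pos is None:
            continue
        lemmas = {lemma.name().replace("_", " ")
                  for synset in wordnet.synsets(word, pos=pos)
                  for lemma in synset.lemmas()}
        expansions[word] = sorted(lemmas - {word})
    return expansions

print(synonyms_for_query("find cheap flights quickly"))
```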
Procedia PDF Downloads 308
10375 A Recommender System for Job Seekers to Show up Companies Based on Their Psychometric Preferences and Company Sentiment Scores
Authors: A. Ashraff
Abstract:
The increasing importance of the web as a medium for electronic and business transactions has served as a catalyst, or rather a driving force, for the introduction and implementation of recommender systems. Recommender systems play a major role in processing and analyzing thousands of data rows or reviews and help humans make a purchase decision about a product or service. They also have the ability to predict whether a particular user would rate a product or service, based on the user’s behavioral profile. At present, recommender systems are used extensively in every domain known to us; they are said to be ubiquitous. However, in the field of recruitment they are not yet utilized extensively. Recent statistics show an increase in staff turnover, which has negatively impacted organizations as well as employees; the reasons include company culture, working flexibility (work-from-home opportunities), lack of learning advancement, and pay scale. Further investigation revealed that there is a lack of guidance or support to help a job seeker find the company that will suit him best, and although information about companies is available, job seekers cannot read all the reviews by themselves and reach an analytical decision. In this paper, we propose an approach that studies the available review data on IT companies (scoring the reviews based on user sentiment) and gathers information on job seekers, including their psychometric evaluations, and then presents the job seeker with useful outputs on which company is most suitable for them. The theoretical approach, the algorithmic approach and the importance of such a system are discussed in this paper.
Keywords: psychometric tests, recommender systems, sentiment analysis, hybrid recommender systems
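One way to picture the proposed combination is the sketch below, which ranks companies by mixing an average review-sentiment score with the cosine similarity between a job seeker's psychometric profile and a company profile; the profile dimensions, weights and all data are invented for illustration and do not describe the proposed system's actual algorithm.

```python
# Hybrid ranking sketch: psychometric fit (cosine similarity) blended with review sentiment.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Assumed profile dimensions: [work-life balance, learning, autonomy, pay focus]
seeker_profile = [0.9, 0.8, 0.6, 0.4]
companies = {
    "CompanyA": {"profile": [0.8, 0.9, 0.5, 0.3], "review_sentiment": 0.62},
    "CompanyB": {"profile": [0.3, 0.4, 0.9, 0.9], "review_sentiment": 0.71},
    "CompanyC": {"profile": [0.7, 0.6, 0.6, 0.5], "review_sentiment": 0.40},
}

alpha = 0.6   # weight on psychometric fit versus sentiment (a design choice)
scores = {name: alpha * cosine(seeker_profile, c["profile"])
                + (1 - alpha) * c["review_sentiment"]
          for name, c in companies.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.3f}")
```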
Procedia PDF Downloads 107
10374 Selecting Answers for Questions with Multiple Answer Choices in Arabic Question Answering Based on Textual Entailment Recognition
Authors: Anes Enakoa, Yawei Liang
Abstract:
Question answering (QA) is one of the most important and demanding tasks in the field of Natural Language Processing (NLP). In QA systems, the answer generation task produces a list of candidate answers to the user's question, among which only one answer is correct. Answer selection is one of the main components of QA and is concerned with selecting the best answer choice from the candidate answers suggested by the system. However, the selection process can be very challenging, especially in Arabic, due to the particularities of the language. To address this challenge, an approach is proposed for answering questions with multiple answer choices in Arabic QA systems based on Textual Entailment (TE) recognition. The developed approach employs a Support Vector Machine that considers lexical, semantic and syntactic features in order to recognize the entailment between the generated hypotheses (H) and the text (T). A set of experiments has been conducted for performance evaluation, and the overall performance of the proposed method reached an accuracy of 67.5% with a C@1 score of 80.46%. The obtained results are promising and demonstrate that the proposed method is effective for the TE recognition task.
Keywords: information retrieval, machine learning, natural language processing, question answering, textual entailment
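The classification step can be sketched with an SVM over simple lexical-overlap features that decides whether a text T entails a hypothesis H; the toy features and training pairs below are stand-ins for the lexical, semantic and syntactic features used in the paper.

```python
# Toy SVM-based textual entailment classifier using lexical-overlap features.
from sklearn.svm import SVC

def overlap_features(text, hypothesis):
    t, h = set(text.lower().split()), set(hypothesis.lower().split())
    common = len(t & h)
    return [common / max(len(h), 1),                 # hypothesis coverage
            common / max(len(t), 1),                 # text coverage
            abs(len(t) - len(h)) / max(len(t), 1)]   # relative length difference

pairs = [
    ("the capital of france is paris", "paris is the capital of france", 1),
    ("the capital of france is paris", "lyon is the capital of france", 0),
    ("water boils at 100 degrees", "water boils at 100 degrees celsius", 1),
    ("water boils at 100 degrees", "water freezes at 100 degrees", 0),
]
X = [overlap_features(t, h) for t, h, _ in pairs]
y = [label for _, _, label in pairs]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([overlap_features("cats are mammals", "a cat is a mammal")]))
```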
Procedia PDF Downloads 145
10373 An Introduction to E-Content Producing Algorithm for Screen-Recorded Videos
Authors: Jamileh Darsareh, Mohammad Nikafrooz
Abstract:
Some teachers and e-content producers, based on their experience, try to produce educational videos using screen recording software. There are many challenges that they may encounter while producing screen-recorded videos, spanning both technical and pedagogical domains, such as designing the roadmap, preparing the screen, setting up the recording software and recording the screen, editing, etc. This descriptive study tries to present procedures for producing acceptable and well-made videos. These procedures are presented in the form of an algorithm for producing screen-recorded video. The algorithm covers the main production phases, including design, pre-production, production, post-production, and distribution. These phases consist of steps which are supported by several technical and pedagogical considerations. Following these phases and steps in the suggested order helps producers to produce their intended and desired video while saving time and facing fewer technical problems. It is expected that, by using this algorithm, e-content producers and teachers will perform better in producing educational videos.
Keywords: e-content producing algorithm, screen-recorded videos, screen recording software, technical and pedagogical considerations
Procedia PDF Downloads 197
10372 Vitrification and Devitrification of Chromium Containing Tannery Ash
Authors: Savvas Varitis, Panagiotis Kavouras, George Kaimakamis, Eleni Pavlidou, George Vourlias, Konstantinos Chrysafis, Philomela Komninou, Theodoros Karakostas
Abstract:
The tannery industry produces large quantities of chromium-containing waste, which also has a high organic content. Processing of this waste is important since the organic content is above disposal limits and the trivalent chromium it contains could potentially be oxidized to hexavalent chromium in the environment. This work aims to fabricate new vitreous and glass-ceramic materials which could incorporate the tannery waste in stabilized form, either for safe disposal or for the production of useful materials. Tannery waste was incinerated at 500 °C under anoxic conditions so that most of the organic content would be removed and the chromium would remain trivalent. The glass-forming agents SiO2, Na2O and CaO were mixed with the resulting ash in different proportions of decreasing ash content. Considering the low solubility of Cr in silicate melts, the mixtures were melted at 1400 °C and/or 1500 °C for 2 h and then cast onto a refractory steel plate. The resulting vitreous products were characterized by X-ray diffraction (XRD), differential thermal analysis (DTA), and scanning and transmission electron microscopy (SEM and TEM). XRD reveals the existence of Cr2O3 (eskolaite) crystallites embedded in a glassy amorphous matrix. Such crystallites are not formed below a certain proportion of the waste in the vitrified ash material. Reduction of the ash proportion increases the chromium content in the silicate matrix. From these glassy products, glass-ceramics were produced via different thermal treatment regimes.
Keywords: chromium containing tannery ash, glass ceramic materials, thermal processing, vitrification
Procedia PDF Downloads 367
10371 Context Detection in Spreadsheets Based on Automatically Inferred Table Schema
Authors: Alexander Wachtel, Michael T. Franzen, Walter F. Tichy
Abstract:
Programming requires years of training. With natural language and end-user development methods, programming could become available to everyone: it would enable end users to program their own devices and extend the functionality of existing systems without any knowledge of programming languages. In this paper, we describe an Interactive Spreadsheet Processing Module (ISPM), a natural language interface to spreadsheets that allows users to address ranges within the spreadsheet based on an inferred table schema. Using the ISPM, end users are able to search for values in the schema of the table and to address the data in spreadsheets implicitly. Furthermore, it enables them to select and sort the spreadsheet data using natural language. ISPM uses a machine learning technique to automatically infer areas within a spreadsheet, including different kinds of headers and data ranges. Since ranges can be identified from natural language queries, end users can query the data using natural language. During the evaluation, 12 undergraduate students were asked to perform operations (sum, sort, group and select) using the system and also using Excel without the ISPM interface, and the time taken for task completion was compared across the two systems. Only for the selection task did users take less time in Excel (since they directly selected the cells using the mouse) than in ISPM. The goal is to use natural language for end-user software engineering and thereby overcome the present bottleneck of professional developers.
Keywords: natural language processing, natural language interfaces, human computer interaction, end user development, dialog systems, data recognition, spreadsheet
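The schema-inference-plus-query idea can be sketched in a few lines: treat the first all-text row as the header and map a request such as "sum revenue" to the matching column; this is our illustration, not the ISPM implementation, and the header heuristic is an assumption.

```python
# Tiny sketch of schema inference and natural-language-style querying over a table.
def _is_number(cell):
    try:
        float(cell)
        return True
    except (TypeError, ValueError):
        return False

def infer_schema(rows):
    """Return (header, data_rows); the header is the first row with no numeric cells."""
    for i, row in enumerate(rows):
        if all(not _is_number(cell) for cell in row):
            return row, rows[i + 1:]
    return None, rows

def answer(query, rows):
    header, data = infer_schema(rows)
    tokens = query.lower().split()
    column = next(h for h in header if h.lower() in tokens)   # column named in the query
    values = [float(r[header.index(column)]) for r in data]
    if "sum" in tokens:
        return sum(values)
    if "sort" in tokens:
        return sorted(values)
    return values

sheet = [["Region", "Revenue"], ["North", "120"], ["South", "95"], ["East", "140"]]
print(answer("sum revenue", sheet))    # -> 355.0
```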
Procedia PDF Downloads 311
10370 Natural Language News Generation from Big Data
Authors: Bastian Haarmann, Likas Sikorski
Abstract:
In this paper, we introduce an NLG application for the automatic creation of ready-to-publish texts from big data. The fully automatically generated stories closely resemble the style in which a human writer would draw up a news story. Topics may include soccer games, stock exchange market reports, weather forecasts and many more. The generation of the texts follows human language production. Each generated text is unique. Ready-to-publish stories written by a computer application can help humans quickly grasp the outcomes of big data analyses, save journalists time-consuming pre-formulation, and cater to rather small audiences by offering stories that would otherwise not exist.
Keywords: big data, natural language generation, publishing, robotic journalism
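A minimal template-based sketch of such generation is shown below, turning structured match data into a short publishable sentence; the field names, phrasing rules and data are assumptions made for this example, not the authors' system.

```python
# Template-based news sentence generation from structured match data (toy example).
import random

def soccer_story(record):
    home, away = record["home"], record["away"]
    hg, ag = record["home_goals"], record["away_goals"]
    if hg > ag:
        verb = random.choice(["beat", "defeated", "overcame"])
        lead = f"{home} {verb} {away} {hg}-{ag}"
    elif hg < ag:
        verb = random.choice(["beat", "defeated", "overcame"])
        lead = f"{away} {verb} {home} {ag}-{hg} away from home"
    else:
        lead = f"{home} and {away} drew {hg}-{ag}"
    return f"{lead} in front of {record['attendance']:,} spectators on {record['date']}."

match = {"home": "FC Example", "away": "United SC", "home_goals": 3,
         "away_goals": 1, "attendance": 41250, "date": "Saturday"}
print(soccer_story(match))
```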
Procedia PDF Downloads 431
10369 Nutrient Content and Labelling Status of Pre-Packaged Beverages in Saudi Arabia
Authors: Ruyuf Y. Alnafisah, Nouf S. Alammari, Amani S. Alqahtani
Abstract:
Background: Beverage choice can have implications for the risk of non-communicable diseases. However, there is a lack of knowledge regarding the nutritional content of these beverages. This study aims to describe the nutrient content of pre-packaged beverages available in the Saudi market. Design: Data were collected from the Saudi Branded Food Database (SBFD). Nutrient content was standardized in terms of units and reference volumes to ensure consistency in the analysis. Results: A total of 1490 beverages were analyzed. The highest median levels of the majority of nutrients were found among dairy products: energy (68.4 (43-188) kcal/100 ml in milkshakes); protein (8.2 (0.5-8.2) g/100 ml in yogurt drinks); total fat (2.1 (1.3-3.5) g/100 ml in milk); saturated fat (1.4 (0-1.4) g/100 ml in yogurt drinks); cholesterol (30 (0-30) mg/100 ml in yogurt drinks); sodium (65 (65-65.4) mg/100 ml in yogurt drinks); and total sugars (12.9 (7.5-27) g/100 ml in milkshakes). Carbohydrate levels were highest in nectars (13 (11.8-14.2) g/100 ml), fruit drinks (12.9 (11.9-13.9) g/100 ml), and sparkling juices (12.9 (8.8-14) g/100 ml). The highest added sugar level was observed among regular soft drinks (12 (10.8-14) g/100 ml). The average rate of nutrient declaration was 60.95%. Carbohydrate had the highest declaration rate among nutrients (99.1%), and yogurt drinks had the highest declaration rate among beverage categories (92.7%). The median content of vitamins A and D in dairy products met the mandatory addition levels. Conclusion: This study provides valuable insights into the nutrient content of pre-packaged beverages in the Saudi market and serves as a foundation for future research and monitoring. The findings support the taxation of sugary beverages and raise concerns about the health effects of high sugar content in fruit juices. Despite the inclusion of vitamins D and A in dairy products, the study highlights the need for alternative strategies to address these deficiencies.
Keywords: pre-packaged beverages, nutrients content, nutrients declaration, daily percentage value, mandatory addition of vitamins
Procedia PDF Downloads 58
10368 TARF: Web Toolkit for Annotating RNA-Related Genomic Features
Abstract:
Genomic features, i.e. genome-based coordinates, are commonly used for the representation of biological features such as genes, RNA transcripts and transcription factor binding sites. For the analysis of RNA-related genomic features, such as RNA modification sites, a common task is to correlate these features with transcript components (5'UTR, CDS, 3'UTR) to explore their distribution characteristics in terms of transcriptomic coordinates, e.g., to examine whether a specific type of biological feature is enriched near transcription start sites. Existing approaches for performing these tasks involve the manipulation of a gene database, conversion from genome-based to transcript-based coordinates, and visualization methods that are capable of showing RNA transcript components and the distribution of the features. These steps are complicated and time-consuming, especially for researchers who are not familiar with the relevant tools. To overcome this obstacle, we developed a dedicated web app, TARF, a web toolkit for annotating RNA-related genomic features. The TARF web tool provides a web-based way to easily annotate and visualize RNA-related genomic features. Once a user has uploaded the features in BED format and specified a built-in transcript database or uploaded a customized gene database in GTF format, the tool fulfils three main functions. First, it adds annotation on gene and RNA transcript components. For every feature provided by the user, the overlap with RNA transcript components is identified, and the information is combined in one table which is available for copying and download; summary statistics about ambiguous assignments are also computed. Second, the tool provides a convenient visualization of the features at the single gene/transcript level. For a selected gene, the tool shows the features together with the gene model in a genome-based view, and also maps the features to transcript-based coordinates and shows their distribution along a single spliced RNA transcript. Third, a global transcriptomic view of the genomic features is generated using the Guitar R/Bioconductor package: the distribution of features on RNA transcripts is normalized with respect to RNA transcript landmarks, and the enrichment of the features on different RNA transcript components is demonstrated. We tested the newly developed TARF toolkit with three different types of genomic features related to chromatin H3K4me3, RNA N6-methyladenosine (m6A) and RNA 5-methylcytosine (m5C), obtained from ChIP-Seq, MeRIP-Seq and RNA BS-Seq data, respectively. TARF successfully revealed their respective distribution characteristics, i.e. H3K4me3, m6A and m5C are enriched near transcription start sites, stop codons and 5'UTRs, respectively. Overall, TARF is a useful web toolkit for the annotation and visualization of RNA-related genomic features and should help simplify the analysis of various RNA-related genomic features, especially those related to RNA modifications.
Keywords: RNA-related genomic features, annotation, visualization, web server
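The first function, assigning each genome-based feature to the transcript component it overlaps, can be sketched in plain Python as below; the coordinates are invented, and the sketch does not reproduce TARF's actual code, which relies on the Guitar R/Bioconductor package for the global view.

```python
# Assign BED-style features to overlapping transcript components (toy coordinates).
def overlaps(a_start, a_end, b_start, b_end):
    return a_start < b_end and b_start < a_end

def annotate(features, components):
    """features: [(chrom, start, end, name)]; components: {label: (chrom, start, end)}."""
    table = []
    for chrom, start, end, name in features:
        hits = [label for label, (c, s, e) in components.items()
                if c == chrom and overlaps(start, end, s, e)]
        table.append((name, hits or ["intergenic"]))
    return table

transcript = {"5'UTR": ("chr1", 1000, 1200),
              "CDS":   ("chr1", 1200, 2800),
              "3'UTR": ("chr1", 2800, 3400)}
m6a_sites = [("chr1", 2790, 2795, "site_1"),   # near the stop codon
             ("chr1", 1050, 1055, "site_2"),
             ("chr2",  500,  505, "site_3")]

for name, hits in annotate(m6a_sites, transcript):
    print(name, "->", ", ".join(hits))
```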
Procedia PDF Downloads 208
10367 Elimination Study of Organic Pollutants from Leachate Technical Landfill; Using Fenton and Photo-Fenton Systems Combined with Biological Treatment
Authors: Belahmadi M. S. O., Abdessemed A., Benchiheub M., Doukali H., Kaid Kasbah K. M.
Abstract:
The aim of this study is to evaluate the quality of the leachate generated by the Batna landfill site and to verify the performance of various advanced oxidation processes, in particular the Fenton and Photo-Fenton systems combined with biological treatment, in eliminating the recalcitrant organic matter contained in this effluent and preserving the reverse osmosis membranes used for leachate treatment. The average values obtained are compared with national and international discharge standards. The results of the physico-chemical analyses show that the leachate has an alkaline pH of 8.26 and a high organic load with a low oxygen content. Mineral pollution is represented by high conductivity (38.3 mS/cm), high Kjeldahl nitrogen content (1266.504 mg/L) and ammoniacal nitrogen (1098.384 mg/L). The average pollution indicator parameters measured were: BOD5 = 1483.333 mg O2/L, COD = 99790.244 mg O2/L, TOC = 22400 mg C/L. These parameters exceed Algerian standards; hence, there is a need to treat this effluent before discharging it into the environment. A comparative study was carried out to estimate the efficiency of the two oxidation processes. Under optimum reaction conditions, TOC removal efficiencies of 63.43% and 73.4% were achieved for the Fenton and Photo-Fenton processes, respectively, and COD removal rates were estimated at 88% and 99.5% for the Fenton and Photo-Fenton processes, respectively. In addition, the Photo-Fenton + bacteria + micro-algae hybrid treatment gave removal efficiencies of around 92.24% for TOC and 99.9% for COD, with -0.5 for AOS and 0.01 for CN. The results obtained during this study showed that a hybrid approach combining the Photo-Fenton process and biological treatment appears to be a highly effective alternative for achieving satisfactory treatment, exploiting the advantages of this method in terms of organic pollutant removal.
Keywords: leachate, landfill, advanced oxidation processes, Fenton and Photo-Fenton systems, biological treatment, organic pollutants
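The removal efficiencies quoted above follow the usual figure of merit, efficiency (%) = (C0 - C)/C0 x 100; the small sketch below works through it using the raw TOC value from the abstract and a treated value back-calculated from the reported 73.4% removal, purely for illustration.

```python
# Worked removal-efficiency calculation for TOC (treated value is back-calculated,
# not a measured result).
def removal_efficiency(initial, final):
    return (initial - final) / initial * 100.0

toc_initial = 22400.0                               # mg C/L, raw leachate (from the abstract)
toc_after_photofenton = toc_initial * (1 - 0.734)   # implied by the reported 73.4 % removal
print(f"Photo-Fenton TOC removal: "
      f"{removal_efficiency(toc_initial, toc_after_photofenton):.1f} %")
```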
Procedia PDF Downloads 67
10366 Design and Numerical Study on Aerodynamics Performance for F16 Leading Edge Extension
Authors: San-Yih Lin, Hsien-Hao Teng
Abstract:
In this research, we use the commercial software ANSYS CFX to carry out simulations of the F-16 flow field and aerodynamic performance. Flight with a modified Leading Edge Extension (LEX) is proposed to increase the lift/drag ratio. The Shear Stress Transport turbulence model is used. The unstructured grid system is generated with ICEM CFD. A prism grid is generated around the wall surface to resolve the viscous boundary-layer flow field, and a tetrahedral mesh is used for the rest of the computational domain. The lift, drag, and pitching moment are computed. The strong vortex structures above the wing and the vortex bursts for different LEX sweep angles are investigated.
Keywords: LEX, lift/drag ratio, pitch moment, vortex burst
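For context, the lift/drag ratio mentioned above is obtained from the integrated forces via Cl = L/(0.5 ρ V² S) and Cd = D/(0.5 ρ V² S); the sketch below shows the arithmetic with assumed free-stream conditions and hypothetical solver forces, not the study's values.

```python
# Lift/drag coefficients and L/D ratio from integrated forces (all inputs hypothetical).
def force_coefficient(force_n, rho, velocity, ref_area):
    return force_n / (0.5 * rho * velocity**2 * ref_area)

rho, v, s = 1.225, 250.0, 27.87        # kg/m^3, m/s, m^2 (assumed reference wing area)
lift, drag = 3.1e5, 2.4e4              # N, hypothetical integrated forces from the solver

cl = force_coefficient(lift, rho, v, s)
cd = force_coefficient(drag, rho, v, s)
print(f"Cl={cl:.3f}, Cd={cd:.4f}, L/D={cl/cd:.2f}")
```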
Procedia PDF Downloads 326
10365 Excitation Modeling for Hidden Markov Model-Based Speech Synthesis Based on Wavelet Analysis
Authors: M. Kiran Reddy, K. Sreenivasa Rao
Abstract:
The conventional Hidden Markov Model (HMM)-based speech synthesis system (HTS) uses only a pulse excitation model, which differs significantly from the natural excitation signal. Hence, buzziness can be perceived in the speech generated using HTS. This paper proposes an efficient excitation modeling method that can significantly reduce this buzziness and improve the quality of HMM-based speech synthesis. The proposed approach models the pitch-synchronous residual frames extracted from the residual excitation signal. Each pitch-synchronous residual frame is parameterized using 30 wavelet coefficients, which are found to accurately capture the perceptually important information present in the residual waveform. In the synthesis phase, the residual frames are reconstructed from the generated wavelet coefficients and are pitch-synchronously overlap-added to generate the excitation signal. The proposed excitation modeling method is integrated into the HMM-based speech synthesis system. Evaluation results indicate that the speech synthesized with the proposed excitation model is significantly better than the speech generated using state-of-the-art excitation modeling methods.
Keywords: excitation modeling, hidden Markov models, pitch-synchronous frames, speech synthesis, wavelet coefficients
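The parameterisation idea can be sketched with a discrete wavelet transform: decompose one pitch-synchronous residual frame, keep the 30 largest-magnitude coefficients, and reconstruct the frame from them; the wavelet family, frame length and signal below are arbitrary choices, not the paper's exact settings.

```python
# Keep 30 wavelet coefficients of a residual frame and reconstruct it (PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(7)
frame = rng.normal(size=200)                       # stand-in residual frame (one pitch period)

coeffs = pywt.wavedec(frame, "db4", level=4)
flat, slices = pywt.coeffs_to_array(coeffs)

keep = 30
threshold = np.sort(np.abs(flat))[-keep]           # magnitude of the 30th largest coefficient
flat_pruned = np.where(np.abs(flat) >= threshold, flat, 0.0)

reconstructed = pywt.waverec(
    pywt.array_to_coeffs(flat_pruned, slices, output_format="wavedec"), "db4")[:len(frame)]
error = np.linalg.norm(frame - reconstructed) / np.linalg.norm(frame)
print(f"kept {np.count_nonzero(flat_pruned)} coefficients, relative error = {error:.3f}")
```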
Procedia PDF Downloads 249