Search results for: grammatical errors
794 Changes of First-Person Pronoun Pragmatic Functions in Three Historical Chinese Texts
Authors: Cher Leng Lee
Abstract:
The existence of multiple first-person pronouns (1PPs) in classical Chinese is an issue that has not been resolved despite linguists' efforts from a grammatical perspective. This paper proposes pragmatics as a viable solution. There is also a lack of research exploring the evolving usage patterns of 1PPs within the historical context of Chinese language use. Such research can help us comprehend the changes and developments of these linguistic elements. To fill these research gaps, we use the diachronic pragmatics approach to contrast the functions of Chinese 1PPs in three representative texts from three different historical periods: The Analects (the Spring and Autumn Period), The Grand Scribe’s Records (Grand Records) (the Qin and Han Period), and A New Account of Tales of the World (New Account) (the Wei, Jin and Southern and Northern Period). The 1PPs in these texts are manually identified and classified according to their pragmatic functions in the given contexts in order to observe their historical changes, understand the factors that contribute to these changes, and suggest how wo became the only 1PP in today’s spoken Mandarin.
Keywords: historical, Chinese, pronouns, pragmatics
Procedia PDF Downloads 54
793 FT-NIR Method to Determine Moisture in Gluten Free Rice-Based Pasta during Drying
Authors: Navneet Singh Deora, Aastha Deswal, H. N. Mishra
Abstract:
Pasta is one of the most widely consumed food products around the world. Rapid determination of the moisture content in pasta will assist food processors in providing online quality control of pasta during large-scale production. A rapid Fourier transform near-infrared (FT-NIR) method was developed for determining the moisture content in pasta. A calibration set of 150 samples, a validation set of 30 samples, and a prediction set of 25 samples of pasta were used. The diffuse reflection spectra of different types of pastas were measured with an FT-NIR analyzer in the 4,000-12,000 cm-1 spectral range. Calibration and validation sets were designed to develop and evaluate the adequacy of the method over a moisture content range of 10 to 15 percent (w.b.) of the pasta. Prediction models based on partial least squares (PLS) regression were developed in the near-infrared region. Conventional criteria such as R2, the root mean square error of cross-validation (RMSECV), the root mean square error of estimation (RMSEE), and the number of PLS factors were considered in comparing three pre-processing methods (vector normalization, minimum-maximum normalization, and multiplicative scatter correction). Spectra of pasta samples were treated with different mathematical pre-treatments before being used to build models between the spectral information and moisture content. The moisture content in pasta predicted by FT-NIR methods had a very good correlation with the values determined via traditional methods (R2 = 0.983), which clearly indicates that FT-NIR methods can be used as an effective tool for rapid determination of moisture content in pasta. The best calibration model was developed with min-max normalization (MMN) spectral pre-processing (R2 = 0.9775). 
The MMN pre-processing method was found most suitable, and the maximum coefficient of determination (R2) value of 0.9875 was obtained for the calibration model developed.
Keywords: FT-NIR, pasta, moisture determination, food engineering
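The figures of merit used above (R2 and RMSE-type errors) and the MMN pre-processing step can be sketched as follows; the moisture and prediction values are illustrative, not the study's data:

```python
import math

def minmax_normalize(spectrum):
    # Min-max normalization (MMN): rescale one spectrum to the [0, 1] range.
    lo, hi = min(spectrum), max(spectrum)
    return [(v - lo) / (hi - lo) for v in spectrum]

def r_squared(reference, predicted):
    # Coefficient of determination between reference and predicted moisture.
    mean_ref = sum(reference) / len(reference)
    ss_res = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    ss_tot = sum((r - mean_ref) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot

def rmse(reference, predicted):
    # Root-mean-square error of estimation (RMSEE-style figure of merit).
    n = len(reference)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted)) / n)

# Illustrative moisture values, % wet basis, within the 10-15% calibration range.
ref = [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
pred = [10.1, 10.9, 12.2, 12.8, 14.1, 15.0]
print(minmax_normalize([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
print(round(r_squared(ref, pred), 4))     # 0.9937
print(round(rmse(ref, pred), 4))          # 0.1354
```

In practice the calibration step would be PLS regression on the MMN-normalized spectra rather than the direct comparison shown here.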
Procedia PDF Downloads 258
792 The Multi-Lingual Acquisition Patterns of Elementary, High School and College Students in Angeles City, Philippines
Authors: Dennis Infante, Leonora Yambao
Abstract:
The Philippines is a multilingual community. A Filipino learns at least three languages over his or her lifespan. Since languages are learned and picked up simultaneously from the environment, a student naturally develops a language system that combines features of at least three languages: the local language, English, and Filipino. This study investigates this particular phenomenon and aims to propose a theoretical framework of language acquisition at the elementary, high school, and college levels in the three languages spoken and used in media, community, business, and school: Kapampangan, the local language; Filipino, the national language; and English. The study randomly selected five students from three participating schools in order to acquire language samples. The samples were analyzed at the subsentential, sentential, and suprasentential levels using grammatical theories. The data were classified to map out the pattern of substitution or shifting from one language to another.
Keywords: language acquisition, mother tongue, multiculturalism, multilingual education
Procedia PDF Downloads 380
791 How Can Personal Protective Equipment Be Best Used and Reused: A Human Factors-Based Look at Donning and Doffing Procedures
Authors: Devin Doos, Ashley Hughes, Trang Pham, Paul Barach, Rami Ahmed
Abstract:
Over 115,000 Health Care Workers (HCWs) have died from COVID-19, and millions have been infected while caring for patients. HCWs have filed thousands of safety complaints surrounding Personal Protective Equipment (PPE) shortages, including concerns about inadequate PPE and PPE reuse. Protocols for donning and doffing PPE remain ambiguous, lack an evidence base, and often result in wide deviations in practice. Deviations from PPE donning and doffing protocols commonly result in self-contamination but have not been thoroughly addressed, and no evidence-driven protocols provide guidance on protecting HCWs during periods of PPE reuse. Objective: The aim of this study was to examine safety-related threats and risks to HCWs due to the reuse of PPE among Emergency Department (ED) personnel. Method: We conducted a prospective observational study to examine the risks of reusing PPE. First, ED personnel were asked to don and doff PPE in a simulation lab. Each participant was asked to don and doff PPE five times, according to the maximum reuse recommendation set by the Centers for Disease Control and Prevention (CDC). Each participant was video-recorded; the recordings were reviewed and coded independently by at least two of the three trained coders for safety behaviors and the riskiness of actions. A third coder was brought in when agreement between the two coders could not be reached. Agreement between coders was high (81.9%), and all disagreements (100%) were resolved via consensus. A bowtie risk assessment chart was constructed to analyze the factors that contribute to the increased risks HCWs face due to PPE use and reuse. Agreement among content experts in the fields of Emergency Medicine, Human Factors, and Anesthesiology was used to select aspects of health care that both contribute to and mitigate the risks associated with PPE reuse. 
Findings: Twenty-eight clinician participants completed five rounds of donning/doffing PPE, yielding 140 PPE donning/doffing sequences. Two emerging threats were associated with behaviors in donning, doffing, and reusing PPE: (i) direct exposure to contaminant, and (ii) transmission/spread of contaminant. Protective behaviors included hand hygiene, not touching the patient-facing surface of PPE, and ensuring a proper fit and closure of all PPE materials. All participants (100%, n=28) deviated from the CDC-recommended order, and most participants (92.85%, n=26) self-contaminated at least once during reuse. Other frequent errors included failure to tie all ties on the PPE (92.85%, n=26) and failure to wash hands after a contamination event occurred (39.28%, n=11). Conclusions: There is wide variation, and there are regular errors, in how HCWs don, doff, and reuse PPE, which led to self-contamination. Some errors were deemed “recoverable”, such as washing hands after touching a patient-facing surface, thereby removing the contaminant. Other errors, such as using a contaminated mask and accidentally spreading the contaminant to the neck and face, can lead to compound risks that are unique to repeated PPE use. A more comprehensive understanding of the threats to HCW safety, and a complete approach to mitigating the underlying risks, including visualizing them with risk management tools, may aid future PPE design as well as workflow and space solutions.
Keywords: bowtie analysis, health care, PPE reuse, risk management
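The inter-rater procedure described above (percent agreement between two coders, with a third coder adjudicating disagreements) can be sketched as follows; the codes are invented for illustration:

```python
def percent_agreement(coder_a, coder_b):
    # Percentage of sequences where two independent coders assigned the same code.
    matches = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return 100.0 * matches / len(coder_a)

# Illustrative codes for 10 donning/doffing sequences (S = safe, R = risky).
coder_a = ["S", "R", "S", "S", "R", "S", "R", "S", "S", "R"]
coder_b = ["S", "R", "S", "R", "R", "S", "S", "S", "S", "R"]

agreement = percent_agreement(coder_a, coder_b)
# Sequences where the two coders disagree go to a third coder for consensus.
to_third_coder = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print(agreement)       # 80.0
print(to_third_coder)  # [3, 6]
```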
Procedia PDF Downloads 90
790 Coupling Large Language Models with Disaster Knowledge Graphs for Intelligent Construction
Authors: Zhengrong Wu, Haibo Yang
Abstract:
In the context of escalating global climate change and environmental degradation, the complexity and frequency of natural disasters are continually increasing. Confronted with an abundance of information regarding natural disasters, traditional knowledge graph construction methods, which rely heavily on grammatical rules and prior knowledge, perform suboptimally in processing complex, multi-source disaster information. Drawing upon past natural disaster reports, disaster-related literature in both English and Chinese, and data from various disaster monitoring stations, this study constructs question-answer templates based on large language models. Using the P-Tune method, the ChatGLM2-6B model is fine-tuned, leading to the development of a disaster knowledge graph based on large language models. This graph serves as a knowledge base supporting disaster emergency response.
Keywords: large language model, knowledge graph, disaster, deep learning
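Downstream of the fine-tuned model, a knowledge graph is commonly stored as subject-predicate-object triples. A minimal sketch of such a store and a pattern query follows; the entities and relations are invented examples, and the P-Tune fine-tuning of ChatGLM2-6B itself is not reproduced here:

```python
# A minimal in-memory triple store; entity and relation names are
# illustrative and not taken from the study's actual graph.
triples = [
    ("Typhoon Haiyan", "type", "tropical cyclone"),
    ("Typhoon Haiyan", "affected", "Philippines"),
    ("Philippines", "monitored_by", "PAGASA station network"),
]

def query(store, subject=None, predicate=None, obj=None):
    # Return every triple matching the given (possibly partial) pattern.
    return [
        (s, p, o) for s, p, o in store
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

print(query(triples, subject="Typhoon Haiyan"))
```

In a full system, the LLM's answers to the question-answer templates would be parsed into triples like these before being loaded into a graph database.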
Procedia PDF Downloads 56
789 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines a method for developing a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive-Neuro Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural networks. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using solar irradiation, module efficiency, and performance ratio as inputs. The effects on the error between the calibration data and the model-generated outputs of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors than FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, e.g., negative GWP values for the solar PV system when the inputs were all at the upper end of their ranges. This shows that the applicability of ANFIS models depends highly on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point beyond which increasing the input values does not improve the GWP and LCOE anymore. 
In the absence of data that could be used for calibration, conventional FIS provides a knowledge-based model that can be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in the industry and policy-making sectors. While the methodology does not guarantee a more accurate result than those generated by the life cycle methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
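A zero-order Sugeno-type inference step of the kind compared in the study can be sketched with triangular membership functions; the rule base, membership parameters, and LCOE constants below are invented for illustration, not the calibrated model:

```python
def tri(x, a, b, c):
    # Triangular membership function with support [a, c] and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sugeno_lcoe(irradiation):
    # Two illustrative zero-order Sugeno rules (constant consequents):
    #   R1: IF irradiation is LOW  THEN LCOE = 0.12 $/kWh
    #   R2: IF irradiation is HIGH THEN LCOE = 0.06 $/kWh
    w_low = tri(irradiation, 0.0, 2.0, 5.0)
    w_high = tri(irradiation, 2.0, 5.0, 8.0)
    # Sugeno defuzzification: weighted average of the rule outputs.
    return (w_low * 0.12 + w_high * 0.06) / (w_low + w_high)

print(round(sugeno_lcoe(3.5), 3))  # 0.09, halfway between the two rule outputs
```

A Mamdani-type system would instead aggregate fuzzy output sets and defuzzify (e.g., by centroid); ANFIS would learn the membership and consequent parameters from data.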
Procedia PDF Downloads 164
788 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie Curie ITN project and focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The necessity of studying this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is one of the primary tasks in the absence of any knowledge of the model parameters that should be used. In this framework, we propose identifying the model parameters by minimizing a cost function that measures the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to determine the unknowns of the AWJM model and their optimal values, which could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strictly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results of the identification of the model parameters and of the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect the proper identification of the unknowns. 
This approach also gives us the ability to extend the research to more complex cases and to consider different types of model and measurement errors, as well as a 3D time-dependent model with variations of the jet feed speed.
Keywords: abrasive waterjet milling, inverse problem, model parameters identification, regularization
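The stabilizing role of the regularization terms can be illustrated on a toy linear problem (this is not the AWJM model or its TAPENADE-generated adjoint): with nearly collinear columns, the unregularized normal equations amplify measurement noise, while a small Tikhonov penalty keeps the estimate stable.

```python
import numpy as np

def tikhonov(A, b, lam):
    # Regularized least squares: minimize ||A x - b||^2 + lam * ||x||^2,
    # solved via the regularized normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-posed identification: two nearly indistinguishable parameters.
A = np.array([[1.0, 1.0000],
              [1.0, 1.0001],
              [1.0, 0.9999]])
x_true = np.array([2.0, 3.0])
rng = np.random.default_rng(1)
b = A @ x_true + rng.normal(0.0, 1e-3, size=3)  # noisy "measurements"

x_naive = tikhonov(A, b, 0.0)   # unregularized: noise-amplified estimate
x_reg = tikhonov(A, b, 1e-3)    # small penalty stabilizes the estimate
print(x_naive, x_reg)
```

Only the well-determined combination (here the sum of the two parameters) is recovered reliably; distinguishing the individual parameters would require better-conditioned measurements, mirroring the instability reported for the AWJM identification.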
Procedia PDF Downloads 316
787 Linguistic Attitudes and Language Learning Needs of Heritage Language Learners of Spanish in the United States
Authors: Sheryl Bernardo-Hinesley
Abstract:
Heritage language learners are students who have been raised in a home where a minority language is spoken, who speak or merely understand the minority heritage language, and who are to some degree bilingual in the majority and the heritage languages. In view of the rising university enrollment of Hispanics in the United States who have chosen to study Spanish, university language programs are currently faced with the challenge of accommodating the language needs of heritage language learners of Spanish. The present study investigates the heritage language perceptions and language attitudes of heritage language learners of Spanish, as well as their classroom language learning experiences and needs. In order to carry out the study, a qualitative survey was used to gather data from university students. Analysis of the students' responses indicates that heritage learners are motivated to learn the heritage language. In relation to the aspects a language course for heritage learners should focus on, results show that the aspects of interest are accent marks and spelling, grammatical accuracy, vocabulary, writing, reading, and culture.
Keywords: heritage language learners, language acquisition, linguistic attitudes, Spanish in the US
Procedia PDF Downloads 212
786 Kitchenary Metaphors in Hindi-Urdu: A Cognitive Analysis
Authors: Bairam Khan, Premlata Vaishnava
Abstract:
The ability to conceptualize one entity in terms of another allows us to communicate through metaphors. This central feature of human cognition has evolved with the development of language, and the processing of metaphors is effortless, occurring without any conscious appraisal. South Asians, like other speech communities, have been using kitchenary [culinary] metaphors in a simple yet interesting way and are known for bringing them into new and unique constellations wherever they are. This composite feature of our language is used to communicate in a precise and compact manner and shapes the expression. The present study explores the role of kitchenary metaphors in the making and shaping of idioms by applying cognitive metaphor theories. Drawing on examples from a corpus of adverts, print, and electronic media, the study looks at the metaphorical language used by real people in real situations. The overarching theme throughout is that kitchenary metaphors are powerful tools of expression in Hindi-Urdu.
Keywords: cognitive metaphor theories, kitchenary metaphors, Hindi-Urdu print and electronic media, grammatical structure of kitchenary metaphors of Hindi-Urdu
Procedia PDF Downloads 93
785 Use and Relationship of Shell Nouns as Cohesive Devices in the Quality of Second Language Writing
Authors: Kristine D. de Leon, Junifer A. Abatayo, Jose Cristina M. Pariña
Abstract:
The current study is a comparative analysis of the use of shell nouns as cohesive devices (CDs) in an English as a Second Language (ESL) setting, in order to identify their use and their relationship to the quality of second language (L2) writing. As these nouns are established to anticipate meaning within, across, or outside the text, their use has fascinated writing researchers. The corpus of the study included published articles from reputable journals and graduate students’ papers, analyzed to determine the frequency of “highly prevalent” shell nouns in the academic community, to identify the different lexicogrammatical patterns in which these nouns occur, and to describe the functions connected with these patterns. The results imply that published authors used more shell nouns in their papers than graduate students did. However, the functions of the different lexicogrammatical patterns for the frequently occurring shell nouns are somewhat similar. These results could help students enhance the cohesion of their texts and comprehend them better.
Keywords: anaphoric, cataphoric, lexico-grammatical, shell nouns
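A frequency count of the kind described can be sketched as below; the shell-noun inventory, the sample sentences, and the "noun + that-clause" pattern used here are illustrative, not the study's actual corpus or coding scheme:

```python
import re
from collections import Counter

# A small set of commonly cited shell nouns; the inventory of "highly
# prevalent" nouns used in the study is not reproduced here.
SHELL_NOUNS = {"fact", "idea", "problem", "result", "issue", "reason"}

text = (
    "The fact that cohesion matters is clear. This idea guides revision. "
    "The problem is that novice writers underuse such nouns, and the "
    "result is a less cohesive text."
)

tokens = re.findall(r"[a-z]+", text.lower())
counts = Counter(t for t in tokens if t in SHELL_NOUNS)
print(counts.most_common())

# One illustrative lexicogrammatical pattern: determiner + noun + that-clause
# (a typical cataphoric use, where the noun's content follows it).
cataphoric = re.findall(r"\b(?:the|this)\s+([a-z]+)\s+that\b", text.lower())
print(cataphoric)  # ['fact']
```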
Procedia PDF Downloads 185
784 Optimization of Geometric Parameters of Microfluidic Channels for Flow-Based Studies
Authors: Parth Gupta, Ujjawal Singh, Shashank Kumar, Mansi Chandra, Arnab Sarkar
Abstract:
Microfluidic devices have emerged as indispensable tools across various scientific disciplines, offering precise control and manipulation of fluids at the microscale. Their efficacy in flow-based research, spanning engineering, chemistry, and biology, relies heavily on the geometric design of microfluidic channels. This work introduces a novel approach to optimise these channels through Response Surface Methodology (RSM), departing from the conventional practice of addressing one parameter at a time. Traditionally, optimising microfluidic channels involved isolated adjustments to individual parameters, limiting the comprehensive understanding of their combined effects. In contrast, our approach considers the simultaneous impact of multiple parameters, employing RSM to efficiently explore the complex design space. The outcome is an innovative microfluidic channel that consumes an optimal sample volume and minimises flow time, enhancing overall efficiency. The relevance of geometric parameter optimization in microfluidic channels extends significantly to biomedical engineering. The flow characteristics of porous materials within these channels depend on many factors, including fluid viscosity, environmental conditions (such as temperature and humidity), and specific design parameters like sample volume, channel width, channel length, and substrate porosity. This intricate interplay directly influences the performance and efficacy of microfluidic devices, which, if not optimized, can lead to increased costs and errors in disease testing and analysis. In the context of biomedical applications, the proposed approach addresses the critical need for precision in fluid flow. It mitigates the manufacturing costs associated with trial-and-error methodologies by optimising multiple geometric parameters concurrently. The resulting microfluidic channels offer enhanced performance and contribute to a streamlined, cost-effective process for testing and analyzing diseases. 
A key highlight of our methodology is its consideration of the interconnected nature of geometric parameters. For instance, the volume of the sample, when optimized alongside channel width, length, and substrate porosity, creates a synergistic effect that minimizes errors and maximizes efficiency. This holistic optimization approach ensures that microfluidic devices operate at their peak performance, delivering reliable results in disease testing.
Keywords: microfluidic device, minitab, statistical optimization, response surface methodology
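The simultaneous fitting that RSM performs can be sketched as a second-order polynomial fit over a small factorial design; the response function, factor ranges, and optimum below are synthetic stand-ins, not the study's data:

```python
import numpy as np

# Synthetic response: a quality metric as a function of channel width w (mm)
# and length l (mm); the true optimum is placed at w = 2, l = 5 by design.
def response(w, l):
    return 10.0 - (w - 2.0) ** 2 - 2.0 * (l - 5.0) ** 2

# Full-factorial design (9 runs) and a second-order RSM model:
#   y = b0 + b1*w + b2*l + b3*w^2 + b4*l^2 + b5*w*l
runs = [(w, l) for w in (1.0, 2.0, 3.0) for l in (4.0, 5.0, 6.0)]
X = np.array([[1.0, w, l, w * w, l * l, w * l] for w, l in runs])
y = np.array([response(w, l) for w, l in runs])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point of the fitted surface: solve grad y = 0, i.e.
#   [2*b3, b5; b5, 2*b4] [w, l]^T = -[b1, b2]^T
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
w_opt, l_opt = np.linalg.solve(H, -b[1:3])
print(round(w_opt, 3), round(l_opt, 3))  # 2.0 5.0
```

Tools such as Minitab (named in the keywords) automate exactly this kind of design generation, model fitting, and stationary-point analysis.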
Procedia PDF Downloads 68
783 First-Person Pronoun Pragmatic Functions in Three Historical Chinese Texts
Authors: Cher Leng Lee
Abstract:
The existence of multiple first-person pronouns (1PPs) in classical Chinese is an issue that has not been resolved despite linguists' efforts from a grammatical perspective. This paper proposes pragmatics as a viable solution. There is also a lack of research exploring the evolving usage patterns of 1PPs within the historical context of Chinese language use. Such research can help us comprehend the changes and developments of these linguistic elements. To fill these research gaps, we use the diachronic pragmatics approach to contrast the functions of Chinese 1PPs in three representative texts from three different historical periods: The Analects (the Spring and Autumn Period), The Grand Scribe’s Records (Grand Records) (the Qin and Han Period), and A New Account of Tales of the World (New Account) (the Wei, Jin and Southern and Northern Period). The 1PPs in these texts are manually identified and classified according to their pragmatic functions in the given contexts in order to observe their historical changes, understand the factors that contribute to these changes, and suggest how wo became the only 1PP in today’s spoken Mandarin.
Keywords: Chinese language, classical Chinese, historical linguistics, pragmatics, first-person pronouns
Procedia PDF Downloads 23
782 Quality Analysis of Vegetables Through Image Processing
Authors: Abdul Khalique Baloch, Ali Okatan
Abstract:
The quality analysis of food and vegetables from images is a hot topic nowadays, with researchers improving on previous findings through different techniques and methods. In this research, we review the literature, identify its gaps, suggest an improved approach, design the algorithm, and develop software to measure quality from images, comparing the results with previous work. The application uses an open-source dataset and the Python language with the TensorFlow Lite framework. We focus on sorting food and vegetables from images: after processing the images, the application sorts and grades them, producing fewer errors than manual, human-based grading. Digital picture datasets were created, and the collected images were arranged by class. The classification accuracy of the system was about 94%. As fruits and vegetables play a main role in day-to-day life, their quality is essential in evaluating agricultural produce, and customers always want to buy good-quality fruits and vegetables. Many customers suffer from unhealthy fruits and vegetables supplied where no proper quality measurement is followed. The developed software measures the quality of fruits and vegetables from images and reports whether they are fresh or rotten. The algorithms reviewed in this work include digital image processing, ResNet, VGG16, CNN, and transfer learning for grading feature extraction.
Keywords: deep learning, computer vision, image processing, rotten fruit detection, fruits quality criteria, vegetables quality criteria
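A heavily simplified stand-in for the grading step can illustrate the idea (the actual system uses CNN features such as VGG16/ResNet rather than a pixel threshold); the threshold values and the synthetic "images" below are invented:

```python
import numpy as np

def rotten_fraction(gray):
    # gray: HxW grayscale image with values in [0, 1]; the fraction of dark
    # pixels is used here as a crude proxy for browned or rotten regions.
    return float((gray < 0.3).mean())

def grade(gray, threshold=0.2):
    # Binary fresh/rotten grade from the dark-pixel fraction.
    return "rotten" if rotten_fraction(gray) > threshold else "fresh"

fresh = np.full((8, 8), 0.8)    # uniformly bright fruit surface
spotted = np.full((8, 8), 0.8)
spotted[:4, :] = 0.1            # dark blemish covering half the image

print(grade(fresh))    # fresh
print(grade(spotted))  # rotten
```

A learned classifier replaces the hand-set threshold with features trained on labeled fresh/rotten images, which is what lifts accuracy to the reported ~94%.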
Procedia PDF Downloads 70
781 Exploring Students’ Alternative Conception in Vector Components
Authors: Umporn Wutchana
Abstract:
An open-ended problem and unstructured interviews were used to explore students’ conceptual and procedural understanding of vector components. The open-ended problem was designed based on a research instrument used in previous physics education research. Without a physical context, we asked students to find the magnitudes and draw the graphical forms of vector components. The open-ended problem was given to 211 first-year students of the faculty of science during the third (summer) semester of the 2014 academic year. The students spent approximately 15 minutes completing the open-ended problem during their second attempt at the General Physics I course, after they had failed it. Their responses were classified based on the similarity of the errors they contained. Then, an unstructured interview was conducted: 7 randomly selected students were asked to reason about and explain their answers. The results showed that 53% of the 211 students provided the correct numerical magnitudes of the vector components, while 10.9% of them confused the magnitude of the x-component with that of the y-component. Another 20.4% provided only symbols, and the remaining 15.6% gave no answer. When asked to draw the graphical forms of the vector components, only 10% of the 211 students drew them correctly. The majority produced errors that revealed alternative conceptions: 46.5% drew component vectors with magnitudes that were too long and/or too short, and 43.1% drew vectors in different forms or wrote down other symbols. Results from the unstructured interviews indicated that some students had simply memorized the method for obtaining the numerical magnitudes of the x- and y-components. Regarding the graphical form, some students thought that the component vectors should be shorter than the given vector so that, when combined, they would equal its length, while others thought that the component vectors should have the same length as the given vector. 
It is likely that many students did not develop a strong foundational understanding of vector components but instead learned by memorizing the solution, or the way to compute the magnitudes, and attributed little meaning to the concept.
Keywords: graphical vectors, vectors, vector components, misconceptions, alternative conceptions
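The misconception discussed above can be made concrete with a short check: the components of a vector recombine through the Pythagorean relation, rather than each component matching the original length or some fixed fraction of it. The magnitude and angle below are arbitrary example values:

```python
import math

A, theta = 10.0, math.radians(30.0)  # magnitude and direction of the vector
Ax = A * math.cos(theta)             # x-component, ~8.660
Ay = A * math.sin(theta)             # y-component, 5.0

# The components recombine to the original magnitude via A^2 = Ax^2 + Ay^2,
# the check that distinguishes correct component lengths from guessed ones.
recombined = math.hypot(Ax, Ay)
print(round(Ax, 3), round(Ay, 3), round(recombined, 3))  # 8.66 5.0 10.0
```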
Procedia PDF Downloads 188
780 Software Development for AASHTO and Ethiopian Roads Authority Flexible Pavement Design Methods
Authors: Amare Setegn Enyew, Bikila Teklu Wodajo
Abstract:
The primary aim of flexible pavement design is to ensure the development of economical and safe road infrastructure. However, failures can still occur due to improper or erroneous structural design. In Ethiopia, the design of flexible pavements relies on manual calculations and on selecting a pavement structure from a catalogue. The catalogue offers, in eight different charts, alternative structures for combinations of traffic and subgrade classes, as outlined in the Ethiopian Roads Authority (ERA) Pavement Design Manual 2001. Furthermore, design modification is allowed in accordance with the structural number principles outlined in the AASHTO 1993 Guide for Design of Pavement Structures. Nevertheless, the manual calculation and design process involves the use of nomographs, charts, tables, and formulas, which increases the likelihood of human errors and inaccuracies and may lead to unsafe or uneconomical road construction. To address this challenge, a software package called AASHERA has been developed for the AASHTO 1993 and ERA design methods, using the MATLAB language. The software accurately determines the required thicknesses of the flexible pavement surface, base, and subbase layers for the two methods. It also digitizes design inputs and references such as nomographs, charts, default values, and tables. Moreover, the software allows easier comparison of the two design methods in terms of results and construction cost. AASHERA's accuracy has been confirmed through comparisons with designs from handbooks and manuals. The software can help reduce human errors, inaccuracies, and time consumption compared to the conventional manual design methods employed in Ethiopia. AASHERA, with its validated accuracy, proves to be an indispensable tool for flexible pavement structure designers.
Keywords: flexible pavement design, AASHTO 1993, ERA, MATLAB, AASHERA
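The AASHTO 1993 structural number relation underlying the design modification mentioned above can be sketched as follows; the layer coefficients, drainage coefficients, and thicknesses below are illustrative textbook-style values, not figures from the paper or from AASHERA:

```python
def structural_number(layers):
    # AASHTO 1993 flexible pavement structural number:
    #   SN = sum(a_i * m_i * D_i)
    # with layer coefficient a_i, drainage coefficient m_i (1.0 for the
    # bound surface layer), and layer thickness D_i in inches.
    return sum(a * m * d for a, m, d in layers)

# Illustrative three-layer structure (coefficients and thicknesses assumed):
layers = [
    (0.44, 1.0, 4.0),   # asphalt concrete surface
    (0.14, 1.0, 8.0),   # granular base
    (0.11, 1.0, 10.0),  # granular subbase
]
print(round(structural_number(layers), 2))  # 3.98
```

A design tool like AASHERA inverts this relation, searching for layer thicknesses whose combined SN meets the value required by the traffic and subgrade inputs.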
Procedia PDF Downloads 63
779 A Compact Via-less Ultra-Wideband Microstrip Filter by Utilizing Open-Circuit Quarter Wavelength Stubs
Authors: Muhammad Yasir Wadood, Fatemeh Babaeian
Abstract:
With the development of ultra-wideband (UWB) systems, there is a high demand for UWB filters with low insertion loss, wide bandwidth, and a planar structure compatible with the other components of the UWB system. A microstrip interdigital filter is a great option for designing UWB filters. However, the presence of via holes in this structure creates difficulties in the fabrication procedure of the filter. Especially in the higher frequency band, any misalignment of a drilled via hole with the microstrip stubs causes large errors in the measurement results compared to the desired results. Moreover, in high-frequency designs the line widths of the stubs are very narrow, so highly precise small via holes are required, which increases the cost of fabrication significantly and carries a risk of fabrication errors. To combat this issue, this paper proposes a via-less UWB microstrip filter designed as a modification of a conventional interdigital bandpass filter. The novel approaches in this filter design are (1) replacing each via hole with a quarter-wavelength open-circuit stub to avoid manufacturing complexity, (2) using a bend structure to reduce unwanted coupling effects, and (3) minimizing the size. Using the proposed structure, a UWB filter operating in the frequency band of 3.9-6.6 GHz (1-dB bandwidth) was designed and fabricated. The promising simulation and measurement results are presented in this paper. The selected substrate for these designs was Rogers RO4003 with a thickness of 20 mils, a common substrate in industrial projects. The compact size of the proposed filter is highly beneficial for applications that require very miniature hardware.
Keywords: band-pass filters, inter-digital filter, microstrip, via-less
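As a rough sketch of the sizing involved, an open-circuit quarter-wavelength stub has physical length l = c / (4 f sqrt(eps_eff)). The ~5.25 GHz mid-band follows from the reported 3.9-6.6 GHz passband, but the effective permittivity of 2.6 for a line on RO4003 is an assumed value for illustration, not one computed from the actual line geometry:

```python
import math

def quarter_wave_stub_length(f_hz, eps_eff):
    # Physical length of an open-circuit quarter-wavelength stub:
    #   l = c / (4 * f * sqrt(eps_eff))
    c = 299_792_458.0  # speed of light in vacuum, m/s
    return c / (4.0 * f_hz * math.sqrt(eps_eff))

f0 = 5.25e9           # mid-band of the 3.9-6.6 GHz passband
eps_eff = 2.6         # assumed effective permittivity (illustrative)
length_mm = quarter_wave_stub_length(f0, eps_eff) * 1e3
print(round(length_mm, 2))  # ~8.85 mm
```

Millimeter-scale stub lengths with sub-millimeter line widths are what make via alignment so costly at these frequencies, motivating the via-less design.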
Procedia PDF Downloads 156
778 The Effect of Explicit Focus on Form on Second Language Learning Writing Performance
Authors: Keivan Seyyedi, Leila Esmaeilpour, Seyed Jamal Sadeghi
Abstract:
Investigating the effectiveness of explicit focus on form on the written performance of EFL learners was the aim of this study. To provide empirical support, sixty male English learners were selected and randomly assigned to two groups: explicit focus on form and meaning-focused. Narrative writing was employed for data collection. To measure writing performance, participants were required to narrate a story; they were given 20 minutes to finish the task and were asked to write at least 150 words. The participants’ output was coded and then analyzed using an independent t-test for the grammatical accuracy and fluency of the learners’ performance. Results indicated that learners in the explicit focus-on-form group appear to benefit from error correction and rule explanation, two pedagogical techniques of explicit focus on form, with respect to accuracy; regarding fluency, however, they did not differ significantly from the participants of the meaning-focused group.
Keywords: explicit focus on form, rule explanation, accuracy, fluency
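The statistical comparison described above, an independent two-sample t-test on the groups' scores, can be sketched as follows. The accuracy scores below are invented for illustration; only the test itself reflects the abstract's analysis.

```python
# Illustrative sketch: pooled-variance independent t-test comparing two
# groups' accuracy scores. The score lists are invented placeholders.
import math

def independent_t(sample_a, sample_b):
    """Return the pooled-variance two-sample t statistic and degrees of freedom."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

# Hypothetical error-free-clause ratios for the two groups
focus_on_form = [0.82, 0.79, 0.85, 0.81, 0.78]
meaning_focused = [0.71, 0.74, 0.69, 0.73, 0.70]
t_stat, df = independent_t(focus_on_form, meaning_focused)
```

With the invented data, the t statistic well exceeds the two-tailed critical value of 2.306 at α = .05 with df = 8, mirroring the significant accuracy difference the abstract reports.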
Procedia PDF Downloads 511
777 Evaluating the Dosimetric Performance for 3D Treatment Planning System for Wedged and Off-Axis Fields
Authors: Nashaat A. Deiab, Aida Radwan, Mohamed S. Yahiya, Mohamed Elnagdy, Rasha Moustafa
Abstract:
This study evaluates the dosimetric performance of our institution's 3D treatment planning system for wedged and off-axis 6 MV photon beams, guided by the recommended QA tests documented in AAPM TG-53, the NCS Report 15 test packages, IAEA TRS-430, and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6, and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Ten tests were applied on a solid water-equivalent phantom along with a 2D array dose detection system. The doses calculated using the 3D treatment planning system PrecisePLAN were compared with measured doses to make sure that the dose calculations are accurate for simple situations such as square and elongated fields, different SSDs, beam modifiers (e.g., wedges, blocks, MLC-shaped fields), and asymmetric collimator settings. The QA results showed dosimetric accuracy of the TPS within the specified tolerance limits. Exceptions were the large elongated wedged field, where errors on the central axis and outside the central axis are 0.2% and 0.5%, respectively, and the planned and off-axis elongated fields, where errors in the region outside the central axis of the beam are 0.2% and 1.1%, respectively. The investigated dosimetric results yielded differences within the accepted tolerance level as recommended. Differences between dose values predicted by the TPS and measured values at the same point result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
Keywords: quality assurance, dose calculation, wedged fields, off-axis fields, 3D treatment planning system, photon beam
Procedia PDF Downloads 445
776 Disadvantages and Drawbacks of Concrete Blocks and Fix Their Defects
Authors: Ehsan Sadie
Abstract:
Today, the cost of repairing and maintaining structures is substantial. Studying the behavior of reinforced concrete structures reveals several factors that reduce their durability: design and calculation errors, improper implementation of structural changes, damage caused by the introduction of random loads, concrete corrosion, and environmental conditions. Meanwhile, alterations to building codes also change how designs and structures are assessed and reviewed, so that, where necessary, structures can be improved and strengthened in the future.
Keywords: concrete building, expandable cement, honeycombed surface, reinforcement corrosion
Procedia PDF Downloads 441
775 A Chinese Nested Named Entity Recognition Model Based on Lexical Features
Abstract:
In the field of named entity recognition, most research has been conducted on simple entities. Nested named entities, which contain entities within entities, have been difficult to identify accurately due to their boundary ambiguity. In this paper, a hierarchical recognition model is constructed based on the grammatical structure and semantic features of Chinese text, with entity boundaries calculated from lexical features. The analysis is carried out at different levels in terms of granularity, semantics, and lexicality, respectively, avoiding repetitive work to reduce computational effort and using the semantic features of words to calculate the boundaries of entities to improve the accuracy of the recognition work. The results of experiments carried out on web-based microblogging data show that the model achieves an accuracy of 86.33% and an F1 value of 89.27% in recognizing nested named entities, making up for the shortcomings of some previous recognition models and improving the efficiency of recognition of nested named entities.
Keywords: coarse-grained, nested named entity, Chinese natural language processing, word embedding, T-SNE dimensionality reduction algorithm
Procedia PDF Downloads 128
774 Factors Affecting Contractual Disputes in Construction Projects in Sri Lanka
Authors: R. M. Rajapaksa
Abstract:
The construction industry is one of the key players in driving the economy of a country to achieve its prosperity. However, disputes are one of the crucial factors preventing the completion of construction contracts within the budgeted cost, scheduled time, and accepted quality, and they are inevitable in construction contracts. Accordingly, a study has been undertaken to identify the factors affecting contractual disputes in construction projects in Sri Lanka. The study took a mixed approach, primarily qualitative with a minor quantitative component. The qualitative study took the form of in-depth interviews with eighteen participants, and the quantitative study was conducted using a questionnaire with twenty-four respondents from projects previously implemented by the National Water Supply & Drainage Board, representing the employer, engineer, and contractor, to identify the factors affecting contractual disputes and to verify the most critical factors, respectively. Data analysis for the qualitative and quantitative studies was carried out by means of transcribing, coding and categorizing, and average-score methods, respectively. The study reveals that there are forty factors affecting contractual disputes in construction contracts in Sri Lanka. The findings further illustrate that the most critical factors are: conflicting decisions by inexperienced personnel in higher positions of the employer; ambiguities resulting from inadequate descriptions of the preliminary/general items in the price schedule; unfair valuation and late confirmation of variations; unfair determination due to lack of experience of the engineer/consultant; under-certification of progress payments; unfair granting of EOT and application of delay damages; unreasonable claims for variation of works; errors/discrepancies/ambiguities in the contract conditions; and discrepancies and errors in designs and specifications.
Finally, the study proposed remedial measures for the most critical factors affecting contractual disputes.
Keywords: dispute, contractual, factors, employer, engineer, contractor, construction projects
Procedia PDF Downloads 216
773 Morphological Analysis of Manipuri Language: Wahei-Neinarol
Authors: Y. Bablu Singh, B. S. Purkayashtha, Chungkham Yashawanta Singh
Abstract:
Morphological analysis forms the basic foundation of NLP applications, including syntax parsing, machine translation (MT), information retrieval (IR), and automatic indexing, in all languages. It is a field of linguistics that can provide valuable information for computer-based linguistic tasks such as lemmatization and the study of the internal structure of words. Computational morphology is the application of morphological rules in the field of computational linguistics; it is an emerging area in AI that studies the structure of words, which are formed by combining smaller units of linguistic information, called morphemes: the building blocks of words. Morphological analysis provides information about the semantic and syntactic role of a word in a sentence. It analyzes Manipuri word forms and produces the grammatical information associated with the words. The morphological analyzer for Manipuri has been tested on 3,500 Manipuri words in Shakti Standard Format (SSF) using Meitei Mayek as the source; an accuracy of 80% has thereby been obtained on a manual check.
Keywords: morphological analysis, machine translation, computational morphology, information retrieval, SSF
Procedia PDF Downloads 326
772 The Guaranteed Detection of the Seismoacoustic Emission Source in the C-OTDR Systems
Authors: Andrey V. Timofeev
Abstract:
A method is proposed for stable detection of seismoacoustic sources in C-OTDR systems that guarantees given upper bounds on the probabilities of type I and type II errors. Properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR system are presented in this paper.
Keywords: guaranteed detection, C-OTDR systems, change point, interval estimation
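The notion of a guaranteed type I error bound can be illustrated with a much simpler stand-in detector than the paper's. In the hedged sketch below (all numbers invented, not the C-OTDR algorithm), the alarm threshold on a window-averaged signal is set at the (1 − α) quantile of the window mean under a Gaussian noise-only model, so a false alarm occurs with probability at most α per window.

```python
# Minimal, hypothetical sketch of detection with a bounded false-alarm rate.
# Under a zero-mean Gaussian noise model with known sigma, the window mean is
# Gaussian with std sigma/sqrt(window); its (1 - alpha) quantile bounds the
# per-window type I error probability by alpha.
from statistics import NormalDist

def detection_threshold(sigma, window, alpha):
    """Threshold on a window mean that bounds the false-alarm rate by alpha."""
    return NormalDist(0.0, sigma / window ** 0.5).inv_cdf(1.0 - alpha)

def detect(samples, window, threshold):
    """Alarm if any sliding-window mean exceeds the threshold."""
    means = [sum(samples[i:i + window]) / window
             for i in range(len(samples) - window + 1)]
    return any(m > threshold for m in means)

thr = detection_threshold(sigma=1.0, window=25, alpha=0.01)
quiet = [0.0] * 100                              # no source present
burst = [0.0] * 50 + [2.0] * 25 + [0.0] * 25     # emission-like level shift
```

Bounding the type II error additionally requires assumptions on the minimum source amplitude, which is where interval estimation of the change point enters in the paper's setting.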
Procedia PDF Downloads 256
771 Gesture in the Arabic and Malay Languages a Comparative Study
Authors: Siti Sara binti Hj Ahmad, Adil Elshiekh Abdalla
Abstract:
The Arabic and Malay languages belong to different language families: while Arabic descends from the Semitic family, Malay belongs to the Austronesian (Malayo-Polynesian) family. Hence, the grammatical systems of the two languages differ from each other. That Arabic arose in the heart of the desert, while Malay arose in the heart of thick equatorial forests, is another source of vital cultural differences. Consequently, it is expected that this situation creates differences in how speakers of the two languages perceive the world around them and convey and understand their messages. On the other hand, as the majority of speakers of Malay are Muslims, the Arabic language found its way into this region; currently, Arabic is widely taught in schools, and some of its terms have found their way into the Malay language. Accordingly, the Arabic language and culture have widely penetrated the Malay language. This study is proposed with the aim of finding the differences and similarities between the two languages in terms of nonverbal communication. The result of this study will be of high significance, as it will help in enhancing mutual understanding between speakers of these languages. A comparative analysis approach will be utilized in this study.
Keywords: gesture, Arabic language, Malay language, comparative analysis
Procedia PDF Downloads 567
770 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge
Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi
Abstract:
Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and the adequate intervention will most probably reduce future maintenance costs, minimize downtime, and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial neural networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is presently still in operation despite obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, which comprises concepts like clustering analysis and statistical hypothesis testing. These enable us to interpret the obtained prediction errors, draw conclusions about the state of the structure, and thus support decision making regarding its maintenance.
Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring
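The decision step described above, turning prediction errors into a statement about the structure's state, can be sketched in miniature. The Python fragment below is a hypothetical stand-in: it replaces the trained neural network with precomputed error lists and uses a simple mean-shift criterion rather than the study's full clustering and hypothesis-testing machinery.

```python
# Hedged sketch of the decision step: prediction errors from a model of the
# healthy structure are compared against a baseline error distribution, and a
# sustained increase flags a possible anomaly. Error values are invented.
import math

def anomaly_flag(baseline_errors, new_errors, k=3.0):
    """Flag if the mean new error exceeds baseline mean + k standard errors."""
    n = len(baseline_errors)
    mu = sum(baseline_errors) / n
    sd = math.sqrt(sum((e - mu) ** 2 for e in baseline_errors) / (n - 1))
    sem = sd / math.sqrt(len(new_errors))        # standard error of the mean
    return (sum(new_errors) / len(new_errors)) > mu + k * sem

healthy = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.11, 0.10]   # baseline errors
degraded = [0.25, 0.28, 0.24, 0.27, 0.26, 0.29, 0.25, 0.27]  # shifted errors
```

A criterion of this shape is what lets prediction-error monitoring support maintenance decisions: the flag, not the raw errors, is what a decision maker acts on.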
Procedia PDF Downloads 208
769 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured GNSS-Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS)-denied settings such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished by employing an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly and severely degrades the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a precise and accurate solution. In indoor environments, where no GNSS or other a priori information about the environment is available, effective sensor fusion is difficult to achieve, as accurate aiding sensor choices are sparse. However, an opportunity arises by employing a depth camera in the indoor environment. A depth camera can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces can serve as an accurate aiding source, which directly combats errors that arise due to gyroscope imperfections. This configuration for sensor fusion leads to a dramatic reduction of PVA error compared to traditional aiding sensor configurations. This paper provides the theoretical basis for the depth camera aiding method, initial expectations of performance benefit via simulation, and a hardware implementation, thus verifying its veracity. The hardware implementation is performed on the Quanser Qbot 2™ mobile robot, with a VectorNav VN-200™ IMU and a Kinect™ camera from Microsoft.
Keywords: autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion
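The fusion idea above, gyro integration corrected by camera-derived attitude, can be reduced to a one-dimensional sketch. The scalar Kalman filter below is purely illustrative: the gyro bias, noise variances, and update schedule are invented, and the real system estimates full PVA, not a single angle.

```python
# Minimal, hypothetical 1-D illustration: one attitude angle is propagated by
# integrating a biased gyro and corrected with an intermittent absolute
# attitude measurement, as a depth camera could supply from wall/floor planes.
# All noise parameters are invented for illustration.

def kalman_fuse(gyro_rates, cam_measurements, dt, q=1e-3, r=1e-2):
    """Return fused angle estimates; cam_measurements may contain None."""
    angle, p = 0.0, 1.0          # state estimate and its variance
    fused = []
    for rate, z in zip(gyro_rates, cam_measurements):
        angle += rate * dt       # predict: integrate the gyro rate
        p += q                   # process noise grows the uncertainty
        if z is not None:        # update: camera-derived attitude available
            k = p / (p + r)      # Kalman gain
            angle += k * (z - angle)
            p *= (1.0 - k)
        fused.append(angle)
    return fused

# Biased gyro (true rate 0, bias 0.05 rad/s); camera reports ~0 every 5th step
n, dt = 100, 0.1
gyro = [0.05] * n
cam = [0.0 if i % 5 == 0 else None for i in range(n)]
est = kalman_fuse(gyro, cam, dt)
```

Without aiding, the bias alone would drift the estimate to 0.5 rad over the run; the intermittent camera updates keep the fused estimate bounded near zero, which is the error-combating effect the abstract describes.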
Procedia PDF Downloads 207
768 Collaboration During Planning and Reviewing in Writing: Effects on L2 Writing
Authors: Amal Sellami, Ahlem Ammar
Abstract:
Writing is acknowledged to be a cognitively demanding and complex task. Indeed, the writing process is composed of three iterative sub-processes, namely planning, translating (writing), and reviewing. Not only do second or foreign language learners need to write according to this process, but they also need to respect the norms and rules of language and writing in the text to be produced. Accordingly, researchers have suggested approaching writing as a collaborative task in order to alleviate its complexity. Consequently, collaboration has been implemented during the whole writing process or only during planning or reviewing. Researchers report that implementing collaboration during the whole process might be demanding in terms of time in comparison to individual writing tasks; because of time constraints, teachers may therefore avoid it. For this reason, it might be pedagogically more realistic to limit collaboration to one of the writing sub-processes (i.e., planning or reviewing). However, previous research implementing collaboration in planning or reviewing is limited and fails to explore the effects of these conditions on the written text. Consequently, the present study examines the effects of collaboration in planning and collaboration in reviewing on the written text. To reach this objective, quantitative as well as qualitative methods are deployed to examine the written texts holistically and in terms of fluency, complexity, and accuracy. Participants of the study include 4 pairs in each group (n=8). They participated in two experimental conditions: (1) collaborative planning followed by individual writing and individual reviewing, and (2) individual planning followed by individual writing and collaborative reviewing.
The comparative research findings indicate that while collaborative planning resulted in better overall text quality (specifically, better content and organization ratings), better fluency, better complexity, and fewer lexical errors, collaborative reviewing produced better accuracy and fewer syntactic and mechanical errors. The discussion of the findings suggests the need for more comparative research to further explore the effects of collaboration in planning or in reviewing. Pedagogical implications of the current study include advising teachers to choose between implementing collaboration in planning or in reviewing depending on their students’ needs and what they need to improve.
Keywords: collaboration, writing, collaborative planning, collaborative reviewing
Procedia PDF Downloads 99
767 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentration with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be amplified hugely during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in a next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Pade iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit of +/-0.03 is achieved. In sum, 70% of all cases stay below +/-0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
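The amplification of small measurement errors by small singular values, and its suppression by truncation, can be seen in a toy diagonal system, where the SVD is just the diagonal itself. The sketch below is a deliberately minimal illustration of the truncated-SVD idea, not the hybrid three-parameter method described above.

```python
# Toy illustration of truncated SVD (TSVD) regularization on a diagonal
# system: the tiny singular value blows up small measurement errors, and
# zeroing that mode stabilizes the solution. Purely illustrative values.

def tsvd_solve_diagonal(singular_values, b, cutoff):
    """Solve diag(s) x = b, zeroing components whose singular value < cutoff."""
    return [bi / si if si >= cutoff else 0.0
            for si, bi in zip(singular_values, b)]

s = [2.0, 1.0, 1e-8]             # last mode is nearly singular (ill-posed)
x_true = [1.0, 1.0, 1.0]
noise = [0.001, -0.001, 0.001]   # small measurement errors
b = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]

naive = tsvd_solve_diagonal(s, b, cutoff=0.0)         # keep every mode
regularized = tsvd_solve_diagonal(s, b, cutoff=1e-3)  # truncate tiny modes
```

Choosing the cutoff is exactly the regularization-parameter problem the abstract describes: too low and noise dominates, too high and real structure in the solution is discarded.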
Procedia PDF Downloads 343
766 Leveraging Remote Assessments and Central Raters to Optimize Data Quality in Rare Neurodevelopmental Disorders Clinical Trials
Authors: Pamela Ventola, Laurel Bales, Sara Florczyk
Abstract:
Background: Fully remote or hybrid administration of clinical outcome measures in rare neurodevelopmental disorders trials is increasing due to the ongoing pandemic and the recognition that remote assessments reduce the burden on families. Many assessments in rare neurodevelopmental disorders trials are complex; however, remote/hybrid trials readily allow for the use of centralized raters to administer and score the scales. The use of centralized raters has many benefits, including reducing site burden; however, a specific impact on data quality has not yet been determined. Purpose: The current study has two aims: a) evaluate differences in data quality between administrations of a standardized clinical interview completed by centralized raters and those completed by site raters, and b) evaluate the improvement in scoring accuracy of standardized developmental assessments when scored centrally compared to when scored by site raters. Methods: For aim 1, the Vineland-3, a widely used measure of adaptive functioning, was administered by site raters (n=52) participating in one of four rare disease trials. The measure was also administered as part of two additional trials that utilized central raters (n=7). Each rater completed a comprehensive training program on the assessment. Following completion of the training, each clinician completed a Vineland-3 with a mock caregiver. Administrations were recorded and reviewed by a neuropsychologist for administration and scoring accuracy. Raters were able to certify for the trials after demonstrating an accurate administration of the scale. For site raters, 25% of each rater’s in-study administrations were reviewed by a neuropsychologist for accuracy of administration and scoring. For central raters, the first two administrations and every 10th administration were reviewed.
Aim 2 evaluated the added benefit of centralized scoring on the accuracy of scoring of the Bayley-3, a comprehensive developmental assessment widely used in rare neurodevelopmental disorders trials. Bayley-3 administrations across four rare disease trials were centrally scored. For all administrations, the site rater who administered the Bayley-3 scored the scale, and a centralized rater reviewed the video recordings of the administrations and also scored the scales to confirm accuracy. Results: For aim 1, site raters completed 138 Vineland-3 administrations. Of the 138 administrations, 53 were reviewed by a neuropsychologist. Four of the administrations had errors that compromised the validity of the assessment. The central raters completed 180 Vineland-3 administrations; 38 administrations were reviewed, and none had significant errors. For aim 2, 68 administrations of the Bayley-3 were reviewed and scored by both a site rater and a centralized rater. Of these administrations, 25 had errors in scoring that were corrected by the central rater. Conclusion: In rare neurodevelopmental disorders trials, sample sizes are often small, so data quality is critical. The use of central raters inherently decreases site burden, but it also decreases rater variance, as illustrated by the small team of central raters (n=7) needed to conduct all of the assessments (n=180) in these trials compared to the number of site raters (n=52) required for even fewer assessments (n=138). In addition, the use of central raters dramatically improves the quality of scoring of the assessments.
Keywords: neurodevelopmental disorders, clinical trials, rare disease, central raters, remote trials, decentralized trials
Procedia PDF Downloads 172
765 Changing Misconceptions in Heat Transfer: A Problem Based Learning Approach for Engineering Students
Authors: Paola Utreras, Yazmina Olmos, Loreto Sanhueza
Abstract:
This work has the purpose of studying and incorporating Problem-Based Learning (PBL) for engineering students through the analysis of several thermal images of dwellings located in different geographical points of the Region de los Ríos, Chile. The students analyze how heat is transferred in and out of the houses and how heat transfer relates to the climatic conditions that affect each zone. As a result of this activity, students are able to acquire significant learning in the unit on heat and temperature and manage to reverse previous conceptual errors related to energy, temperature, and heat. In addition, students are able to generate prototype solutions to increase thermal efficiency using low-cost materials. Students make their results public in a report using scientific writing standards and at a science fair open to the entire university community. The methodology used to measure previous conceptual errors has been to apply, before the unit, diagnostic tests with everyday questions that involve the concepts of heat, temperature, work, and energy. After the unit, the same evaluation is done so that students themselves can evidence the evolution in their construction of knowledge. As a result, we found that in the initial test, 90% of the students showed deficiencies in the concepts previously mentioned, while in the subsequent test 47% showed deficiencies; these percentages differ between students taking the course for the first time and those who have taken it previously in a traditional way. The methodology used to measure significant learning has been to compare results in subsequent thermodynamics courses between students who have received problem-based learning and those who have received traditional training.
We have observed that learning becomes meaningful when applied to the daily lives of students, promoting the internalization of knowledge and understanding through critical thinking.
Keywords: engineering students, heat flow, problem-based learning, thermal images
Procedia PDF Downloads 231