Search results for: refractive errors
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1111

151 Microfabrication and Non-Invasive Imaging of Porous Osteogenic Structures Using Laser-Assisted Technologies

Authors: Irina Alexandra Paun, Mona Mihailescu, Marian Zamfirescu, Catalin Romeo Luculescu, Adriana Maria Acasandrei, Cosmin Catalin Mustaciosu, Roxana Cristina Popescu, Maria Dinescu

Abstract:

A major concern in bone tissue engineering is to develop complex 3D architectures that mimic the natural cell environment, facilitate cell growth in a defined manner and allow the flow transport of nutrients and metabolic waste. In particular, porous structures of controlled pore size and positioning are indispensable for growing human-like bone structures. Another concern is to monitor both the structures and the seeded cells with high spatial resolution and without interfering with the cells' natural environment. The present approach relies on laser-based technologies employed for fabricating porous biomimetic structures that support the growth of osteoblast-like cells and for their non-invasive 3D imaging. Specifically, the porous structures were built by two-photon polymerization direct writing (2PP_DW) of the commercially available photoresist IP-L780, using the Photonic Professional 3D lithography system. The structures consist of vertical tubes with micrometer-sized heights and diameters, in a honeycomb-like spatial arrangement. These were fabricated by irradiating the IP-L780 photoresist with focused laser pulses with a wavelength centered at 780 nm, 120 fs pulse duration and 80 MHz repetition rate. The samples were precisely scanned in 3D by piezo stages, while coarse positioning was done by XY motorized stages. The scanning path was programmed through a writing language (GWL) script developed by Nanoscribe. Following laser irradiation, the unexposed regions of the photoresist were washed out by immersing the samples in Propylene Glycol Monomethyl Ether Acetate (PGMEA). The porous structures were seeded with osteoblast-like MG-63 cells, and their osteogenic potential was tested in vitro. The cell-seeded structures were analyzed in 3D using digital holographic microscopy (DHM). DHM is a marker-free, high-spatial-resolution imaging tool in which the hologram acquisition is performed non-invasively, i.e., without interfering with the cells' natural environment. Following hologram recording, a digital algorithm provided a 3D image of the sample, as well as information about its refractive index, which is correlated with the intracellular content. The axial resolution of the images went down to the nanoscale, while the temporal scales ranged from milliseconds up to hours. The hologram did not involve sample scanning, and the whole image was available in a single frame covering a 200 μm field of view. The processing of the digital holograms provided 3D quantitative information on the porous structures and allowed a quantitative analysis of the cellular response with respect to the porous architectures. The cellular shape and dimensions were found to be influenced by the underlying micro-relief. Furthermore, the intracellular content gave evidence of the beneficial role of the porous structures in promoting osteoblast differentiation. In all, the proposed laser-based protocol emerges as a promising tool for the fabrication and non-invasive imaging of porous constructs for bone tissue engineering. Acknowledgments: This work was supported by a grant of the Romanian Authority for Scientific Research and Innovation, CNCS-UEFISCDI, project PN-II-RU-TE-2014-4-2534 (contract 97 from 01/10/2015) and by UEFISCDI PN-II-PT-PCCA no. 6/2012. A part of this work was performed in the CETAL laser facility, supported by the National Program PN 16 47 - LAPLAS IV.
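
The numerical reconstruction step of DHM can be illustrated with a short sketch. The following is a minimal, assumption-laden example (synthetic hologram, assumed wavelength, pixel pitch and propagation distance, not the instrument's actual algorithm): it back-propagates a complex field with the angular spectrum method and reads out the phase, which is what relates the hologram to optical path length and hence to the refractive index of the sample.

```python
# Minimal sketch of angular-spectrum back-propagation (assumed parameters,
# synthetic field); not the Photonic Professional / DHM vendor algorithm.
import numpy as np

wavelength = 633e-9          # assumed reconstruction wavelength (m)
pixel = 2e-6                 # assumed detector pixel pitch (m)
n = 512                      # hologram size (pixels)
z = -1e-3                    # back-propagation distance to the object plane (m)

# Synthetic complex field at the detector standing in for a recorded hologram
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
field = np.exp(1j * 0.5 * np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / 40.0**2))

# Angular spectrum propagation: FFT, multiply by transfer function, inverse FFT
fx = np.fft.fftfreq(n, d=pixel)
FX, FY = np.meshgrid(fx, fx, indexing="ij")
k = 2 * np.pi / wavelength
kz = np.sqrt(np.maximum(0.0, k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2))
H = np.exp(1j * kz * z)                       # free-space transfer function
reconstructed = np.fft.ifft2(np.fft.fft2(field) * H)

phase = np.angle(reconstructed)               # phase map ~ optical path length
print("phase range (rad):", phase.min(), phase.max())
```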

Keywords: biomimetic, holography, laser, osteoblast, two photon polymerization

Procedia PDF Downloads 273
150 Nurse-Led Codes: Practical Application in the Emergency Department during a Global Pandemic

Authors: F. DelGaudio, H. Gill

Abstract:

Resuscitation during cardiopulmonary arrest (CPA) is a dynamic, high-stress, high-acuity situation, which can easily lead to communication breakdown and errors. The care of these high-acuity patients has also been shown to increase the physiologic stress and task saturation of providers, which can negatively impact the care being provided. These difficulties are further complicated during a global pandemic and pose a significant safety risk to bedside providers. Nurse-led codes are a relatively new concept that may be a potential solution for alleviating some of these difficulties. An experienced nurse who has completed advanced cardiac life support (ACLS) and additional training assumes the responsibility of directing the mechanics of the appropriate ACLS algorithm. This is done in conjunction with a physician, who retains the role of physician leader. The additional nurse-led code training included a multi-disciplinary in situ simulation of a CPA in a suspected COVID-19 patient. During the CPA, the nurse leader's responsibilities include ensuring adequate compression depth and rate, minimizing interruptions in chest compressions, timing rhythm/pulse checks, and ensuring appropriate medication administration. In addition, the nurse leader functions as a last-line safety check for appropriate personal protective equipment and for limiting staff exposure. The use of nurse-led codes for CPA has been shown to decrease the cognitive overload and task saturation of the physician, as well as to limit the number of staff exposed to a potentially infectious patient. Real-world application has allowed physicians to perform and oversee high-risk procedures such as intubation, line placement, and point-of-care ultrasound without sacrificing the integrity of the resuscitation. Nurse-led codes have also given the physician the bandwidth to review pertinent medical history and advance directives, determine reversible causes, and hold end-of-life conversations with family. While there is a paucity of research on the effectiveness of nurse-led codes, there are many potentially significant benefits. In addition to its value during a pandemic, the approach may also be beneficial during complex circumstances such as extracorporeal cardiopulmonary resuscitation.

Keywords: cardiopulmonary arrest, COVID-19, nurse-led code, task saturation

Procedia PDF Downloads 155
149 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching

Authors: Denise Levy, Anna Lucia C. H. Villavicencio

Abstract:

In a rapidly changing world, science teachers face considerable challenges. In addition to the basic curriculum, several transversal themes must be included, and these demand creative and innovative strategies to be arranged and integrated into traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves seem to be unaware of the issue, most often perpetuating prejudice, errors and misconceptions. This article presents the authors' experience in the development of an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to best practices for improving learning processes, privileging constructivist educational techniques with an emphasis on active learning, collaborative learning and learning through research. The material consists of an illustrated book for students, a book for teachers and a manual with activities that can articulate nuclear science with different disciplines: Portuguese, mathematics, science, art, English, history and geography. The content maintains high scientific rigor and articulates nuclear technology with topics of interest to society in the most diverse spheres, such as food supply, public health, food safety and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential value of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include the education of the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and significant approach. It is expected that the interdisciplinary pedagogical proposal will contribute to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity and encouraging critical thinking skills.

Keywords: science education, interdisciplinary learning, nuclear science, scientific literacy

Procedia PDF Downloads 133
148 Understanding the Influence of Fibre Meander on the Tensile Properties of Advanced Composite Laminates

Authors: Gaoyang Meng, Philip Harrison

Abstract:

When manufacturing composite laminates, the fibre directions within the laminate are never perfectly straight and inevitably contain some degree of stochastic in-plane waviness or 'meandering'. In this work, we aim to understand the relationship between the degree of meandering of the fibre paths and the resulting uncertainty in the laminate's final mechanical properties. To do this, a numerical tool is developed to automatically generate meandering fibre paths in each of the laminate's eight plies (using Matlab); after mapping this information into finite element simulations (using Abaqus), the statistical variability of the tensile mechanical properties of a [45°/90°/-45°/0°]s carbon/epoxy (IM7/8552) laminate is predicted. The stiffness, first-ply failure strength and ultimate failure strength are obtained. Results are generated by inputting the degree of variability in the fibre paths, and the laminate is then examined in all directions (from 0° to 359° in increments of 1°). The resulting predictions are output as flower (polar) plots for convenient analysis. The average fibre orientation of each ply in a given laminate is determined by the laminate layup code [45°/90°/-45°/0°]s. However, in each case, the plies contain increasingly large amounts of in-plane waviness (quantified by the standard deviation of the fibre direction in each ply across the laminate). Four different amounts of variability in the fibre direction are tested (2°, 4°, 6° and 8°). Results show that both the average tensile stiffness and the average tensile strength decrease, while the standard deviations increase, with an increasing degree of fibre meander. The variability in stiffness is found to be relatively insensitive to the rotation angle, but the variability in strength is sensitive. Specifically, the uncertainty in laminate strength is relatively low at orientations centred around multiples of 45° and relatively high between these rotation angles. To concisely represent all the information contained in the various polar plots, rotation-angle-dependent Weibull distribution equations are fitted to the data. The resulting equations can be used to quickly estimate the size of the error bars for the different mechanical properties resulting from the amount of fibre directional variability contained within the laminate. A longer-term goal is to use these equations to quickly introduce realistic variability at the component level.
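
A minimal sketch of the two ingredients described above, generating spatially correlated fibre-angle fields with a prescribed standard deviation for each ply and fitting a Weibull distribution to strength samples; the grid size, correlation length and strength statistics are assumptions for illustration, not the authors' Matlab/Abaqus tool:

```python
# Sketch only: stochastic in-plane waviness per ply plus a Weibull fit to
# hypothetical strength samples at one loading angle (all numbers assumed).
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(0)
layup = [45, 90, -45, 0, 0, -45, 90, 45]       # nominal ply angles (deg)
sigma_theta = 4.0                               # fibre-direction std dev (deg)
nx, ny = 64, 64                                 # in-plane grid for each ply

def meandering_ply(nominal, sigma, corr_px=8):
    """Nominal angle plus spatially correlated waviness with std dev `sigma`."""
    noise = rng.standard_normal((nx, ny))
    smooth = ndimage.gaussian_filter(noise, corr_px)   # introduce spatial correlation
    smooth *= sigma / smooth.std()                      # rescale to the target std dev
    return nominal + smooth

plies = np.stack([meandering_ply(a, sigma_theta) for a in layup])
print("per-ply std dev (deg):", plies.std(axis=(1, 2)).round(2))

# Hypothetical strength samples at one rotation angle (placeholders for FE output)
strengths = rng.weibull(20.0, 200) * 800.0      # MPa, illustrative only
shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
print(f"fitted Weibull shape = {shape:.1f}, scale = {scale:.0f} MPa")
```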

Keywords: advanced composite laminates, FE simulation, in-plane waviness, tensile properties, uncertainty quantification

Procedia PDF Downloads 89
147 Refusal Speech Acts in French Learners of Mandarin Chinese

Authors: Jui-Hsueh Hu

Abstract:

This study investigated models of refusal speech acts among three target groups: French learners of Mandarin Chinese (FM), Taiwanese native Mandarin speakers (TM), and native French speakers (NF). The refusal responses were analyzed in terms of their options, frequencies, and sequences and the contents of their semantic formulas. This study also examined differences in refusal strategies, as determined by social status and social distance, among the three groups. The difficulties of refusal speech acts encountered by FM were then generalized. The results indicated that Mandarin instructors of NF should focus on the different reasons for the pragmatic failure of French learners and should assist these learners in mastering refusal speech acts that rely on abundant cultural information. In this study, refusal strategies were mainly classified according to the research of Beebe et al. (1990). Discourse completion questionnaires were collected from TM, FM, and NF, and their responses were compared to determine how refusal strategies differed among the groups. This study not only emphasized the dissimilarities in refusal strategies between native Mandarin speakers and second-language Mandarin learners but also used NF as a control group. The results demonstrated that, regarding overall strategies, FM were biased toward NF in terms of strategy choice, order, and content, resulting in pragmatic transfer; under the influence of social factors such as 'social status' and 'social distance,' the strategy choices of FM remained closer to those of NF, again revealing the pragmatic transfer of FM. Regarding the refusal difficulties among the three groups, the F-test in the analysis of variance revealed statistical significance for Role-Playing Items 13 and 14 (p < 0.05), indicating a difference in the average refusal difficulty between the groups. However, after multiple comparisons, it was found that Item 13 (an unfamiliar junior colleague of the opposite sex requesting contact details) was significantly more difficult for NF than for TM and FM, while Item 14 (contact details requested by an unfamiliar classmate of the opposite sex) was significantly more difficult to refuse for NF than for TM. This study summarized the pragmatic language errors that FM most often make, including the misuse or absence of modal words, hedging expressions, and sentence-final empty words (particles), as the reasons for pragmatic failures. The common social pragmatic failures of FM include misjudging the appropriate level of directness and formality.
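
A minimal sketch of the reported F-test, using hypothetical difficulty ratings rather than the study's questionnaire data:

```python
# Sketch only: one-way ANOVA comparing mean refusal-difficulty ratings for one
# role-play item across the three groups (FM, TM, NF); data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fm = rng.normal(3.2, 0.8, 30)   # illustrative difficulty ratings, group FM
tm = rng.normal(3.0, 0.8, 30)   # group TM
nf = rng.normal(3.9, 0.8, 30)   # group NF

f_stat, p_value = stats.f_oneway(fm, tm, nf)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # p < 0.05 -> group means differ
```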

Keywords: French Mandarin, interlanguage refusal, pragmatic transfer, speech acts

Procedia PDF Downloads 254
146 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression essential before its transmission over wireless channels. The High Efficiency Video Codec (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed through the joint effort of the ITU-T and ISO/IEC teams. HEVC is targeted at high-resolution videos, such as 4K or 8K, that can fulfil recent demands for video services. The compression ratio achieved by HEVC is roughly twice that of its predecessor H.264/AVC at the same quality level. The compression efficiency is generally increased by removing more correlation between frames and pixels using complex techniques such as extensive intra- and inter-prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
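
A minimal sketch of the transmission chain described above, with 4-QAM over AWGN and a random payload standing in for the HEVC bitstream (a real experiment would feed actual encoder output through an FEC encoder before modulation):

```python
# Sketch only: 4-QAM (QPSK) over an AWGN channel with hard-decision demapping.
import numpy as np

rng = np.random.default_rng(2)
n_bits = 100_000
bits = rng.integers(0, 2, n_bits)              # stand-in for a channel-coded bitstream

# 4-QAM mapping: pairs of bits -> complex symbols with unit average energy
b = bits.reshape(-1, 2)
symbols = ((2 * b[:, 0] - 1) + 1j * (2 * b[:, 1] - 1)) / np.sqrt(2)

def awgn(x, snr_db):
    """Add complex Gaussian noise for a given per-symbol SNR in dB."""
    snr = 10 ** (snr_db / 10)
    noise = rng.standard_normal(x.shape) + 1j * rng.standard_normal(x.shape)
    return x + noise * np.sqrt(1 / (2 * snr))

for snr_db in (0, 5, 10):
    rx = awgn(symbols, snr_db)
    rx_bits = np.column_stack([rx.real > 0, rx.imag > 0]).astype(int).ravel()
    ber = np.mean(rx_bits != bits)
    print(f"SNR = {snr_db:2d} dB  ->  BER = {ber:.4f}")
```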

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 149
145 Consumer Protection Law for Users of Mobile Commerce as a Global Effort to Improve Business in Indonesia

Authors: Rina Arum Prastyanti

Abstract:

Information technology has changed the ways of transacting and enabled new opportunities in business transactions. Problems faced by m-commerce consumers include, among others: difficulty accessing full information about the products on offer and the forms of transactions, given the small screen and limited storage capacity; the need to protect children from various forms of excess supply and usage; errors in accessing and disseminating personal data; and more complex problems concerning agreements and dispute resolution mechanisms that can protect consumers and assure the security of personal data. No less important are the risks associated with payment and with the personal information involved in payment, which also require a solution. The purpose of this study is: 1) to describe the phenomenon of the use of mobile commerce in Indonesia; 2) to determine the form of legal protection for consumers using mobile commerce; and 3) to identify the right type of law to provide legal protection for consumers using mobile commerce. This is descriptive qualitative research using primary and secondary data sources; it is normative legal research, conducted through library research. The analysis technique used is deductive analysis. Growing mobile technology, more affordable prices and low rates driven by provider competition have increased the number of mobile users; Indonesia ranks fourth in the world in mobile phone users, with the number of mobile phones in Indonesia estimated at around 250.1 million for a population of 237,556,363. In Indonesia, legal protection for the use of mobile commerce is still only part of Law No. 11 of 2008 on Information and Electronic Transactions, and to date there is no rule of law that specifically regulates mobile commerce. A legal protection model that can be applied to protect consumers using mobile commerce should: ensure that consumers receive information about the potential security and privacy challenges they may face in m-commerce and the measures that can be used to limit the risk; encourage the development of security measures and built-in security features; encourage mobile operators to implement data security policies and measures to prevent unauthorized transactions; and provide methods of redress that are appropriate in both timeliness and effectiveness when consumers suffer financial loss.

Keywords: mobile commerce, legal protection, consumer, effectiveness

Procedia PDF Downloads 364
144 Sound Source Localisation and Augmented Reality for On-Site Inspection of Prefabricated Building Components

Authors: Jacques Cuenca, Claudio Colangeli, Agnieszka Mroz, Karl Janssens, Gunther Riexinger, Antonio D'Antuono, Giuseppe Pandarese, Milena Martarelli, Gian Marco Revel, Carlos Barcena Martin

Abstract:

This study presents an on-site acoustic inspection methodology for quality and performance evaluation of building components. The work focuses on global and detailed sound source localisation, achieved by successively performing acoustic beamforming and sound intensity measurements. A portable experimental setup is developed, consisting of an omnidirectional broadband acoustic source, a microphone array and a sound intensity probe. Three main acoustic indicators are of interest, namely the sound pressure distribution on the surface of components such as walls, windows and junctions, the three-dimensional sound intensity field in the vicinity of junctions, and the sound transmission loss of partitions. The measurement data is post-processed and converted into a three-dimensional numerical model of the acoustic indicators with the help of simultaneously acquired geolocation information. The three-dimensional acoustic indicators are then integrated into an augmented reality platform, superimposing them onto a real-time visualisation of the spatial environment. The methodology thus enables a measurement-supported inspection process of buildings and the correction of errors during construction and refurbishment. Two experimental validation cases are shown. The first consists of a laboratory measurement on a full-scale mockup of a room featuring a prefabricated panel. The panel is installed with controlled defects such as missing insulation and joint sealing material. It is demonstrated that the combined acoustic and augmented reality tool is capable of identifying acoustic leakages from the building defects and of assisting in correcting them. The second validation case is performed on a prefabricated room at a near-completion stage in the factory. With the help of the measurement and visualisation tools, the homogeneity of the partition installation is evaluated and leakages from junctions and doors are identified. Furthermore, the integration of acoustic indicators together with thermal and geometrical indicators via the augmented reality platform is shown.
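
A minimal sketch of the localisation idea behind the global inspection step: frequency-domain delay-and-sum beamforming over a scan grid with a simulated point source; the array geometry, frequency and source position are assumptions for illustration, not the authors' setup:

```python
# Sketch only: delay-and-sum beamforming of a simulated monopole "leak".
import numpy as np

c = 343.0                                   # speed of sound (m/s)
f = 2000.0                                  # analysis frequency (Hz)
k = 2 * np.pi * f / c                       # wavenumber

# 8x8 microphone array, 5 cm pitch, in the plane z = 0
mx, my = np.meshgrid(np.arange(8) * 0.05, np.arange(8) * 0.05)
mics = np.column_stack([mx.ravel(), my.ravel(), np.zeros(mx.size)])

# Simulated monopole source 0.5 m in front of the array (hypothetical leak)
src = np.array([0.20, 0.15, 0.5])
r_sm = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * k * r_sm) / r_sm           # complex pressures at the microphones

# Scan grid on the wall plane z = 0.5 m; steer and sum
xs = ys = np.linspace(0.0, 0.35, 36)
bf_map = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        r = np.linalg.norm(mics - np.array([x, y, 0.5]), axis=1)
        steering = np.exp(1j * k * r)       # back-propagation phases
        bf_map[iy, ix] = np.abs(np.mean(p * steering))

iy, ix = np.unravel_index(bf_map.argmax(), bf_map.shape)
print(f"estimated source position: x = {xs[ix]:.2f} m, y = {ys[iy]:.2f} m")
```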

Keywords: acoustic inspection, prefabricated building components, augmented reality, sound source localization

Procedia PDF Downloads 384
143 Developing a Web-Based Tender Evaluation System Based on Fuzzy Multi-Attributes Group Decision Making for Nigerian Public Sector Tendering

Authors: Bello Abdullahi, Yahaya M. Ibrahim, Ahmed D. Ibrahim, Kabir Bala

Abstract:

Public sector tendering has traditionally been conducted using manual paper-based processes, which are known to be inefficient, less transparent and more prone to manipulation and errors. The advent of the Internet and the World Wide Web has led to the development of numerous e-Tendering systems that address some of the problems associated with the manual paper-based tendering system. However, most of these systems rarely support the evaluation of tenders, and where they do, it is mostly based on a single decision maker, which is not suitable in public sector tendering, where, for the sake of objectivity, transparency, and fairness, the evaluation is required to be conducted by a tender evaluation committee. Currently, in Nigeria, the public tendering process in general, and the evaluation of tenders in particular, are largely conducted using manual paper-based processes. Automating these manual processes can help enhance the proficiency of public sector tendering in Nigeria. This paper is part of a larger study to develop an electronic tendering system that supports the whole tendering lifecycle based on Nigerian procurement law. Specifically, this paper presents the design and implementation of the part of the system that supports group evaluation of tenders based on a technique called fuzzy multi-attributes group decision making. The system was developed using object-oriented methodologies and the Unified Modelling Language and hypothetically applied in the evaluation of technical and financial proposals submitted by bidders. The system was validated by professionals with extensive experience in public sector procurement. The results of the validation showed that the system, called NPS-eTender, has an average rating of 74% with respect to correct and accurate modelling of the existing manual tendering domain and an average rating of 67.6% with respect to its potential to enhance the proficiency of public sector tendering in Nigeria. Thus, based on the results of the validation, automation of the evaluation process to support a tender evaluation committee is achievable and can lead to a more proficient public sector tendering system.
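
A minimal sketch of fuzzy multi-attribute group decision making in the tender-evaluation setting; the linguistic scale, criteria weights and ratings are assumed for illustration, and this is not the NPS-eTender implementation:

```python
# Sketch only: committee members rate each bidder per criterion with triangular
# fuzzy numbers; ratings are averaged, weighted, defuzzified and ranked.
import numpy as np

# Linguistic scale -> triangular fuzzy numbers (l, m, u), assumed for illustration
SCALE = {"poor": (0, 0, 3), "fair": (2, 5, 8), "good": (7, 10, 10)}

criteria_weights = np.array([0.5, 0.3, 0.2])   # e.g. technical, financial, experience

# ratings[bidder][member] = one linguistic rating per criterion
ratings = {
    "Bidder A": [["good", "fair", "good"], ["good", "good", "fair"]],
    "Bidder B": [["fair", "good", "poor"], ["fair", "fair", "fair"]],
}

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    l, m, u = tfn
    return (l + m + u) / 3.0

scores = {}
for bidder, members in ratings.items():
    # Average the committee members' fuzzy ratings, criterion by criterion
    fuzzy = np.array([[SCALE[r] for r in member] for member in members], float)
    group_avg = fuzzy.mean(axis=0)              # shape: (n_criteria, 3)
    crisp = np.array([defuzzify(t) for t in group_avg])
    scores[bidder] = float(np.dot(criteria_weights, crisp))

for bidder, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{bidder}: aggregate score = {s:.2f}")
```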

Keywords: e-Tendering, e-Procurement, group decision making, tender evaluation, tender evaluation committee, UML, object-oriented methodologies, system development

Procedia PDF Downloads 261
142 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members

Authors: Sami W. Tabsh

Abstract:

The strength of reinforced concrete depends on the member dimensions and material properties. The properties of concrete and steel are not constant but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction and disparity in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects on structural reliability of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in the structural design codes, are assessed. The considered limit states are flexure, shear and axial compression, based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression. Based on the outcome, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the target reliability embedded in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
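
A minimal sketch of how a reliability index can be estimated by Monte Carlo simulation for a flexural limit state; the statistical parameters below are illustrative, not the calibrated resistance and load models compiled in the study:

```python
# Sketch only: reliability index beta for g = R - (D + L) with assumed statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1_000_000

# Resistance: nominal Rn scaled by a lognormal bias factor (mean 1.05, COV 0.10)
Rn = 450.0                                   # kN*m, nominal flexural capacity (assumed)
cov_R = 0.10
sigma_ln = np.sqrt(np.log(1 + cov_R**2))
mu_ln = np.log(1.05) - 0.5 * sigma_ln**2
R = Rn * rng.lognormal(mu_ln, sigma_ln, n)

# Loads: dead load (normal) and live load (Gumbel), chosen for a low L/D ratio
D = rng.normal(200.0, 0.10 * 200.0, n)       # kN*m
L = stats.gumbel_r.rvs(loc=90.0, scale=15.0, size=n, random_state=4)

pf = np.mean(R < D + L)                      # probability of failure
beta = -stats.norm.ppf(pf)                   # reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
```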

Keywords: code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety

Procedia PDF Downloads 430
141 Abilitest Battery: Presentation of Tests and Psychometric Properties

Authors: Sylwia Sumińska, Łukasz Kapica, Grzegorz Szczepański

Abstract:

Introduction: Cognitive skills are a crucial part of everyday functioning. They include perception, attention, language, memory, executive functions, and higher cognitive skills. With the aging of societies, there is an increasing percentage of people whose cognitive skills decline. Cognitive skills affect work performance, and the appropriate diagnosis of a worker's cognitive skills reduces the risk of errors and accidents at work, which is particularly important for senior workers. The study aimed to prepare new cognitive tests for adults aged 20-60 and to assess the psychometric properties of the tests. The project responds to the need for reliable and accurate methods of assessing cognitive performance. Computer tests were developed to assess psychomotor performance, attention, and working memory. Method: Two hundred eighty people aged 20-60 will participate in the study, in four age groups. Inclusion criteria for the study were: no subjective cognitive impairment and no history of severe head injury, chronic disease, or psychiatric or neurological disease. The research will be conducted from February to June 2022. Cognitive tests: 1) measurement of psychomotor performance: Reaction time, Reaction time with a selective attention component; 2) measurement of sustained attention: Visual search (dots), Visual search (numbers); 3) measurement of working memory: Remembering words, Remembering letters. To assess validity and reliability, subjects will perform tests from the Vienna Test System, i.e., the "Reaction Test" (reaction time), "Signal Detection" (sustained attention) and "Corsi Block-Tapping Test" (working memory), as well as the Perception and Attention Test (TUS), the Colour Trails Test (CTT) and Digit Span, a subtest from the Wechsler Adult Intelligence Scale. Eighty people will be invited to a second session after three months, aimed at assessing consistency over time. Results: Because the research is ongoing, the detailed results from the 280 participants will be shown at the conference, separately for each age group, and the results of the correlation analysis with the Vienna Test System will be demonstrated as well.
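
A minimal sketch of one of the psychometric checks described above, test-retest reliability after three months, using simulated reaction-time scores rather than study data:

```python
# Sketch only: test-retest reliability as the Pearson correlation between the
# baseline session and the session repeated after three months (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 80                                          # participants retested
true_ability = rng.normal(300.0, 40.0, n)       # latent mean reaction time (ms)
session_1 = true_ability + rng.normal(0, 20.0, n)
session_2 = true_ability + rng.normal(0, 20.0, n)

r, p = stats.pearsonr(session_1, session_2)
print(f"test-retest reliability r = {r:.2f} (p = {p:.1e})")
```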

Keywords: aging, attention, cognitive skills, cognitive tests, psychomotor performance, working memory

Procedia PDF Downloads 105
140 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system that measures the deviations of chamber temperature from set target values, feeds these deviations (which act as disturbances in the system) back as an input signal, and adjusts operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in a Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, was integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber for various operating conditions. The air blower rates and fuel feeding rates obtained from automatic control operation were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity in the range 222-273 ft/min and fuel feeding rate in the range 60-90 rpm on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate for controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
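
A minimal sketch of the temperature feedback idea, a discrete PI loop trimming the air-blower setting within its operating band toward a setpoint; this is an illustrative stand-in, not the actual PLC ladder-logic program, and the plant model, setpoint and gains are assumed:

```python
# Sketch only: PI temperature feedback on a crude first-order combustor model.
import numpy as np

setpoint = 850.0                         # target chamber temperature (deg C), assumed
kp, ki, dt = 0.8, 0.05, 1.0              # illustrative PI gains, 1 s sample time
blower_min, blower_max = 222.0, 273.0    # air-velocity band from the abstract (ft/min)

temp, integral = 700.0, 0.0              # initial chamber temperature and integrator
for _ in range(600):                     # 10 minutes of simulated operation
    error = setpoint - temp
    integral += error * dt
    blower = np.clip(240.0 + kp * error + ki * integral, blower_min, blower_max)

    # Crude first-order stand-in for the combustor response: within this band,
    # more air drives the chamber toward a higher equilibrium temperature.
    temp_eq = 600.0 + 1.1 * blower
    temp += dt / 20.0 * (temp_eq - temp)

print(f"final temperature: {temp:.1f} C, blower setting: {blower:.0f} ft/min")
```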

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 129
139 A Linguistic Analysis of the Inconsistencies in the Meaning of Some -er Suffix Morphemes

Authors: Amina Abubakar

Abstract:

English, like any other language, is rich in arbitrary, conventional symbols, which gives rise to many inconsistencies in spelling, phonology, syntax, and morphology. This research examines the irregularities prevalent in the structure and meaning of some '-er' lexical items in English and their implications for vocabulary acquisition. It centers its investigation on the derivational suffix '-er', which changes the grammatical category of a word. The English language poses many challenges to second language learners because of its irregularities, exceptions, and rules. One of the meanings of the -er derivational suffix is 'someone who does something'. This rule often confuses learners when they meet the exceptions in normal discourse. The need to investigate instances of such inconsistencies in the formation of -er words and the meanings given to such words by students motivated this study. For this purpose, senior secondary two (SS2) students in six randomly selected schools in the metropolis were provided with a large number of alphabetically selected '-er'-ending words. The researcher opted for a test technique, which required them to provide the meaning of the selected -er words. The test was scored on a 1-0 scale, where a correct formation of an -er word and its meaning scored one, while a wrong formation and meaning scored zero. The numbers of wrong and correct formations and meanings of -er words were calculated as percentages. The results of this research show that a large number of students made wrong generalizations about the meaning of the selected -er-ending words. This shows how extensive the inconsistencies in the English language are and how they affect the learning of English. Findings from the study revealed that although students mastered the basic morphological rules, errors were generally committed on vocabulary items that are not in frequent use; the study arrives at this conclusion from a survey of their textbook and their spoken activities. Therefore, the researcher recommends an effective reappraisal of language teaching through implementation of the designed curriculum to reflect modern strategies of language teaching; identification and incorporation of the exceptions into rigorous communicative activities in language teaching, language course books and tutorials; and training and retraining of teachers in strategies that conform to the new pedagogy.

Keywords: ESL(English as a second language), derivational morpheme, inflectional morpheme, suffixes

Procedia PDF Downloads 377
138 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment

Authors: Neda Orak, Mostafa Zarei

Abstract:

Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning and the environment. It can show cyclical changes of earth surface objects, and it can delimit earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assessed mangrove forests by RS techniques; its aim was a quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries. It was carried out using Landsat satellite images from 1975 to 2013 matched to ground control points. These mangroves are the last (northernmost) distribution in the northern hemisphere, so the work can provide a good basis for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting earth changes. This research used the MSS, TM, ETM+ and OLI sensors from 1975, 1990, 2000 and 2003-2013. Changes were studied, after essential corrections such as error fixing, band combination and georeferencing to the 2012 image as the base image, by maximum likelihood supervised classification and the IPVI index. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to verify the changes obtained from the satellite images. Results showed that the mangrove area in Bidkhoon in 2012 was 1,119,072 m² by GPS, 1,231,200 m² by maximum likelihood supervised classification, and 1,317,600 m² by IPVI. The Basatin areas were, respectively, 466,644 m², 88,200 m², and 63,000 m². The final results show that the forests have declined, in Basatin due to human activities; the loss was offset by planting over many years, although the trend has turned downward again in recent years. Thus, satellite images have a high ability to estimate all these environmental processes. This research showed a high correlation between image-derived indexes such as IPVI and NDVI and the ground control points.
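
A minimal sketch of the index computation underlying the change analysis, NDVI and IPVI from red and near-infrared bands, using synthetic arrays in place of real Landsat scenes:

```python
# Sketch only: NDVI/IPVI and a simple vegetation mask on synthetic reflectance data.
import numpy as np

rng = np.random.default_rng(6)
red = rng.uniform(0.02, 0.30, (100, 100))   # stand-in for the Landsat red band
nir = rng.uniform(0.05, 0.60, (100, 100))   # stand-in for the near-infrared band

ndvi = (nir - red) / (nir + red)
ipvi = nir / (nir + red)                    # IPVI = (NDVI + 1) / 2, range 0..1

veg_mask = ipvi > 0.6                       # illustrative vegetation threshold
pixel_area_m2 = 30 * 30                     # Landsat pixel size
print(f"vegetated area: {veg_mask.sum() * pixel_area_m2} m^2")
print(f"NDVI range: {ndvi.min():.2f} to {ndvi.max():.2f}")
```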

Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park

Procedia PDF Downloads 293
137 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks

Authors: Lexi Li, Vanessa H. K. Pang

Abstract:

This study aims to demonstrate how native and learner corpora can be used to enhance the treatment of modal verbs in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is on will, would, can, could, may, might, shall, should and must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014); the spoken part is chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of the occurrences of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL), from which all the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprises the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., *she can sings) of the nine modal verbs to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will and could, while can, will, should and could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on the different meanings show that will, would and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represent the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. Besides, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve the presentation of modal verbs in EFL textbooks in a way that provides not only authentic language used in natural discourse but also an appropriate design tailored to the needs of target learners.
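
A minimal sketch of the kind of counting behind the distributional analysis, modal-verb frequencies and 'modal + have + participle' candidates, run on a toy text with simple regular expressions rather than CQPweb/WordSmith queries:

```python
# Sketch only: frequency counts and a crude 'modal + have done' pattern search.
import re
from collections import Counter

MODALS = ["will", "would", "can", "could", "may", "might", "shall", "should", "must"]

corpus = (
    "You should have told me earlier. She can sing very well. "
    "They must have left already, so we will wait. It might rain, but we shall see."
)

tokens = re.findall(r"[a-z']+", corpus.lower())
freq = Counter(t for t in tokens if t in MODALS)
print("modal frequencies:", dict(freq))

# Candidate 'modal + have + participle' trigrams (a POS tagger would refine this)
perfect = [
    (tokens[i], tokens[i + 1], tokens[i + 2])
    for i in range(len(tokens) - 2)
    if tokens[i] in MODALS and tokens[i + 1] == "have"
]
print("modal + have done candidates:", perfect)
```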

Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus

Procedia PDF Downloads 143
136 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition

Authors: Habtamu Garoma Debela, Gemechis File Duressa

Abstract:

In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε, known as the singular perturbation parameter. This leads to the existence of boundary layers, which are basically narrow regions in the neighborhood of the boundary of the domain where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Due to the appearance of this layer phenomenon, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' identifies those numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted-operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture some notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via this nonstandard fitted-operator method combined with numerical integration to solve the problem; the non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method. Maximum absolute errors and rates of convergence for different values of the perturbation parameter and different mesh sizes are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted that support all of our theoretical findings. A concise conclusion is provided at the end of this work.
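
A minimal sketch of the Richardson extrapolation mechanics on a singularly perturbed convection-diffusion test problem; a standard upwind scheme on a uniform mesh stands in for the authors' nonstandard fitted-operator method and non-local boundary treatment:

```python
# Sketch only: solve -eps*u'' + u' = 1, u(0) = u(1) = 0 on two meshes with an
# upwind scheme, then combine them by Richardson extrapolation.
import numpy as np

eps = 0.1

def exact(x):
    return x - (np.exp((x - 1) / eps) - np.exp(-1 / eps)) / (1 - np.exp(-1 / eps))

def solve_upwind(N):
    """Backward-difference (upwind) scheme on a uniform mesh with N intervals."""
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    A = np.zeros((N - 1, N - 1))
    b = np.ones(N - 1)
    lower = -eps / h**2 - 1.0 / h          # coefficient of u_{i-1}
    diag = 2 * eps / h**2 + 1.0 / h        # coefficient of u_i
    upper = -eps / h**2                    # coefficient of u_{i+1}
    for i in range(N - 1):
        A[i, i] = diag
        if i > 0:
            A[i, i - 1] = lower
        if i < N - 2:
            A[i, i + 1] = upper
    u = np.zeros(N + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

N = 64
x_c, u_c = solve_upwind(N)          # coarse mesh
x_f, u_f = solve_upwind(2 * N)      # fine mesh (coarse nodes are every 2nd node)
u_rich = 2.0 * u_f[::2] - u_c       # Richardson extrapolation at coarse nodes

print(f"max error, upwind N={N}:        {np.max(np.abs(u_c - exact(x_c))):.3e}")
print(f"max error, after extrapolation: {np.max(np.abs(u_rich - exact(x_c))):.3e}")
```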

Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent

Procedia PDF Downloads 143
135 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors

Authors: Jakob Krause

Abstract:

Past financial crises have shown that contemporary risk management models provide an unjustified sense of security and fail miserably in the situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and that past data points therefore have only limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by balancing the two adverse forces of estimator convergence, which incentivizes us to use as much data as possible, and the aforementioned non-representativeness, which does the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data-generating process changes over time. Hence, in this paper, we give a quantitative theory of how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe; hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate; however, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results in order to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
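
A minimal sketch of the bias-variance tradeoff at the heart of the data-selection problem, using a toy drifting-mean process rather than the paper's semimartingale framework: short windows are noisy, long windows mix in unrepresentative history, and the window minimizing empirical mean squared error sits in between:

```python
# Sketch only: choose the estimation-window length minimizing empirical MSE
# when the law of the data-generating process changes slowly over time.
import numpy as np

rng = np.random.default_rng(7)
T = 2000
drift = np.cumsum(rng.normal(0.0, 0.02, T))        # slowly changing "true" mean
data = drift + rng.normal(0.0, 1.0, T)             # noisy observations

for w in (10, 25, 50, 100, 250, 500, 1000):
    # Rolling-mean estimate of the current level using only the last w points
    preds = np.array([data[t - w:t].mean() for t in range(w, T)])
    mse = np.mean((preds - drift[w:]) ** 2)        # error against the true mean
    print(f"window = {w:4d}  ->  MSE = {mse:.4f}")
```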

Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling

Procedia PDF Downloads 148
134 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture

Authors: Charbel Aoun, Loic Lagadec

Abstract:

A sensor network (SN) can be regarded as operating in two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; and (2) transferring the collected data to some processing center (e.g., fusion servers) within the SN. An underwater sensor network can therefore be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (marine cables, smart sensors, data fusion servers, etc.). The logical and physical components used in these observatories perform some critical functions, such as the localization of underwater moving objects; these functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time and illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both the complexity of and the time spent on the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity for complex systems based on the use of MDE technologies and a domain-specific modeling language with its associated tooling. The major improvement is to provide an early validation step, via models and a simulation approach, to consolidate the system design.

Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS

Procedia PDF Downloads 177
133 Application of the Building Information Modeling Planning Approach to the Factory Planning

Authors: Peggy Näser

Abstract:

Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which is dependent on the preceding phase and makes use of particular methods and tools, extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling, by contrast, has not yet been established in factory planning and has been used predominantly in the planning of public buildings. Furthermore, this concept is limited to the planning of the buildings and does not include the planning of the factory's equipment (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working in which the information and data relevant to a building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the digital factory, are based on the use of a comprehensive data model. It is therefore necessary to examine how the Building Information Modeling approach can be extended in the context of factory planning in such a way that the equipment planning, as well as the building planning, can be integrated into a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, concerning legal certainty in each country; and the quality perspective, which defines the quality criteria against which the planning will be evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach, in particular for the integrated planning of equipment and buildings and for continuous digital planning, is developed. For this purpose, the individual factory planning phases are detailed with respect to the integration of the BIM approach, and a comprehensive software concept for the tooling is presented. In addition, the prerequisites required for this integrated planning are set out. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning improved, data quality improved, and expensive implementation errors avoided.

Keywords: building information modeling, digital factory, digital planning, factory planning

Procedia PDF Downloads 267
132 Impact of Instrument Transformer Secondary Connections on Performance of Protection System: Experiences from Indian POWERGRID

Authors: Pankaj Kumar Jha, Mahendra Singh Hada, Brijendra Singh, Sandeep Yadav

Abstract:

Protective relays are commonly connected to the secondary windings of instrument transformers, i.e., current transformers (CTs) and/or capacitive voltage transformers (CVTs). The purpose of CTs and CVTs is to provide galvanic isolation from high voltages and to reduce primary currents and voltages to nominal quantities recognized by the protective relays. Selecting the correct instrument transformers for an application is imperative: failing to do so may compromise the relay's performance, as the output of the instrument transformer may no longer be an accurately scaled representation of the primary quantity. Having an accurately rated instrument transformer is of no use if these devices are not properly connected. The performance of the protective relay relies on its programmed settings and on the current and voltage inputs from the instrument transformers' secondaries. This paper will help in understanding the fundamental concepts of connecting instrument transformers to protection relays and the effect of incorrect connections on the performance of protective relays. Multiple case studies of protection system mal-operations due to incorrect connections of instrument transformers are discussed in detail. Apart from the issue of connecting instrument transformers to protective relays, this paper also discusses the effect of multiple earthing of CT and CVT secondaries on the performance of the protection system. The case studies presented will help readers analyse the problem through real-world challenges in complex power system networks and will help the protection engineer in better analysis of disturbance records. CT and CVT connection errors can lead to undesired operations of protection systems; however, many of these operations can be avoided by adhering to industry standards and implementing tried-and-true field testing and commissioning practices. Understanding the effects of a missing CVT neutral, multiple earthing of the CVT secondary, and multiple grounding of CT star points on the performance of the protection system through real-world case studies will help the protection engineer to better commission and maintain the protection system.
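
A minimal sketch of how a CT polarity (connection) error manifests in a differential protection scheme, using idealized per-unit phasors rather than recorded disturbance data:

```python
# Sketch only: spurious differential current caused by a reversed CT polarity
# during a normal through-load condition (no internal fault).
import numpy as np

def phasor(mag, angle_deg):
    return mag * np.exp(1j * np.radians(angle_deg))

# Through-load: current enters one CT and leaves the other, so with correct
# polarity the secondary currents seen by the relay are equal and opposite.
i_side_1 = phasor(1.0, 0.0)        # per-unit secondary current, CT 1
i_side_2 = phasor(1.0, 180.0)      # per-unit secondary current, CT 2

def diff_and_restraint(i1, i2):
    i_diff = abs(i1 + i2)                  # operating (differential) quantity
    i_rest = (abs(i1) + abs(i2)) / 2.0     # restraint quantity
    return i_diff, i_rest

print("correct polarity :", diff_and_restraint(i_side_1, i_side_2))
# Wiring error: CT 2 polarity reversed -> its current appears shifted by 180 deg
print("reversed polarity:", diff_and_restraint(i_side_1, -i_side_2))
# The reversed case shows a differential current equal to twice the load current,
# which can operate the differential element even though no internal fault exists.
```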

Keywords: bus reactor, current transformer, capacitive voltage transformer, distance protection, differential protection, directional earth fault, disturbance report, instrument transformer, ICT, REF protection, shunt reactor, voltage selection relay, VT fuse failure

Procedia PDF Downloads 81
131 Urban Furniture in a New Setting of Public Spaces within the Kurdistan Region: Educational Targets and Course Design Process

Authors: Sinisa Prvanov

Abstract:

This research is an attempt to analyze the existing urban form of the outdoor public spaces of Duhok city and to give proposals for their improvement in terms of urban seating. The aim of this research is to identify the main urban furniture elements and the behaviour of users in three central parks of Duhok city, recognizing their functionality and the most common errors. Citizens' needs, directly related to the physical characteristics of the environment, are categorized in terms of contact with nature; parks, as significant urban environments, express these aesthetic preferences as well as the need for recreation and play. Citizens around the world desire contact with nature and places where they can socialize, play and practice different activities, but also participate in building their community and feel the identity of their cities. The aim of this research is also to reintegrate these spaces into the wider urban context of the city of Duhok and to develop new functions by designing new seating patterns, improved urban furniture, and the necessary supporting facilities and equipment. Urban furniture is a product used by an enormous number of people in public space. It suffers a high level of wear and damage due to intense use and exposure to sunlight and weather conditions. Iraq has a hot and dry climate characterized by long, warm, dry summers and short, cold winters; the climate is determined by Iraq's location at the crossroads of the Arab desert areas and the subtropical humid climate of the Persian Gulf. The second part of this analysis describes the possibilities of traditional and contemporary materials and their advantages in urban furniture production, providing users with protection from extreme local climate conditions, while also taking into account durability and unwelcome consequences such as vandalism. In addition, this research represents a preliminary stage in the development of the IND307 furniture design course for the needs of the Department of Interior Design at the American University in Duhok. Based on the results obtained in this research, the course would present a symbiosis between people and technology, the promotion of new street furniture design that takes account of pedestrian activities in an urban setting, and the practical use of anthropometric measurements as a tool for technical innovation.

Keywords: furniture design, street furniture, social interaction, public space

Procedia PDF Downloads 135
130 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer

Authors: Jalil ur Rehman, Ramesh C. Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geoffrey S. Ibbott

Abstract:

The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT), nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160), as commonly used at MD Anderson Cancer Center Houston, for intensity modulated radiation therapy (IMRT), and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was performed with the collapsed cone (CC) convolution algorithm with a prescription dose of 6.6 Gy. Planning target volume (PTV) coverage, mean and maximal doses, DVHs, and the volumes of OARs receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed with the ArcCHECK method using a gamma index criterion of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) of all studied techniques. In comparison, maximal doses for 3DCRT were higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5% and 26% higher than VMAT for the esophagus, normal brain and thyroid, respectively. The esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS. The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% of points passed) and the QA results were greater than 98%. The calculations of maximal doses and OAR volumes suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
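
The planar film-versus-TPS comparison above relies on a gamma evaluation. As a rough illustration only, here is a minimal brute-force sketch of a global 2D gamma index in Python; the dose grids, pixel spacing and the 3%/3 mm defaults are illustrative assumptions, and this is not the vendor ArcCHECK implementation used in the study.

```python
import numpy as np

def gamma_index_2d(dose_ref, dose_eval, spacing_mm, dd=0.03, dta_mm=3.0):
    """Naive global 2D gamma: for each reference pixel, minimise the combined
    dose-difference / distance-to-agreement metric over all evaluated pixels."""
    ny, nx = dose_ref.shape
    y = np.arange(ny) * spacing_mm
    x = np.arange(nx) * spacing_mm
    X, Y = np.meshgrid(x, y)
    dmax = dose_ref.max()                      # global normalisation dose
    gamma = np.empty_like(dose_ref, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dr2 = (X - X[iy, ix]) ** 2 + (Y - Y[iy, ix]) ** 2
            dd2 = (dose_eval - dose_ref[iy, ix]) ** 2 / (dd * dmax) ** 2
            gamma[iy, ix] = np.sqrt(dr2 / dta_mm ** 2 + dd2).min()
    return gamma

# pass rate = fraction of points with gamma <= 1, e.g. (gamma <= 1).mean()
```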

Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD

Procedia PDF Downloads 507
129 Posterior Acetabular Fractures-Optimizing the Treatment by Enhancing Practical Skills

Authors: Olivera Lupescu, Taina Elena Avramescu, Mihail Nagea, Alexandru Dimitriu

Abstract:

Acetabular fractures represent a real challenge due to their impact upon the long-term function of the hip joint and the risk of intra- and peri-operative complications, especially as they affect young, active people. Treating these fractures therefore requires certain skills, in pre-operative planning as well as in the execution of surgery, which must be practiced. The authors retrospectively analyse 38 cases of acetabular fractures operated through the posterior approach in our hospital between 01.01.2013 and 01.01.2015, for which complete medical records ensure a follow-up of 24 months, in order to establish the main causes of potential errors and to underline the methods for preventing them. This target is included in the Erasmus+ project ‘Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery (COR-skills)’. This paper analyses the pitfalls revealed by these cases, as well as the measures necessary to enhance the practical skills of surgeons who perform acetabular surgery. Pre-operative planning matched the intra- and post-operative outcome in 88% of the analyzed points, rising from 72% in the first case to 94% in the last, meaning that experience is very important in treating this injury. The main problems detected for the posterior approach were: nerve complications in 3 cases, 1 of them a complete paralysis of the sciatic nerve which recovered 6 months after surgery; and intra-articular position of the screws in another 2 cases, demonstrated by post-operative CT scans, requiring secondary screw removal. We analysed this incident as well, given the lack of information about the relationship between the screws and the joint in this approach. Septic complications appeared in 3 cases, 2 superficial and 1 deep (requiring implant removal). The most important problems were the reduction of the fractures and the positioning of the screws so as not to interfere with the articular space. In posterior acetabular fractures, complex pre-operative planning is important in order to achieve maximum treatment efficacy with minimum risk; optimal training of surgeons, insisting on the main points of potential mistakes, ensures the success of the procedure as well as a favorable outcome for the patient.

Keywords: acetabular fractures, articular congruency, surgical skills, vocational training

Procedia PDF Downloads 206
128 Climate Related Variability and Stock-Recruitment Relationship of the North Pacific Albacore Tuna

Authors: Ashneel Ajay Singh, Naoki Suzuki, Kazumi Sakuramoto

Abstract:

The North Pacific albacore (Thunnus alalunga) is a temperate tuna species distributed in the North Pacific which is of significant economic importance to the Pacific Island Nations and Territories. Despite its importance, knowledge of the stock dynamics and ecological characteristics of albacore still has gaps. The stock-recruitment relationship of the North Pacific stock of albacore tuna was investigated for different density-dependent effects and for a regime shift in the stock characteristics in response to changes in environmental and climatic conditions. Linear regression analyses of recruits per spawning biomass (RPS) and recruitment (R) against female spawning stock biomass (SSB) were significant for the presence of different density-dependent effects and positive for a regime shift in the stock time series. Applying Deming regression to RPS against SSB, under the assumption that both the dependent and independent variables contain observation and process errors, confirmed the results of simple regression. For R against SSB, however, the Deming results disagreed with linear regression for an assumed variance level of < 3 and agreed for a variance level of ≥ 3. Assuming the presence of different density-dependent effects in the albacore tuna time series, environmental and climatic variables were compared with R, RPS, and SSB. Significant relationships of R, RPS and SSB were found with sea surface temperature (SST), the Pacific Decadal Oscillation (PDO) and the multivariate El Niño Southern Oscillation (ENSO) index, with SST being the principal variable, exhibiting a significantly similar trend with R and RPS. Recruitment is significantly influenced by the dynamics of the SSB as well as by environmental conditions, which demonstrates that the stock-recruitment relationship is multidimensional. Further investigation of the North Pacific albacore tuna age-class structure is necessary to further support the results presented here. It is important for fishery managers and decision makers to be vigilant of regime shifts in environmental conditions relating to albacore tuna, as these may cause regime shifts in albacore R and RPS, which should be taken into account to formulate harvesting plans and manage the species effectively and sustainably in the North Pacific oceanic region.
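
For readers unfamiliar with it, the Deming regression used above fits a line while allowing errors in both variables. A minimal sketch is given below, where `delta` stands for the assumed ratio of error variances (the "variance level" the abstract refers to); the variable names and the simple closed-form estimator are illustrative, not the authors' exact implementation.

```python
import numpy as np

def deming_regression(x, y, delta=1.0):
    """Closed-form Deming regression: errors assumed in both x (e.g. SSB)
    and y (e.g. RPS or R), with error-variance ratio delta = var_y / var_x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.mean((x - xbar) ** 2)
    syy = np.mean((y - ybar) ** 2)
    sxy = np.mean((x - xbar) * (y - ybar))
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)
    intercept = ybar - slope * xbar
    return slope, intercept

# e.g. slope, intercept = deming_regression(ssb_series, rps_series, delta=3.0)
```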

Keywords: albacore tuna, Thunnus alalunga, recruitment, spawning stock biomass, recruits per spawning biomass, sea surface temperature, Pacific Decadal Oscillation, El Niño Southern Oscillation, density-dependent effects, regime shift

Procedia PDF Downloads 307
127 Lexico-semantic and Morphosyntactic Analyses of Student-generated Paraphrased Academic Texts

Authors: Hazel P. Atilano

Abstract:

In this age of AI-assisted teaching and learning, there seems to be a dearth of research literature on the linguistic analysis of paraphrased academic texts generated by English as a Second Language (ESL) students. This study sought to examine the lexico-semantic and morphosyntactic features of paraphrased academic texts generated by ESL students. Employing a descriptive qualitative design, specifically linguistic analysis, the study involved a total of 85 students from senior high school, college, and graduate school enrolled in research courses. Data collection consisted of a 60-minute real-time, on-site paraphrasing exercise using excerpts of 150 to 200 words from discipline-specific literature reviews. A focus group discussion (FGD) was conducted to probe into the challenges experienced by the participants. The writing exercise yielded a total of 516 paraphrase pairs, in which 176 paraphrase units (PUs) and 340 non-paraphrase pairs (NPPs) were detected. Findings from the linguistic analysis of PUs reveal that the modifications made to the original texts are predominantly syntax-based (diathesis alterations and coordination changes), combined with miscellaneous changes (change of order, change of format, and addition/deletion). The analysis of paraphrase extremes (PE) shows that identical structures resulting from synonymous substitutions, with no significant change in the structural features of the original, are the most frequently occurring instance of PE, and the analysis of paraphrase errors confirms that such substitutions are the most frequent error leading to PE. Another type of paraphrasing error involves semantic and content loss resulting from the deletion or addition of meaning-altering content. Three major themes emerged from the FGD: (1) The Challenge of Preserving Semantic Content and Fidelity; (2) The Best Words in the Best Order: Grappling with the Lexico-semantic and Morphosyntactic Demands of Paraphrasing; and (3) Contending with Limited Vocabulary, Poor Comprehension, and Lack of Practice. A pedagogical paradigm was designed based on the major findings of the study for a sustainable instructional intervention.
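
As a rough illustration of how the "identical structure with synonymous substitution" pattern described above could be flagged automatically, the sketch below compares the part-of-speech sequences of a source sentence and its paraphrase. The NLTK-based tagging, the hypothetical sentence pair, and the decision rule are illustrative assumptions only, not the coding scheme actually applied in the study.

```python
import nltk  # assumes the "punkt" tokenizer and "averaged_perceptron_tagger" data are installed

def is_identical_structure(source: str, paraphrase: str) -> bool:
    """Flag a pair as a likely 'identical structure' paraphrase extreme:
    equal length and identical POS sequence, with at least one token swapped
    (word-level synonymous substitution without structural change)."""
    src = nltk.pos_tag(nltk.word_tokenize(source))
    par = nltk.pos_tag(nltk.word_tokenize(paraphrase))
    if len(src) != len(par):
        return False
    same_pos = all(a_tag == b_tag for (_, a_tag), (_, b_tag) in zip(src, par))
    swapped = any(a_tok.lower() != b_tok.lower()
                  for (a_tok, _), (b_tok, _) in zip(src, par))
    return same_pos and swapped

# Hypothetical pair:
# is_identical_structure("The study examined several variables.",
#                        "The research examined several factors.")  # -> True
```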

Keywords: academic text, lexico-semantic analysis, linguistic analysis, morphosyntactic analysis, paraphrasing

Procedia PDF Downloads 67
126 Integrating Cyber-Physical System toward Advance Intelligent Industry: Features, Requirements and Challenges

Authors: V. Reyes, P. Ferreira

Abstract:

In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, rapidly increasing production volumes and, simultaneously, product customization demand lower costs, more variety, and consistent product quality. Reducing production cycle time, enabling customizability, and ensuring continuous quality improvement are key features of the advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are the key features that give industrial systems the capability to adapt to customer demands, modifying the manufacturing process through autonomous responses and acting preventively to avoid errors. The industrial system incorporates a diverse set of components that, in advanced industry, are expected to be decentralized, to communicate end to end, and to make their own decisions through feedback. The evolution towards the advanced intelligent industry defines a set of stages that endow components with intelligence and enhance efficiency until the decision-making stage is reached. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabling technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows sensor data to be incorporated from the real into the virtual world and provides the transparency required for real-time monitoring and control, addressing important features of the advanced intelligent industry while also improving sustainability. Taking the industrial CPS as the core technology of the latest advanced intelligent industry stage, this paper reviews and highlights the correlation and contributions of the enabling technologies for the operationalization of each stage on the path toward the advanced intelligent industry. From this review, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed, and the functionalities and issues required to endow the industrial system with adaptability are identified.

Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model

Procedia PDF Downloads 118
125 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the considerable potential for saving quality control effort can be exploited by predicting product quality and states from data. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and to minimise variance. Competitive leaders claim to have mastered their processes, and as a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is made more difficult by this data situation. The implementation of a machine learning application can itself be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of a data science project, and, as in any process, the cost of eliminating errors increases significantly with each advancing phase. For the quality prediction of hydraulic test steps of directional control valves, the question therefore arises in the initial phase whether a regression or a classification is more suitable. In this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for the classification of the inspection decision are applied. Classification proves clearly superior to regression and achieves promising accuracies.
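
To make the regression-versus-classification question concrete, the sketch below trains a regressor on a purely synthetic "leakage volume flow" and a classifier on the corresponding pass/fail label derived from an assumed threshold, then compares both routes to an inspection decision. The data, threshold, and model choices are illustrative assumptions and do not reproduce the study's Bosch Rexroth data or methods.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.metrics import r2_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                                   # stand-in for cross-process features
leakage = 1.0 + 0.2 * X[:, 0] + rng.normal(scale=0.05, size=2000)  # synthetic leakage volume flow
threshold = 1.1                                                    # assumed pass/fail limit
y_class = (leakage > threshold).astype(int)

X_tr, X_te, l_tr, l_te, c_tr, c_te = train_test_split(X, leakage, y_class, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, l_tr)
clf = RandomForestClassifier(random_state=0).fit(X_tr, c_tr)

print("regression R2:", r2_score(l_te, reg.predict(X_te)))
print("direct classification accuracy:", accuracy_score(c_te, clf.predict(X_te)))
print("regression-then-threshold accuracy:",
      accuracy_score(c_te, (reg.predict(X_te) > threshold).astype(int)))
```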

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
124 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers

Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran

Abstract:

With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increased amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation and service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameter, eccentricity, etc. Due to the high demands for the reliability of data center components, determining a properly excited optical field inside the MM fiber core is among the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, decreases insertion losses (IL) and achieves the effective modal bandwidth (EMB). The main parameter in this case is the encircled flux (EF), which should be properly defined for variable optical sources and the consequently different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode-field distribution for short MM links intended in particular for data centers, with emphasis on reliability and safety. These measurements are essential for the design of large MM networks. Various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, particular defects and errors that can realistically occur, such as eccentricity, connector shift or dust, were simulated and measured, and their impact on EF statistics and on the functionality of the data center infrastructure was evaluated. The experimental tests were performed at the two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
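
Since the encircled flux is the central metric above, a minimal sketch of how EF(r) can be computed from a 2D near-field intensity image of the fiber end face is shown below. The camera image, pixel size and centroid-based beam center are illustrative assumptions; this is not the authors' measurement software.

```python
import numpy as np

def encircled_flux(image, pixel_um, center=None):
    """Encircled flux EF(r): fraction of the total near-field power contained
    within radius r of the beam center, from a 2D intensity image."""
    image = np.asarray(image, float)
    ny, nx = image.shape
    Y, X = np.mgrid[0:ny, 0:nx]
    if center is None:
        total = image.sum()
        center = ((X * image).sum() / total, (Y * image).sum() / total)  # intensity centroid
    cx, cy = center
    r = np.hypot(X - cx, Y - cy) * pixel_um        # radius of each pixel in micrometres
    order = np.argsort(r, axis=None)
    cum = np.cumsum(image.ravel()[order])
    return r.ravel()[order], cum / cum[-1]         # sorted radii and EF(r)

# e.g. EF at a 15 um radius: np.interp(15.0, *encircled_flux(img, pixel_um=0.5))
```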

Keywords: optical fiber, multi-mode, data centers, encircled flux

Procedia PDF Downloads 375
123 A Comparative Study of Optimization Techniques and Models to Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of outbreak control efforts. This study uses environmental data from two U.S. Federal Government agencies, the National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention. Based on environmental data describing changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, including handling outliers and missing values, so that the data are ready for subsequent processing and for building an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, model performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select the optimization strategy with the smallest error, lowest cost, and highest productivity or potential. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on Harmony Search and an integrated Genetic Algorithm is introduced for input feature selection, and it shows a significant improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.
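
As an illustration of the wrapper-style feature selection described above, the sketch below couples a toy genetic algorithm with a cross-validated HuberRegressor: each chromosome is a boolean feature mask and its fitness is the negative cross-validated RMSE. The population size, mutation rate, and fitness definition are arbitrary assumptions and do not reproduce the Harmony Search / Genetic Algorithm hybrid actually used in the study.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import cross_val_score

def ga_feature_selection(X, y, n_gen=30, pop_size=20, seed=0):
    """Toy GA wrapper: evolve boolean feature masks, scoring each mask by the
    cross-validated negative RMSE of a HuberRegressor on the selected columns."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5          # random initial masks

    def fitness(mask):
        if not mask.any():
            return -np.inf                               # empty mask is invalid
        scores = cross_val_score(HuberRegressor(max_iter=500), X[:, mask], y,
                                 scoring="neg_root_mean_squared_error", cv=5)
        return scores.mean()

    for _ in range(n_gen):
        fit = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]  # keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            flip = rng.random(n_feat) < 0.05             # mutation
            children.append(child ^ flip)
        pop = np.vstack([parents, children])

    fit = np.array([fitness(m) for m in pop])
    return pop[fit.argmax()]                             # best feature mask found
```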

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 65
122 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Using for Monitoring Plate Tectonic of Thailand

Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat

Abstract:

The Precise Point Positioning (PPP) technique improves accuracy by using precise satellite orbit and clock correction data, but it involves complicated processing and high costs. Currently, several online processing services offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services, the Australian Online GPS Processing Service (AUSPOS), CSRS Precise Point Positioning, and CenterPoint RTX post-processing by Trimble, and one offline package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations over 10 days. The results indicated that AUSPOS has the smallest distance root mean square (DRMS) value, 0.0029, which is good enough for monitoring the movement of tectonic plates. In the second part, we use AUSPOS to process data from the geodetic network of Thailand. On December 26, 2004, a 9.3 Mw earthquake occurred north of Sumatra that strongly affected all nearby countries, including Thailand, and its effects have introduced errors into the Thai coordinate system. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movement is not uniform across the geodetic network and is relatively large, so continued surveys are needed to improve the GPS coordinate system every year. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of a geodetic network consisting of 19 pins in Thailand from October 2013 to November 2017. Finally, the results are displayed on a simulation map using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand, which moved 11.04 cm in the south-western direction. Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., toward the direction observed before the earthquake. The magnitude of the movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to secure the accuracy of the geodetic network of Thailand.
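
The final mapping step uses Inverse Distance Weighting. A minimal stand-alone sketch of the interpolation itself is shown below; the station coordinates, displacement values, and power parameter are placeholders, and the study performed this step inside ArcMap rather than in custom code.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighting: interpolate a scalar field (e.g. displacement
    magnitude at the 19 geodetic pins) onto arbitrary query points."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    xy_query = np.asarray(xy_query, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-9):                 # query point coincides with a station
            out[i] = values[d.argmin()]
        else:
            w = 1.0 / d ** power             # weights decay with distance
            out[i] = np.sum(w * values) / np.sum(w)
    return out

# e.g. idw(pin_coords, displacement_cm, grid_points) for a displacement map
```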

Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting

Procedia PDF Downloads 189