Search results for: collocational errors
147 Influence of Deficient Materials on the Reliability of Reinforced Concrete Members
Authors: Sami W. Tabsh
Abstract:
The strength of reinforced concrete depends on the member dimensions and material properties. The properties of concrete and steel materials are not constant but random variables. The variability of concrete strength is due to batching errors, variations in mixing, cement quality uncertainties, differences in the degree of compaction and disparity in curing. Similarly, the variability of steel strength is attributed to the manufacturing process, rolling conditions, characteristics of the base material, uncertainties in chemical composition, and the microstructure-property relationships. To account for such uncertainties, codes of practice for reinforced concrete design impose resistance factors to ensure structural reliability over the useful life of the structure. In this investigation, the effects of reductions in concrete and reinforcing steel strengths from the nominal values, beyond those accounted for in the structural design codes, on the structural reliability are assessed. The considered limit states are flexure, shear and axial compression based on the ACI 318-11 structural concrete building code. Structural safety is measured in terms of a reliability index. Probabilistic resistance and load models are compiled from the available literature. The study showed that there is a wide variation in the reliability index for reinforced concrete members designed for flexure, shear or axial compression, especially when the live-to-dead load ratio is low. Furthermore, variations in concrete strength have a minor effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a severe effect on the reliability of columns in axial compression. On the other hand, changes in steel yield strength have a great effect on the reliability of beams in flexure, a moderate effect on the reliability of beams in shear, and a mild effect on the reliability of columns in axial compression. Based on the outcome, it can be concluded that the reliability of beams is sensitive to changes in the yield strength of the steel reinforcement, whereas the reliability of columns is sensitive to variations in the concrete strength. Since the embedded target reliability in structural design codes results in lower structural safety in beams than in columns, large reductions in material strengths compromise the structural safety of beams much more than they affect columns.
Keywords: code, flexure, limit states, random variables, reinforced concrete, reliability, reliability index, shear, structural safety
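For context on the safety measure used above: when the resistance R and the total load effect Q can both be treated as independent normal random variables (a simplifying assumption for illustration only; the study compiles its probabilistic resistance and load models from the literature), the reliability index takes the classical Cornell form, so reductions in material strength lower the mean resistance and with it the index:

```latex
\beta = \frac{\mu_R - \mu_Q}{\sqrt{\sigma_R^{2} + \sigma_Q^{2}}}
```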
Procedia PDF Downloads 430
146 Abilitest Battery: Presentation of Tests and Psychometric Properties
Authors: Sylwia Sumińska, Łukasz Kapica, Grzegorz Szczepański
Abstract:
Introduction: Cognitive skills are a crucial part of everyday functioning. Cognitive skills include perception, attention, language, memory, executive functions, and higher cognitive skills. With the aging of societies, there is an increasing percentage of people whose cognitive skills decline. Cognitive skills affect work performance. The appropriate diagnosis of a worker’s cognitive skills reduces the risk of errors and accidents at work, which is also important for senior workers. The study aimed to prepare new cognitive tests for adults aged 20-60 and to assess the psychometric properties of the tests. The project responds to the need for reliable and accurate methods of assessing cognitive performance. Computer tests were developed to assess psychomotor performance, attention, and working memory. Method: Two hundred eighty people aged 20-60 will participate in the study in 4 age groups. Inclusion criteria for the study were: no subjective cognitive impairment and no history of severe head injuries, chronic diseases, or psychiatric and neurological diseases. The research will be conducted from February to June 2022. Cognitive tests: 1) Measurement of psychomotor performance: Reaction time, Reaction time with selective attention component; 2) Measurement of sustained attention: Visual search (dots), Visual search (numbers); 3) Measurement of working memory: Remembering words, Remembering letters. To assess validity and reliability, subjects will perform the Vienna Test System, i.e., “Reaction Test” (reaction time), “Signal Detection” (sustained attention), “Corsi Block-Tapping Test” (working memory), as well as the Perception and Attention Test (TUS), the Colour Trails Test (CTT), and Digit Span, a subtest from the Wechsler Adult Intelligence Scale. Eighty people will be invited to a session after three months aimed at assessing consistency over time. Results: As the research is ongoing, detailed results from the 280 participants will be shown at the conference separately for each age group. The results of the correlation analysis with the Vienna Test System will be demonstrated as well.
Keywords: aging, attention, cognitive skills, cognitive tests, psychomotor performance, working memory
Procedia PDF Downloads 106
145 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller
Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian
Abstract:
The application of combustion technologies for thermal conversion of biomass and solid wastes to energy has long been a major solution to the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviation of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system which measures the deviations of chamber temperature from set target values, sends these deviations (which generate disturbances in the system) in the form of a feedback signal (as input), and controls operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in the Programmable Logic Controller (PLC). The developed control algorithm, with chamber temperature as the feedback signal, is integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, thus indicating that the written PLC program functions were adequate for designing the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocities of 222-273 ft/min and fuel feeding rates of 60-90 rpm on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate in controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature
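The abstract does not give the control law itself; the sketch below is a minimal proportional-integral (PI) loop of the kind such a ladder-logic program might realize, written in Python for readability. The setpoint, gains, and sign conventions are invented for illustration; only the actuator ranges (222-273 ft/min, 60-90 rpm) come from the study.

```python
# Minimal PI temperature loop of the kind such a PLC program might implement.
# Setpoint, gains, and sign conventions are hypothetical, not from the study;
# only the 222-273 ft/min and 60-90 rpm actuator ranges come from the abstract.

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

class ChamberTempController:
    def __init__(self, setpoint_c=850.0, kp=2.0, ki=0.05):
        self.setpoint = setpoint_c
        self.kp, self.ki = kp, ki
        self.integral = 0.0          # accumulated error for the I-term

    def update(self, measured_c, dt_s):
        """Return (blower ft/min, feeder rpm) from the temperature error."""
        error = self.setpoint - measured_c        # the feedback signal
        self.integral += error * dt_s
        u = self.kp * error + self.ki * self.integral
        # Chamber too cold (u > 0): feed more fuel, push less cooling air.
        blower = clamp(247.0 - 0.5 * u, 222.0, 273.0)
        feeder = clamp(75.0 + 0.2 * u, 60.0, 90.0)
        return blower, feeder

ctrl = ChamberTempController()
print(ctrl.update(measured_c=830.0, dt_s=1.0))    # too cold -> feeder rises
```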
Procedia PDF Downloads 130
144 A Linguistic Analysis of the Inconsistencies in the Meaning of Some -er Suffix Morphemes
Authors: Amina Abubakar
Abstract:
English, like any other language, is rich in arbitrary, conventional symbols, which lends it to a lot of inconsistencies in spelling, phonology, syntax, and morphology. The research examines the irregularities prevalent in the structure and meaning of some ‘er’ lexical items in English and their implication for vocabulary acquisition. It centers its investigation on the derivational suffix ‘er’, which changes the grammatical category of a word. The English language poses many challenges to second language learners because of its irregularities, exceptions, and rules. One of the meanings of the –er derivational suffix is ‘someone or somebody who does something’. This rule often confuses learners when they meet the exceptions in normal discourse. The need to investigate instances of such inconsistencies in the formation of –er words and the meanings given to such words by students motivated this study. For this purpose, some senior secondary two (SS2) students in six randomly selected schools in the metropolis were provided with a large number of alphabetically selected words ending in the ‘er’ suffix. The researcher opted for a test technique, which required them to provide the meaning of the selected words with –er. The test was scored on a scale of 1-0, where a correct formation of an –er word and its meaning is scored one, while a wrong formation and meaning is scored zero. The numbers of wrong and correct formations of –er word meanings were calculated as percentages. The result of this research shows that a large number of students made wrong generalizations of the meaning of the selected –er ending words. This shows how enormous the inconsistencies are in the English language and how they affect the learning of English. Findings from the study revealed that though students mastered the basic morphological rules, errors were generally committed on those vocabulary items that are not frequently in use. The study arrives at this conclusion from a survey of the students’ textbook and their spoken activities. Therefore, the researcher recommends an effective reappraisal of language teaching through implementation of the designed curriculum to reflect modern strategies of language teaching, identification and incorporation of the exceptions in rigorous communicative activities in language teaching, language course books and tutorials, and the training and retraining of teachers in the strategies that conform to the new pedagogy.
Keywords: ESL (English as a second language), derivational morpheme, inflectional morpheme, suffixes
Procedia PDF Downloads 377
143 Remote Sensing Application in Environmental Researches: Case Study of Iran Mangrove Forests Quantitative Assessment
Authors: Neda Orak, Mostafa Zarei
Abstract:
Environmental assessment is an important step in environmental management, and various methods and techniques have been produced and implemented for it. Remote sensing (RS) is widely used in many scientific and research fields such as geology, cartography, geography, agriculture, forestry, land use planning, environment, etc. It can show cyclical changes in earth surface objects. Also, it can delimit earth phenomena on the basis of recorded changes and deviations in electromagnetic reflectance. This research assesses mangrove forests by RS techniques. Quantitative analysis of the mangrove forests in the Basatin and Bidkhoon estuaries was the aim of this research. It was done with Landsat satellite images from 1975-2013, matched to ground control points. This part of the mangroves is the last distribution of the species in the northern hemisphere, so the work can provide a good background for better management of this important ecosystem. Landsat has provided researchers with valuable images for detecting earth changes. This research used the MSS, TM, ETM+ and OLI sensors from 1975, 1990, 2000 and 2003-2013. Changes were studied by maximum likelihood supervised classification and the IPVI index after essential corrections such as error fixing, band combination and georeferencing, with the 2012 image as the base image. A 2004 Google Earth image and ground points collected by GPS (2010-2012) were used to verify the changes obtained from the satellite images. Results showed that the mangrove area in Bidkhoon in 2012 was 1119072 m2 by GPS, 1231200 m2 by maximum likelihood supervised classification and 1317600 m2 by IPVI. The Basatin areas were, respectively, 466644 m2, 88200 m2 and 63000 m2. Final results show the forests have declined naturally; in Basatin, the decline is also due to human activities. The loss was offset by planting over many years, although the trend has been declining again in recent years. Hence, satellite images have a high ability to estimate all environmental processes. This research showed a high correlation between the images and indexes such as IPVI and NDVI and the ground control points.
Keywords: IPVI index, Landsat sensor, maximum likelihood supervised classification, Nayband National Park
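For readers unfamiliar with the vegetation index named above: IPVI (Infrared Percentage Vegetation Index) is NIR/(NIR + Red), a 0-1 rescaling of NDVI computed from the same two bands. A minimal NumPy sketch follows; the band arrays, vegetation threshold, and area bookkeeping are illustrative assumptions, not values from the study.

```python
import numpy as np

def ipvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """IPVI = NIR / (NIR + Red), equivalent to (NDVI + 1) / 2."""
    return nir / (nir + red + 1e-9)          # epsilon avoids division by zero

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-9)

# Hypothetical Landsat reflectance tiles (in practice, read from GeoTIFFs).
nir = np.random.rand(512, 512)
red = np.random.rand(512, 512)

veg = ipvi(nir, red)
mangrove_mask = veg > 0.6                     # illustrative threshold only
pixel_area_m2 = 30 * 30                       # TM/ETM+/OLI pixel size
print("estimated vegetated area:", mangrove_mask.sum() * pixel_area_m2, "m2")
```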
Procedia PDF Downloads 294
142 The Use of Corpora in Improving Modal Verb Treatment in English as Foreign Language Textbooks
Authors: Lexi Li, Vanessa H. K. Pang
Abstract:
This study aims to demonstrate how native and learner corpora can be used to enhance modal verb treatment in EFL textbooks in mainland China. It contributes to a corpus-informed and learner-centered design of grammar presentation in EFL textbooks that enhances the authenticity and appropriateness of textbook language for target learners. The linguistic focus is will, would, can, could, may, might, shall, should, must. The native corpus is the spoken component of BNC2014 (hereafter BNCS2014). The spoken part was chosen because the pedagogical purpose of the textbooks is communication-oriented. Using the standard query option of CQPweb, 5% of each of the nine modals was sampled from BNCS2014. The learner corpus is the POS-tagged Ten-thousand English Compositions of Chinese Learners (TECCL). All the essays under the 'secondary school' section were selected. A series of five secondary coursebooks comprise the textbook corpus. All the data in both the learner and the textbook corpora are retrieved through the concordance functions of WordSmith Tools (version 5.0). Data analysis was divided into two parts. The first part compared the patterns of modal verbs in the textbook corpus and BNCS2014 with respect to distributional features, semantic functions, and co-occurring constructions to examine whether the textbooks reflect the authentic use of English. Secondly, the learner corpus was analyzed in terms of the use (distributional features, semantic functions, and co-occurring constructions) and the misuse (syntactic errors, e.g., she can sings*) of the nine modal verbs to uncover potential difficulties that confront learners. The analysis of distribution indicates several discrepancies between the textbook corpus and BNCS2014. The four most frequent modal verbs in BNCS2014 are can, would, will, could, while can, will, should, could are the top four in the textbooks. Most strikingly, there is an unusually high proportion of can (41.1%) in the textbooks. The results on different meanings show that will, would and must are the most problematic. For example, for will, the textbooks contain 20% more occurrences of 'volition' and 20% fewer of 'prediction' than BNCS2014. Regarding co-occurring structures, the textbooks over-represented the structure 'modal + do' across the nine modal verbs. Another major finding is that the structure 'modal + have done', which frequently co-occurs with could, would, should, and must, is underused in the textbooks. Besides, these four modal verbs are the most difficult for learners, as the error analysis shows. This study demonstrates how the synergy of native and learner corpora can be harnessed to improve EFL textbook presentation of modal verbs in a way that textbooks can provide not only authentic language used in natural discourse but also appropriate design tailored to the needs of target learners.
Keywords: English as Foreign Language, EFL textbooks, learner corpus, modal verbs, native corpus
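As a sketch of the distributional comparison described above, the snippet below contrasts relative modal-verb frequencies in two corpora; all counts are invented placeholders (the study's own figures come from CQPweb and WordSmith Tools).

```python
# Compare modal-verb distributions in two corpora by relative frequency.
# All counts below are invented placeholders, not the study's data.

textbook = {"can": 411, "will": 180, "should": 150, "could": 90, "would": 70,
            "may": 40, "must": 30, "might": 20, "shall": 9}
reference = {"can": 260, "would": 240, "will": 200, "could": 160, "may": 50,
             "should": 40, "must": 30, "might": 15, "shall": 5}

def proportions(counts):
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

tb, ref = proportions(textbook), proportions(reference)
for modal in sorted(tb, key=tb.get, reverse=True):
    gap = tb[modal] - ref[modal]
    flag = ("over-represented" if gap > 0.05
            else "under-represented" if gap < -0.05 else "")
    print(f"{modal:>6}: textbook {tb[modal]:5.1%} vs reference {ref[modal]:5.1%} {flag}")
```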
Procedia PDF Downloads 143
141 Robust Numerical Method for Singularly Perturbed Semilinear Boundary Value Problem with Nonlocal Boundary Condition
Authors: Habtamu Garoma Debela, Gemechis File Duressa
Abstract:
In this work, our primary interest is to provide ε-uniformly convergent numerical techniques for solving singularly perturbed semilinear boundary value problems with a non-local boundary condition. These singular perturbation problems are described by differential equations in which the highest-order derivative is multiplied by an arbitrarily small parameter ε (say), known as the singular perturbation parameter. This leads to the existence of boundary layers, which are basically narrow regions in the neighborhood of the boundary of the domain, where the gradient of the solution becomes steep as the perturbation parameter tends to zero. Due to the appearance of the layer phenomena, it is a challenging task to provide ε-uniform numerical methods. The term 'ε-uniform' identifies those numerical methods in which the approximate solution converges to the corresponding exact solution (measured in the supremum norm) independently of the perturbation parameter ε. Thus, the purpose of this work is to develop, analyze, and improve ε-uniform numerical methods for solving singularly perturbed problems. These methods are based on a nonstandard fitted finite difference method. The basic idea behind the fitted operator finite difference method is to replace the denominator functions of the classical derivatives with positive functions derived in such a way that they capture some notable properties of the governing differential equation. A uniformly convergent numerical method is constructed via a nonstandard fitted operator numerical method and numerical integration methods to solve the problem. The non-local boundary condition is treated using numerical integration techniques. Additionally, the Richardson extrapolation technique, which improves the first-order accuracy of the standard scheme to second-order convergence, is applied to singularly perturbed convection-diffusion problems using the proposed numerical method. Maximum absolute errors and rates of convergence for different values of the perturbation parameter and mesh sizes are tabulated for the numerical example considered. The method is shown to be ε-uniformly convergent. Finally, extensive numerical experiments are conducted which support all of our theoretical findings. A concise conclusion is provided at the end of this work.
Keywords: nonlocal boundary condition, nonstandard fitted operator, semilinear problem, singular perturbation, uniformly convergent
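The extrapolation step mentioned above follows the standard Richardson pattern: if u^N denotes the numerical solution on a mesh with N intervals and the base scheme is first-order accurate, combining the coarse-mesh and bisected-mesh solutions cancels the leading error term. A generic statement of the technique (not the paper's exact ε-uniform error bound) is:

```latex
u^{\mathrm{ext}}_{i} = 2\,u^{2N}_{2i} - u^{N}_{i},
\qquad
\left\lVert u - u^{\mathrm{ext}} \right\rVert_{\infty} \le C\,N^{-2}.
```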
Procedia PDF Downloads 143
140 Optimal Data Selection in Non-Ergodic Systems: A Tradeoff between Estimator Convergence and Representativeness Errors
Authors: Jakob Krause
Abstract:
The past financial crisis has shown that contemporary risk management models provide an unjustified sense of security and fail miserably in situations in which they are needed the most. In this paper, we start from the assumption that risk is a notion that changes over time and that therefore past data points only have limited explanatory power for the current situation. Our objective is to derive the optimal amount of representative information by optimizing between the two adverse forces of estimator convergence, incentivizing us to use as much data as possible, and the aforementioned non-representativeness, doing the opposite. In this endeavor, the cornerstone assumption of having access to identically distributed random variables is weakened and substituted by the assumption that the law of the data generating process changes over time. Hence, in this paper, we give a quantitative theory on how to perform statistical analysis in non-ergodic systems. As an application, we discuss the impact of a paragraph in the last iteration of proposals by the Basel Committee on Banking Regulation. We start from the premise that the severity of assumptions should correspond to the robustness of the system they describe. Hence, in the formal description of physical systems, the level of assumptions can be much higher. It follows that every concept that is carried over from the natural sciences to economics must be checked for its plausibility in the new surroundings. Most of probability theory has been developed for the analysis of physical systems and is based on the independent and identically distributed (i.i.d.) assumption. In economics, both parts of the i.i.d. assumption are inappropriate. However, only dependence has, so far, been weakened to a sufficient degree. In this paper, an appropriate class of non-stationary processes is used, and their law is tied to a formal object measuring representativeness. Subsequently, the data set is identified that, on average, minimizes the estimation error stemming from both insufficient and non-representative data. Applications are far-reaching in a variety of fields. In the paper itself, we apply the results in order to analyze a paragraph in the Basel 3 framework on banking regulation with severe implications for financial stability. Beyond the realm of finance, other potential applications include the reproducibility crisis in the social sciences (but not in the natural sciences) and modeling limited understanding and learning behavior in economics.
Keywords: banking regulation, non-ergodicity, risk management, semimartingale modeling
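A stylized illustration of the tradeoff described above (not the paper's formal semimartingale machinery): when a trailing window of n observations is used to estimate a slowly drifting mean, the sampling error shrinks roughly like sigma^2/n while the representativeness error grows with n, so an interior window length minimizes total mean squared error. All parameters below are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, drift = 1.0, 0.02           # noise level and per-step drift (hypothetical)
T = 2000
mu = drift * np.arange(T)          # slowly changing "true" mean
x = mu + sigma * rng.standard_normal(T)

def mse_of_window(n):
    """Empirical MSE of the trailing-window mean as an estimate of the current mean."""
    errs = [x[t - n:t].mean() - mu[t] for t in range(n, T)]
    return np.mean(np.square(errs))

windows = [5, 10, 20, 50, 100, 200, 500, 1000]
for n in windows:
    print(f"n={n:4d}  MSE={mse_of_window(n):.4f}")
print("optimal window among candidates:", min(windows, key=mse_of_window))
# Theory sketch: MSE(n) ~ sigma**2 / n + (drift * (n - 1) / 2)**2,
# minimized at an interior n: neither all data nor only the newest point.
```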
Procedia PDF Downloads 149
139 An Extended Domain-Specific Modeling Language for Marine Observatory Relying on Enterprise Architecture
Authors: Charbel Aoun, Loic Lagadec
Abstract:
A Sensor Network (SN) can be considered to operate in two phases: (1) observation/measuring, which means the accumulation of the gathered data at each sensor node; (2) transferring the collected data to some processing center (e.g., Fusion Servers) within the SN. Therefore, an underwater sensor network can be defined as a sensor network deployed underwater that monitors underwater activity. The deployed sensors, such as Hydrophones, are responsible for registering underwater activity and transferring it to more advanced components. The process of data exchange between the aforementioned components perfectly defines the Marine Observatory (MO) concept, which provides information on ocean state, phenomena and processes. The first step towards the implementation of this concept is defining the environmental constraints and the required tools and components (Marine Cables, Smart Sensors, Data Fusion Server, etc.). The logical and physical components that are used in these observatories perform some critical functions, such as the localization of underwater moving objects. These functions can be orchestrated with other services (e.g., military or civilian reaction). In this paper, we present an extension to our MO meta-model that is used to generate a design tool (ArchiMO). We propose new constraints to be taken into consideration at design time. We illustrate our proposal with an example from the MO domain. Additionally, we generate the corresponding simulation code using our self-developed domain-specific model compiler. On the one hand, this illustrates our approach of relying on an Enterprise Architecture (EA) framework that respects multiple views, the perspectives of stakeholders, and domain specificity. On the other hand, it helps reduce both complexity and time spent in the design activity, while preventing design modeling errors when porting this activity to the MO domain. In conclusion, this work aims to demonstrate that we can improve the design activity of complex systems based on the use of MDE technologies and a domain-specific modeling language with the associated tooling. The major improvement is to provide an early validation step via a models-and-simulation approach to consolidate the system design.
Keywords: smart sensors, data fusion, distributed fusion architecture, sensor networks, domain specific modeling language, enterprise architecture, underwater moving object, localization, marine observatory, NS-3, IMS
Procedia PDF Downloads 178
138 Application of the Building Information Modeling Planning Approach to the Factory Planning
Authors: Peggy Näser
Abstract:
Factory planning is a systematic, objective-oriented process for planning a factory, structured into a sequence of phases, each of which is dependent on the preceding phase and makes use of particular methods and tools, and extending from the setting of objectives to the start of production. The digital factory, on the other hand, is the generic term for a comprehensive network of digital models, methods, and tools – including simulation and 3D visualisation – integrated by a continuous data management system. Its aim is the holistic planning, evaluation and ongoing improvement of all the main structures, processes and resources of the real factory in conjunction with the product. Digital factory planning has already become established in factory planning. The application of Building Information Modeling has not yet been established in factory planning but has been used predominantly in the planning of public buildings. Furthermore, this concept is limited to the planning of the buildings and does not include the planning of the equipment of the factory (machines, technical equipment) and its interfaces to the building. BIM is a cooperative method of working, in which the information and data relevant to a building's lifecycle are consistently recorded, managed and exchanged in transparent communication between the involved parties on the basis of digital models of the building. Both approaches, the planning approach of Building Information Modeling and the methodical approach of the Digital Factory, are based on the use of a comprehensive data model. Therefore, it is necessary to examine how the approach of Building Information Modeling can be extended in the context of factory planning in such a way that the equipment planning, as well as the building planning, can be integrated in a common digital model. For this, a number of different perspectives have to be investigated: the equipment perspective, including the tools used to implement a comprehensive digital planning process; the communication perspective between the planners of different fields; the legal perspective, i.e., the legal certainty in each country; and the quality perspective, i.e., the quality criteria by which the planning will be evaluated. The individual perspectives are examined and illustrated in the article. An approach model for the integration of factory planning into the BIM approach, in particular for the integrated planning of equipment and buildings and continuous digital planning, is developed. For this purpose, the individual factory planning phases are detailed with respect to the integration of the BIM approach. A comprehensive concept for the software tooling is shown. In addition, the prerequisites required for this integrated planning are presented. With the help of the newly developed approach, better coordination between equipment and buildings is to be achieved, the continuity of digital factory planning is improved, data quality is improved, and expensive errors are avoided in the implementation.
Keywords: building information modeling, digital factory, digital planning, factory planning
Procedia PDF Downloads 269
137 Impact of Instrument Transformer Secondary Connections on Performance of Protection System: Experiences from Indian POWERGRID
Authors: Pankaj Kumar Jha, Mahendra Singh Hada, Brijendra Singh, Sandeep Yadav
Abstract:
Protective relays are commonly connected to the secondary windings of instrument transformers, i.e., current transformers (CTs) and/or capacitive voltage transformers (CVTs). The purpose of CTs and CVTs is to provide galvanic isolation from high voltages and reduce primary currents and voltages to a nominal quantity recognized by the protective relays. Selecting the correct instrument transformers for an application is imperative: failing to do so may compromise the relay’s performance, as the output of the instrument transformer may no longer be an accurately scaled representation of the primary quantity. Having an accurately rated instrument transformer is of no use if these devices are not properly connected. The performance of the protective relay is reliant on its programmed settings and on the current and voltage inputs from the instrument transformer secondaries. This paper will help in understanding the fundamental concepts of instrument transformer connections to protection relays and the effect of incorrect connections on the performance of protective relays. Multiple case studies of protection system mal-operations due to incorrect connections of instrument transformers will be discussed in detail in this paper. Apart from the connection issues of instrument transformers to protective relays, this paper will also discuss the effect of multiple earthing of CT and CVT secondaries on the performance of the protection system. Case studies presented in this paper will help the readers analyse the problem through real-world challenges in complex power system networks. This paper will also help the protection engineer in better analysis of disturbance records. CT and CVT connection errors can lead to undesired operations of protection systems. However, many of these operations can be avoided by adhering to industry standards and implementing tried-and-true field testing and commissioning practices. Understanding the effects of a missing CVT neutral, multiple earthing of the CVT secondary, and multiple grounding of CT star points on the performance of the protection system through real-world case studies will help the protection engineer better commission and maintain the protection system.
Keywords: bus reactor, current transformer, capacitive voltage transformer, distance protection, differential protection, directional earth fault, disturbance report, instrument transformer, ICT, REF protection, shunt reactor, voltage selection relay, VT fuse failure
Procedia PDF Downloads 83
136 Urban Furniture in a New Setting of Public Spaces within the Kurdistan Region: Educational Targets and Course Design Process
Authors: Sinisa Prvanov
Abstract:
This research is an attempt to analyze the existing urban form of the outdoor public spaces of Duhok city and to give proposals for their improvement in terms of urban seating. The aim of this research is to identify the main urban furniture elements and the behaviour of users of three central parks of Duhok city, recognizing their functionality and the most common errors. Citizens' needs, directly related to the physical characteristics of the environment, are categorized in terms of contact with nature. Parks, as significant urban environments, express these aesthetic preferences, as well as the need for recreation and play. Citizens around the world desire contact with nature and places where they can socialize, play and practice different activities, but also participate in building their community and feel the identity of their cities. This research also aims to reintegrate these spaces into the wider urban context of the city of Duhok and to develop new functions by designing new seating patterns, improved urban furniture, and the necessary supporting facilities and equipment. Urban furniture is a product used by an enormous number of people in public space. It sustains a high level of wear and damage due to intense use and exposure to sunlight and weather conditions. Iraq has a hot and dry climate characterized by long, warm, dry summers and short, cold winters. The climate is determined by Iraq's location at the crossroads of Arab desert areas and the subtropical humid climate of the Persian Gulf. The second part of this analysis describes the possibilities of traditional and contemporary materials, as well as their advantages in urban furniture production, providing users protection from extreme local climate conditions, while also taking into account solidity and unwelcome consequences, such as vandalism. In addition, this research represents a preliminary stage in the development of the IND307 furniture design course for the needs of the Department of Interior Design at the American University in Duhok. Based on the results obtained in this research, the course would present a symbiosis between people and technology, promote new street furniture design that takes account of pedestrian activities in an urban setting, and make practical use of anthropometric measurements as a tool for technical innovations.
Keywords: furniture design, street furniture, social interaction, public space
Procedia PDF Downloads 136
135 Reduction of the Risk of Secondary Cancer Induction Using VMAT for Head and Neck Cancer
Authors: Jalil ur Rehman, Ramesh C. Tailor, Isa Khan, Jahanzeeb Ashraf, Muhammad Afzal, Geofferry S. Ibbott
Abstract:
The purpose of this analysis is to estimate secondary cancer risks after VMAT compared to other modalities of head and neck radiotherapy (IMRT, 3DCRT). Computed tomography (CT) scans of the Radiological Physics Center (RPC) head and neck phantom were acquired with a CT scanner and exported via DICOM to the treatment planning system (TPS). Treatment planning was done using four arcs (182-178 and 180-184, clockwise and anticlockwise) for volumetric modulated arc therapy (VMAT); nine fields (200, 240, 280, 320, 0, 40, 80, 120 and 160), as commonly used at MD Anderson Cancer Center Houston, for intensity modulated radiation therapy (IMRT); and four fields for three-dimensional conformal radiation therapy (3DCRT). A TrueBeam linear accelerator with 6 MV photon energy was used for dose delivery, and dose calculation was done with the CC convolution algorithm with a prescription dose of 6.6 Gy. Planning Target Volume (PTV) coverage, mean and maximal doses, DVHs, and the OAR volumes receiving more than 2 Gy and 3.8 Gy were calculated and compared. Absolute point dose and planar dose were measured with thermoluminescent dosimeters (TLDs) and GafChromic EBT2 film, respectively. Quality assurance of VMAT and IMRT was performed using the ArcCHECK method with a gamma index criterion of 3%/3 mm dose difference/distance to agreement (DD/DTA). PTV coverage was found to be 90.80%, 95.80% and 95.82% for 3DCRT, IMRT and VMAT, respectively. VMAT delivered the lowest maximal doses to the esophagus (2.3 Gy), brain (4.0 Gy) and thyroid (2.3 Gy) compared to all other studied techniques. In comparison, maximal doses for 3DCRT were found to be higher than for VMAT for all studied OARs, whereas IMRT delivered maximal doses 26%, 5% and 26% higher for the esophagus, normal brain and thyroid, respectively, compared to VMAT. It was noted that the esophagus volume receiving more than 2 Gy was 3.6% for VMAT, 23.6% for IMRT and up to 100% for 3DCRT. Good agreement was observed between measured doses and those calculated with the TPS. The average relative standard errors (RSE) of three deliveries within eight TLD capsule locations were 0.9%, 0.8% and 0.6% for 3DCRT, IMRT and VMAT, respectively. The gamma analysis for all plans met the ±5%/3 mm criteria (over 90% passed), and the QA results were greater than 98%. The calculations of maximal doses and OAR volumes suggest that the estimated risk of secondary cancer induction after VMAT is considerably lower than after IMRT and 3DCRT.
Keywords: RPC, 3DCRT, IMRT, VMAT, EBT2 film, TLD
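For reference, the gamma analysis mentioned above scores each measured point against the calculated dose distribution; a point passes when some calculation point keeps the combined dose-difference/distance-to-agreement metric at or below one. A minimal 1D global-gamma sketch follows; the profiles and grid spacing are invented, and clinical tools such as the ArcCHECK software perform the same test in higher dimensions.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """1D global gamma: fraction of measured points with gamma <= 1."""
    dose_crit = dd_pct / 100.0 * ref.max()      # global normalization
    x = np.arange(len(ref)) * spacing_mm
    gammas = []
    for i, d_m in enumerate(meas):
        dd = (d_m - ref) / dose_crit            # dose-difference term
        dta = (x[i] - x) / dta_mm               # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dta**2).min())
    return np.mean(np.array(gammas) <= 1.0)

# Invented example profiles (not measured data).
x = np.linspace(-40, 40, 161)                   # 0.5 mm grid
ref = np.exp(-(x / 25.0) ** 4)                  # calculated profile
meas = 1.02 * np.exp(-((x - 0.4) / 25.0) ** 4)  # shifted, rescaled measurement
print(f"gamma pass rate: {gamma_pass_rate(ref, meas, 0.5):.1%}")
```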
Procedia PDF Downloads 508
134 Posterior Acetabular Fractures - Optimizing the Treatment by Enhancing Practical Skills
Authors: Olivera Lupescu, Taina Elena Avramescu, Mihail Nagea, Alexandru Dimitriu
Abstract:
Acetabular fractures represent a real challenge due to their impact upon the long-term function of the hip joint and the risk of intra- and peri-operative complications, especially as they affect young, active people. That is why treating these fractures requires certain skills which must be exercised, regarding the pre-operative planning as well as the execution of surgery. The authors retrospectively analyse 38 cases with acetabular fractures operated on using the posterior approach in our hospital between 01.01.2013 and 01.01.2015, for which complete medical records ensure a follow-up of 24 months, in order to establish the main causes of potential errors and to underline the methods for preventing them. This target is included in the Erasmus+ project 'Collaborative learning for enhancing practical skills for patient-focused interventions in gait rehabilitation after orthopedic surgery COR-skills'. This paper analyses the pitfalls revealed by these cases, as well as the measures necessary to enhance the practical skills of the surgeons who perform acetabular surgery. Pre-op planning matched the intra- and post-operative outcome in 88% of the analysed points, from 72% at the beginning to 94% in the last case, meaning that experience is very important in treating this injury. The main problems detected for the posterior approach were: nervous complications in 3 cases, 1 of them a complete paralysis of the sciatic nerve, which recovered 6 months after surgery; in another 2 cases, an intra-articular position of the screws was demonstrated by post-operative CT scans, so secondary screw removal was necessary. We analysed this incident, too, due to the lack of information about the relationship between the screws and the joint secondary to this approach. Septic complications appeared in 3 cases, 2 superficial and 1 deep (requiring implant removal). The most important problems were the reduction of the fractures and the positioning of the screws so as not to interfere with the articular space. In posterior acetabular fractures, complex pre-op planning is important in order to achieve maximum treatment efficacy with minimum risk; optimal training of the surgeons, insisting on the main points of potential mistakes, ensures the success of the procedure, as well as a favorable outcome for the patient.
Keywords: acetabular fractures, articular congruency, surgical skills, vocational training
Procedia PDF Downloads 206
133 Climate Related Variability and Stock-Recruitment Relationship of the North Pacific Albacore Tuna
Authors: Ashneel Ajay Singh, Naoki Suzuki, Kazumi Sakuramoto
Abstract:
The North Pacific albacore (Thunnus alalunga) is a temperate tuna species distributed in the North Pacific which is of significant economic importance to the Pacific Island Nations and Territories. Despite its importance, knowledge of the stock dynamics and ecological characteristics of albacore still has gaps. The stock-recruitment relationship of the North Pacific stock of albacore tuna was investigated for different density-dependent effects and for a regime shift in the stock characteristics in response to changes in environmental and climatic conditions. Linear regression analysis of recruits per spawning biomass (RPS) and recruitment (R) against the female spawning stock biomass (SSB) was significant for the presence of different density-dependent effects and positive for a regime shift in the stock time series. Application of Deming regression to RPS against SSB, under the assumption of observation and process errors in both the dependent and independent variables, confirmed the results of the simple regression. However, the results for R against SSB disagreed given a variance level of < 3 and agreed with the linear regression results given the assumption of a variance ≥ 3. Assuming the presence of different density-dependent effects in the albacore tuna time series, environmental and climatic condition variables were compared with R, RPS, and SSB. Significant relationships of R, RPS and SSB were determined with the sea surface temperature (SST), Pacific Decadal Oscillation (PDO) and multivariate El Niño Southern Oscillation (ENSO) indices, with SST being the principal variable, exhibiting a significantly similar trend to R and RPS. Recruitment is significantly influenced by the dynamics of the SSB as well as by environmental conditions, which demonstrates that the stock-recruitment relationship is multidimensional. Further investigation of the North Pacific albacore tuna age-class structure is necessary to further support the results presented here. It is important for fishery managers and decision makers to be vigilant of regime shifts in environmental conditions relating to albacore tuna, as these may cause regime shifts in the albacore R and RPS, which should be taken into account to effectively and sustainably formulate harvesting plans and management of the species in the North Pacific oceanic region.
Keywords: albacore tuna, Thunnus alalunga, recruitment, spawning stock biomass, recruits per spawning biomass, sea surface temperature, Pacific Decadal Oscillation, El Niño Southern Oscillation, density-dependent effects, regime shift
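Deming regression, unlike ordinary least squares, admits error in both variables, which is why it is used above for RPS against SSB. A minimal sketch of the standard closed-form estimator follows; delta is the assumed ratio of error variances, and the data arrays are placeholders rather than the assessment's stock series.

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Deming regression slope/intercept; delta = var(err_y) / var(err_x)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.mean((x - xm) ** 2)
    syy = np.mean((y - ym) ** 2)
    sxy = np.mean((x - xm) * (y - ym))
    slope = (syy - delta * sxx + np.sqrt((syy - delta * sxx) ** 2
                                         + 4 * delta * sxy ** 2)) / (2 * sxy)
    return slope, ym - slope * xm

# Placeholder data standing in for (SSB, RPS) pairs.
ssb = np.array([40, 55, 63, 70, 82, 95, 110.0])
rps = np.array([2.1, 1.8, 1.9, 1.5, 1.4, 1.1, 1.0])
b, a = deming(ssb, rps, delta=1.0)
print(f"RPS ~ {a:.3f} + {b:.4f} * SSB")
```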
Procedia PDF Downloads 307
132 Lexico-semantic and Morphosyntactic Analyses of Student-generated Paraphrased Academic Texts
Authors: Hazel P. Atilano
Abstract:
In this age of AI-assisted teaching and learning, there seems to be a dearth of research literature on the linguistic analysis of English as a Second Language (ESL) student-generated paraphrased academic texts. This study sought to examine the lexico-semantic and morphosyntactic features of paraphrased academic texts generated by ESL students. Employing a descriptive qualitative design, specifically linguistic analysis, the study involved a total of 85 students from senior high school, college, and graduate school enrolled in research courses. Data collection consisted of a 60-minute real-time, on-site paraphrasing practice exercise using excerpts of 150 to 200 words from discipline-specific literature reviews. A focus group discussion (FGD) was conducted to probe into the challenges experienced by the participants. The writing exercise yielded a total of 516 paraphrase pairs, from which 176 paraphrase units (PUs) and 340 non-paraphrase pairs (NPPs) were detected. Findings from the linguistic analysis of PUs reveal that the modifications made to the original texts are predominantly syntax-based (Diathesis Alterations and Coordination Changes) and a combination of Miscellaneous Changes (Change of Order, Change of Format, and Addition/Deletion). Results of the analysis of paraphrase extremes (PE) show that Identical Structures resulting from the use of synonymous substitutions, with no significant change in the structural features of the original, are the most frequently occurring instance of PE. The analysis of paraphrase errors reveals that synonymous substitution resulting in identical structures is the most frequently occurring error leading to PE. Another type of paraphrasing error involves semantic and content loss resulting from the deletion or addition of meaning-altering content. Three major themes emerged from the FGD: (1) The Challenge of Preserving Semantic Content and Fidelity; (2) The Best Words in the Best Order: Grappling with the Lexico-semantic and Morphosyntactic Demands of Paraphrasing; and (3) Contending with Limited Vocabulary, Poor Comprehension, and Lack of Practice. A pedagogical paradigm was designed based on the major findings of the study for a sustainable instructional intervention.
Keywords: academic text, lexico-semantic analysis, linguistic analysis, morphosyntactic analysis, paraphrasing
Procedia PDF Downloads 68
131 Integrating Cyber-Physical Systems toward Advanced Intelligent Industry: Features, Requirements and Challenges
Authors: V. Reyes, P. Ferreira
Abstract:
In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, a rapid increase in production volume and, simultaneously, a customization process require lower costs, more variety, and accurate quality of products. Reducing the production time-cycle, enabling customizability, and ensuring continuous quality improvement are key features of the advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are key features that give industrial systems the capability to adapt to customer demands, modifying the manufacturing process through an autonomous response and acting preventively to avoid errors. The industrial system incorporates a diversified number of components which, in the advanced industry, are expected to be decentralized, communicating end to end, and capable of making their own decisions through feedback. The evolving process towards the advanced intelligent industry defines a set of stages for endowing components with intelligence and enhancing efficiency in order to reach the decision-making stage. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabler technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows incorporating sensor data from the real to the virtual world and provides the required transparency for real-time monitoring and control, contributing to important features of the advanced intelligent industry while simultaneously improving sustainability. Assuming the industrial CPS as the core technology of the latest advanced intelligent industry stage, this paper reviews and highlights the correlation and contributions of the enabler technologies to the operationalization of each stage on the path toward the advanced intelligent industry. From this research, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed. The functionalities required, and the issues involved in endowing the industrial system with adaptability, are identified.
Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model
Procedia PDF Downloads 118
130 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results
Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter
Abstract:
Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which this data availability makes all the more difficult. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the cost of eliminating errors increases significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether regression or classification is more suitable. In the context of this work, the initial phase of CRISP-DM, business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach for predicting the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and for classification of the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
Keywords: classification, CRISP-DM, machine learning, predictive quality, regression
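A minimal sketch of the regression-versus-classification comparison framed above: predict a continuous leakage flow with a regressor and derive the inspection decision from a tolerance limit, then compare against a classifier trained directly on that decision. The data, features, and tolerance threshold are synthetic stand-ins, not Bosch Rexroth values.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))                       # synthetic process features
leakage = 5 + X @ [0.8, -0.5, 0.3, 0.0, 0.2, -0.1] + rng.normal(0, 0.4, 2000)
LIMIT = 5.5                                          # hypothetical tolerance
ok = (leakage <= LIMIT).astype(int)                  # inspection decision

X_tr, X_te, y_tr, y_te, ok_tr, ok_te = train_test_split(
    X, leakage, ok, test_size=0.3, random_state=0)

# Path 1: regress the leakage flow, then threshold the prediction.
reg = GradientBoostingRegressor().fit(X_tr, y_tr)
ok_from_reg = (reg.predict(X_te) <= LIMIT).astype(int)

# Path 2: classify the inspection decision directly.
clf = GradientBoostingClassifier().fit(X_tr, ok_tr)
ok_from_clf = clf.predict(X_te)

print("regression->threshold accuracy:", accuracy_score(ok_te, ok_from_reg))
print("direct classification accuracy:", accuracy_score(ok_te, ok_from_clf))
```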
Procedia PDF Downloads 145
129 Detailed Analysis of Multi-Mode Optical Fiber Infrastructures for Data Centers
Authors: Matej Komanec, Jan Bohata, Stanislav Zvanovec, Tomas Nemecek, Jan Broucek, Josef Beran
Abstract:
With the exponential growth of social networks, video streaming and increasing demands on data rates, the number of newly built data centers rises proportionately. The data centers, however, have to adjust to the rapidly increasing amount of data that has to be processed. For this purpose, multi-mode (MM) fiber based infrastructures are often employed. This stems from the fact that the connections in data centers are typically realized over short distances, and the application of MM fibers and components considerably reduces costs. On the other hand, the usage of MM components brings specific requirements for installation service conditions. Moreover, it has to be taken into account that MM fiber components have higher production tolerances for parameters like core and cladding diameters, eccentricity, etc. Due to the high demands on the reliability of data center components, the determination of a properly excited optical field inside the MM fiber core belongs to the key parameters when designing such an MM optical system architecture. An appropriately excited mode field of the MM fiber provides an optimal power budget in connections, leads to a decrease of insertion losses (IL) and achieves the effective modal bandwidth (EMB). The main parameter, in this case, is the encircled flux (EF), which should be properly defined for variable optical sources and the consequent different mode-field distributions. In this paper, we present a detailed investigation and measurements of the mode field distribution for short MM links purposed in particular for data centers, with an emphasis on reliability and safety. These measurements are essential for large MM network design. The various scenarios, containing different fibers and connectors, were tested in terms of IL and mode-field distribution to reveal potential challenges. Furthermore, we focused on the estimation of particular defects and errors which can realistically occur, like eccentricity, connector shifting or dust; these were simulated and measured, and their dependence on EF statistics and the functionality of the data center infrastructure was evaluated. The experimental tests were performed at two wavelengths commonly used in MM networks, 850 nm and 1310 nm, to verify the EF statistics. Finally, we provide recommendations for data center systems and networks using OM3 and OM4 MM fiber connections.
Keywords: optical fiber, multi-mode, data centers, encircled flux
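Encircled flux itself is simple to state: EF(r) is the fraction of the total near-field optical power contained within radius r of the core center. A minimal sketch from a radial intensity profile follows; the Gaussian profile, 25 µm core radius, and checkpoint radii are illustrative, with the normative template limits defined in standards such as IEC 61280-4-1.

```python
import numpy as np

def encircled_flux(r, intensity):
    """EF(r): cumulative fraction of power within radius r (azimuthal symmetry).
    Power in an annulus scales as I(r) * 2*pi*r * dr, hence the r-weighting."""
    power = np.cumsum(intensity * r)         # integral of I(rho)*rho, up to a constant
    return power / power[-1]

# Illustrative near-field: Gaussian-like profile over a 25 um core radius.
r = np.linspace(0.0, 25.0, 500)              # radius in um
intensity = np.exp(-(r / 12.0) ** 2)         # hypothetical launch profile

ef = encircled_flux(r, intensity)
for radius in (4.5, 15.0, 19.0):             # radii often used as EF checkpoints
    print(f"EF at {radius:4.1f} um: {np.interp(radius, r, ef):.2f}")
```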
Procedia PDF Downloads 377
128 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are the two U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, many predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models like the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the model's performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as evaluation metrics. The goal is to select the optimization strategy with the fewest errors, the lowest cost, the greatest productivity, or the maximum potential results. In a variety of industries, including engineering, science, management, mathematics, finance, and medicine, optimization is widely employed. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows an important improvement in the model's predictive accuracy. The predictive models with the Huber Regressor as the foundation perform the best for both optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
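A compact sketch of genetic-algorithm feature selection of the kind named above: binary masks over the feature set evolve by tournament selection, one-point crossover, and bit-flip mutation, scored by the cross-validated error of a gradient boosting regressor. Population size, rates, and the synthetic data are illustrative choices, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))                    # synthetic weekly features
y = 2 * X[:, 0] + X[:, 3] - X[:, 7] + rng.normal(0, 0.5, 300)

def fitness(mask):
    """Cross-validated negative MSE of a GBR on the selected features."""
    if not mask.any():
        return -1e9
    return cross_val_score(GradientBoostingRegressor(n_estimators=50),
                           X[:, mask], y, cv=3,
                           scoring="neg_mean_squared_error").mean()

pop = rng.integers(0, 2, size=(12, X.shape[1])).astype(bool)
for generation in range(10):
    fit = np.array([fitness(ind) for ind in pop])
    children = []
    for _ in range(len(pop)):
        def pick():                               # tournament of two
            i, j = rng.choice(len(pop), 2, replace=False)
            return pop[i] if fit[i] > fit[j] else pop[j]
        cut = rng.integers(1, X.shape[1])         # one-point crossover
        child = np.concatenate([pick()[:cut], pick()[cut:]])
        child ^= rng.random(X.shape[1]) < 0.05    # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected feature indices:", np.flatnonzero(best))
```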
Procedia PDF Downloads 66
127 Performance Analysis of the Precise Point Positioning Data Online Processing Service and Its Use for Monitoring Plate Tectonics of Thailand
Authors: Nateepat Srivarom, Weng Jingnong, Serm Chinnarat
Abstract:
Precise Point Positioning (PPP) is a technique used to improve accuracy by using precise satellite orbit and clock correction data, but it involves complicated methods and high costs. Currently, there are several online processing service providers which offer simplified calculation. In the first part of this research, we compare the efficiency and precision of four software packages: three popular online processing services, namely the Australian Online GPS Processing Service (AUSPOS), CSRS Precise Point Positioning, and CenterPoint RTX post-processing by Trimble, and one offline software package, RTKLIB, using data collected from 10 International GNSS Service (IGS) stations for 10 days. The results indicated that AUSPOS has the lowest distance root mean square (DRMS) value, 0.0029, which is good enough for calculating the movement of tectonic plates. In the second part, we use AUSPOS to process the data of the geodetic network of Thailand. On December 26, 2004, a 9.3 MW earthquake occurred north of Sumatra that strongly affected all nearby countries, including Thailand. The earthquake effects have led to errors in the coordinate system of Thailand. The Royal Thai Survey Department (RTSD) is primarily responsible for monitoring the crustal movement of the country. The movement differs across the geodetic network and is relatively large; this result means the survey needs to continue improving the GPS coordinate system every year. Therefore, in this research we chose AUSPOS to calculate the magnitude and direction of movement and to improve the coordinate adjustment of the geodetic network, consisting of 19 pins in Thailand, during October 2013 to November 2017. Finally, the results are displayed on a simulation map using the ArcMap program with the Inverse Distance Weighting (IDW) method. The pin with the maximum movement is pin no. 3239 (Tak) in the northern part of Thailand. This pin moved in the south-western direction by 11.04 cm. Meanwhile, the directional movement of the other pins in the south gradually changed from south-west to south-east, i.e., in the direction noticed before the earthquake. The magnitude of the movement is in the range of 4-7 cm, implying a small impact of the earthquake. However, the GPS network should be continuously surveyed in order to secure the accuracy of the geodetic network of Thailand.
Keywords: precise point positioning, online processing service, geodetic network, inverse distance weighting
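The IDW method used for the map assigns each unsampled location a distance-weighted average of the nearby pin displacements, with weights 1/d^p. A minimal sketch follows; the pin coordinates and all displacement values except the 11.04 cm figure are invented placeholders, and ArcMap performs the same computation over a raster grid.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighting: z(q) = sum(w_i z_i) / sum(w_i), w_i = 1/d_i**p."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if (d == 0).any():                      # query coincides with a sample point
        return values[d == 0][0]
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))

# Invented pin positions (km, local grid) and displacement magnitudes (cm).
pins = np.array([[0.0, 0.0], [40.0, 10.0], [15.0, 55.0], [70.0, 60.0]])
disp_cm = np.array([11.04, 6.3, 5.1, 4.2])  # only 11.04 cm is from the abstract

print(f"interpolated displacement: {idw(pins, disp_cm, np.array([30.0, 30.0])):.2f} cm")
```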
Procedia PDF Downloads 189
126 Investigating the Role of Supplier Involvement in the Design Process as an Approach for Enhancing Building Maintainability
Authors: Kamal Ahmed, Othman Ayman, Refat Mostafa
Abstract:
The post-construction phase represents a critical milestone in the project lifecycle. This is because design errors and omissions, as well as construction defects, are exposed during this phase. The traditional procurement approaches that are commonly adopted in construction projects separate design from construction, which ultimately inhibits contractors, suppliers and other parties from providing the design team with constructive comments and feedback to improve the project design. As a result, a lack of consideration of maintainability aspects during the design process increases maintenance and operation costs and reduces building performance. This research aims to investigate the role of Early Supplier Involvement (ESI) in the design process as an approach to enhancing building maintainability. In order to achieve this aim, a research methodology consisting of a literature review, case studies and a survey questionnaire was designed to accomplish four objectives. First, a literature review was used to examine the concepts of building maintenance, maintainability, the design process and ESI. Second, three case studies were presented and analyzed to investigate the role of ESI in enhancing building maintainability during the design process. Third, a survey questionnaire was conducted with a representative sample of Architectural Design Firms (ADFs) in Egypt to investigate their perception and application of ESI towards enhancing building maintainability during the design process. Finally, the research developed a framework to facilitate ESI in the design process in ADFs in Egypt. Data analysis showed that the 'difficulty of trusting external parties and sharing information with transparency' was ranked the highest challenge of ESI in ADFs in Egypt, followed by 'legal competitive advantage restrictions'. Moreover, 'better estimation of operation and maintenance costs' was ranked the highest contribution of ESI towards enhancing building maintainability, followed by 'reducing the number of operation and maintenance problems or reworks'. Finally, 'innovation, technical expertise, and competence' was ranked the highest supplier selection criterion, while 'paying consultation fees for offering advice and recommendations to the design team' was ranked the highest form of supplier remuneration. The proposed framework represents a synthesis that is creative in thought and adds value to the knowledge in a manner that has not previously occurred.
Keywords: maintenance, building maintainability, building life cycle cost (LCC), material supplier
Procedia PDF Downloads 50125 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia
Abstract:
Noise assessment methods are regularly used within the scope of Environmental Impact Assessments for planned projects to assess (predict) the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we had the opportunity to collaborate in several noise assessment procedures in which noise assessments by different laboratories were performed simultaneously, and we identified some significant differences in noise assessment results between laboratories in Slovenia. We conclude that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on predictive noise assessment methods for planned projects. We analyzed the input data, methods and results of predictive noise modelling for two planned industrial projects, each done independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the acoustic models were validated by noise measurements of surrounding existing noise sources, but with varying measurement durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty of 3 to 6 dBA. In contrast to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements greatly increased the comparability of the modelling results. In both cases of collaborative noise modelling, for the existing motorway and railway, the results of different laboratories were comparable: differences were below 5 dBA, within the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) Predictive noise calculation using formulae from the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict noise emissions of planned projects, since, due to the complexity of the procedure, the formulae are not applied strictly; 2) Noise measurements are important tools for minimizing noise assessment errors for planned projects and, in the case of predictive noise modelling, should be performed at least to validate the acoustic model; 3) National guidelines should be developed on appropriate data, methods, noise source digitization, acoustic model validation, etc., in order to unify predictive noise models and their results within the scope of Environmental Impact Assessments for planned projects.Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines
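For orientation, the core of the SIST ISO 9613-2 scheme is an octave-band level balance, Lp = Lw + DC - A, where the total attenuation A sums geometric divergence, atmospheric absorption, ground effect, barriers and miscellaneous terms. The sketch below keeps only the first two terms (ground, barrier and meteorological corrections omitted), with illustrative values:

```python
import math

def spl_iso9613(lw_db, distance_m, alpha_db_per_km, directivity_db=0.0):
    """Octave-band sound pressure level at a receiver, keeping only the
    geometric divergence and atmospheric absorption terms of ISO 9613-2.

    lw_db           : source sound power level (dB re 1 pW)
    distance_m      : source-receiver distance (m)
    alpha_db_per_km : atmospheric attenuation coefficient for the band
    """
    a_div = 20.0 * math.log10(distance_m / 1.0) + 11.0  # spherical spreading
    a_atm = alpha_db_per_km * distance_m / 1000.0       # air absorption
    return lw_db + directivity_db - a_div - a_atm

# Example: 100 dB source, 500 m away, 2 dB/km absorption (mid band)
print(round(spl_iso9613(100.0, 500.0, 2.0), 1))  # ~34.0 dB
```

Inconsistent handling of the omitted terms is exactly where laboratory results can diverge by several dBA.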
Procedia PDF Downloads 234124 Evaluating the Characteristics of Paediatric Accidental Poisonings
Authors: Grace Fangmin Tan, Elaine Yiling Tay, Elizabeth Huiwen Tham, Andrea Wei Ching Yeo
Abstract:
Background: While accidental poisonings in children may seem unavoidable, knowledge of the circumstances surrounding such incidents and identification of risk factors are important for the development of secondary prevention strategies. Known risk factors include the age of the child, lack of adequate supervision and improper storage of substances. The aim of this study was to assess the risk factors and circumstances influencing outcomes in these children. Methodology: A retrospective medical record review of all accidental poisoning cases presenting to the Children’s Emergency at National University Hospital (NUH), Singapore between January 2014 and December 2015 was conducted. Information on demographics, poisoning circumstances and clinical outcomes was collected. Results: Ninety-nine of a total of 186 poisoning cases were accidental ingestions, with a mean age of 4.7 years (range 0.4 to 18.3 years). The gender distribution was nearly equal, with 52 (52.5%) females and 47 (47.5%) males. Seventy-nine cases (79.8%) were self-administered by the child, and in 20 cases (20.2%) the substance was administered erroneously by caregivers: in 12/20 (60.0%) the wrong drug dose was given, while in 8/20 (40.0%) the wrong substance was given. Self-administration was associated with presentation to the ED within 12 hours (p=0.027, OR 6.65, 95% CI 1.24-35.72). Notably, 94.9% of the cases involved substances kept within reach of the child. Sixty-nine (82.1%) had the substance kept in the original container, 3 (3.6%) in food containers, 8 (9.5%) in other containers and 4 (4.8%) without a container. Of the 50 cases with information on labelling, 40/50 (80.0%) were accurately labelled, 2/50 (4.0%) wrongly labelled, and 8/50 (16.0%) were unlabelled. Implicated substances included personal care products (11.1%), household cleaning products (3.0%), and different classes of drugs such as paracetamol (22.2%), antihistamines (17.2%) and sympathomimetics (8.1%). Children < 3 years of age were 4.8 times more likely to be poisoned by household substances than children > 3 years of age (p=0.009, 95% CI 1.48-15.77). Prehospital interventions were more likely to have been performed in poisonings with household substances (p=0.005, OR 6.12, 95% CI 1.73-21.68). Fifty-nine children (59.6%) were asymptomatic, 34 (34.3%) had a Poisoning Severity Score (PSS) grade of 1 (minor) and 6 (6.1%) grade 2 (moderate). Older children were 9.3 times more likely to be symptomatic (p<0.001, 95% CI 3.15-27.25). Thirty (32%) required admission. Conclusion: A significant proportion of accidental poisoning cases were due to medication administration errors by caregivers, which should be preventable. Risk factors for accidental poisoning included lack of adequate caregiver supervision, improper labelling and young age of the child. There is an urgent need to improve caregiver counselling during medication dispensing as well as to educate caregivers on basic child safety measures in the home to prevent future accidental poisonings.Keywords: accidental, caregiver, paediatrics, poisoning
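The odds ratios quoted above can be recomputed from 2x2 contingency tables; a minimal sketch of the standard Wald-interval calculation follows, with illustrative counts rather than the study's raw data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, (lo, hi)

# Illustrative 2x2: household-substance poisoning by age group (< 3 y vs > 3 y)
print(odds_ratio_ci(20, 15, 10, 36))
```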
Procedia PDF Downloads 214123 Neurocognitive and Executive Function in Cocaine Addicted Females
Authors: Gwendolyn Royal-Smith
Abstract:
Cocaine ranks as one of the world’s most addictive and commonly abused stimulant drugs. Recent evidence indicates that the abuse of cocaine has risen so quickly among females that this group now accounts for about 40 percent of all users in the United States. Neuropsychological studies have demonstrated that specific neural activation patterns carry higher risks for neurocognitive and executive function in cocaine-addicted females, thereby increasing their vulnerability to poorer treatment outcomes and more frequent post-treatment relapse compared to males. This study examined secondary data from a convenience sample of 164 cocaine-addicted males and females to assess neurocognitive and executive function. The principal objective of this study was to assess whether individual performance on the Stroop Word Color Task is predictive of treatment success by gender. A second objective was to evaluate whether individual performance on neurocognitive measures, including the Stroop Word Color Task, the Rey Auditory Verbal Learning Test (RAVLT), the Iowa Gambling Task, the Wisconsin Card Sorting Test (WCST), the total score from the Barratt Impulsiveness Scale (Version 11) (BIS-11) and the total score from the Frontal Systems Behavior Scale (FrSBe), demonstrated differences in neurocognitive and executive function performance by gender. Logistic regression models with covariate adjustment were employed. Initial analyses of the Stroop Word Color Task indicated significant differences in the performance of males and females, with females experiencing more challenges in derived interference reaction time and associative recall ability. In early testing, including the Rey Auditory Verbal Learning Test (RAVLT), the number of advantageous versus disadvantageous cards from the Iowa Gambling Task, the number of perseverative errors from the Wisconsin Card Sorting Test (WCST), the total score from the Barratt Impulsiveness Scale (Version 11) (BIS-11) and the total score from the Frontal Systems Behavior Scale, results were mixed, with women scoring lower on multiple indicators of both neurocognitive and executive function.Keywords: cocaine addiction, gender, neuropsychology, neurocognitive, executive function
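A covariate-adjusted logistic regression of the kind described can be sketched as follows; the data are simulated and all column names are hypothetical, chosen only to mirror the design (treatment success regressed on Stroop interference, gender and age).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 164  # matches the study's sample size; the data themselves are simulated

df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "age": rng.normal(38, 8, n).round(),
    "stroop_interference_rt": rng.normal(280, 60, n),
})
# Simulated outcome: slower interference RT lowers the odds of success
logit_p = 1.5 - 0.008 * df["stroop_interference_rt"] - 0.3 * df["female"]
df["treatment_success"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "treatment_success ~ stroop_interference_rt + female + age", data=df
).fit(disp=False)
print(model.params)          # adjusted log-odds coefficients
print(np.exp(model.params))  # corresponding odds ratios
```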
Procedia PDF Downloads 402122 Nutritional Status of Middle School Students and Their Selected Eating Behaviours
Authors: K. Larysz, E. Grochowska-Niedworok, M. Kardas, K. Brukalo, B. Calyniuk, R. Polaniak
Abstract:
Eating behaviours and habits are among the main factors affecting health. Abnormal nutritional status is a growing problem related to nutritional errors, and the number of adolescents presenting excess body weight is also rising. The body's demand for all nutrients increases during the period of intensive development, i.e., during puberty. A varied, well-balanced diet and the elimination of unhealthy habits are two of the key factors that contribute to the proper development of a young body. The aim of the study was to assess the nutritional status and selected eating behaviours/habits of adolescents attending middle school. An original questionnaire comprising 24 questions was administered, and a total of 401 correctly completed questionnaires qualified for assessment. Body mass index (BMI) was calculated. Furthermore, the frequency of breakfast consumption, the number of meals per day, the types of snacks and sweetened beverages, and the frequency of consuming fruit and vegetables, dairy products and fast food were assessed. The obtained results were analysed statistically. The study showed that malnutrition was more of a problem than overweight or obesity among middle school students. More than 71% of middle school students eat breakfast, whereas almost 30% of adolescents skip this meal. Up to 57.6% of respondents most often consume sweets at school. A total of 37% of adolescents consume sweetened beverages daily or almost every day. Most of the respondents consume an optimal number of meals daily, but only 24.7% of respondents consume fruit and vegetables more than once daily. The majority of respondents (49.4%) declared that they consumed fast food several times a month. A satisfactory frequency of consuming dairy products was reported by 32.7% of middle school students. Conclusions of our study: 1. Malnutrition is more of a problem than overweight or obesity among middle school students, who consume excessive amounts of sweets, sweetened beverages and fast food. 2. The consumption of fruit and vegetables was too low in the study group, and the intake of dairy products was also low in some cases. 3. A statistically significant correlation was found between the frequency of fast food consumption and the intake of sweetened beverages, and a low correlation was found between nutritional status and the number of meals per day: the number of meals consumed decreased with increasing nutritional status.Keywords: adolescent, malnutrition, nutrition, nutritional status, obesity
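BMI and the reported correlations can be computed with standard tools; below is a minimal sketch with illustrative measurements (not the study's data), using a Spearman rank correlation as one reasonable choice for ordinal meal counts.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative data only: weight (kg), height (m) and meals per day
weight = np.array([48.0, 62.5, 55.0, 70.2, 44.1])
height = np.array([1.58, 1.70, 1.63, 1.75, 1.52])
meals_per_day = np.array([5, 4, 5, 3, 5])

bmi = weight / height**2  # kg/m^2, as used to classify nutritional status
rho, p = spearmanr(bmi, meals_per_day)
print(bmi.round(1), round(rho, 2), round(p, 3))
```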
Procedia PDF Downloads 136121 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model
Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho
Abstract:
Rainfall is a critical component of climate, governing vegetation growth and production and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, Kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as or better than four established and widely used models on the Narok data set. The model produces rainfall predictions consistent with expectation and in good agreement with the blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.Keywords: non-stationary covariance function, Gaussian process, ungulate biomass, MCMC, Maasai Mara ecosystem
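One of the baselines the model is compared against, Gaussian process regression, can be sketched briefly. The predictors mirror those named above (elevation, distance to Lake Victoria, minimum temperature), but the gauge data are simulated for illustration; this is not the authors' hierarchical Bayesian model itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Illustrative gauges: columns are elevation (m), distance to lake (km),
# minimum temperature (C); rainfall responses are simulated
X = rng.uniform([1500, 50, 8], [2900, 250, 16], size=(40, 3))
y = 0.05 * X[:, 0] - 0.2 * X[:, 1] + 5.0 * X[:, 2] + rng.normal(0, 10, 40)

# WhiteKernel absorbs gauge noise; the anisotropic RBF captures smooth trends
kernel = RBF(length_scale=[500.0, 80.0, 4.0]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predicted monthly rainfall and its standard error at an unobserved location
mu, sd = gp.predict(np.array([[2000.0, 120.0, 12.0]]), return_std=True)
print(round(mu[0], 1), round(sd[0], 1))
```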
Procedia PDF Downloads 297120 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach
Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi
Abstract:
Surface elevation dynamics have always responded to disturbance regimes. Creating Digital Elevation Models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds processed with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEMs’ surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island: accretion occurred over most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, a differential correction and validation method was applied to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) relative to the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale would allow easy predictions within time-cost frames, with more comprehensive coverage and greater accuracy. With a DEM technique applied in an eco-geomorphic context, such insights into ecosystem dynamics at a coastal intertidal system would be valuable for assessing the accuracy of predicted eco-geomorphic risk and supporting sustainable conservation management. This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could be profitably applied worldwide.Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes
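Detecting accretion and erosion from multi-temporal DEMs reduces to raster differencing plus an accuracy check against independent GPS heights. A minimal numpy sketch with illustrative grids follows (the ≤ 3% error ratio is the study's reported figure, not a property of these toy values).

```python
import numpy as np

# Illustrative elevations (m) on the same grid for two survey epochs
dem_2005 = np.array([[1.20, 1.22], [1.18, 1.25]])
dem_2015 = np.array([[1.26, 1.24], [1.15, 1.31]])

# DEM of difference: positive cells = accretion, negative cells = erosion
dod = dem_2015 - dem_2005
print(dod)

# Accuracy check against RTK-GPS spot heights at field-verified cells
gps_z = np.array([1.27, 1.16])  # surveyed elevations
dem_z = np.array([1.26, 1.15])  # co-located DEM elevations
rmse = np.sqrt(np.mean((dem_z - gps_z) ** 2))
error_ratio = np.mean(np.abs(dem_z - gps_z) / gps_z) * 100
print(round(rmse, 3), f"{error_ratio:.1f}%")
```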
Procedia PDF Downloads 267119 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom
Authors: Michael Hast
Abstract:
Within their first year, infants can differentiate between objects based on their weight. By at least 5 years of age, children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding when such ideas emerge could, therefore, be crucial for early science pedagogy. This paper thus discusses two studies that jointly address the issue by examining young children’s search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, children were tested with a heavy or a light ball and had information about either one of the balls only or both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations: one straight beneath the tube entrance, one where the curved tube led to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not affected by the weight of the balls alone in any particular way. However, from around 3 years onwards, relative lightness, gained through tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry, a phenomenon known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of the four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight did not play a role in 2-year-olds’ search behaviour. Relative lightness improved 2½-year-olds’ searches. At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, the two studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight. This insight from developmental psychology research may have consequences for early science education and related pedagogy aimed at early conceptual change.Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight
Procedia PDF Downloads 190118 Spatial Architecture Impact in Mediation Open Circuit Voltage Control of Quantum Solar Cell Recovery Systems
Authors: Moustafa Osman Mohammed
Abstract:
Photocurrent generation influences ultra-high-efficiency solar cells based on self-assembled quantum dot (QD) nanostructures. Nanocrystal quantum dots greatly enhance solar cell efficiency through the use of quantum confinement to tune absorbance across the solar spectrum and enable multi-exciton generation. Based on theoretical predictions, QDs have the potential to improve system efficiency to greater than 50%. In solar cell devices, an intermediate band is formed by the electron levels in quantum dot systems. The spatial architecture explores how a solar cell can be integrated to produce not only a high open-circuit voltage (> 1.7 V) but also large short-circuit currents, owing to the efficient absorption of sub-bandgap photons. In the proposed QD system, the structure allows the barrier material to absorb wavelengths below 700 nm while multi-photon processes in the quantum dots absorb wavelengths up to 2 µm. The assembly of the electronic model is flexible enough to represent the atomic and molecular structure and material properties, allowing the energy bandgaps of the barrier and quantum dot to be tuned to their respective optimum values. In terms of energy conversion, the efficiency and cost of the electronic structure together outperform those of a pair of multi-junction solar cells, as obtained in rigorous tests to quantify the errors. The milestone toward achieving the claimed high-efficiency solar cell device is controlling the energy bandgap offset between the barrier material and the quantum dot system within the design limits. Despite this remarkable potential for high photocurrent generation, the achievable open-circuit voltage (Voc) is fundamentally limited by non-radiative recombination processes in QD solar cells. The behaviour of the voltage recovery system is compared theoretically with experimental Voc variation against the upper limit obtained from one-diode modelling of cells with different bandgaps (Eg), as classified in the proposed spatial architecture. The opportunity for improving Voc is estimated at more than 1 V by using smaller QDs in QD solar cell recovery systems, compared to other micro- and nano-scale operating states.Keywords: nanotechnology, photovoltaic solar cell, quantum systems, renewable energy, environmental modeling
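The one-diode upper limit on Voc referred to above follows from Voc = (nkT/q) ln(Isc/I0 + 1); a minimal sketch with illustrative current densities, not values from the study:

```python
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C

def open_circuit_voltage(i_sc, i_0, n=1.0, temp_k=300.0):
    """Voc from the ideal one-diode model: Voc = (n*k*T/q) * ln(Isc/I0 + 1)."""
    v_t = k_B * temp_k / q  # thermal voltage, ~25.9 mV at 300 K
    return n * v_t * math.log(i_sc / i_0 + 1.0)

# Illustrative: 35 mA/cm^2 photocurrent, 1e-12 A/cm^2 saturation current
print(round(open_circuit_voltage(35e-3, 1e-12), 3))  # ~0.628 V
```

Non-radiative recombination enters through a larger saturation current I0, which directly depresses the achievable Voc.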
Procedia PDF Downloads 157